Last week I attended the HDS bloggers day at HDS European HQ with my colleague Fabio and a bunch of famous bloggers.
The whole event was very well organized and I greatly appreciated the opportunity to learn from, and share experiences with, HDS top executives. It is often difficult to understand the real vision and strategy of these companies, but an informal event like this one brings different points of view to the table, and the discussion can deliver a lot of value to everyone!
The most interesting thing I saw in the two-day meeting was the evolution of a product Hitachi acquired some years ago: Hitachi Content Platform (HCP). This product is the foundation of a couple of solutions I find very interesting: Hitachi Clinical Repository (a couple of links here and here), aimed at storing and managing clinical data, and one in the cloud computing space!
HCP is an object store: a peculiar kind of storage where data isn't managed as blocks or files but as objects (data + metadata). It's the emerging standard for building cloud storage, and you can find many examples of it all around. Perhaps the most famous is Amazon S3, but many startups and big vendors (like Scality or EMC) are trying to catch up and offer viable alternatives for public and private clouds.
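To make the "data + metadata" idea concrete, here is a minimal sketch of the object model in Python. The class, field names and the flat key namespace are purely illustrative assumptions, not HCP's (or S3's) actual data model:

```python
import hashlib

# Illustrative sketch of an "object": the data bytes bundled with
# user-defined metadata plus system metadata computed by the store.
# All names here are hypothetical, not any vendor's real API.
class StorageObject:
    def __init__(self, key, data, metadata=None):
        self.key = key                    # flat namespace key, not a file path
        self.data = data
        self.metadata = dict(metadata or {})       # user metadata
        # system metadata the store derives on ingest
        self.metadata["size"] = len(data)
        self.metadata["sha256"] = hashlib.sha256(data).hexdigest()

store = {}  # a flat key -> object namespace (no directory hierarchy)
obj = StorageObject("scans/patient-42.dcm", b"...image bytes...",
                    {"department": "radiology"})
store[obj.key] = obj

print(store["scans/patient-42.dcm"].metadata["department"])  # radiology
```

The point of the metadata envelope is that the store can index, search and apply policy (retention, replication, tiering) per object without understanding the data format itself.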
The goal of this piece is to talk about HCP+HDI (Hitachi Data Ingestor), a solution from HDS to build a true, simple private cloud storage. HDS's proposal has all the characteristics you look for when you want to build your own cloud:
smart object storage as the foundation base;
multitenant and secure architecture;
feature rich and scalable with advanced (object based) replication capabilities;
legacy-to-cloud smooth migration path;
The most important thing to me isn't the first three points (any vendor can claim those) but the last one. The killer application is right there: the capability to migrate a traditional NAS environment to a cloud storage with the slightest impact on the end users!
a couple of words on HCP
Hitachi Content Platform is a well designed object storage to fit in enterprise environments.
From the hardware point of view there are two options:
one based on appliances without external storage (each node of the cluster has some embedded storage and the whole cluster can scale out up to 85TBs);
and a high-end version more aligned with HDS's storage vision: "a block storage with some intelligence (an appliance) on top". Not my favorite approach, indeed, but the technical spec sheet says it can scale up to 40PB in a single cluster!
From the software point of view there are many interesting enterprise features, ranging from WORM (write once, read many) capability and (file-level) dedupe to integration with applications like SharePoint, etc.
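File-level dedupe, mentioned above, is simple to picture: identical files are detected by content hash and stored only once. This is a generic sketch of the technique, assuming nothing about how HCP actually implements it:

```python
import hashlib

# Generic file-level dedupe sketch (not HCP's implementation):
# identical content is detected by SHA-256 and kept once, with a
# reference count instead of an extra physical copy.
class DedupeStore:
    def __init__(self):
        self.blobs = {}    # sha256 -> data, stored once
        self.refs = {}     # sha256 -> reference count
        self.index = {}    # filename -> sha256

    def put(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = data      # first copy: store the bytes
        self.refs[digest] = self.refs.get(digest, 0) + 1
        self.index[name] = digest          # a duplicate just adds a reference

store = DedupeStore()
store.put("report-v1.doc", b"quarterly numbers")
store.put("copy-of-report.doc", b"quarterly numbers")  # same content
print(len(store.blobs))  # 1 -> the bytes are kept only once
```

Working at file granularity (rather than sub-file blocks) keeps the bookkeeping cheap, which is a sensible trade-off for an archive full of copied documents.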
Access protocols to the storage are all you can desire for this kind of solutions: CIFS, NFS v3, HTTP v1.1, WebDAV, SMTP, NDMP v4 and REST above all.
All spiced up with a relatively easy to use GUI (not so usual for HDS).
But up to now I haven't highlighted anything really distinctive: HCP is a good product, but you can find many similar solutions from different vendors, so what?
the beautiful add-on: HDI
Hitachi Data Ingestors are appliances (ranging from a VM to a fully clustered system with dedicated external storage) acting as a NAS frontend for the end users but working as a cache (as big as your local performance and reliability needs dictate) for the HCP!
Notable features include:
integration with AD and LDAP (and dynamic user mapping between Unix and Windows);
an automatic tiering capability.
HDI can be placed in every remote or branch office (ROBO), providing a traditional file service (it can also reuse the old NAS infrastructure as local storage for the cache): every file stored on it is replicated to a central HCP repository.
The HDI architecture lets you define a threshold for the amount of locally used space, so that only the most frequently accessed files are kept on site: the space held by other files can be reclaimed, and they are transparently recalled from the central repository when needed.
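The caching behaviour described above can be sketched as a threshold-driven eviction policy. This is a rough illustration of the concept (a simple LRU under a byte threshold), not HDI's actual algorithm:

```python
from collections import OrderedDict

# Rough sketch of a threshold-driven NAS cache (not HDI's real logic):
# when local usage exceeds the threshold, the least recently accessed
# files are evicted; they remain safe in the central repository.
class CacheFrontend:
    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.local = OrderedDict()   # filename -> size, ordered by last access
        self.used = 0

    def access(self, name, size):
        if name in self.local:
            self.local.move_to_end(name)     # mark as most recently used
            return
        self.local[name] = size              # cache miss: stage the file locally
        self.used += size
        while self.used > self.threshold:    # over threshold: reclaim space
            _evicted, freed = self.local.popitem(last=False)
            self.used -= freed               # the file still lives on the HCP

cache = CacheFrontend(threshold_bytes=100)
cache.access("a.doc", 60)
cache.access("b.doc", 60)    # usage hits 120 -> "a.doc" is evicted
print(sorted(cache.local))   # ['b.doc']
```

Since evicted files are recalled transparently on the next access, the users only notice the cache as a performance detail, never as missing data.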
It's awesome! With HDI you can consolidate all the unstructured data of your enterprise into a single, secure, big repository, forgetting at the same time all the problems related to remote backups and DR! Moreover, you'll provide a simple and smooth migration path to a global, secure, private cloud storage for your company (with features like archiving, document versioning and dedupe in its DNA, to name just the first that come to my mind).
Finally, an all-in-one, end-to-end file-to-object storage solution that is neither a fake file server nor a patchwork of different products and vendors glued together. In the past I wrote an article to share my point of view about the next unified storage; I think HDS took a first step in the right direction!
Disclaimer: HDS invited me at this event and paid for travel and accommodation but I’m not under any obligation to write any material about this event.