Last week I attended the HDS Bloggers Day at the HDS European HQ with my colleague Fabio and a bunch of famous bloggers.

The whole event was very well organized and I really appreciated the opportunity to learn from, and share experiences with, HDS top executives. It is often difficult to understand the real vision and strategy of these companies, but an informal event like this one lets you meet different points of view, and the discussion can bring a lot of value to everyone!
The most interesting thing I saw in the two-day meeting was the evolution of a product Hitachi bought some years ago: Hitachi Content Platform (HCP). This product is the foundation of a couple of solutions I believe are very interesting: Hitachi Clinical Repository (a couple of links here and here), aimed at storing and managing clinical data, and one in the cloud computing space!
HCP is an object store, a peculiar kind of storage where data aren’t managed as blocks or files but are viewed as objects (data + metadata). It’s the emerging standard for building cloud storage, and you can find many examples of it all around. Perhaps the most famous is Amazon S3, but many startups and big vendors (like Scality or EMC) are trying to catch up to offer viable alternatives for public and private clouds.
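To make the “data + metadata” idea concrete, here is a minimal Python sketch against S3 (using the boto3 library; the bucket name, key and metadata fields are invented for the example, and you need valid credentials for the calls to succeed):

    import boto3

    s3 = boto3.client("s3")  # credentials are read from the environment

    # An object is the payload plus arbitrary user-defined metadata,
    # stored and addressed together under a single key.
    s3.put_object(
        Bucket="my-archive",                   # hypothetical bucket
        Key="scans/scan-0042.dcm",
        Body=open("scan-0042.dcm", "rb"),
        Metadata={"patient-id": "12345", "modality": "MRI"},
    )

    # The metadata travels with the object: a HEAD call returns it
    # without transferring the data itself.
    info = s3.head_object(Bucket="my-archive", Key="scans/scan-0042.dcm")
    print(info["Metadata"])  # {'patient-id': '12345', 'modality': 'MRI'}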
The goal of this piece is to talk about HCP+HDI (Hitachi Data Ingestor), a solution from HDS to build a true and simple private cloud storage. HDS’s proposal has all the characteristics you look for when you want to build your own cloud:
  • smart object storage as the foundation;
  • multiprotocol access;
  • a multitenant and secure architecture;
  • feature-rich and scalable, with advanced (object-based) replication capabilities;
  • a smooth legacy-to-cloud migration path.

The most important point to me isn’t any of the first four (every vendor can claim those) but the last one. The killer application is there: the capability to migrate a traditional NAS environment to cloud storage with minimal impact on the end users!

a couple of words on HCP

Hitachi Content Platform is a well-designed object store, built to fit enterprise environments.
From the hardware point of view there are two options: 
  • one based on appliances without external storage (each node of the cluster has some embedded storage, and the whole cluster can scale out up to 85TB);
  • and a high-end version, more aligned with HDS’s storage vision: “a block storage with some intelligence (an appliance) on top”. It’s not my favorite approach but, from the technical spec sheet, it can scale up to 40PB in a single cluster!
From the software point of view there are many interesting features for the enterprise, ranging from WORM (write once, read many) capability to (file-level) dedupe, integration with applications like SharePoint, etc.
The access protocols are everything you could desire for this kind of solution: CIFS, NFS v3, HTTP v1.1, WebDAV, SMTP, NDMP v4 and, above all, REST.
All spiced up with a relatively easy-to-use GUI (not so usual for HDS).
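REST is the protocol that matters most for cloud use cases, so here is a quick, hypothetical sketch of what talking to an HCP namespace over HTTP could look like in Python (the URL layout is a placeholder and I omit authentication entirely; check the HCP REST documentation before trusting these paths):

    import requests

    # Placeholder namespace URL, along the lines of
    # https://<namespace>.<tenant>.<hcp-domain>/rest/...
    BASE = "https://ns1.tenant1.hcp.example.com/rest"

    # Write a file over plain HTTP...
    with open("report.pdf", "rb") as f:
        r = requests.put(BASE + "/finance/2011/report.pdf", data=f)
        r.raise_for_status()

    # ...and read it back the same way, with no proprietary client:
    # the same object is also reachable through the other protocols.
    resp = requests.get(BASE + "/finance/2011/report.pdf")
    resp.raise_for_status()
    print(len(resp.content), "bytes retrieved")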
But up to now I haven’t pointed out anything really distinctive: HCP is a good product, but you can find many similar solutions from different vendors. So what?

the beautiful add-on: HDI

Hitachi Data Ingestors are appliances (ranging from a VM to a fully clustered system with dedicated external storage) that act as a NAS frontend for the end users while working as a cache (as big as your local performance and reliability needs require) in front of the HCP!
HDI provides some awesome features:
  • scalability (up to 400M files)
  • CIFS and NFS protocols
  • integration with AD, LDAP (and dynamic users mapping between Unix and Windows)
  • WORM
  • embedded replication
  • an automatic tiering capability
HDI can be placed in every remote or branch office (ROBO) to provide a traditional file service (it can also reuse the old NAS infrastructure as local storage for the cache): every file stored on it is replicated to a central HCP repository.
The HDI architecture lets you define a threshold for the amount of locally used space, so that only the most accessed files are kept on site: the others are released from the local cache and transparently recalled from the HCP when needed.
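Just to illustrate the mechanism (a toy model, not HDS code: the capacity, threshold and recall function are invented), the logic works roughly like this: when local usage crosses a high-water mark the least recently accessed files become stubs, and opening a stub transparently pulls the content back from the central HCP:

    import time

    CAPACITY = 10 * 1024**3   # invented local cache size: 10GB
    HIGH_WATER = 0.90         # invented threshold: reclaim above 90%

    def used_bytes(cache):
        return sum(f["size"] for f in cache.values() if not f["stub"])

    def reclaim(cache):
        """Stub out the least recently accessed files until local
        usage drops back under the high-water mark."""
        for f in sorted(cache.values(), key=lambda f: f["atime"]):
            if used_bytes(cache) <= HIGH_WATER * CAPACITY:
                break
            if not f["stub"]:
                f["data"] = None   # local blocks released...
                f["stub"] = True   # ...name and metadata stay visible

    def read(cache, name, recall_from_hcp):
        """Transparent recall: a stubbed file is fetched back from
        the central HCP the moment a user opens it."""
        f = cache[name]
        if f["stub"]:
            f["data"] = recall_from_hcp(name)  # e.g. an HTTP GET
            f["stub"] = False
        f["atime"] = time.time()
        if used_bytes(cache) > HIGH_WATER * CAPACITY:
            reclaim(cache)  # the recall may push usage over the mark
        return f["data"]

The point of the toy is the behavior the user sees: the namespace never shrinks, only the locally cached bytes do.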
It’s awesome! With HDI you can consolidate all your enterprise’s unstructured data in a single, big, secure repository while forgetting, at the same time, all the problems related to remote backups/DR! Moreover, you’ll provide a simple and smooth migration path to a global, secure private cloud storage for your company (with features like archiving, document versioning and dedupe in its DNA, just to name the first that come to mind).
Finally, an all-in-one, end-to-end file-to-object storage solution that is neither a fake file server nor a patchwork of different products/vendors glued together. In the past I wrote an article to share my point of view about the next unified storage; I think HDS has taken a first step in the right direction!

Disclaimer: HDS invited me to this event and paid for travel and accommodation, but I’m not under any obligation to write any material about it.