As you may already be aware, both of Juku’s authors were invited to the latest HDS Blogger Day, held in Sefton Park. We spent two full days shoulder to shoulder with many HDS executives, who presented their strategy for storage, cloud and stacks. They also stated clearly that no information provided during the sessions was under embargo or NDA, so we’re free to discuss everything presented.
One of the main points of the first day was VMware integration. Hitachi has always had a bad reputation on the software side: clumsy application integration, really old-fashioned CLI commands and the like have always downplayed this company’s software offering, in stark contrast with its hardware, which is usually highly regarded (on merit, I must say).
The VMware session started with Michael Heffernan (man, he’s a volcano!), who gave a nice overview of the HDS integration efforts with the most used hypervisor out there. He started off with a really nice timeline of virtualization milestones (at least from a VMware perspective) and then pinpointed the server and storage challenges that virtualization has brought to the table: VM sprawl, increasing storage complexity and others were mentioned as major pain points.
Obviously, their platform of choice for data center virtualization is the VSP; the AMS was barely mentioned during the presentations, except for a couple of “AMS is supported as well” sentences or as a complement to other HDS offerings. Still, many of the features mentioned here are, or will be, also available on the AMS.
Then a controversial slide (strikingly similar to one made by EMC 😉 ) showed the six integration points with VMware:
- vStorage API for SRM – For Site Recovery Manager Integration.
- vStorage API for Data Protection (VADP) – Hitachi Data Protection Suite (Commvault OEM) backup integration.
- Hitachi Storage Viewer vCenter Plugin – A brand-new vCenter plugin for Hitachi storage (not yet GA).
- vCenter-integrated Hitachi Command Suite – Hitachi’s flagship storage management tool is now integrated with vCenter via API calls.
- VMware NMP (native multipathing plugin) – Hitachi fully supports the NMP from VMware, no HDLM MPP (at least for now).
- vStorage API for Array Integration – VAAI support on all the HDS line, including virtualized arrays.
So, even if they’re a little late to the party, HDS is aggressively tackling VMware integration, pushing on every single integration point available today. They’re also committed to supporting each new feature that appears in vSphere 5 from day one.
They showed us a demo of their VAAI integration, running a couple of VAAI-accelerated tasks on a VSP that was presenting LUNs from an old EMC Clariion CX700 array (which is not VAAI-capable), showing that the performance gain extends even to old storage that will never get native VAAI capabilities.
Their in-house benchmarks show that the VAAI improvement is substantial: locking conflicts are reduced by 70 to 75%, and VAAI proves particularly effective in spindle-constrained environments.
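As a side note for readers who want to check whether their own hosts are actually offloading these operations: the sketch below shows how one might inspect the host-wide VAAI settings and the per-LUN offload status. This is a hedged example assuming an ESXi 5 host with the standard `esxcli` namespaces; the `naa.` device identifier is a hypothetical placeholder you would replace with one of your own LUNs.

```shell
# Host-wide VAAI primitives (Int Value of 1 means enabled):
# HardwareAcceleratedMove  -> Full Copy (XCOPY)
# HardwareAcceleratedInit  -> Block Zeroing (WRITE SAME)
# HardwareAcceleratedLocking -> ATS, the primitive behind the locking-conflict reduction
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

# Per-device VAAI status (ATS, Clone, Zero, Delete) for a given LUN;
# the device id below is a placeholder, not a real identifier.
esxcli storage core device vaai status get -d naa.60060e8000000000000000000000000a
```

If the per-device status reports the primitives as supported, operations such as Storage vMotion and VMFS metadata locking are being offloaded to the array rather than handled by the ESX data mover.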
Hitachi is pushing full speed ahead with block storage on VMware, but what about NAS?
They don’t seem to care about this market for virtualization, and they’re not going after the “unified” offerings that EMC and NetApp are selling. They appear more focused on being the best block-based storage company for VMware than on competing in the ‘one storage to rule them all’ market. This reflects the fact that their NAS offering is currently limited to OEMed BlueArc systems, which are pretty high-end boxes.
So, in the end, Hitachi wants to position itself as the best storage for high-end VMware deployments, and it definitely has the hardware pedigree to do so. We saw part of the VMware integration during the HDS Bloggers Day (VAAI and HCS integration were demoed), and everything they showed us looked solid. If they execute the remaining points with the same quality, they’re going to be the platform of choice for high-end virtualization projects.
Disclaimer: HDS invited me to this event and paid for travel and accommodation, but I’m not under any obligation to write any material about this event.
For starters, I think it is important to point out that VMware’s co-development programs create standard integration points for all participants. No one vendor has an advantage over the others; VMware has built its “API Engine”, and it is up to each vendor to invest in these programs for development and to commit to ongoing support – Hitachi is totally committed to all VMware API programs and is delivering on this. Now it’s up to the ecosystem of storage vendors to leverage their core competency in their hardware to compete in this space.
Now another comment – there is a reason for everything. That’s why it’s important to understand the evolution of a technology, why some technologies come into play and why they take a back seat when improvements are made over time. The fact is our arrays are purpose-built for this type of integration by default, and it’s all about our heritage. This is now evident in our ability to virtualize more than 100 external storage devices, which automatically inherit all these VMware integration points. If you have been a customer for many years, this is invaluable; you don’t have to throw out your existing assets. Now, if you match technology with technology, it all makes sense. Keep your architecture simple, do it all natively in microcode, and you’re done. Let VMware focus on all the advanced software to enhance the experience, as that’s what they do best, and let the Hitachi array handle all the grunt work. Then you can get on to what’s important: keeping IT up and running to service the business and saving money, rather than trying to troubleshoot and manage an overly complex environment.
Hitachi will continue to invest in VMware integration, and our engineering teams in both Japan and Santa Clara have been working extremely hard on the core vStorage APIs that VMware released to the storage partner ecosystem. These require a lot of effort and QA so that customers can be confident they work seamlessly. We see massive value for our customers in this integration, especially now that ESX is offloading processes to the storage array.
Hats off to VMware for taking this step to enable the ecosystem with standard API integration, which ultimately benefits the customer. Now it’s up to the customer to decide what technology makes sense…
Heff
PS – Great picture of the group!
Michael,
Thank you for chiming in. I definitely agree with you that VMware has done incredible work with its API ecosystem, and that the differentiator now lies in the vendor implementation.
The mainframe heritage surely helped HDS in developing these features, which, as we discussed, have their roots in the big-iron days, and what we saw during the VAAI demos was definitely a solid integration.
This is going to get interesting in the near future, even more so with the vSphere 5 release due this year.
Fabio