Maxta is finally out of stealth mode. I was briefed by them a couple of weeks ago and found the product really interesting: we are talking about a new VSA (Virtual Storage Appliance).
This is one of those products that could gain a lot of attention from end users, and for good reason: if we look at our infrastructures, we always find that most of the costs (both TCA and TCO) are attributable to storage, and a VSA addresses exactly those issues.
Virtual Storage Appliances
Everyone is looking for solutions in this space, especially people who currently deal with ordinary architectures.
I won’t go into detail about why traditional storage has had its day, but it’s clear that rigidity, performance and scalability are its most painful points!
Software-defined storage solutions (uh! yes! I used the buzzword) can literally change the way we think about storage for virtualized infrastructures.
A VSA (Virtual Storage Appliance) addresses some major issues of virtualized infrastructures while, at the same time, taking full advantage of the cheap resources available inside the servers and simplifying the deployment of new resources.
Amongst the most important characteristics of a VSA you should find:
Strong integration with the hypervisor (and its management tools): this is because environments are complex and sysadmins are becoming more and more the “jack of all trades”. They’re asked by their employers to manage the whole stack (from the application down to the last of the cables), and they need to do it fast and with the tools they are most familiar with. The more you can manage within the vCenter/System Center console, the easier your workday is.
Intelligent usage of storage resources (you can call it hybrid storage): with a small amount of flash storage (probably between 5 and 15% of total capacity) you can serve most of the IOPS while providing space with cheap SAS disks. Different mechanisms can be used to transparently achieve this goal (caching or automated tiering, for example), and the results are good enough (or even impressive) for the majority of end users.
Data footprint reduction: Compression and/or deduplication, if correctly implemented, can save a lot of space and also improve performance in particular cases. Data reduction techniques are also very important to optimize the usage of network bandwidth for the syncing operations between cluster nodes.
Efficient snapshots and replication: this is important not only for the VSA itself but also because, considering that you are primarily dealing with VMware (and its crappy snapshots), it’s key to get the best overall efficiency.
Scale-out architecture: last but not least, the VSA should scale with the rest of the infrastructure. The use of cheap commodity storage inside the servers allows for a balanced architecture that scales equally in all its aspects.
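The hybrid-storage point above (a small flash tier serving most of the I/O) follows from the skewed access patterns typical of real workloads: a minority of blocks receive the majority of accesses. Here is a minimal sketch of that effect, assuming a Zipf-like access distribution; the block counts and skew exponent are illustrative assumptions, not figures from Maxta or any other vendor.

```python
import random

# Illustrative model: N logical blocks, with accesses following a
# Zipf-like skew (a common assumption for real-world workloads; the
# exponent below is hypothetical, not a vendor-published figure).
N_BLOCKS = 10_000
N_ACCESSES = 100_000
ZIPF_S = 1.0  # skew exponent; higher means more concentrated access

random.seed(42)

# Zipf weights: the block at popularity rank r gets weight 1 / r^s.
weights = [1.0 / (rank ** ZIPF_S) for rank in range(1, N_BLOCKS + 1)]
accesses = random.choices(range(N_BLOCKS), weights=weights, k=N_ACCESSES)

# The "flash tier" holds only the hottest 10% of blocks.
flash = set(range(N_BLOCKS // 10))
hits = sum(1 for block in accesses if block in flash)

print(f"flash holds 10% of blocks, serves {hits / N_ACCESSES:.0%} of accesses")
```

With these (hypothetical) parameters, a flash tier holding one tenth of the blocks absorbs roughly three quarters of the accesses, which is the intuition behind serving most IOPS from a small flash layer, whether the product does it via caching or via automated tiering.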
All these characteristics lead to a lower TCO, and thanks to the commodity hardware used in the servers it can also be easier to obtain a better TCA.
About Maxta (and Bottom line)
Maxta impressed me because, on paper, they have all the characteristics listed above. The product targets medium-sized companies, and their VSA also allows for a very interesting migration path away from traditional storage. It’s too early to comment on the product before a lab test, but it surely deserves a look.
Maxta is not alone; others are working hard on similar solutions. Nutanix and SimpliVity are doing pretty well, for example (theirs are more radical approaches that involve hardware too, but you can easily find a VSA at the base of their products), HP has its own StoreVirtual VSA, and VMware (with VSAN) is in the game too… to name only a few.
Solutions in this space are popping up like mushrooms and it’s going to be an interesting time… It’s obvious that for most end users this is the best approach, and I’m curious to see how traditional vendors will react to it.
You can also read this interesting article from Chris Evans about the topic.