I’m sure you already know that Nimble has just launched a new hybrid array capable of addressing more flash memory. At the same time, other players, Tegile for example, are presenting similar solutions or All-Flash Arrays that can be expanded with hard disks.

This is a slight change compared to what has happened in the past: we are moving from a 1:10 flash/HDD ratio to 3:10 or, sometimes, 5:10. What does this really mean?

Efficiency for everyone

Even taking for granted all the data footprint reduction techniques available today, All-Flash Arrays are still more expensive than hybrid arrays. Compression and deduplication help, but they help AFAs and hybrid arrays in exactly the same way.

$/GB is going down for everyone, but data growth is outpacing storage price decreases. We are now reaching the magical $2/GB, which some vendors consider the right price to be competitive with traditional storage systems… but that is not the point.

First of all, $2/GB is usually associated with traditional arrays based on hard disk drives, and in most cases the $2/GB tipping point for an AFA is attainable not because flash memory is better priced, but because of the improved efficiency with which it is used (and not many AFAs are priced at $2/GB today…).
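To see how efficiency, rather than cheaper media, gets an AFA to that tipping point, here is a minimal back-of-the-envelope sketch. The function and all the numbers are hypothetical, not vendor figures: it simply divides raw media cost by the data-reduction ratio achieved through deduplication and compression.

```python
# Illustrative sketch: effective $/GB after data footprint reduction.
# All figures below are hypothetical examples, not real vendor pricing.

def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Raw media cost divided by the data-reduction ratio (dedupe + compression)."""
    return raw_cost_per_gb / reduction_ratio

# A hypothetical AFA at $8/GB raw media cost needs a 4:1 reduction
# ratio to land at an effective $2/GB:
print(effective_cost_per_gb(8.0, 4.0))  # → 2.0
```

The same arithmetic also shows why reduction helps AFAs and hybrids "in exactly the same way": the ratio divides whatever the underlying media cost happens to be.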

Furthermore, comparing a traditional array (“traditional” here meaning old technology, few features, and poor efficiency) with a next-generation array is not the right comparison to make. And although I could totally agree with that, not all modern features are applicable to hard-disk-based systems. Most of them are implemented and usable when it comes to hybrid arrays: in this particular case, the flash memory layer enables the deployment of data footprint reduction features (e.g. in-line deduplication or compression) and provides a consistent performance layer.

From my POV, when we talk about hybrid arrays the problem doesn’t come from average performance but from its consistency and predictability. In a hybrid storage array, every time a data block is needed from outside the SSD layer, the system has to pick it up from slower media, introducing latency and a different kind of behavior (even if it happens only for a few moments). Certain kinds of synchronous workloads can be very badly affected by this unpredictability.
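The gap between a good-looking average and poor predictability is easy to show with a toy model. The function and latency figures below are assumptions for illustration (0.5 ms flash, 8 ms HDD service time): the average read latency looks fine at high flash hit ratios, yet every miss still experiences the full HDD latency.

```python
# Toy model: expected read latency in a hybrid array.
# flash_ms and hdd_ms are assumed, illustrative service times.

def avg_latency_ms(hit_ratio, flash_ms=0.5, hdd_ms=8.0):
    """Expected read latency: blend of flash hits and HDD misses."""
    return hit_ratio * flash_ms + (1 - hit_ratio) * hdd_ms

for hit_ratio in (0.90, 0.95, 0.99):
    print(hit_ratio, round(avg_latency_ms(hit_ratio), 3))
```

Even at a 99% hit ratio the average stays well under 1 ms, but 1 in 100 reads still takes ~8 ms, which is exactly the inconsistency that hurts synchronous workloads.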

Back to $/GB and why more flash

While $2/GB is a really good price for a high-performance array serving tier 1 applications, this price level is no longer aligned with average real enterprise demand. Enterprises want more (space) for less (money), and All-Flash Arrays are still far from what a hybrid array can achieve in terms of $/GB.

Hybrid storage vendors are adding more flash to their products with the intent of becoming more predictable (hence more interesting for tier 1 use cases), but the hard disk layer is still important to maintain a relatively low price.
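The cost trade-off of growing the flash layer can be sketched as a simple weighted average. The media costs below ($8/GB flash, $0.50/GB HDD) are hypothetical placeholders; the point is only the direction: as the flash/HDD ratio moves from 1:10 toward 3:10, blended $/GB rises, which is why the hard disk layer still matters for price.

```python
# Sketch: blended media $/GB for a hybrid array, by capacity fraction on flash.
# flash_cost and hdd_cost are assumed, illustrative $/GB figures.

def blended_cost_per_gb(flash_fraction, flash_cost=8.0, hdd_cost=0.5):
    """Capacity-weighted media cost of a flash + HDD mix."""
    return flash_fraction * flash_cost + (1 - flash_fraction) * hdd_cost

# Flash fraction of total capacity at 1:10 vs. 3:10 flash/HDD ratios:
for flash, hdd in ((1, 10), (3, 10)):
    fraction = flash / (flash + hdd)
    print(f"{flash}:{hdd} ->", round(blended_cost_per_gb(fraction), 2), "$/GB")
```

Under these assumed costs the blended price roughly doubles going from 1:10 to 3:10, showing why more flash buys predictability at the expense of $/GB.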

Why it is important

AFAs are the preferable choice when looking for the best $/IOPS, and their $/GB is getting lower. On the other hand, hybrid solutions are getting better performance numbers thanks to the addition of a bigger flash layer, and their prices are hitting new lows thanks to bigger hard drives.

The predictability of hybrid arrays is still an issue, even when the flash layer is bigger, but with the proper software features (like QoS management, for example) it can be contained or avoided altogether. (The problem here is that QoS is still not implemented properly by the majority of vendors.)

The cost of flash memory is going down faster than in the past, but at the moment it still cannot compete with hard disk prices. It’s hard to predict when/if the two will cross over, but it’s not likely to happen any time soon. In any case, when it does happen, hybrid will eventually become AFA… though it’s not clear whether it will maintain a two-tier approach (with MLC and TLC, for example) or not.