Are you going All-Flash? Nah, the future is hybrid

But it will be an All-Flash Hybrid. And, you know, this can’t really be called a prediction; it’s just a fact. Look around: a large share of vendors are eager to get their hands on Intel’s 3D XPoint memory, and it will be the next tier 0 (or cache) for many newly designed storage arrays.

3D XPoint in short

Intel 3D XPoint is a new class of memory that sits between RAM and Flash. It’s persistent (like Flash), fast (not as fast as RAM, but much faster than Flash), dense (again, not as dense as Flash, but denser than RAM) and durable (more than Flash, less than RAM). And, of course, its price will be higher than today’s flash chips but lower than RAM. An interesting video about 3D XPoint memory was recorded at SFD8.

This makes the new non-volatile memory a good fit as a landing zone for data in the storage array, where it can easily be reorganized, compressed, deduplicated and so on before being flushed to cheaper flash memory.
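To make the landing-zone idea concrete, here is a minimal Python sketch of the pattern described above. It is purely illustrative: the class name, the in-memory dictionaries standing in for the 3D XPoint tier and the flash backend, and the use of SHA-256 fingerprints for dedupe are all my assumptions, not any vendor’s actual design.

```python
import hashlib
import zlib


class LandingZone:
    """Hypothetical sketch: writes land in a fast persistent tier,
    where they are deduplicated and compressed before being flushed
    to a slower, cheaper flash backend."""

    def __init__(self):
        self.staged = {}  # fingerprint -> compressed block (the fast "3D XPoint" tier)
        self.flash = {}   # fingerprint -> compressed block (simulated flash backend)

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        # Duplicate blocks are detected here and never hit flash at all.
        if fp not in self.staged and fp not in self.flash:
            self.staged[fp] = zlib.compress(block)  # compress once, in the fast tier
        return fp

    def flush(self):
        # One batched pass to flash, saving write cycles on the backend.
        self.flash.update(self.staged)
        self.staged.clear()

    def read(self, fp: str) -> bytes:
        blob = self.staged.get(fp) or self.flash[fp]
        return zlib.decompress(blob)
```

The point of the sketch is the ordering: expensive data services (dedupe, compression) run in the fast persistent tier, so the flash backend only ever sees reduced, batched writes.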

And I wouldn’t be surprised to see a new class of products taking advantage of this medium. For example, think about what a company like Diablo Technologies could do by applying its technology to this kind of chip! (And in case you’re not familiar with Diablo yet, tune in for their session at Tech Field Day 10 next week. Knowing them, it will be worth watching.)

3D XPoint advantages in practice

I’m risking some oversimplification here, but 3D XPoint is not just about better IOPS and latency; it will have three major impacts on storage arrays.

  1. Higher density: it will be possible to implement a very dense backend, much more focused on durability and reliability than today’s. This doesn’t mean that today’s AFAs are unreliable, but tomorrow’s arrays will be able to use relatively slower write speeds, fewer I/O channels and, consequently, denser chips and flash modules to obtain similar results. Reads probably won’t be affected much, and read speeds will stay in the same range as current flash modules.
  2. Lower $/GB: this is just a consequence of the previous point. Using denser, and possibly cheaper, flash will drive down prices. Today it’s TLC but, sooner or later, someone will be producing QLC, and the efficiency introduced by a 3D XPoint layer at the front-end will be fundamental to preserving write cycles and performance in the backend.
  3. Faster NVMe-oF adoption: great latency is useless unless it can be taken advantage of!
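Some back-of-the-envelope arithmetic shows why point 3 matters. All the latency figures below are my own order-of-magnitude assumptions, not measured numbers: with NAND, a heavy legacy transport stack “only” doubles end-to-end latency, but with a medium as fast as 3D XPoint the same stack would dominate completely, which is why a leaner transport like NVMe over Fabrics becomes urgent.

```python
# Hypothetical, order-of-magnitude latencies in microseconds; real figures vary.
NAND_FLASH_US = 100.0  # typical NAND read latency (assumed)
XPOINT_US = 10.0       # 3D XPoint, roughly an order of magnitude faster (assumed)
LEGACY_STACK_US = 100.0  # legacy SCSI-era transport/protocol overhead (assumed)
NVMEOF_US = 10.0         # leaner NVMe-over-Fabrics overhead (assumed)


def end_to_end(media_us: float, transport_us: float) -> float:
    """End-to-end latency as seen by the host: media plus transport."""
    return media_us + transport_us


# NAND behind a legacy stack: the stack doubles latency, bad but tolerable.
flash_legacy = end_to_end(NAND_FLASH_US, LEGACY_STACK_US)   # 200 µs total

# 3D XPoint behind the same legacy stack: the transport is now 10x the media,
# so most of the new medium's advantage is thrown away.
xpoint_legacy = end_to_end(XPOINT_US, LEGACY_STACK_US)      # 110 µs total

# 3D XPoint behind NVMe-oF: the host finally sees the media's speed.
xpoint_nvmeof = end_to_end(XPOINT_US, NVMEOF_US)            # 20 µs total
```

Under these assumed numbers, swapping the media alone buys less than a 2x improvement; swapping both media and transport buys 10x.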

Closing the circle

Many vendors are already using NV-RAM in their arrays but, usually, it is a component built out of standard DIMMs and capacitors. It’s very expensive and small in terms of capacity (because of the components needed to keep RAM persistent through a power outage, and the size of the RAM chips themselves).

3D XPoint will be able to bring a much larger capacity to the non-volatile memory layer of these systems, enabling bigger caches and efficient tiering mechanisms.
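As a rough illustration of the “bigger cache” idea, here is a minimal two-tier sketch in Python: a capacity-limited non-volatile cache in front of a flash tier, with the coldest blocks demoted on overflow. The class name, the LRU policy and the dictionary-based tiers are all hypothetical simplifications of what a real array would do.

```python
from collections import OrderedDict


class XPointCache:
    """Minimal sketch of a large non-volatile cache in front of flash.
    Because the cache medium is assumed persistent (like 3D XPoint),
    dirty data can sit here without DIMM+capacitor NVRAM hardware."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order
        self.flash = {}             # backing flash tier

    def write(self, block_id: str, data: bytes):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)  # mark as most recently used
        while len(self.cache) > self.capacity:
            # Demote the coldest block to the flash tier.
            cold_id, cold_data = self.cache.popitem(last=False)
            self.flash[cold_id] = cold_data

    def read(self, block_id: str) -> bytes:
        if block_id in self.cache:            # hit: serve from the fast tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.flash[block_id]           # miss: fetch and promote
        self.write(block_id, data)
        return data
```

With a much larger (and cheaper-per-GB) persistent cache, the hit rate of this front tier grows, which is exactly the advantage a hybrid design would get from 3D XPoint.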

Looking at how modern hybrid systems are designed, I’m wondering if some vendors, like Nimble or Tegile, will be in a better position than AFA vendors when this new medium is available. Next February I’ll be meeting with both Pure Storage and Nimble Storage, and questions about how they plan to implement 3D XPoint in their systems will be at the top of my list!

At the end of the day, flash memory is just a fast medium, not a synonym for primary storage… and tomorrow, your next “hybrid” array will be better than your current AFA.

If you want to know more about this topic, I’ll be presenting at the next TECHunplugged conference in Austin on 2/2/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!

Independent Analyst, trusted advisor and blogger (not necessarily in that order). Having been immersed in IT environments for over 20 years, Enrico's career began with Assembler in the second half of the '80s before moving on to UNIX platforms (but always with the Mac at heart) until now, when he joined "Cloudland". He constantly keeps an eye on how the market evolves and is always looking for new ideas and innovative solutions. He's a fond sailor and unsuccessful fisherman. You can find Enrico's social profiles here:

  • Storage Alchemist

    Hey Enrico – I have been watching this post for a while – we recently did a post that talks to a topic very similar to this. I would be curious what you thought of this –

  • Hi Steve,
    thank you for chiming in.

While I agree with some parts of the article, I have to say that it’s a bit of a stretch in many respects.

On the positive side: looking at the datacenter today, how much of it is primary storage? 10–15% (maybe less). For that slice, end users want low latency and predictability. If you can deliver that with a less expensive hybrid system thanks to better engineering, that’s great, and I’m with you. Likewise, if you can serve that 10% plus additional tier-1.5 workloads from a single system, because you cost less than an AFA and still guarantee some sort of QoS (no matter how you do it), I’ll love you for the better efficiency and consolidation.

    But then you lose me when:

1) Doing comparisons against VMAX is not totally fair: comparing your products, designed 3/4 years ago, with one that has more than 25 years on its shoulders doesn’t make sense. You are competing in the same space, but the reasons to buy a VMAX today aren’t the same ones that drive your sales… are they? In fact, even EMC is cannibalizing the VMAX installed base with newer, more interesting and less expensive products (XtremIO, for example). How many of your prospective customers are evaluating alternatives to their old VMAX?

2) Any time a vendor in the storage industry mentions Google (or FB, Amazon, Azure or other hyper-scale cloud providers) to justify its design choices, I feel that there is something wrong (totally wrong).
Design choices made at Google are far removed from any other environment and, in most cases, their needs aren’t comparable to anyone else’s…

    I hope this makes sense. 😉