So, after months of rumors and heightened anticipation, EMC's Project Lightning has seen the light! I was very thrilled about this announcement, but it's not as exciting as I was expecting; we will probably need to wait for the next version to see more.
New paradigms ahead
Next-generation scale-out infrastructures are looking at new ways to store data closer to the CPU: no longer in vertical silos, but dispersed across the cluster, next to the compute power.
At the same time, many storage vendors are working hard on the opposite approach, trying to bring CPUs closer to the data.
The final goal is the same: find new (and cheaper) ways to push up IOPS and throughput while shrinking latencies (the dream of every infrastructure architect).
If you can design a new architecture from scratch, without any constraints (think of a huge cloud infrastructure or a Big Data cluster), the first approach is probably the best; but if you have to deal with traditional data and enterprise applications, the latter could be better.
Project Lightning
The EMC announcement is important because they have released a PCIe card (installed in your server) that actively caches data to speed up your I/O activity.
When I first read about this months ago, I thought it would be a disruptive technology but, as often happens with EMC, they worked a lot on marketing and a little less on R&D.
This is only a 0.x version of the product and it reminds me of all the hype they generated around FAST v1 some time ago: the problem is the same, the idea is good, but the implementation brings so many constraints and limitations that the result is nothing special and adds more complexity than advantages.
Obviously you can't use it as a write cache (actually you can, but it's very dangerous) because it's installed in a single server: data is promoted to the cache when it's frequently accessed. (@Chrismevans, aka The Storage Architect, wrote on Twitter this morning: "@Chris_Mellor Reading your review, the restrictions of Lightning make me think customers would be better off with simply more RAM" – here's the review from Chris Mellor) … and I couldn't agree more.
In fact, the backend array is not involved in any caching operation performed by the PCIe card installed in the server. This means that the cache card could theoretically be used with any vendor's array, but also that the Virtual FAST cache is really "virtual" (i.e. neither the backend cache nor the FAST algorithms are actually involved). There is also a "split-card mode" where you can use part of the card as local storage (like many other DAS flash cards).
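To make the "read cache only, promoted by access frequency" point more concrete, here is a minimal, purely illustrative sketch: it is not EMC's implementation, and every name, threshold and API in it is hypothetical. The idea is simply that hot blocks get promoted to the server-local flash card after a few reads, while writes go straight to the array because the card has no coherency with it.

```python
# Illustrative host-side flash read cache with frequency-based promotion.
# Hypothetical sketch only: names and thresholds are invented, not Project Lightning code.

class HostFlashReadCache:
    def __init__(self, backend, promote_after=3, capacity=1024):
        self.backend = backend          # backend array: the source of truth
        self.cache = {}                 # block_id -> data held on the local card
        self.read_counts = {}           # block_id -> recent read count
        self.promote_after = promote_after
        self.capacity = capacity

    def read(self, block_id):
        # Serve hot blocks from the local flash card.
        if block_id in self.cache:
            return self.cache[block_id]
        data = self.backend.read(block_id)
        # Promote a block once it has been read often enough.
        self.read_counts[block_id] = self.read_counts.get(block_id, 0) + 1
        if self.read_counts[block_id] >= self.promote_after and len(self.cache) < self.capacity:
            self.cache[block_id] = data
        return data

    def write(self, block_id, data):
        # Writes go straight to the array: the card lives in one server, so
        # caching writes locally would risk data loss (or stale reads from
        # other hosts) because there is no coherency with the backend.
        self.backend.write(block_id, data)
        # Invalidate any locally cached copy so later reads stay correct.
        self.cache.pop(block_id, None)
        self.read_counts.pop(block_id, None)
```

Seen this way, the "just buy more RAM" comparison above is fair: without coherency or array integration, the card behaves like a big, local, read-only cache tier.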
If you add the limitations around clusters (e.g., it seems you can't use VMware's vMotion!), you get the full picture of what I mean when I say: few features and many limitations.
Buy today, get tomorrow
The most important features are yet to come: you need to trust EMC. They are already working on the real Virtual FAST Cache, which will bring tighter integration with their arrays, a distributed coherency mechanism, and some management features. Only time will tell when we will see these features, at what cost, with how much complexity, and so on.
Bottom line
You know, EMC's marketing is very strong: they haven't announced anything special today, but they have created a lot of expectations for the real product (which, sooner or later, will be delivered). The important message here is that EMC is seriously playing in the SSD space. Now I'm very curious to see the reaction of the other big players and what the smaller ones will do to reaffirm their technological edge (my mind goes to Fusion-io).