Over the last two or three years I’ve talked a lot about “Flash & Trash”: a two-tier storage strategy designed to cope with the growing demand for high IOPS and low latency from primary applications on one side, and for high capacity (sometimes associated with throughput) on the other. How much you need it depends on the type of applications you are running, but also on the fact that it is quite difficult to find a good balance between $/IOPS and $/GB on a single system.

The problem of $/GB

There are two kinds of $/GB: TCA (total cost of acquisition – aka the price tag) and TCO (total cost of ownership – aka price tag + additional money you spend to run the system throughout its life).

The TCA is quite easy to understand (look at the bottom of the vendor’s quote and you’ll find it). TCO is a totally different story and can change drastically depending on the end user, the environment (datacenter cooling is cheaper at northern latitudes than at the equator), the location (a square meter in central London costs more than one in the desert) and so on.


For example, if you run the IT infrastructure of a university, you’ll probably have access to a lot of manpower (in the form of students and researchers)… in this case it’s probable that your TCO won’t be affected by the cost of sysadmins, and you’ll be able to choose a less automated or less efficient system (but one with a lower TCA). At the same time, it’s highly likely that a private company in the same area, which pays much more for its sysadmins, will choose differently.
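
Just to make the idea concrete, here is a toy $/GB model (all the numbers and parameters below are hypothetical, purely for illustration): the same price tag can translate into very different lifetime $/GB once power, floor space and sysadmin time are added.

```python
# Toy TCO model with hypothetical numbers: the same TCA can lead to very
# different $/GB once local costs (power, space, admin time) are added
# over the life of the system.

def dollars_per_gb(tca, capacity_gb, years, power_kw, kwh_cost,
                   rack_units, cost_per_ru_year,
                   admin_hours_year, admin_hourly_rate):
    """Return ($/GB TCA, $/GB TCO) over the system's lifetime."""
    energy = power_kw * 24 * 365 * years * kwh_cost        # power (rough proxy for cooling too)
    space = rack_units * cost_per_ru_year * years          # datacenter floor space
    people = admin_hours_year * admin_hourly_rate * years  # sysadmin time
    tco = tca + energy + space + people
    return tca / capacity_gb, tco / capacity_gb

# Same 1 PB system, two environments: cheap manpower and space vs. expensive ones.
print(dollars_per_gb(300_000, 1_000_000, 5, 4.0, 0.12, 40, 300, 100, 15))
print(dollars_per_gb(300_000, 1_000_000, 5, 4.0, 0.25, 40, 900, 300, 80))
```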

This is less true for primary storage though, where you expect the best possible predictability from the system, resources are sized differently and TCO is driven by different key factors, such as resiliency, availability and the like. But the further we move towards capacity-driven storage, the less you expect in terms of absolute performance consistency and, consequently, features. One of the most vivid examples here is Amazon Glacier: storing data safely is very inexpensive, but when you need it back you have to wait between 3 and 5 hours… which forces you to run some sort of asynchronous retrieval job, and that can add further costs on top of what you are already paying for the wait.
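
To give an idea of what that asynchronous job looks like in practice, here is a minimal boto3 sketch (the vault name, archive ID and output file are placeholders): you submit a retrieval job, come back hours later, and only then download the data.

```python
# Minimal sketch of a Glacier archive retrieval with boto3.
# The vault name and archive ID are placeholders.
import time
import boto3

glacier = boto3.client("glacier")

job = glacier.initiate_job(
    accountId="-",  # "-" means the account of the current credentials
    vaultName="my-vault",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": "my-archive-id"},
)

# Retrieval typically takes hours; poll the job (or use an SNS notification).
while not glacier.describe_job(accountId="-", vaultName="my-vault",
                               jobId=job["jobId"])["Completed"]:
    time.sleep(30 * 60)  # check every 30 minutes

output = glacier.get_job_output(accountId="-", vaultName="my-vault", jobId=job["jobId"])
with open("restored-archive.bin", "wb") as f:
    f.write(output["body"].read())
```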

Finding the best TCO

One of the reasons for the success of some large-scale storage systems (and of cloud storage) is their low TCO at scale. It’s not uncommon, when you talk with vendors like Scality or Cleversafe, for example, to hear that the average number of sysadmins needed to manage their systems is in the range of one per several petabytes.


There are many reasons behind these numbers (architecture design, automation, data protection efficiency and more), all of which contribute to changing the way you deal with the storage system. On the other hand, the vast majority of enterprises are not ready to change their operations accordingly. If you manage a large-scale storage system built from several thousand disks, a disk or node failure shouldn’t be a concern: you’ll have plenty of time to replace the failed component, and in some cases a single planned datacenter trip per month covers all the maintenance of your cluster. But, again, in most cases this doesn’t happen, because teams stick to the standard procedures in place for primary storage systems…
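
A quick back-of-the-envelope calculation shows why failures stop being emergencies at this scale (the 2% annualized failure rate below is an assumption, not a measured figure):

```python
# Back-of-the-envelope: with thousands of drives, failed disks become a routine,
# plannable event rather than an emergency. The AFR is assumed for illustration.
drives = 5000   # drives in the cluster
afr = 0.02      # assumed annualized failure rate per drive

failures_per_year = drives * afr
failures_per_month = failures_per_year / 12
print(f"Expected failures: ~{failures_per_year:.0f}/year, ~{failures_per_month:.1f}/month")
# With solid data protection, these can all be replaced in one monthly maintenance trip.
```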

Make storage invisible

Shrinking $/GB is not easy but it is possible by adopting the right technology and leveraging it with the right procedures. And that’s not all…

In the last few years I’ve seen many enterprises adopt a “Flash & Trash” strategy but, although I usually describe it as a coordinated effort to make the storage infrastructure sustainable, most of them started to implement the second tier quite literally from trash!


Software-defined solutions have helped a lot here, with ZFS and Ceph leading the pack and, lately, Microsoft Storage Spaces showing up more often in the field too. My finding is that if your organization is not too rigid and not too large, it is not difficult to repurpose old servers (sometimes decommissioned servers from other projects) to build a storage cluster.

The cluster will be unbalanced and performance certainly won’t be its primary goal… but, thanks to the characteristics of the software, it will be as reliable as any other storage system.

Bringing Hardware TCA down to 0

If the software is good enough to give you all the features you need to obtain a good TCO, then using old hardware is a much smarter move than you might think.

Let’s talk about Ceph for example (but it could be any other software solution):

  1. It is built from the ground up to work on Linux and commodity hardware (no kernel driver issues).
  2. Its architecture allows data placement to be balanced according to the available resources.
  3. Data protection is extremely robust and the system can be configured to survive several simultaneous failures (see the sketch after this list).
  4. It is highly tunable and many vendors offer interesting acceleration options (like SSD-based caching).
  5. It can start small and grow practically without limits.
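
As a rough illustration of point 3, the sketch below compares the usable capacity of the same raw cluster under 3-way replication and under an erasure-coded layout in the style of a Ceph k=4, m=2 profile; both tolerate two simultaneous failures, but the capacity overhead is very different (the raw capacity figure is made up).

```python
# Illustrative comparison of usable capacity under two data protection schemes
# on the same raw capacity (numbers are made up for the example).
raw_tb = 500.0  # raw capacity of the repurposed cluster

# 3-way replication: every object stored three times, survives 2 failures.
replicated_usable = raw_tb / 3

# Erasure coding k=4, m=2: 4 data + 2 coding chunks per object, also survives
# 2 simultaneous failures, but with only 1.5x overhead.
k, m = 4, 2
ec_usable = raw_tb * k / (k + m)

print(f"3x replication: {replicated_usable:.0f} TB usable")
print(f"EC k=4, m=2:    {ec_usable:.0f} TB usable")
```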

If we add that the hardware comes for free, this is not a bad deal. Furthermore, since this hardware is out of support, you are free to use third-party options to make it more suitable for storage purposes. I’ve seen end users buy off-the-shelf hard drives and put them into old servers (in most cases, if you buy a larger version of the same disk you originally found in the server, it’s highly likely it will work like a charm). You’ll also discover that buying disks directly from WD or Seagate is much cheaper than buying them from the server vendor.

And you know what? This skunkworks approach can save you money over the life of the repurposed hardware too. Many components are reusable (no matter the brand of the server), and the disks are under warranty, so when they fail it’s likely you’ll get a replacement without spending additional money and without support contracts. In some very radical cases, users choose consumer-grade disks for their configurations without caring too much about MTBF, relying instead on a larger number of nodes for data protection.

I’ve seen end users grow from a very small lab cluster (just to try it out) up to more than 1PB before starting to consider it an important part of their infrastructure…

Invisible doesn’t mean irrational

In most cases these storage infrastructures started out for testing, development, and storing the copy of the copy of the copy of some data. Over time they start to host logs, backups, a large-file transfer service, secondary file servers and so on.

Storage becomes so cheap, almost free, that when users realize it they ask for more. That’s what usually happens: at a certain point this storage cluster becomes much more important than planned and you need to take it more seriously.

Even though most of the issues I’ve recorded from these clients are about performance (which is not too critical a problem in this scenario), I have heard of more serious ones. If the cluster is off the radar you can’t blame anyone but yourself… it’s invisible, after all it doesn’t officially exist. But before reaching that point you can make the cluster more “enterprise grade” and save your butt in case of a problem.


Once you have realized that this storage infrastructure has some importance, there are two fundamental steps to take:

  1. Software support (some sort of support contract from the software vendor). Especially for open-source-based software, this can also add some nice management features usually not included in the community version.
  2. Better hardware. Rationalize the cluster configuration with newer, and perhaps better, hardware. Hardware support is not strictly necessary if the cluster is large enough to sustain multiple concurrent failures.

It will cost you money, but since your organization will finally have understood how to deal with this kind of system, it will remain much cheaper than any other storage system you know.

Closing the circle

Invisible storage is not for everyone. I’ve seen it done several times, especially in small to mid-sized enterprises (1,000-5,000 employees) with relatively small and agile IT organizations, but I’ve never had the chance to see it in larger, more structured organizations. I have only a limited view of this and I’d like to know if you’ve had a different experience. Feel free to leave a comment or send me a message if you have.

The number of solutions available to build an invisible storage infrastructure is growing: ZFS, Storage Spaces, Ceph and, lately, some vendors are becoming friendlier from this point of view by giving away their software for free and charging only for an optional support contract (SwiftStack is a great example here).

Last but not least, on a similar wavelength, I really love NooBaa. I wrote about them recently and I think they interpret this concept of “invisible storage” very well. They actually go a step further, allowing you to start your invisible storage infrastructure from unutilized storage resources already available in your network. A nice idea.

And if you don’t like the term “invisible storage” you can call it storageless storage… If you can do #serverless, I can’t see why you can’t also do #storageless (I love these nonsense buzzwords!)

If you are interested in these topics, I’ll be presenting at the next TECHunplugged conferences in Amsterdam on 6/10/16 and in Chicago on 27/10/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!

[Disclaimer: I did some work for Scality and Red Hat lately]