I spent a couple of days at a FujiFilm event for analysts, partners and customers, where I also gave a talk on what I was referring to as “invisible storage” (now I think of it as “shadow storage”, which sounds more appropriate since it resembles shadow IT to a certain extent).

My talk was pretty much along the same lines as what I’ve been writing about in the last couple of months: the challenges of storage sustainability in terms of TCA and TCO, and what I found in the field during research I conducted for a client of mine. But there are a few things worth a mention that could help end users better define their future storage strategy.

The All-Flash Datacenter is still a dream

In 2015 I used this slide in many of my presentations, and this year I updated it. The fun thing is that yes, the $/GB of Flash has dropped big time, but if we look at raw $/GB (without considering any form of compression) Flash (4TB SAS SSD) is still around $1.61/GB, HDD (8TB SMR) is at $0.029/GB and tape (LTO-7) can be as low as $0.008/GB (note: quoted prices do not include discounts). It’s clear that it will take some time for Flash to become as competitive as disk in terms of pure capacity… but eventually, looking at the various roadmaps, the gap will continue to narrow quickly. At the same time, the $/GB of tape will remain competitive for a very long time.
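
To make the gap concrete, here is a back-of-the-envelope comparison (a minimal Python sketch using the raw list prices above) of what one petabyte of raw capacity costs on each medium, media cost only, ignoring drives, controllers, power and so on:

```python
# Raw media cost of 1 PB at the quoted list prices: no compression,
# no deduplication, no discounts, and the hardware around the media
# (drives, controllers, libraries) is deliberately ignored.
COST_PER_GB = {
    "Flash (4TB SAS SSD)": 1.61,
    "HDD (8TB SMR)": 0.029,
    "Tape (LTO-7)": 0.008,
}

PETABYTE_GB = 1_000_000  # 1 PB in GB (decimal units)

for medium, cost in COST_PER_GB.items():
    print(f"{medium:>20}: ${cost * PETABYTE_GB:>12,.0f} per PB")
```

At these prices the same raw petabyte costs roughly $1.6M on Flash, $29K on disk and $8K on tape, which is the whole argument in three numbers.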

Some of the clients presenting at the conference have shifted in the last couple of years from the “Swiss Army knife” approach (a single storage system type doing everything) to a tiered strategy, with a Tier 1 primarily made of very fast Flash-based storage and a second tier built of disk, tape and cloud (which, again, is based on Flash, disk and tape). This happened because the $/GB was unsustainable, but also because applications and workloads need to be addressed properly, both in terms of performance consistency and service availability.

The two tiers are defined not by the media chosen but by the type of workloads they serve and, consequently, by the type of storage system and protocol able to deliver the expected performance.

S3 is the protocol for secondary storage

One of the most interesting things, and I won’t say “I told you so!” :), is that a large number of end users are also looking at S3 as the back-end protocol for many of their tier-2 applications, no matter whether it’s backup, archive or a file share.
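
To show what this looks like in practice, here is a minimal sketch (bucket, key and endpoint names are hypothetical; boto3 is the AWS SDK for Python, but any S3-compatible object store exposes the same calls) of S3 as a plain tier-2 backup target:

```python
# A backup job writing to, and restoring from, an S3-compatible store.
# The endpoint could be a public cloud region or an on-premises object
# store; the API is the same either way.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# The backup job pushes its output as an object...
with open("db-backup.tar.gz", "rb") as f:
    s3.put_object(Bucket="tier2-backups", Key="db/2016-09-16.tar.gz", Body=f)

# ...and a restore is just a GET on the same key.
obj = s3.get_object(Bucket="tier2-backups", Key="db/2016-09-16.tar.gz")
data = obj["Body"].read()
```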

S3 is also becoming more of an option when it comes to tape access, much more so than LTFS. The automatic tiering mechanisms implemented by object storage vendors are a great help here, also making it simpler to move data around in the back end, in theory at least.
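
The tiering itself is usually expressed as a lifecycle policy on the bucket. As a hedged example (bucket name, prefix and timings are hypothetical, and on-premises object stores use their own storage-class names in place of AWS’s GLACIER), a rule demoting cold objects after 30 days looks like this with boto3:

```python
# Lifecycle rule: objects under "archive/" move to a colder (e.g.
# tape-backed) storage class 30 days after creation. The application
# keeps talking plain S3; the demotion happens behind the scenes.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="tier2-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "demote-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```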

Tapes are for Petabytes and Cloud

How much data do you need to store, safely and for a long time? Well, the more you have, the more tape makes sense. It sounds trivial, but that’s the way it is. Tape is a large/hyper-scale datacenter thing now, and some of the use cases I’ve heard mentioned here are quite advanced and not applicable if you are an average end user.

Tape has a couple of advantages over any other medium: it’s very reliable over time and it doesn’t consume power when not in use. And all of this without considering the most important aspect: the lowest $/GB.

The problem with tape hasn’t changed either: slow first access to data, with a seek time (including tape loading) that can exceed a minute… which still makes it suitable only for near-line or off-line data repositories.
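
A quick calculation shows why that fixed cost matters (the figures are assumptions for illustration: roughly 60 seconds of load-plus-seek, and LTO-7’s 300 MB/s native streaming rate):

```python
# First-byte latency vs. streaming throughput for tape recalls.
# The fixed load+seek cost dominates small recalls, which is why
# tape fits near-line/off-line tiers rather than random access.
LOAD_AND_SEEK_S = 60    # assumed load + seek time, seconds
STREAM_MB_S = 300       # LTO-7 native transfer rate, MB/s

for size_gb in (1, 100, 2000):
    transfer_s = size_gb * 1000 / STREAM_MB_S
    total_s = LOAD_AND_SEEK_S + transfer_s
    overhead_pct = LOAD_AND_SEEK_S / total_s * 100
    print(f"{size_gb:>5} GB recall: {total_s:7.0f} s total, "
          f"{overhead_pct:4.1f}% of it waiting for the first byte")
```

Recalling a single gigabyte spends almost 95% of the time just getting to the data, while a 2TB streaming restore barely notices the minute of seek time.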

At the end of the day, even though it has its limits, tape still has a place in large-scale datacenters, and a major one according to customers with large multi-petabyte environments… especially when reliability and $/GB are the most important features of your storage system.

Closing the circle

In case you missed it, petabyte-scale end users are still using lots of tape and they are happy with it. Any other solution for backup and long-term data retention is simply impractical at the moment. It is interesting to note, however, that the access protocol is changing, and REST protocols such as S3 are becoming quite common.

We are currently witnessing another interesting trend: deduplication and encryption are now commoditized and have moved into the backup software, while S3 repositories are becoming common backup targets (just to connect the dots). This means that traditional VTLs no longer make any sense (I’ve never been a fan of VTLs)… at the end of the day it looks like traditional VTLs will die before tape does… and do you remember who first said “tape is dead”? A VTL vendor.
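
To connect those dots in code, here is a hedged sketch of what “dedup and encryption in the backup software, plain S3 as the target” can look like. Fixed-size chunking, the in-memory index and the key handling are deliberately naive, and the chunk size and bucket name are hypothetical; it assumes boto3 and the cryptography package:

```python
# Deduplicating, encrypting backup client writing to plain S3.
# No VTL in the path: the backup software does the clever parts
# and the target is just an object store.
import hashlib
import boto3
from cryptography.fernet import Fernet

CHUNK = 4 * 1024 * 1024                 # 4 MiB fixed-size chunks
s3 = boto3.client("s3")
fernet = Fernet(Fernet.generate_key())  # real software would persist this key
seen = set()                            # dedup index (in-memory stand-in)

def backup(path, bucket="tier2-backups"):
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen:          # duplicate chunk: nothing to upload
                continue
            seen.add(digest)
            s3.put_object(Bucket=bucket,
                          Key=f"chunks/{digest}",
                          Body=fernet.encrypt(chunk))
```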

One last note goes to the future of cold storage. Disk is alive and kicking too, according to the roadmaps I’ve seen. Yes, unit shipments are falling, but not in terms of capacity, and not for large-capacity enterprise disks. More effort is being put into building bigger and relatively slower disks, as well as bigger tapes. It will be interesting to keep an eye on the evolution of technology and products in this space and to keep comparing the different $/GB figures over the next few years.

If you are interested in these topics, I’ll be presenting at the next TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!

Disclaimer: I was invited to this event by FujiFilm, who paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by anyone other than the Juku team.