In case you missed it, many backup software products already support both on-premises and public S3 repositories (object storage) as a target for backup jobs, and others will join this trend soon. Object storage is a great fit for backups, and the capabilities offered by the latest versions of available products make this type of storage a good solution for both short and long term backups.

Two pillars

As you probably already know, I'm strongly convinced that the future of enterprise storage will rest on two pillars: AFAs (and hyper-converged AFA systems) on one side, and object storage on the other (here is my recent presentation at Next Gen Storage Summit). In the future we will see many primary storage systems with some sort of (deep) object storage integration. For example, SolidFire can already back up its LUNs directly to S3- or Swift-compatible storage systems… others will soon follow.
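SolidFire's integration aside, the underlying pattern is simple: stream a LUN image to object storage as a series of fixed-size parts. Here is a minimal sketch of that idea; the function name, part-size and the in-memory `FakeS3` client are mine, not any vendor's API, but the real thing would look similar with a boto-style S3 client:

```python
import io

CHUNK = 8 * 1024 * 1024  # 8 MiB per part; an arbitrary illustrative size


def backup_volume(stream, client, bucket, prefix):
    """Stream a volume image to object storage as numbered parts.

    `client` is anything exposing put_object(Bucket=..., Key=..., Body=...),
    e.g. an S3 client pointed at an S3/Swift-compatible endpoint.
    Returns the list of object keys written.
    """
    keys = []
    part = 0
    while True:
        data = stream.read(CHUNK)
        if not data:
            break
        key = "%s/part-%06d" % (prefix, part)
        client.put_object(Bucket=bucket, Key=key, Body=data)
        keys.append(key)
        part += 1
    return keys


class FakeS3:
    """In-memory stand-in so the sketch runs without a real endpoint."""
    def __init__(self):
        self.objects = {}

    def put_object(self, Bucket, Key, Body):
        self.objects[(Bucket, Key)] = Body


client = FakeS3()
image = io.BytesIO(b"x" * (CHUNK * 2 + 5))  # pretend LUN image: 2 full parts + 5 bytes
keys = backup_volume(image, client, "backups", "lun-42/2014-09-27")
```

Swapping `FakeS3` for a real client is the only change needed to target an actual on-premises or public S3 endpoint.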

More than a VTL

VTLs (Virtual Tape Libraries) are expensive and sometimes slow. They need to be replicated (e-vaulting/DR) and they cannot match the scalability of modern object storage systems. Each of these flaws is amplified as your retention gets longer and longer (which is what actually happens in most enterprises!). In fact, it's not uncommon to find mixed backup strategies leveraging VTLs and tapes at the same time.
Long story short, VTLs work in most cases but they are no longer enough to solve next-generation data problems.

There is more: unlike a VTL, object storage is not a point solution. An object storage infrastructure can be used for many things at the same time: it's a backend platform on which to build a wide set of services, ranging from deep archiving to next-generation file services, as well as VM or DB storage!

Short term backups

A local object storage system can be built with cheap storage servers and can easily scale from a few Terabytes to many Petabytes. It can be accessed concurrently by many different backup systems and its availability (when correctly implemented) is indisputable. Multi-DC installations as well as mixed (on-premises/cloud) configurations aren't difficult to implement, allowing a great level of automation for, for example, e-vaulting or DR practices.
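The e-vaulting automation mentioned above boils down to a periodic one-way sync: copy anything the primary site has that the secondary doesn't. A toy sketch, with plain dicts standing in for two S3/Swift endpoints in different data centers (the function name and stores are hypothetical, not a real product's API):

```python
def evault(primary, secondary):
    """Copy any object present in the primary store but missing from the
    secondary one. Both stores are plain dicts here (key -> bytes); in a
    real setup they would be two object storage endpoints in different DCs.
    Returns the keys replicated on this pass, sorted for readability.
    """
    copied = [key for key in primary if key not in secondary]
    for key in copied:
        secondary[key] = primary[key]
    return sorted(copied)


dc1 = {"job1/part-0": b"aaa", "job1/part-1": b"bbb"}
dc2 = {"job1/part-0": b"aaa"}  # part-1 not yet vaulted off-site
moved = evault(dc1, dc2)
```

Run on a schedule (or triggered per backup job), this is the whole of the "automated e-vaulting" idea; real systems add checksums and versioning on top.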

Object storage systems usually don't support deduplication, but that can no longer be considered a differentiator: most modern data types are already compressed or encrypted and, in many cases, the backup software implements compression of some kind. On the other hand, not having deduplication at the storage level means that poor recovery performance due to data rehydration is no longer a problem.
Nor is absolute performance a problem any longer: many object storage vendors can show impressive throughput numbers.
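The point about compression living in the backup software is easy to illustrate: compressing client-side means objects land on the storage already reduced, and a restore only needs a cheap local decompress instead of any storage-side rehydration. A minimal sketch with Python's standard gzip module:

```python
import gzip

# Backup streams are often highly redundant, so they compress well.
payload = b"backup data " * 10000

blob = gzip.compress(payload)       # what the backup software uploads (PUT)
restored = gzip.decompress(blob)    # what a restore gets back (GET + local decompress)
ratio = len(blob) / len(payload)    # compressed size as a fraction of the original
```

Real backup products use the same idea with faster codecs (and often encryption) in the pipeline, but the division of labor is identical: the storage layer just holds opaque objects.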

Long term backups and tapes

Object storage tiering is already available from some vendors. It lets you set policies and move objects between different object storage systems or public clouds. Backups with longer retention can be transparently moved to a cheaper and/or safer place, for example a secondary object storage system configured for this purpose.
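On AWS S3 (and on many S3-compatible stacks) this kind of policy-driven tiering is expressed as a bucket lifecycle configuration. The rule below is purely illustrative (the prefix and day counts are made up): it transitions backup objects to Glacier after 30 days and expires them after a year. With boto3 you would hand it to `put_bucket_lifecycle_configuration`:

```python
# Illustrative lifecycle rule: tier objects under 'backups/' to Glacier
# after 30 days, delete them after 365. Prefix and day counts are examples.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-long-retention-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 (not executed here):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-backups", LifecycleConfiguration=lifecycle)
```

Once the rule is in place the storage itself does the moving; the backup software never has to know which tier an object currently lives on.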

HGST is building a new low-power storage system capable of storing 1PB of data in 4U, and the price tag is apparently less than $60K. This system is designed with a Write-Once-Read-Never philosophy (I know WORN is not an official acronym, but I like the idea). Just to give an example… the system I'm talking about will probably come with iSCSI ports (it's a shame they haven't implemented S3 and Swift as primary interfaces), but you'll be able to easily integrate it with an object storage head.
A full rack of these systems will offer something like 10PB of cold near-line storage for less than $600K, and power consumption is around 0.1W/TB in standby! That doesn't sound bad to me.
As I said, this is only an example… other primary vendors and startups will soon be revealing similar solutions.
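The rack-level numbers above are simple back-of-the-envelope arithmetic, which is worth spelling out (decimal units, figures taken straight from the example):

```python
# Back-of-the-envelope math for the 10PB / <$600K / ~0.1 W/TB rack example.
capacity_tb = 10 * 1000        # 10 PB expressed in TB (decimal)
price_usd = 600000             # upper bound from the example
watts_per_tb = 0.1             # standby power from the example

usd_per_tb = price_usd / capacity_tb       # cost ceiling per TB
standby_watts = capacity_tb * watts_per_tb # whole-rack standby draw
```

That works out to at most $60/TB (six cents per GB) and roughly a kilowatt of standby power for 10PB, which is why this class of system is interesting for long-retention backups.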

Why it is important

I realize VTLs have been successful, but for a guy like me, who has embraced ZFS and huge storage servers (i.e. the Sun Fire X4500) from the beginning, they sometimes look overpriced and not particularly convenient (in terms of both TCA and TCO). There are some use cases where VTLs are really functional, and until now they have been the only viable solution for certain types of backup infrastructures.
Things are quickly changing though. Object storage, when looked at as a platform and not as a single-problem solution, helps to drastically simplify storage infrastructures. In fact, this enterprise-wide storage layer (which can serve different services) is not only cheaper and more durable, but also much easier to manage, more scalable and more automated.