Today’s business needs are changing quickly. IT is in charge of the infrastructure that should serve those needs, but traditional approaches are no longer aligned with the requests. It’s not only about what works and what doesn’t, it’s about TCA and TCO: the way you buy infrastructure (storage in this case), provision it, access it and manage it.

If it is true that cost reduction is at the top of every IT manager’s list, then we should take a look at what drives that cost up and find new strategies to work on the root of the problem.

SDS and Flash are really happening

At the beginning of last week, Atlantis Computing finally released a new product based on the technology that made them very successful in the VDI space. It’s a new VSA (Virtual Storage Appliance) that uses memory as the first tier and is built for tier 1 virtualized environments… and the first performance figures look impressive.
At the same time, VMware demonstrated the ability of VSAN to reach an impressive 950K read IOPS (4KB blocks) on a 16-node cluster (using only 10% of the CPU).

Meanwhile, next generation All Flash Arrays and Hybrid Arrays have a big momentum and they all show numbers that are impossible to see on traditional primary storage systems… especially when you also compare prices.

But that’s not enough to solve your storage problems, is it?

Virtualization first!

A “long” time ago there were silos: one application stack, one dedicated infrastructure. That was due to many factors, most of them technical and some psychological. Thanks to improved technology, in recent years most organizations have tried to consolidate and rationalize their infrastructure as much as possible. Virtualization helped a lot in that process.

We still have logical silos, but the infrastructure is often shared now, and both hardware and software have introduced QoS and multi-tenancy features, helping end users to isolate one workload from another.
Legacy systems (Unix servers, for example) are not out of the DCs yet, but their importance is quickly diminishing and, nowadays, there are only a few exceptions to the “virtualization first!” rule.

None of the storage vendors talk about specific applications anymore: they all put virtualized environments in the first position. Some startups went even further, producing storage specifically designed for virtualization.

There is TCO, and then there is TCO

Most vendors have a very limited view of TCA and TCO: they only look at the TCO of the system they are trying to sell you at that precise moment. The problem is that a single system doesn’t radically change the TCO of your whole storage infrastructure (small enterprises excluded, in this case).
TCO for the whole infrastructure (measured in €/TB/year) is much more complex to calculate and involves many more factors, usually related to managing and protecting data in all its aspects.
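
To illustrate what goes into that number, here is a back-of-the-envelope sketch; every figure in it is invented for the example, and a real calculation would include many more factors (power, floor space, admin time, data protection, migrations, and so on):

```python
# Rough, hypothetical TCO sketch (all numbers are illustrative only).
def tco_per_tb_year(acquisition_eur, yearly_opex_eur, usable_tb, lifespan_years):
    """Return an approximate TCO in EUR/TB/year.

    acquisition_eur  -- purchase price of the system (the TCA part)
    yearly_opex_eur  -- power, support, admin time, data protection, etc.
    usable_tb        -- usable capacity after protection overhead
    lifespan_years   -- expected service life of the system
    """
    total_cost = acquisition_eur + yearly_opex_eur * lifespan_years
    return total_cost / (usable_tb * lifespan_years)

# Example: a 100 TB usable system bought for 150,000 EUR,
# costing 30,000 EUR/year to run, kept for 5 years.
print(round(tco_per_tb_year(150_000, 30_000, 100, 5), 2))  # -> 600.0 EUR/TB/year
```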

New challenges

Lately, things have changed radically. Many new factors have popped up in our business (and personal) lives: new types of clients, applications and access patterns. Data and people mobility, private clouds and Big Data applications are only a few examples, but they help describe the ongoing situation.

We often talk about unsustainable data growth but, at the same time, we can see a similar trend in the demand for more IOPS and lower latencies!

Fortunately, in most cases, applications demand either speed or capacity, but not both simultaneously. We live in a world where most enterprise applications need to access a relatively small subset of data as fast as possible, while others sit on a huge amount of data that is not accessed at warp speed!

Two worlds, two tiers…

On one side you have DBs, ERPs, application servers, VMs of any kind, and so on; on the other side you have all those kinds of services and applications that rely on unstructured data: file servers, archiving, mail servers, web content, content management systems, backup, you name it.
Even some business applications, like Big Data or traditional data warehouses, can fit the model by storing data in the latter tier (huge data chunks, usually read sequentially, don’t need many IOPS: throughput is the most important metric) and running jobs in the high-speed virtualized environment.
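
A quick, purely illustrative calculation (with assumed figures) shows why throughput, not IOPS, is the metric that matters for this kind of workload:

```python
# Illustrative arithmetic only: time to read a 10 TB dataset
# sequentially (throughput-bound) vs. with small random reads (IOPS-bound).
DATASET_TB = 10
BYTES = DATASET_TB * 10**12

sequential_throughput = 1 * 10**9          # assume 1 GB/s sustained sequential read
random_iops, block = 10_000, 4096          # assume 10K IOPS at 4 KB blocks

seq_hours = BYTES / sequential_throughput / 3600
rand_hours = BYTES / (random_iops * block) / 3600

print(f"sequential: {seq_hours:.1f} h, random 4 KB: {rand_hours:.1f} h")
# -> sequential: ~2.8 h, random 4 KB: ~67.8 h
```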

In many cases the concept of unified storage, as we know it today, will no longer be applicable: in time you will need more specialized storage systems to solve two different problems, one for speed and one for space.

…two storage platforms, same results

These two storage worlds solve the same problem: slashing TCA and improving TCO, but for very different reasons:

Primary storage will become more and more flash-based and, in many cases, it will disappear as we know it today. In fact, the SDS wave (storage software that runs on off-the-shelf hardware or, even better, in the hypervisor) has a disruptive potential.

Secondary storage, an object store in my vision, is the repository where you can put everything else. It is based on commodity hardware too, and on big mechanical He/SMR drives (even tapes sometimes!). Its architecture allows many concurrent accesses and it’s perfect for next generation web applications, while tier 1.5 applications can easily be served by different types of gateways.

This scenario also fits SMB environments, where a VSA can be the primary storage and a public cloud object storage service (like AWS S3 or Azure cloud storage) can be the backend platform for all the other services.
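
Just to give an idea of how simple the capacity tier can be to consume in such a scenario, here is a minimal sketch using the AWS S3 API through the boto3 Python SDK; the bucket name and file paths are hypothetical placeholders:

```python
# Minimal sketch: a public object storage service (AWS S3 via boto3 here)
# used as the capacity/backup tier in a small environment.
import boto3

s3 = boto3.client("s3")  # credentials taken from the environment or an IAM role

# Push a local backup file to the object store...
s3.upload_file("/backups/vm-images.tar.gz",
               "my-company-archive",            # hypothetical bucket name
               "backups/vm-images.tar.gz")

# ...and list what is already stored under that prefix.
response = s3.list_objects_v2(Bucket="my-company-archive", Prefix="backups/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```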

Divide and rule!

At the end of the day, an ideal architecture designed from scratch will have only two tiers.

A primary storage tier (tier 1): relatively small and able to cope with IO-intensive applications/workloads.
As mentioned before, it has some unavoidable characteristics, like flash memory for example. It has to be efficient, tightly integrated with the hypervisor, easy to use (let me say transparent!) but, more importantly, as fast as hell.
We already know where the cost of primary storage comes from, so let’s try to write down a few points that address the problem:

  • Speed: Flash has the best IOPS/$ and latency. The cost/performance ratio of PCIe flash cards is even better when they are part of the architecture.
  • Scalability: Scale-out is the way to go. If the architecture has this kind of design (and it is well implemented), scaling becomes problem free and forklift upgrades are no longer needed.
  • Ease of use: Next generation storage systems are feature-rich and really easy to administer when compared to legacy arrays. In some cases they can be managed directly from the hypervisor.
  • Integration: Integration with the hypervisor is fundamental, and the best integration is obtained when the storage system behaves like, and is as transparent as, the rest of the infrastructure.
  • Commodity HW: Commodity hardware in the sense that I buy the software and I choose the hardware.
  • Analytics: Some vendors give very good insights into what is happening with the storage system, helping the end user to get forecasts and make strategic decisions about capacity planning.
  • Software defined: A VSA can take advantage of the hardware already available in the servers of the virtualized infrastructure.
  • Convergence: Storage and network traffic should flow on the same Ethernet wires.
  • Efficiency: Flash and modern CPUs allow for the adoption of in-line data footprint reduction mechanisms (like deduplication, sketched just after this list), as well as many other features which are good for saving space and improving data volume management.
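
As an aside on the efficiency point above, the basic idea behind in-line deduplication can be sketched in a few lines; this is a toy illustration only, and real implementations are far more sophisticated (variable block sizes, metadata handling, garbage collection, and so on):

```python
# Toy illustration of in-line deduplication: store each unique block once,
# keyed by a content hash.
import hashlib

BLOCK = 4096
store = {}       # content hash -> block data
volume = []      # logical volume as an ordered list of content hashes

def write(data: bytes):
    """Split incoming data into fixed-size blocks and deduplicate them."""
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only new content consumes space
        volume.append(digest)

write(b"A" * BLOCK * 3 + b"B" * BLOCK)    # three identical blocks + one unique
print(len(volume), "logical blocks,", len(store), "unique blocks stored")
# -> 4 logical blocks, 2 unique blocks stored
```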

If you take a look at some of the new entrants in the storage space, you will find most of these characteristics in each one of them.
They all address, in one way or another, TCA and TCO, but they have a limit: the GB/$ is good but not the best in absolute terms and, if your company is also storing huge piles of data, which is happening almost everywhere, the overall TCA and TCO figures will not be very attractive.

So, you need a secondary storage tier (tier 3?) to serve your space needs.
As already mentioned above, applications in tier 1.5/2 can be served through specialized gateways. These gateways can also have a local cache to speed up access and metadata operations, minimizing traffic to the backend. In an old post, I talked about object storage as a horizontal platform to store and manage data, with gateways used to offer front-end interfaces for various services. Now, let’s take a look at the benefits of object storage in terms of TCA and TCO:

  • Space: Helium-filled and SMR hard disk drives have the best GB/$ ratio. Some object storage systems can also have a long term archiving tier on tape! In any case, scalability isn’t an issue with these systems.
  • Data protection: Huge repositories and huge disks mean a different approach to data protection. Object storage vendors are aware of this and address the problem differently than array vendors do: not only object replication but also erasure coding and multi-site distribution. For example, an object store can also be a backup target and, due to its nature, it also provides automatic electronic vaulting.
  • Geo distribution: Disaster recovery, as well as follow-the-sun applications and mobile access, are part of the product DNA.
  • Energy efficiency: Next generation hard drives can be very power efficient. These drives can be spun down when they are not in use and are good for long term archiving.
  • Commodity HW: Almost all object storage systems are pure software solutions. Some of them are also deployable as VSAs.
  • Automation: The life of objects stored in the system is usually regulated by policies: retention, versions, number of copies, access policies, and so on (a sketch follows this list). This has both a direct and an indirect impact on many aspects of the TCO.
  • Unified storage: Object storage is the best unified storage ever. Data can be accessed through APIs, traditional file sharing protocols, modern sync & share, block protocols or even scale-out file system interfaces like HDFS. Plenty of choice means freedom.
  • Integration: Many software vendors are releasing integrations with object storage systems: content management, mail, archiving, backup servers, data collection servers and so on. Each new application added on top of an object storage system automatically inherits all its benefits and becomes easier to manage.
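
To give the automation point above a concrete flavour, this is roughly what a lifecycle policy looks like when expressed against an S3-compatible API with the boto3 SDK; the bucket name, prefix and retention values are invented for the example:

```python
# Hedged example of policy-driven automation: an S3-style lifecycle rule
# that moves old backups to cold storage and expires them after a year.
import boto3

lifecycle = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Filter": {"Prefix": "backups/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],  # move to a colder tier
        "Expiration": {"Days": 365},                               # enforce retention
    }]
}

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="my-company-archive",          # hypothetical bucket name
    LifecycleConfiguration=lifecycle,
)
```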

These two lists are not definitive; I’m sure that if you look at your particular environment, keeping in mind the basic characteristics and features of both types of storage systems, you will find more evidence of what I’m saying here.

Why it matters

Ordinary storage systems are not capable of dealing with today’s data challenges, especially not with all of them at the same time!
If your target is to drive down TCA and TCO, it’s time to look around and find new innovative approaches.

The scenario described in this short article is, as the title states, ideal, and it doesn’t take into account our legacy infrastructure and applications.
In any case, targeting your ideal architecture and trying to stick to it as much as possible is probably the best approach.
If your problem is TCA and TCO, I’m sure you’ll find some interesting points in this article that you can take into consideration the next time you decide to renovate your infrastructure.

Next march I’ll be attending Next-Gen Object Storage and Next-Gen Solid State Storage Summits in LA. This event will host most of the Object and Flash storage vendors, analysts, press, industry pundits and end users. It is going to be a great opportunity to join the conversation and stay updated on the evolution of this market. I will also have the opportunity to make a presentation during the end-user day (March 5th): “Enterprise Object Storage: why it is important for you!”.

Any comment is warmly welcomed!

Disclaimer: I was invited to NGOSS/NGSSS by truthinIT and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or published by anyone other than the Juku team.