Thanks to Gartner we have a new buzzword: bimodal IT. Actually, it's nothing special, just a new way to describe common sense and the fact that the world, the IT world in this case, is not black or white.
In practice, in modern IT organizations it is better to find a way to integrate different environments instead of trying to square the circle all the time. This means that you can't apply DevOps methodology to everything, nor can you deny its benefits if you want to deploy cloud-based applications efficiently. (Gartner discovers great truths sometimes, doesn't it 😉 )
But here is my question: does bimodal IT need separate infrastructures?
Bimodal IT doesn't mean two different infrastructures
In the past few weeks I have published quite a few articles about network, storage, scale-out, and Big Data infrastructures. Most of them address a common problem: how to build flexible and simple infrastructures that can serve legacy and cloud-like workloads at the same time.
From the storage standpoint, for example, I would say that a unified storage system is no longer synonymous with multi-protocol per se; what matters much more is its capability of serving as many workloads as possible at the same time: a bunch of Oracle DBs, hundreds of VMs and thousands of containers accessing shared volumes concurrently. The protocol used is just a consequence.
To pull it off, you absolutely need the right back-end architecture and, at the same time, APIs, configurability and tons of flexibility. Integration is another key part, and the storage system has to be integrated with all the different hypervisors, cloud platforms and now orchestration tools.
Storage for containers? (just an example of the wrong infrastructure)
Now, I haven't had time to look at them yet, but there are a bunch of new startups focused on container storage. Really?! Container storage? Just storage for containers?! It sounds a little bit odd, since most containers are stateless… In fact, I suppose most of these storage systems expose the NFS protocol (or so I hope, for simplicity at least). But why should anyone buy a storage system that works well only with containers? It doesn't make any sense. In which enterprise, or even ISP, do you have only containers? Like I said, I haven't had time to investigate yet, but I will soon, because specialized storage doesn't make any sense any more… does it? Maybe it's only a marketing mistake.
Bimodal infrastructures are the key
No matter what kind of workload or the type of Ops you have, IT infrastructure must be ready to cope with all of them.
I think that a bimodal IT/infrastructure has to implement a sort of macro multi-tenancy at its core. In this case we are not talking about multiple users accessing resources, but about different technologies or platforms running on top of the same infrastructure at the same time. For example, if your organization has three different teams for standard virtualization (VMware?), IaaS cloud (OpenStack?) and next-generation cloud (containers?), you have to offer a single horizontal infrastructure that can be quickly configured to offer the right kind of resource to each one of them when needed.
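To make the idea of macro multi-tenancy more concrete, here is a minimal sketch of the concept: one shared physical pool, several platform tenants (virtualization, IaaS, containers), each fenced in by its own QoS envelope. All class names, methods and IOPS figures are hypothetical illustrations, not any real product's API.

```python
# Hypothetical sketch of "macro multi-tenancy": one physical pool,
# multiple platform tenants, each with its own QoS ceiling so no
# team can starve the others. Purely illustrative numbers.
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str        # e.g. the VMware, OpenStack or container team
    iops_cap: int    # QoS ceiling reserved for this tenant
    iops_used: int = 0

@dataclass
class SharedPool:
    total_iops: int
    tenants: dict = field(default_factory=dict)

    def add_tenant(self, tenant: Tenant) -> None:
        # refuse to oversubscribe the shared back end
        committed = sum(t.iops_cap for t in self.tenants.values())
        if committed + tenant.iops_cap > self.total_iops:
            raise ValueError("pool oversubscribed")
        self.tenants[tenant.name] = tenant

    def provision(self, tenant_name: str, iops: int) -> bool:
        # grant resources only within the tenant's own QoS envelope
        t = self.tenants[tenant_name]
        if t.iops_used + iops > t.iops_cap:
            return False
        t.iops_used += iops
        return True

pool = SharedPool(total_iops=100_000)
pool.add_tenant(Tenant("vmware", iops_cap=50_000))
pool.add_tenant(Tenant("openstack", iops_cap=30_000))
pool.add_tenant(Tenant("containers", iops_cap=20_000))

assert pool.provision("vmware", 40_000)          # fits within its cap
assert not pool.provision("containers", 25_000)  # blocked by QoS
```

The point of the sketch is the design choice: the three teams never negotiate with each other, only with their own envelope, which is exactly what SDN, storage QoS and similar technologies are meant to enforce at infrastructure level.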
Technologies like Software-defined Networking, QoS, advanced storage analytics and monitoring, must be properly implemented to enable such a paradigm… but this is not enough.
Data management and transparency
By adding more and more workloads to the same system you'll need a different kind of granularity to understand what is happening and to quickly optimize the infrastructure accordingly. For example, even though the classic IO-blender effect is no longer a problem with AFA arrays, some vendors, like Coho Data, have started to analyze workload patterns to automate data positioning and cache pre-heating. This brings automated tiering to the next level and allows the deployment of smarter, cheaper storage infrastructures capable of serving a broader set of workloads at the same time. Watch this video from Storage Field Day 8 to understand better what I'm talking about:
The separation between logical and physical layers is fundamental for modern IT infrastructures, and for bimodal infrastructures in particular. End users hate migrations, especially migrations that involve data; all infrastructure components should be swappable or upgradable without touching the data. This also applies to data movements between different storage tiers.
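The logical/physical separation described above can be sketched in a few lines: consumers hold a stable logical volume handle, while the physical backend behind it can be swapped (tier migration, array replacement) without the consumer noticing. This is an illustrative model only; the class and method names are invented for the example and don't map to any real storage stack.

```python
# Minimal sketch of logical/physical separation. A LogicalVolume is a
# stable handle; the physical Backend behind it can be replaced
# transparently, which is what makes migrations invisible to users.
class Backend:
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}          # toy stand-in for physical media

    def read(self, key):
        return self.blocks.get(key)

    def write(self, key, data):
        self.blocks[key] = data

class LogicalVolume:
    """Stable handle; the backend behind it can change at any time."""
    def __init__(self, backend: Backend):
        self._backend = backend

    def read(self, key):
        return self._backend.read(key)

    def write(self, key, data):
        self._backend.write(key, data)

    def migrate(self, new_backend: Backend):
        # copy the data, then flip the pointer; callers keep using the
        # same LogicalVolume object and never see the move
        for k, v in self._backend.blocks.items():
            new_backend.write(k, v)
        self._backend = new_backend

vol = LogicalVolume(Backend("all-flash-tier"))
vol.write("blk0", b"data")
vol.migrate(Backend("capacity-tier"))
assert vol.read("blk0") == b"data"   # unchanged after the tier move
```

The same indirection is what lets data move between primary, secondary and cloud tiers without the application-facing volume ever changing identity.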
In this same space, vendors are working to build mechanisms to transparently move data volumes between primary and secondary storage as well as the cloud, simplifying backup/DR operations while automating data/copy management.
Closing the circle
Bimodal IT has always existed, even before DevOps. When enterprises started to adopt servers (and client-server applications) after mainframes, they had two different ways to manage operations; the same happened with virtualization, and now it's happening with the cloud.
For each new technology stack, organizations have built a new silo and organized Ops accordingly. And Ops has always been faster for new technology stacks than it was for the older ones (it's easier to operate a virtual environment than physical servers, for example).
But something has changed. In the past, each single technology silo had its own infrastructure stack. Now, thanks to SDN, SDS and next-generation storage solutions, it is possible to collapse different infrastructure stacks into a single larger infrastructure capable of serving physical, virtual and cloud workloads concurrently, lowering costs and speeding up operations for both legacy and new environments.
Coho Data is a client of Juku consulting.
If you want to know more about this topic, I'll be presenting at the next TECHunplugged conference in London on 12/5/16. It's a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!