A few weeks ago I attended the Italian VMUG user conference. One of the most interesting sessions at the event was “Strategic Private Cloud”, presented by Alan Civita (Sky UK), who confirmed what is now a common trend in very large IT organizations: a strategy based on two different private clouds.
Looking for operational efficiency
Cloud is synonymous with operational agility and efficiency. In fact, many IT organizations have been migrating to the cloud for a while now and, at the beginning, the public cloud seemed to be the cure for all ills (with AWS and Microsoft being the providers of choice)… well, that was true until the bills started coming in and costs quickly became unsustainable. As someone told me a few days ago, “Doing only public cloud is like living at the Four Seasons”… and I couldn’t agree more.
Private and hybrid clouds are not a step back. On the contrary, they are becoming more and more popular because, for large organizations, they offer the right balance between cost, efficiency and flexibility.
Why two (or more) clouds?
The problem is always the same: heading towards the future while managing the legacy. In this case, the legacy is virtualization or, more commonly, VMware-based infrastructure. The future, on the other hand, is an AWS-like cloud for new applications designed to take advantage of its characteristics, and OpenStack is the basic component that can help realize this vision.
A VMware-based cloud is fundamental to all traditional enterprise needs, applications and workloads. Usually it is just an IaaS but, because of its status as an “enterprise cloud”, everything is supported and managed end-to-end. This means that every resource within it has first-class status, and end users are real end users who expect performance, stability, availability, consistency, backups and so on. They want it, and they don’t care how it’s done. Furthermore, in some cases (back to the point of “everything is managed end-to-end”), users go through traditional provisioning processes and don’t see any cloud aspect at all… The cloud part of this infrastructure is visible only to the Ops team, as an evolution of the legacy infrastructure they’ve always managed.
The OpenStack-based cloud is a totally different story and is seen as a much more strategic component. Even though some use it as a replacement for VMware (leveraging custom scripts and storage for managing HA and DR, for example), applications should be designed from the ground up to be scale-out, with resiliency at the application level and all the characteristics you usually find in applications designed for the public cloud. In fact, application repatriation is not as uncommon as you would think!
Contrary to what happens in VMware-based clouds, nothing is taken for granted and the end user (the developer, in this case) is in charge of everything, from performance down to data protection!
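To make the “resiliency at the application level” idea concrete, here is a minimal sketch of the pattern in Python: instead of relying on infrastructure HA, the application itself retries transient failures with backoff. The `flaky_service` function and its failure behavior are hypothetical, purely for illustration.

```python
import time

def retry(attempts=3, delay=0.1):
    """Decorator: retry a flaky call, backing off between attempts.

    In a cloud-native app, transient failures (instance loss, network
    blips) are expected, so the application handles them itself instead
    of assuming the infrastructure provides HA underneath it.
    """
    def wrap(fn):
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if i == attempts - 1:
                        raise  # out of retries: surface the failure
                    time.sleep(delay * (2 ** i))  # exponential backoff
        return inner
    return wrap

calls = {"n": 0}

@retry(attempts=3, delay=0)
def flaky_service():
    # Hypothetical backend call that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_service())  # succeeds on the third attempt: prints "ok"
```

The same idea scales up to circuit breakers and multi-instance deployments behind a load balancer; the point is simply that failure handling moves from the platform into the code.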
Strategy and tactics. Again.
As you can imagine, we are not talking about small organizations here, and this is why building two different clouds, with totally different technologies, places these enterprises in a strong position.
Strategically speaking, they can manage the migration from the virtualization era to the cloud with more confidence and less risk by choosing the right timing and technology. Staying with a traditional application model is much easier but, in the long term, rewriting applications could bring many more advantages. Infrastructure and application development are evolving quickly on two parallel tracks. On one side: VMware, OpenStack, public clouds, more sophisticated orchestration tools and possibly even containers.
On the other: applications are moving from assuming they run on a reliable physical server, oblivious to failure, to taking on more and more “cloud native” characteristics. This gives plenty of choice and freedom to manage legacy and future apps together on the same building blocks.
In fact, a multiple-cloud strategy is like having the upper hand. VMware is working very hard to stay competitive and find a place in the OpenStack ecosystem and the public cloud (I’m not saying they are succeeding at the moment, though). But for the customer, having alternatives means putting pressure on vendors… which is always a good thing if you want better prices and conditions. 😉
Two clouds, one hardware layer
From the hardware point of view, building and maintaining different cloud infrastructures is not as hard as you might think. Enterprises have been adopting standard building blocks (PODs) for a while now (I wrote about this a few days ago). These PODs are based on commodity x86 servers, SDN-capable networking and scale-out flash-based storage. These infrastructure components are very similar to each other, and they can underpin traditional virtualization infrastructures and private clouds alike.
Yes, they have to be fully integrated (with VMware APIs and OpenStack Cinder), but this is a very common characteristic now. It’s also really interesting to note that we are not far from complete integration between these hardware components and the upper layers. Looking at Coho Data, for example: they have a unique product architecture today that already brings seamless scale-out NFS storage thanks to SDN (and Arista switches)… I don’t think it would take much to integrate those switches to also offer ToR networking, and the next step could be managing all the components as a whole from Neutron or NSX. Right?
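On the OpenStack side, that Cinder integration usually boils down to backend configuration. As a rough sketch (using the generic in-tree NFS driver as a stand-in for any vendor driver; the backend name and shares file path below are made up for illustration):

```ini
# /etc/cinder/cinder.conf — minimal multi-backend sketch (illustrative
# values only; real deployments would use the vendor's own driver)
[DEFAULT]
enabled_backends = pod1-nfs

[pod1-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = POD1_NFS
nfs_shares_config = /etc/cinder/nfs_shares
```

Once a backend is declared this way, the same storage POD serves block volumes to OpenStack while vSphere consumes it through its own APIs.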
By using standard building blocks it’s much easier to manage different cloud infrastructures. It’s also possible to start small and migrate resources from one cloud platform to another if needed (at the end of the day it’s just a different software stack). This helps mitigate risk while protecting investments.
Closing the circle
At the end of the day, IT organizations are looking for operational efficiency. Cloud brings efficiency and enables them to offer a better service, while private cloud adds the benefit of cost sustainability. The number of organizations that can afford a multiple-cloud strategy is limited, but it can add even more flexibility and freedom of choice for end users.
With the right components, the impact of the hardware infrastructure can be minimized, and it can easily become the common denominator for all the different solutions. Today VMware still dominates the datacenter, but large organizations are deploying OpenStack more and more. An example? At the VMUG UserCon, Sky UK declared that it has a 200+ node OpenStack cluster in production. Not a small number, considering that the same organization has 5,000 servers in its datacenters.