In my day-to-day job I talk to a lot of end users, and when it comes to the cloud there are still many differences between Europe and the US. The European cloud market is much more fragmented than the American one for several reasons, including the slightly different regulations in each country. Cloud adoption is slower in Europe, and many organizations still prefer to keep their data and infrastructure on their own premises. The European approach is quite pragmatic, I’d say, and I think many enterprises benefit from the experience of similar organizations on the other side of the pond. One thing that is very similar, though, is cloud storage or, better, cloud storage costs and the reactions to them.

The fact that data is growing everywhere at an incredible pace is nothing new, and often faster than predicted in past years. At first glance, an all-in cloud strategy looks very compelling: low $/GB, less CAPEX and more OPEX, increased agility and more… until, of course, your cloud bill starts growing out of control.

As I wrote in one of my latest reports, “Alternatives to Amazon AWS S3”, the $/GB is only the first item on your bill; several others, including egress fees, come after it. This aspect is often overlooked at the beginning and has unpleasant consequences later.

There are at least two reasons why your cloud storage bill can get out of control:

  1. Your application is not written properly. You wrote or migrated an application that is not specifically designed to work in the cloud and is not resource-savvy. This happens often with legacy applications that are migrated as-is. Sometimes it’s hard to solve because re-engineering an old application is simply not possible. In other cases, the application’s behavior could be corrected with a better understanding of the APIs and the mechanisms that regulate the cloud (and how they are charged).
  2. There is nothing wrong with your workload; it’s just that you are creating, reading and moving data around more than in the past…
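
To make the first reason concrete, here is a rough back-of-the-envelope cost model. All rates below are hypothetical placeholders (not any provider’s actual pricing), but they illustrate how a chatty, lift-and-shift application can multiply its bill through request and egress charges alone:

```python
# Back-of-the-envelope cloud storage cost model.
# All rates are HYPOTHETICAL placeholders, not real provider pricing.

STORAGE_PER_GB_MONTH = 0.023   # $/GB-month of capacity, assumed
EGRESS_PER_GB = 0.09           # $/GB transferred out, assumed
GET_PER_1000 = 0.0004          # $ per 1,000 GET requests, assumed

def monthly_bill(stored_gb, egress_gb, get_requests):
    """Total monthly cost: capacity is only one of three line items."""
    storage = stored_gb * STORAGE_PER_GB_MONTH
    egress = egress_gb * EGRESS_PER_GB
    requests = get_requests / 1000 * GET_PER_1000
    return storage, egress, requests

# A cloud-aware app reads each object roughly once; a lift-and-shift
# app polls the same objects over and over.
tuned = monthly_bill(stored_gb=10_000, egress_gb=1_000, get_requests=1_000_000)
chatty = monthly_bill(stored_gb=10_000, egress_gb=20_000, get_requests=500_000_000)

print(f"tuned app:  storage ${tuned[0]:.0f}, egress ${tuned[1]:.0f}, requests ${tuned[2]:.2f}")
print(f"chatty app: storage ${chatty[0]:.0f}, egress ${chatty[1]:.0f}, requests ${chatty[2]:.2f}")
```

Same capacity in both cases, yet the second bill is dominated by items that never show up in a naive $/GB comparison.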


The first thing you can do is optimize your cloud storage infrastructure. Many providers are adding storage tiers and automations to help you with this. In some cases it adds complexity (somebody has to manage these new policies and check that they are working properly). Not a big deal, but probably not a huge saving either.
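As an illustration of this kind of tier automation, here is a minimal sketch of an S3 lifecycle policy that demotes aging objects to cheaper storage classes. The storage-class names and the `put_bucket_lifecycle_configuration` call are AWS’s real ones, but the bucket name and the day thresholds are hypothetical:

```python
# Sketch of an S3 lifecycle policy that moves aging objects to cheaper
# tiers automatically. Day thresholds and bucket name are hypothetical.

def build_lifecycle_rules(ia_after_days=30, archive_after_days=90):
    """Build the Rules payload for put_bucket_lifecycle_configuration."""
    return [{
        "ID": "demote-cold-data",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # empty prefix = apply to the whole bucket
        "Transitions": [
            {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
            {"Days": archive_after_days, "StorageClass": "GLACIER"},
        ],
    }]

# To actually apply it (requires boto3 and AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",  # hypothetical bucket name
#     LifecycleConfiguration={"Rules": build_lifecycle_rules()},
# )
```

Once a rule like this is in place, the demotions happen without anyone touching the data, but somebody still has to verify that the thresholds match how the data is actually accessed.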

You can try to optimize your application. But we all know that it’s not always easy, especially if you don’t have control over the code and the application wasn’t written with the intent of running in a cloud environment. Still, this could pay off in the mid to long term, but are you ready to invest in this direction?

Bring your data back…

A common solution, now adopted by a significant number of organizations, is data repatriation: bringing data back to your own premises (or to a colocation service provider) and accessing it locally or from the cloud. Why not?

At the end of the day, the bigger the infrastructure, the lower the $/GB and, above all, there are no other fees to worry about. When you start thinking about petabytes, there are several optimizations you can take advantage of to lower the $/GB considerably: fat nodes with plenty of disks, multiple media tiers for performance and cold data, data footprint optimizations, and so on… all translating into low and predictable costs.

At the same time, if this is not enough or you want to keep a balance between CAPEX and OPEX, you can go hybrid. Most storage systems on the market now allow you to tier data to S3-compatible storage systems, and I’m not talking only about object stores: NAS and block storage systems can do the same. I covered this topic extensively in this report, but you can check with your storage vendor of choice and I’m sure they’ll have solutions to help you out.

…Or go Multi-cloud

Another option, which doesn’t negate what I wrote above, is to implement a multi-cloud storage strategy. Instead of focusing on a single cloud storage provider, you abstract the access layer and pick what is best for you depending on the application, the workload, the cost and so on, as determined by the needs of the moment. Multi-cloud data controllers are gaining momentum, with big vendors starting to make the first acquisitions (Red Hat with NooBaa, for example), and the number of solutions is growing at a steady pace. In practice, these products offer a standard front-end interface, usually S3-compatible, and can distribute data across several back-end repositories following user-defined policies. This leaves the end user with a lot of freedom and flexibility regarding where to put (or migrate) data, while allowing transparent access to it regardless of where it’s stored. Last week, for example, I met with Leonovus, which has a compelling solution that combines what I just described with a strong set of security features.
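To make the idea concrete, here is a toy sketch of the policy layer such a controller might implement: one S3-compatible namespace in front, several back-end buckets behind, with placement decided by user-defined rules. All the backend names and policy rules here are invented for illustration:

```python
# Toy sketch of the policy engine behind a multi-cloud data controller.
# Backend names and rules are invented; real products expose this as
# user-defined policies behind an S3-compatible front end.

BACKENDS = {
    "low-cost": "cheap-archive-bucket",   # hypothetical low-$/GB provider
    "fast": "hot-performance-bucket",     # hypothetical high-performance tier
    "eu-only": "on-prem-minio-bucket",    # hypothetical sovereign/on-prem tier
}

def place(object_key, size_gb, tags):
    """Return the back-end a new object should land on, by policy."""
    if tags.get("residency") == "eu":     # compliance rules win over cost
        return BACKENDS["eu-only"]
    if size_gb > 100 or tags.get("tier") == "cold":
        return BACKENDS["low-cost"]       # big or cold data goes to cheap storage
    return BACKENDS["fast"]               # default: performance tier

print(place("logs/2019.tar", 250, {"tier": "cold"}))  # cheap-archive-bucket
print(place("db/hot.dat", 2, {}))                     # hot-performance-bucket
```

The application only ever sees one endpoint; the controller applies rules like these on every write, and can migrate objects later when the policy changes.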

There are several alternatives to the major service providers when it comes to cloud storage. Some of them focus on better pricing and lower or no egress fees, while others work on high performance too. As I wrote last week in another blog, going all-in with a single service provider could be an easy choice at the beginning but a huge risk in the long term.

Closing the circle

Data storage is expensive, and cloud storage is no exception. If you still think you’ll save money just by moving all of your data to the cloud as-is, you are making a big mistake. For example, cold data is a perfect fit for the cloud thanks to its low $/GB, but as soon as you begin accessing it over and over again, the costs can rise to an unsustainable level.
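A quick back-of-the-envelope calculation illustrates the point. With hypothetical rates (not any provider’s actual pricing), it takes only a few full read-backs per month for egress fees to erase the cold tier’s $/GB advantage over an amortized on-prem system:

```python
# When does re-reading "cold" cloud data erase its $/GB advantage?
# All rates are HYPOTHETICAL placeholders, not real provider pricing.

CLOUD_COLD_PER_GB = 0.004   # $/GB-month, cloud cold tier, assumed
EGRESS_PER_GB = 0.09        # $/GB read back out of the cloud, assumed
ON_PREM_PER_GB = 0.015      # $/GB-month, amortized on-prem cost, assumed

def cloud_monthly(gb, full_reads_per_month):
    """Monthly cloud cost: cold storage plus egress for each full read."""
    return gb * CLOUD_COLD_PER_GB + gb * full_reads_per_month * EGRESS_PER_GB

gb = 100_000  # 100 TB of cold data
for reads in (0, 1, 4):
    print(f"{reads} full reads/month: cloud ${cloud_monthly(gb, reads):,.0f} "
          f"vs on-prem ${gb * ON_PREM_PER_GB:,.0f}")
```

With zero reads the cloud cold tier wins comfortably; with a single full read per month (under these assumed rates) it already costs several times the on-prem figure.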

To avoid dealing with this problem later, it’s best to think about the right strategy now. Planning and executing the right hybrid or multi-cloud strategy can surely help to keep costs under control while giving you the agility and flexibility needed to preserve the competitiveness of your IT infrastructure and, therefore, of your business.

If you want to know more about multi-cloud data controllers, alternatives to AWS S3, and two-tier storage strategies, please check my reports on GigaOm. And subscribe to the Voices in Data Storage podcast if you want to hear the latest news, market and technology trends, opinions, interviews and other stories from the data and data storage field.