The enterprise data storage landscape has been changing rapidly in the last few years. Demand for more storage resources is unstoppable but, contrary to the past, enterprises and ISPs are experiencing a strong diversification in the type and quality of resources needed to satisfy each specific user or business need.

New requirements are being heavily driven by workloads and types of data that were quite uncommon just a few years ago. It’s not only about the growing amount of data to manage, or the faster access needed to serve more transactions in the same amount of time: cloud and mobile computing, as well as the new access patterns imposed by web applications, global data access and user behavior, are impacting the entire infrastructure stack and demand more agility and flexibility.

As a consequence, the entire storage industry is undergoing one of its most interesting transformations ever. Software-defined and server-based storage solutions are quickly growing in popularity among end users, putting even more pressure on traditional vendors and forcing them to consolidate their position on the market with acquisitions and reorganizations aimed at improving efficiency and cutting costs.

Looking at what has happened in the last six months alone, it is not difficult to get a clear picture:

  • Dell is in the process of acquiring EMC, hoping to improve overall efficiency, cut costs, and optimize its product portfolio.
  • IBM has acquired Cleversafe to improve its cloud services and build hybrid cloud storage infrastructures for its customers.
  • Pure Storage, after its IPO, is now the second largest independent storage vendor in terms of market capitalization and has recently added a new product to cover file/object storage needs.
  • NetApp, which invested for years in the concept of unified storage systems, has completely revised its strategy. It is now working heavily on its object storage system and has acquired SolidFire.
  • HPE, which for years concentrated all its efforts on 3PAR arrays, has recently made a strategic investment in Scality.
  • All primary vendors now have a stronger proposition for both primary and secondary storage, as they try to cope with the increasing storage demand expressed by end users, in terms of both latency-sensitive and capacity-driven workloads.

Once again, it is interesting to note that the most competitive capacity-driven solutions fall into the category of software-defined storage: based on commodity x86 servers and capable of serving multiple file/object protocols at the same time.

Understanding the market

Following a similar market study done by Jerome Lecat (Scality’s CEO) last year, defining the size and trends of the FOBS (File and Object Storage combined) market will help identify new opportunities and the right strategy to build sustainable infrastructures for the next decade.

A look at the market landscape

There are some important considerations, coming from market observation, analyst research and end user interviews, that are fundamental to understanding what is really happening in the current market:

From the end user point of view:
• Traditional market segmentation (SAN, NAS, Object, …) is no longer enough to identify end user needs: workload diversity and infrastructure scale are changing the way customers evaluate different solutions.
• Thanks to cloud computing, the difference between scale-up and scale-out solutions is now clearly understood, and reluctance toward the latter (because of potential complexity issues) is no longer a problem, even in the most conservative enterprise environments.
• It’s not only about storage: software-defined solutions are getting a lot of attention at every level. Adoption rates of software-defined networking and hyper-converged infrastructure solutions are also increasing quickly. Hyper-convergence in particular is rapidly moving from specific use cases (like VDI, for example) to a wider set of business applications.
• Private and hybrid cloud infrastructures are becoming more common in large and medium sized organizations. End users are now more inclined to invest in human resources (developers and SysAdmins) than in the past.
• Large ISPs and public cloud providers prefer software-defined DIY solutions based on open source software and, where possible, cost-effective open source hardware. Enterprises, on the other hand, will likely continue to leverage commercial software to achieve similar results, while smaller organizations will continue to prefer appliances and commercial software.
• Mobile computing and geographically dispersed clients are demanding new forms of storage (e.g. sync & share and distributed NAS for remote and branch offices, ROBO) which are not feasible with traditional storage architectures.
• As a consequence of the previous point, demand for object-based storage solutions accessed via standard APIs is growing rather quickly, even in smaller organizations. It’s not unusual to find end users looking at object storage for on-premises implementations starting at around 100TB.
• Multi-protocol support is a basic requirement to manage both legacy and modern applications.
• Many end users are relatively happy with the traditional storage solutions already in place, but complain about their high overall TCO.

From the vendor point of view:
• There is a proliferation of new startups working on storage platforms with an object storage back end. Some of them do not even expose RESTful APIs yet, but they all share basic characteristics such as a scale-out design and a software-defined approach.
• The number of solutions supporting object storage APIs (primarily the Amazon S3 API; see the minimal sketch after this list) is growing at an incredible pace, now counting more than 4,000 different products.
• An interesting market consolidation is happening: three startups developing object storage systems have been acquired in the last 18 months (and the number of acquisitions is much higher if we consider all the operations involving storage products designed to work with an object store at the back end).
• The most successful vendors in capacity-driven systems are those capable of exposing multiple file/object protocols at the same time.
• All primary storage vendors now have an object storage solution in their product lineup, or an agreement to resell a third party solution.
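To make the point about standard APIs concrete, here is a minimal sketch of how an application talks to an S3-compatible object store (the endpoint, credentials and bucket names below are hypothetical placeholders, and the example assumes the boto3 Python library): the same few verbs work unchanged against AWS or against an on-premises software-defined platform, which is exactly why S3 compatibility has become the de facto requirement.

```python
# Minimal sketch: the S3 API as a de facto standard for object storage.
# Endpoint, credentials and bucket below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # any S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same standard verbs (PUT/GET) work unchanged whether the back end
# is AWS itself or an on-premises software-defined object store.
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```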

Calculating the market size

Traditional storage vendors are seeing flat or negative overall growth, with a steady decline in revenues for legacy storage systems. The latest reports on EMC, for example, show that the VMAX, VNX and Data Domain product lines have all experienced declines of between 3 and 15 per cent over the last four quarters. In the same period of time, software-defined and scale-out solutions from the same vendors have grown rapidly. This is in line with what is usually seen in the field, with end users moving away from expensive and rigid storage solutions towards hyper-converged and All-Flash systems for primary data.

All analyst forecasts published in the last 6 to 12 months anticipate a substantial overall increase in scale-out file- and object-based storage solutions in the next few years. For 2015, IDC (Worldwide File- and Object-Based Storage 2014–2018 Forecast, #251626) expects total revenue for scale-out systems in the range of $21.6B (the equivalent of 82.9 exabytes of capacity shipped). Those numbers are expected to grow to $35.6B and 249.1EB by 2018, showing CAGRs between 2013 and 2018 of 21.5% and 47.9% respectively. The lion’s share from now until 2018 will be taken by object-based solutions (a category which also includes SaaS and Cloud NAS). Over the same time frame (2013-2018), file-based scale-up solutions and single storage servers will see negative revenue growth, while the total capacity they ship will show a less impressive CAGR of 24%. On a similar wavelength, a note recently published by IDC shows an important increase in enterprise storage spending for server-based and hyper-scale storage infrastructures in the first quarter of 2015.
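The CAGR arithmetic behind those forecasts is easy to reproduce. As a quick sanity check (a minimal sketch, using only the IDC figures quoted above), the 2015-2018 revenue numbers alone imply roughly 18% annual growth, and the capacity numbers roughly 44%, both consistent with the higher CAGRs quoted for the longer 2013-2018 window:

```python
# Sanity-checking the IDC forecast figures quoted above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

print(f"Revenue CAGR 2015-2018:  {cagr(21.6, 35.6, 3):.1%}")   # ~18.1% ($21.6B -> $35.6B)
print(f"Capacity CAGR 2015-2018: {cagr(82.9, 249.1, 3):.1%}")  # ~44.3% (82.9EB -> 249.1EB)
```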

[Table 1]

These trends confirm, once again, that traditional storage solutions are less attractive than they were in the past to end users, who are now looking for better TCO and scalability.

The numbers above do not include storage services, which are forecast to grow at a CAGR of 6.2% between 2013 and 2018 (Worldwide Storage Services 2014–2018 Forecast, #252135). Unpacking this number, it is interesting to note that expenditure for integration services is in line with what has been registered for management, support and consulting in past years. This is important because it proves that moving from traditional storage systems to software-defined infrastructures makes it possible to manage much more storage capacity while keeping integration and management costs under control.

Last but not least comes the $/GB ratio. Thanks to the strong consolidation that is possible with scale-out systems, $/GB will continue to decrease, remaining in a range of one quarter to one third of the $/GB forecast for scale-up systems.

Sizing the opportunity

It’s clear that data storage, like the rest of the IT infrastructure, is experiencing great change. The agility demanded by modern business has opened the door to an all-out rethinking of IT organizations and infrastructure design.

The two major paradigm shifts we are seeing, closely related to each other, are the adoption of software-defined infrastructures and cloud computing. In fact, looking at the modern datacenter, the best way to deploy private and public clouds is through software-defined solutions, and this is particularly true for larger infrastructures.

According to Gartner (Forecast: Datacenter Worldwide 2010-2018, 2Q14), between 2015 and 2018, enterprise (101-500 racks) and large (500+ racks) datacenters will grow more than smaller datacenters in terms of end user storage spending. This is due to various factors:

1. Smaller organizations will spend less on on-premises infrastructures by leveraging more public cloud computing in the form of IaaS or SaaS. In most cases non-primary storage needs will be moved to the cloud (for example, backup and sync & share).
2. Large organizations will continue to invest in and build their own private cloud infrastructures.
3. Hyper-scale xSPs will see the largest increase, since they will be providing resources to smaller end users. xSPs are the type of end user that will benefit the most from scale-out software-defined storage infrastructures, because of the higher consolidation ratio as well as the better TCO.

[Table 2]

Another piece of research published by IDC (WW Storage for Public and Private Cloud, Nov 2014) on public and private cloud storage revenue shows that both public and private cloud storage are expected to grow quickly in the next few years, confirming once again the trend already described. Large datacenters (large enterprises and cloud providers) will absorb most of the storage spending, and the sum of public and private cloud (considering only on-premises installations) will reach a revenue of $19.2B, with a 2015-2018 CAGR of 8%. This means that most of the datacenter expenditure for storage between now and 2018 will be allocated to cloud infrastructures.

[Table 3]

By 2018 the total of $19.2B reported in the table above will account for a capacity of 303.66 exabytes. About 80-90% of that capacity (and around 50% of the total revenue) will most likely be stored in large scale-out software-defined infrastructures optimized for distributed, less latency-sensitive workloads. The rest of the data (between 10 and 20%) will be stored in All-Flash arrays and hyper-converged systems configured to serve local, highly latency-sensitive workloads.
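Taking the midpoints of those ranges, a quick back-of-the-envelope calculation (illustrative only: forecast revenue divided by forecast shipped capacity, with an assumed 85%/15% capacity split and a 50/50 revenue split, not actual street prices) shows how wide the implied $/GB gap between the two tiers is:

```python
# Illustrative $/GB implied by the 2018 forecast above.
# Assumptions (midpoints of the ranges in the text): 85% of capacity in
# scale-out software-defined storage, 15% in AFA/hyper-converged systems,
# with revenue split roughly 50/50 between the two tiers.
revenue_usd = 19.2e9            # public + private cloud storage revenue by 2018
capacity_gb = 303.66e9          # 303.66EB expressed in GB

sds_gb = capacity_gb * 0.85     # scale-out software-defined tier
afa_gb = capacity_gb * 0.15     # All-Flash / hyper-converged tier
tier_revenue = revenue_usd / 2  # ~50% of revenue to each tier

print(f"Scale-out SDS: ${tier_revenue / sds_gb:.3f}/GB")  # ~$0.037/GB
print(f"AFA / HCI:     ${tier_revenue / afa_gb:.3f}/GB")  # ~$0.211/GB
```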

Closing the circle

Object-based storage is the only option today to address large multi-petabyte environments with ease. FOBS is already a $21B market and growing rapidly.

Storage is in great demand with end users, but specialized infrastructures are needed for different data types and workloads.

Enterprises are beginning to build data lakes, and new challenges, like IoT, are also coming up. Huge repositories are needed to store data coming from multiple, globally dispersed sources, data that will be analyzed later if not in real time. This requires massive object streaming throughput, reliability and data availability across geographies: all features that can be found only in modern object-based storage platforms.

Among startups, Scality has one of the few products on the market that has already proven its potential with massive installations in the order of hundreds of petabytes, the kind of installation that will become more and more common from now on. The software-defined approach and multi-protocol support make Scality’s product a particularly interesting solution for designing storage infrastructures of any size, capable of serving both legacy and next generation applications.

Among the larger vendors, IBM picked up a strong solution with Cleversafe, and Pure Storage’s new offering, still in beta, is an interesting addition to its all-flash array product. Both further validate the need and demand for secondary storage, and show how technology that was once considered futuristic is being adopted right now.

Disclaimer: This article has been sponsored by Scality. The second part of this report is available for download here.

If you want to know more about this topic, I’ll be presenting at the next TECHunplugged conference in London on 12/5/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!