I’ve met up with Coho Data twice in the last few months and never got the chance to write a blog about them. Coho is a brilliant startup in many respects and, although the product is newly born, its potential is really impressive.

You can easily find many technical deep dives about Coho all around, so I’m not going to do the same (I’ve just added a few links at the end of this blog for your convenience). I’ll try to talk about Coho from a different perspective.

Who is Coho Data

A few words about the company and its team… just to understand the DNA of the company, where they come from and why you should give some credit to the idea behind the product.

If you look at the website, the company is new and it exited stealth mode only a few months ago. Coho has already raised $25M from various investors to develop a next-generation storage system that they describe as “for the cloud generation” (OK, the statement is strong but I don’t think it’s only marketing…).
In the core team you can spot names like Andy Warfield (CTO), Keir Fraser (Chief Architect) and Ramana Jonnala (CEO). They have been working in storage for many years and they were on the team that invented Xen (yep! the hypervisor). They surely have a clear view of the meaning of virtualization and virtualized storage, don’t they?

Object storage

I’ve written many articles in recent months about object storage. An object store in the back-end of a storage system is a real platform enabler, giving the developer many options and an open path to implementing new and innovative ideas. If well implemented, each block/page/chunk of data has metadata and there is no filesystem (or other fixed type of layout). This simplifies development and has many benefits for the user, starting from theoretically unmatched scalability to freedom of choice in how data is placed within the system. What’s more, thanks to the power of metadata, it’s relatively easy to implement new features. To give an example, think about automated tiering: if a metadata tag counts the number of accesses made in a recent period of time, it’s quite easy to figure out what kind of media the data should be placed on.
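Just to make that tiering example concrete, here is a toy sketch in Python. It’s purely my own illustration (the class, the tier names and the threshold are all invented, nothing to do with Coho’s code): every object carries its own metadata, and a simple policy reads nothing but an access counter to decide where the object belongs.

```python
# Toy illustration of metadata-driven tiering: each object carries
# free-form metadata (no filesystem layout), and an access counter
# drives the placement decision. Names/thresholds are invented.
import time

class StoredObject:
    def __init__(self, key, data):
        self.key = key
        self.data = data
        # Metadata travels with the object itself.
        self.metadata = {"created": time.time(), "accesses": 0, "tier": "hdd"}

    def read(self):
        self.metadata["accesses"] += 1  # every access is counted
        return self.data

def retier(obj, hot_threshold=100):
    """Place frequently accessed objects on flash, the rest on disk."""
    obj.metadata["tier"] = "flash" if obj.metadata["accesses"] >= hot_threshold else "hdd"
    return obj.metadata["tier"]
```

Note that the placement policy reads nothing but the object’s own metadata, so it can evolve freely without touching the data layout: that’s exactly the kind of flexibility an object store enables.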

Commodity hardware

Storage is software, isn’t it? And Coho Data scores another point with me here. I love commodity hardware: it’s powerful, cheap and it comes off the shelf with no effort!
Hardware design is very risky nowadays: it’s hard to build good hardware and, if you make a mistake, recovering from a faulty design can be a real pain… especially if you are a tiny startup!
Even if commodity hardware hasn’t got the same efficiency, resiliency and availability as proprietary hardware, you can easily design smarter software that can cope with those deficiencies at a fraction of the cost, and this is an advantage for the end users.

Scale-out

If we were in the fashion industry we could say that scale-out is the “new black”. It’s not a question of fashion (of course!): this type of architecture is winning over end users because of the benefits it brings in terms of performance and scalability. It depends on the implementation, but adding a node to the cluster adds a precise amount of resources in terms of space and IOPS (see the back-of-the-envelope math below): easy to understand, easy to buy, easy to install and, sometimes, easy to manage… simply easy.
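Here is what I mean by “a precise amount of resources”, as a couple of lines of Python. The per-node figures are invented for illustration, not Coho’s specs; the point is just that the model is linear.

```python
# Linear scale-out arithmetic: every node contributes a fixed, known
# amount of capacity and performance. Per-node figures are invented.
PER_NODE_TB = 20          # usable capacity per node (illustrative)
PER_NODE_IOPS = 180_000   # performance per node (illustrative)

def cluster_resources(node_count: int) -> tuple:
    """Total (TB, IOPS) for a cluster of node_count identical nodes."""
    return node_count * PER_NODE_TB, node_count * PER_NODE_IOPS

# e.g. cluster_resources(4) -> (80, 720000): easy to understand, easy to buy.
```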

Software-defined networking

We always talk about “software-defined something” (too much indeed!) but you should take a look at how Coho implements SDN and then I’m sure you’ll come back saying Brillllliant! (yes, brilliant with a capital B and 5 Ls).
Coho Data masks the complexity of scale-out and the limits of the communication protocol behind SDN: you only see a single exposed IP address, and all the data connections/paths are managed transparently. One protocol is supported at the moment, with more to come. It will be interesting to see how they apply this technique to other protocols or access methodologies.
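For the curious, here is a toy sketch of the general single-IP idea. This is my simplification of the concept, nothing to do with Coho’s actual SDN implementation: clients always talk to one virtual address, while a flow-table-style rule quietly spreads them across the back-end nodes.

```python
# Conceptual sketch of the single-IP front end: clients only ever see
# VIRTUAL_IP, while each flow is mapped to a hidden back-end node.
# Addresses and the hashing rule are invented for illustration.
import hashlib

VIRTUAL_IP = "10.0.0.1"  # the only address clients ever see
BACKEND_NODES = ["10.1.0.11", "10.1.0.12", "10.1.0.13"]  # hidden scale-out nodes

def route_flow(client_ip: str, client_port: int) -> str:
    """Pick a back-end node for this flow; the choice is invisible to the client."""
    flow_key = f"{client_ip}:{client_port}".encode()
    index = int(hashlib.md5(flow_key).hexdigest(), 16) % len(BACKEND_NODES)
    return BACKEND_NODES[index]

# A client connecting to VIRTUAL_IP from 192.168.1.5:40123 is served
# transparently by one of the nodes:
# route_flow("192.168.1.5", 40123) -> e.g. "10.1.0.12"
```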

Hybrid

Only 10-15% of enterprise data needs absolute performance; the rest is about space (a lot of space). Flash means performance, while 7200 RPM SAS drives mean plenty of cheap space. That’s it!
Coho writes to flash and uses various caching algorithms to move data up and down. It’s not new, it’s not innovative, but it’s the way to go today: flash is still too expensive to cover all enterprise needs, even when we talk about primary storage systems.
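To make the “up and down” movement concrete, here is a minimal LRU-style sketch. Again, this is my own toy, not Coho’s caching algorithm: writes always land on flash, the coldest data gets demoted to disk, and a read promotes it back.

```python
# Toy hybrid store: flash-first writes, LRU demotion to disk,
# promotion back to flash on read. Policy and sizes are invented.
from collections import OrderedDict

class HybridStore:
    def __init__(self, flash_capacity=3):
        self.flash = OrderedDict()       # hot tier, kept in LRU order
        self.hdd = {}                    # cheap capacity tier
        self.flash_capacity = flash_capacity

    def write(self, key, value):
        # All writes land on flash first for low latency...
        self.flash[key] = value
        self.flash.move_to_end(key)
        # ...and the coldest block is demoted when flash is full.
        if len(self.flash) > self.flash_capacity:
            cold_key, cold_value = self.flash.popitem(last=False)
            self.hdd[cold_key] = cold_value

    def read(self, key):
        if key in self.flash:
            self.flash.move_to_end(key)  # refresh recency
            return self.flash[key]
        value = self.hdd.pop(key)        # cold hit: promote back to flash
        self.write(key, value)
        return value
```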

Analytics

We would all like a crystal ball to see what will happen in the future, wouldn’t we?! Analytics doesn’t have the form factor of a ball (more like a dashboard) but it does the job!
Capacity planning, error prediction and correction, trend discovery, workload analysis and management, and so on. I love it.
Coho is not the first to have this kind of feature but I think it’s a must-have in the modern storage world. At the moment, we have to admit that Coho’s analytics is still at a very early stage, but they showed something interesting during the meetings and the potential for a great tool is there.
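Just to give an idea of the simplest analytics of the bunch, here is a back-of-the-envelope capacity-planning sketch (the sample data is invented, and any real tool would be far more sophisticated): fit a linear trend to the used capacity and estimate when the box fills up.

```python
# Back-of-the-envelope capacity planning: least-squares trend on daily
# used-capacity samples, projected to exhaustion. Sample data invented.
def days_until_full(samples_tb, total_tb):
    """samples_tb: used TB on consecutive days; total_tb: total capacity."""
    n = len(samples_tb)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_tb) / n
    # Least-squares slope of capacity vs. day index (TB/day).
    slope = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples_tb)) \
            / sum((i - mean_x) ** 2 for i in range(n))
    if slope <= 0:
        return None  # capacity flat or shrinking: no exhaustion in sight
    return (total_tb - samples_tb[-1]) / slope

# e.g. days_until_full([40, 41.5, 42.8, 44.1, 45.6], total_tb=100)
# -> roughly 39 days before the system runs out of space
```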

Why it matters

I want to conclude with two thoughts.

1) Coho Data rocks! This is the kind of innovation that I really like to see. They have all the characteristics that any next-generation storage system should have to be considered as such. From my point of view, architecturally speaking, they are setting a new benchmark.
2) The product is conceptually off the charts but it’s also immature (v. 1.0) and lacks many enterprise features at the moment (like, for example, remote replication, multi-protocol access, and so on). These features are in the works (of course) and Coho deserves all of our close attention. It’s not ready for prime time yet but, if its development stays on the right track, I’m sure we will hear a lot about Coho Data in the near future!

Related links:

Virtualtothecore.com: Coho Data Deep Dive: the future of storage, now.
SFD4: Coho Data DataStream Architecture Deep Dive
Virtualizationsoftware.com: Coho Data Brings SDN to SDS for a Killer Combination
Yellow Bricks: Startup intro: Coho Data