I saw an interesting discussion between Duncan Epping and others in the Virtualization Twitterverse, and I was wondering whether the claims of upward scalability in the storage world are actually true or just plain fiction.
Take, as an example, Hitachi and its USP-V: they claim that the USP-V can address 247 Petabytes of storage behind it (virtualized and internal), and that the system can sustain 4 MILLION IOPS! That’s pretty impressive! πŸ™‚ (even if, reading their SPC-1 benchmark report, you see that with their best config they reach just 200,000 IOPS — but that’s another story).
Well, it’s not so impressive actually πŸ˜‰
If you do some simple math you can clearly see that dividing 4,000,000 IOPS (maximum claimed IOPS) by 247 PB (maximum addressable space, roughly 259 million GB) gives you a mere ~0.015 IOPS per GB — and that’s really NOT impressive.
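Here’s a quick sanity check of that arithmetic. This is just a back-of-the-envelope sketch; I’m assuming binary units (1 PB = 1,048,576 GB), so the exact decimal shifts slightly depending on which convention you use:

```python
# Back-of-the-envelope check of the IOPS-per-GB figure.
# Assumption: binary units, i.e. 1 PB = 1024 * 1024 GB.
max_capacity_pb = 247        # claimed maximum addressable capacity (PB)
max_iops = 4_000_000         # claimed maximum sustained IOPS

capacity_gb = max_capacity_pb * 1024 * 1024   # PB -> GB
iops_per_gb = max_iops / capacity_gb

print(f"{iops_per_gb:.5f} IOPS per GB")
```

Run it and you get roughly 0.015 IOPS per GB of addressable capacity — which is the whole point: the performance ceiling doesn’t scale anywhere near the capacity ceiling.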
So, even with the best configuration, best practices followed, and the best Tier 3 storage virtualized behind it, you cannot drive the system at its maximum capacity and its maximum performance at the same time.
Considering that, I think monolithic storage is on a dead-end track; the scale-out approach is surely the way to go in the near future.