The dawning of the new millennium was billed as a turning point in the evolution of enterprise storage. Much was made of the complexity of storage, the lack of integration between systems and an apparent stubborn refusal by major suppliers to make their storage area networks (SANs) interoperable.
This, it was argued, exacerbated lock-in challenges for customers, promoted the proliferation of silos and added to the overall management burden.
Given the entire notion of a SAN was still fairly new, this was not entirely surprising, but in the early 2000s two separate ideas took hold that promised to banish integration headaches to the history books.
First, standards were proposed at the management interface level that would harmonise the way many common storage tasks were handled across multiple arrays, thus reducing the SAN management burden. (We can argue about the effectiveness – or otherwise – of standards such as SMI-S another time.)
Second, and more significant, a notion dubbed storage virtualisation was advanced, and startups such as DataCore and FalconStor emerged with a simple but seemingly effective idea – why not add a software abstraction layer in front of your existing storage arrays, and treat all of them as a single, logical pool of capacity that can be divided and shared as required, all via a single point of management and control.
Virtual storage a non-starter
The idea quickly took hold. Its brilliance was its simplicity – at a stroke it could emancipate multiple storage silos and make lock-in a thing of the past. The investment community sniffed the opportunity, a bunch more startups emerged, got funding, and some were even acquired.
And then… pretty much nothing changed. Was it an idea ahead of its time? Probably. This was when VMware was still a twinkle in Diane Greene’s eye. Did some of the major incumbents, worried about losing account control, mount smear campaigns to frighten customers about the supposed risks? Most definitely.
This is not to say that storage virtualisation has been a total failure – IBM and Hitachi Data Systems have both squeezed plenty of mileage, and a decent amount of business, from their respective virtualisation capabilities.
Heck, even EMC came around to the idea, though its product (remember Invista?) was pretty half-hearted and quickly disappeared from view. And in some senses the idea of virtual storage has definitely taken hold; the wave of storage systems startups that emerged in the early 2000s – such as 3PAR, Compellent, EqualLogic and LeftHand – all partly appealed because of their ability to carve out logical pools of storage from physical resources in more flexible and efficient ways than traditional systems, though none of these systems were able to virtualise external, third-party arrays.
Overall then, it is fair to say that storage virtualisation has not come close to replicating the success – and truly transformative role – that the hypervisor has enjoyed in the server space.
And then a strange thing happened. At last month’s EMC World in Las Vegas, the storage giant devoted much of the limelight to the pre-announcement of a product it calls ViPR.
Storage virtualisation back in the running
Though billed as a software-defined storage platform, in line with current marketing trends, it is clear that the real intent for ViPR is to act as a unifying storage layer that can hide the myriad complexities of a fragmented storage infrastructure and allow IT managers to create the storage services they require without getting involved in the nuts and bolts of performing every little step. In other words, it is storage virtualisation.
So why are companies such as EMC – once apparently so opposed to storage virtualisation – now backing this particular horse? For a start, it makes pragmatic sense for EMC to align its own strategic messaging with the software-defined datacentre push that VMware is making.
More importantly though, it highlights that many of the core issues that dominated the storage agenda a decade ago – too complex, too costly to manage, too inefficient, too inflexible – still haven’t been fixed, and in relation to the other parts of the datacentre (especially the virtualised server estate) are actually getting worse.
At the same time, hyperscale operators – such as Facebook, Amazon and Google – have demonstrated that storage can be made simple. This is serving as a wake-up call to the incumbent suppliers; if they want to stay relevant then they need to be able to arm their customers – particularly service providers looking to stand up cloud-like storage services, but also larger enterprises – with the tools and capabilities that can help them make storage simple, or at least no more complex than it needs to be.
Whether ViPR will be able to fulfil this role remains to be seen. Whether it actually qualifies as a storage virtualisation platform at all is still a matter of some debate, but we believe EMC is at least focusing on the right problem.
For too long the storage industry has been obsessed with building better mousetraps, when what many customers really need are the tools to manage fragmented and heterogeneous environments more effectively. What’s for sure is that we can’t look back in another 10 years and wonder why we haven’t moved on.
Simon Robinson runs the storage and information management practice at 451 Research.