Software-defined storage: Making sense of the data storage technology
No doubt about it, 2012 was a tough year in storage. IT budgets experienced significant pressure, with IT managers compelled to do more with less. Meanwhile, most major suppliers struggled to deliver much in the way of growth.
Perhaps the experience of 2012 will dispel the myth that the storage industry is somehow recession-proof. Data may be ballooning, but storage budgets are generally not.
Many enterprises Computer Weekly talks to report that storage budgets are being cut – sometimes substantially – and although “doing more with less” has become the mantra of the times, the grim reality for many storage professionals is that if an upgrade or new project does not have a short-term payback, it simply does not get approved.
What does this all mean for the storage world in 2013?
The watchword of 2012 was “optimisation”, and we expect this to remain the case in 2013 with continued high levels of interest in optimising technologies. These include automated storage tiering (especially in conjunction with flash SSD), data deduplication, thin provisioning and other space-saving techniques. The challenge for the industry is that such functions are now offered as standard by most suppliers; rather than being a basis for differentiation, they are now table stakes.
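To make the space-saving idea concrete, here is a minimal sketch of fixed-block data deduplication, one of the techniques mentioned above: identical blocks are fingerprinted with a cryptographic hash and stored only once. This is an illustration only; production systems typically use variable-length chunking, compression and far more robust metadata handling.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep each unique block once.

    Returns a store mapping fingerprints to block contents, plus an ordered
    list of fingerprints (a "recipe") from which the stream can be rebuilt.
    """
    store = {}   # fingerprint -> block bytes; each unique block stored once
    recipe = []  # ordered fingerprints to reconstruct the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # a duplicate block adds no new storage
        recipe.append(fp)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original byte stream from the store and recipe."""
    return b"".join(store[fp] for fp in recipe)
```

A stream of twenty 4KB blocks containing only two distinct patterns would be held as just two unique blocks plus a list of twenty fingerprints – which is why deduplication pays off most on backup data, where successive copies overlap heavily.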
But, we are also starting to see a shift in emphasis. What is becoming increasingly important is not what is being offered, but how it is being implemented.
Offering a deduplicating backup system is great, but how can this be applied to primary storage as well? Got flash as a tier in a hybrid array? Great, but how does this integrate with those tier-one transactional apps that need turbo charging as well?
What about that growing virtual estate; does that run on the same storage and backup infrastructure as your legacy estate, or is it entirely different infrastructure that has to be implemented and managed separately? Using array-based replication for your primary backups? Fine, but how does this integrate with your disaster recovery and archiving plans? And so on…
This speaks partly to the breakneck pace of the IT industry in general, and partly to the fact that storage in particular remains a complex and highly fragmented sector. The last decade has seen something of a Cambrian Explosion in terms of new storage technologies, but despite these innovations, the storage infrastructure remains far too costly to implement, run and manage.
This reality – which has been building for a number of years – is slowly but surely leading some IT organisations to consider a different approach to managing their storage environment.
In 2013, we will see the continued adoption of technologies and approaches that range from just slightly different to quite radical. These disruptive forces can be categorised into three areas, though note that they are not mutually exclusive, and can in fact feed off each other.
First, there is the rise of software-defined storage. Over the past few months, this has become a bit of an overused catch-all term in IT marketing, but the storage industry is primed for a more software-centric approach. Again, this is something that has been building for several years, but we expect that certain tenets of software-defined storage will drive some end-user purchasing, startup activity and M&A over the coming 12 months.
Second, storage is increasingly being deployed as part of a converged infrastructure. Interest has been slow to build, but the economic imperative and the maturation of industry offerings have combined to boost the appeal of the converged model in 2012, and we think adoption will continue to grow in 2013. There is no single approach to converged infrastructure, but IT decision makers seem increasingly willing to forfeit flexibility in terms of vendor choice for an IT infrastructure stack – across server, storage, networking and even OS, virtualisation environment and application – that “just works”.
We also note that the convergence theme is gaining currency in a more narrowly storage-specific context. In other words, storage arrays are becoming increasingly able to cater to multiple protocols, workloads and data types. The mid-range market in particular is becoming more unified as systems are able to deal with file, block and even object protocols in a single system.
The third element underpinning next-generation storage is the cloud. So far, outside of a few specific providers (and Amazon in particular), the cloud – as a third-party on-demand service – has largely failed to deliver in an enterprise storage context.
But there are signs this could change in 2013. The underpinnings for service providers to develop more cost-effective storage clouds are emerging in the form of open source projects and providers. The likes of OpenStack, CloudStack and Inktank (with Ceph) are attracting more interest, which could encourage others to explore the open source model.
Meanwhile, interest in cloud storage onramps, gateways and other similar enablers has been piqued following Microsoft’s recent purchase of StorSimple, a move that could provide validation for other players in this fledgling space. Also, the Dropbox effect, combined with the ongoing consumerisation of IT, continues to create new headaches for IT departments that storage professionals are now getting involved with.
In summary, it’s important to recognise that these trends won’t lead to overnight change.
Even though disruptive startups remain the engine of innovation for the storage industry – note that over $1bn was invested in storage startups in 2012 – it is still dominated by a small number of large incumbent providers.
This is not something that will change in the near term; indeed, the need for better integration across storage silos, and the broader trend towards convergence, may well lead to further consolidation.
Storage has also proven quite resistant to change in the past, and remains one of the most conservative and risk-averse domains of the enterprise datacentre; and for good reason, we might add.
But we think the economic imperative is starting to pick away at the edges and challenge conventional wisdom and traditional thinking; we saw glimpses of this in 2012, and expect that 2013 will continue to see challenges to the status quo.
Simon Robinson is research vice-president, storage and information management, at 451 Research.