Separating software from hardware is an emerging trend in data storage. Under the banner of software-defined storage, it has been at the forefront of vendor hype in recent years.
The appeal is that it potentially allows customers to build their own storage arrays by deploying storage software – which is after all where the main smarts in a storage system reside – on commodity server hardware.
In some ways, though, at least one supplier is going counter to that rising trend.
Vendors that offer storage software range from hardcore hardware pushers such as HP, with its StoreVirtual VSA, and virtualisation giant VMware, with its Virtual Storage Appliance and VSAN, to suppliers that made their name with storage software, such as Nexenta and DataCore, and more recently startup flash array providers such as SolidFire, which now offers its Element X operating system (OS) as a virtual appliance. Even hardware giant EMC offers software versions of its VNX and Celerra products for lab use.
But Nimble Storage says it won’t go down that road. CEO Suresh Vasudevan told me this week that while its engineers use software versions of the Nimble OS in test and dev, the company would not offer a software-defined storage product to customers.
Basically, he said there’s nothing in it for Nimble. His argument went like this:
“If, for example, a storage system costs $100 and the hardware from China is $40 of that, here’s how the rest is spent: $28 on sales and marketing, $12 on engineering and R&D, $5 on company admin. So, if I sell just software and the cost of the hardware is the same or possibly more, then we would have to sell software at much less. But I still have to pay the same amount to sales and marketing people and to engineers, so it’s really not clear there’s a benefit,” he said.
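Vasudevan’s arithmetic can be sketched in a few lines. The figures below are the illustrative ones from his example; treating the leftover $15 as margin is my inference, not something he stated:

```python
# Vasudevan's illustrative cost breakdown, per $100 of system price.
price = 100
hardware = 40          # commodity hardware sourced from China
sales_marketing = 28   # sales and marketing
engineering = 12       # engineering and R&D
admin = 5              # company admin

# Whatever is left after those costs (inferred here as margin).
margin = price - (hardware + sales_marketing + engineering + admin)
print(margin)  # 15

# Software-only scenario: drop the hardware from the price, but
# sales/marketing, engineering and admin costs stay the same.
software_price = price - hardware  # 60 -- assumes the customer buys hardware elsewhere
software_margin = software_price - (sales_marketing + engineering + admin)
print(software_margin)  # 15
```

The point the sketch makes is that the vendor’s margin only holds if the software sells for the full non-hardware portion of the price; any discount below that comes straight out of margin, since the people costs don’t shrink.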
Vasudevan also said that the software-only hyper-converged model – where server and storage reside together on the same box – is likely to lead to increased customer costs due to the need to protect numerous discrete compute/storage instances.
“When you have software on commodity servers the belief is it will lower costs, but in fact it often leads to overprovisioning. That’s because the way people protect data on nodes without redundancy features such as dual controllers is to mirror data, often with triple replication,” said Vasudevan.
So, that’s why one array supplier will not go down the route of software-defined storage. I guess the argument works, for them, but it could be argued that they could at least give the customer that choice. The most compelling part of Vasudevan’s case is probably that the hardware will cost the customer the same or more, but for the largest organisations out there that may not be the case.