
Software-defined storage: The pros and cons, and what is available

Software-defined storage is a rapidly rising trend in the datacentre, but what are the advantages and disadvantages of building your own storage, and is it suitable for all organisations?

Software-defined storage, in which storage array features are delivered by software products, is gaining popularity.

It promises cost savings, because software-defined storage (SDS) is run on commodity server hardware and can use spinning disk and flash drives to provide high-performance, fully featured storage for organisations ranging from small companies to enterprises.

But is software-defined storage suitable for all? We weigh up the pros and cons, but first let’s look at its key characteristics.

Typically, one or more of the following features are part of a software-defined storage deployment:

  • Commodity hardware – The use of non-proprietary components that allow systems to be built by the user. Software-defined storage should be able to consume standard hard drives and SSDs, and work within a typical server chassis.
  • Hardware abstraction – Separation of the logical aspects of data storage from the physical components, such as HDD/SSD performance and RAID. Software-defined storage should use more general terms that define latency, IOPS and throughput, independent of the hardware used.
  • Automation – The ability to drive configuration (both provisioning and policy) at an API or CLI level. The key benefit here is the use of abstracted policies that capture business requirements rather than hardware settings (see the sketch after this list).
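
To make the automation point concrete, below is a minimal Python sketch of what policy-driven provisioning could look like. The endpoint, field names and provision_volume helper are hypothetical, invented for illustration; each real SDS product exposes its own REST API or CLI.

```python
import json
import urllib.request

# Hypothetical SDS management endpoint, for illustration only.
SDS_API = "https://sds-controller.example.com/api/v1/volumes"

def provision_volume(name, capacity_gb, max_latency_ms, min_iops):
    """Request a volume in abstract service terms (capacity, latency,
    IOPS) rather than hardware terms (RAID level, disk type)."""
    policy = {
        "name": name,
        "capacity_gb": capacity_gb,
        # The SDS layer decides whether flash, spinning disk or a
        # mix of the two satisfies these service levels.
        "max_latency_ms": max_latency_ms,
        "min_iops": min_iops,
    }
    req = urllib.request.Request(
        SDS_API,
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: a 500 GB volume that must deliver 5,000 IOPS at under 2 ms.
# provision_volume("app-db-01", 500, 2, 5000)
```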

The rise of software-defined storage has been made possible by two major factors – the standardisation and commoditisation of hardware components.

Standardisation has seen the industry settle on x86 as the platform of choice for applications and storage. Almost all suppliers have migrated their hardware platforms to exploit the x86 architecture and its associated ecosystem, such as PCIe and NVMe.

Meanwhile, the commoditisation of components means spinning disk and flash drives (as well as other hardware components) are reliable, predictable and easily available to the user to build storage platforms.

Build or buy?

With the rise of software-defined storage, is there any advantage to buying storage from an array supplier? Can’t the user simply build their own storage hardware more cheaply?

The cost argument is definitely worth considering, but let’s look first at the technical pros and cons.

From a hardware perspective, the components used to build all but the highest-performing systems are readily available. Array suppliers put a premium on the price of their hardware, despite having the buying power to get lower wholesale prices than their customers.

However, the storage supplier may add hidden value that is not always obvious. For example, the components chosen will have gone through significant testing to identify edge cases and scenarios that stress component hardware. Storage suppliers work closely with component manufacturers and can influence firmware upgrades that optimise disks, SSDs and adaptors for their storage platforms.

Suppliers also gather feedback from the field, collecting data on thousands of hardware deployments. This ensures issues are addressed in future code releases, in a virtuous feedback loop. The same process does not exist for software-defined storage suppliers, who may get critical feedback from customers only when products fail to work or lose data.

Regression testing issues

But the ability to use any hardware for software-defined storage can actually be problematic. Although hardware has standardised, systems can be built from a huge range of configurations: multiple server suppliers' products, components from different generations and manufacturers, and each component potentially running one of many firmware versions. The resulting test matrix brings significant regression testing issues.
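
A rough illustration, in Python, of how quickly that test matrix grows. The counts below are invented for the example; real figures will vary by product and supplier.

```python
from itertools import product

# Invented, illustrative counts of the variables an SDS supplier
# might need to qualify; real numbers differ per product.
server_vendors = 5      # x86 server suppliers
drive_models = 12       # HDD and SSD models across generations
adapters = 4            # storage adapters / controllers
firmware_versions = 6   # firmware revisions per component
kernels = 3             # supported operating system / kernel builds

# Every combination is, in principle, a configuration to regression-test.
matrix = list(product(range(server_vendors), range(drive_models),
                      range(adapters), range(firmware_versions),
                      range(kernels)))
print(len(matrix))  # 4,320 combinations
```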

Finally, we should consider the problems involved in maintaining software-defined storage.

With software-defined storage, the user becomes responsible for sourcing hardware components, testing new configurations and firmware, and liaising with the software supplier for patches, updates and fixes. Much of this work, including actual upgrades, would normally be done by the storage array supplier.

This brings us back to the cost discussion.

We can see that hardware suppliers do add value and can justify the higher cost of their products.

That said, smaller customers may feel it more cost-effective to acquire hardware themselves and simply buy software to run on top. Large customers may feel the economies of scale are such that they can afford to be both builder and consumer.

One thing is sure – users that directly control the hardware cannot be held to excessive maintenance charges in the three to four years after initial purchase.
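
To frame that cost discussion, here is a purely illustrative back-of-the-envelope comparison in Python. Every figure is invented for the example; real prices, discounts and staff costs vary widely.

```python
# Invented figures (not real prices) comparing buy versus build
# over a four-year life.
array_purchase = 100_000       # turnkey array: hardware plus software
array_maintenance = 20_000     # assumed annual support, years 2-4

commodity_hardware = 40_000    # self-sourced servers and drives
sds_licence = 30_000           # software-defined storage licence
inhouse_overhead = 10_000      # assumed annual cost of testing,
                               # firmware qualification and upgrades

buy_total = array_purchase + 3 * array_maintenance                     # 160,000
build_total = commodity_hardware + sds_licence + 4 * inhouse_overhead  # 110,000

print(f"Buy:   {buy_total:,}")
print(f"Build: {build_total:,}")
```

On these assumed numbers the build option wins, but the gap narrows quickly if the in-house overhead is underestimated, which is exactly the hidden-value argument made above.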

Multiple tracks

As the software-defined storage market evolves, two contrasting approaches are being taken by storage suppliers. Specialist software-defined storage suppliers have moved to offer standardised hardware platforms for their products, whereas array suppliers have started to produce software versions of their arrays.

Software-defined storage products that are available with standardised hardware configurations include Maxta with its MaxDeploy configurations, Atlantis Computing with HyperScale, and Dell-EMC with its ScaleIO Ready Nodes.

Some customers simply do not want to design their own hardware, so by providing a software and hardware solution with less mark-up, these suppliers have found a middle ground and used the opportunity to pivot more towards hyper-convergence.

Meanwhile, Dell-EMC and HPE provide software versions of their hardware offerings, including HPE StoreVirtual (formerly LeftHand), HPE StoreOnce, Dell-EMC Unity and Data Domain Virtual Edition.

These are fully supported platforms that typically come with a minimal capacity included, with additional capacity available through paid-for licences.

NetApp used to offer a software-only implementation of Data ONTAP, but that appears to have been discontinued. The company does offer a software-only version of the SolidFire operating system that can be deployed on specific hardware configurations.

Another area of software-defined storage adoption is object storage. Object stores are well suited to deployment on commodity hardware, where throughput, rather than latency, is the important metric.

Almost all object storage suppliers – including Scality, Cloudian, Caringo, Cleversafe/IBM, OpenIO and NooBaa – can be deployed as software, either onto bare metal or as a virtual machine.
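
Most of these products expose an S3-compatible interface, so standard client code can target an on-premises object store simply by pointing at a different endpoint. A minimal sketch using Python's boto3 library; the endpoint URL, credentials and bucket name are placeholders.

```python
import boto3

# Placeholder endpoint and credentials for an on-premises,
# S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.internal.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object workloads favour throughput over latency: large sequential
# puts and gets rather than small random I/O.
s3.upload_file("backup.tar", "archive-bucket", "2017/backup.tar")
```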

In the cloud, we see offerings from existing storage suppliers, both array makers and software-defined.

NetApp offers Data ONTAP as ONTAP Cloud for AWS, SoftNAS has CloudNAS and Zadara Storage has VPSA, an SDS offering delivered as a service on dedicated hardware. Meanwhile, Cloudian HyperStore is available as an AWS AMI (Amazon Machine Image) and Panzura offers its Global Cloud Storage System.

Container-based storage

The world of containers is seeing an increase in storage offerings, such as Hedvig’s Universal Data Plane, which can be used for container-based storage.

Portworx and StorageOS both offer storage systems for containers that are themselves built on containers. This is a novel inversion, given that storage has typically been the persistent layer beneath the transient containers sitting above it.

Finally, we should not forget a range of other commercial software-defined storage systems from DataCore (SANsymphony), StarWind (Windows-based), Datera (distributed storage) and StorPool (distributed storage).

There are also open source platforms, including Ceph (scale-out storage), CoreOS's Torus and Gluster. Both Ceph and Gluster are supported commercially by Red Hat.

Looking forward

This is not intended to be a comprehensive round-up of all the products on the market, but it is clear there are many options available for users and IT departments. Lines of deployment are being blurred between buying hardware and software combined, or buying software and using commodity hardware.

Possibly the biggest benefit of moving to a software-defined model is the future transition to hybrid and multi-cloud operation.

Many storage systems can already be deployed in public cloud environments and provide equivalent functionality to that already available on-premises. This means IT departments can begin their transition to hybrid cloud and manage the big issue of data mobility.

Data can be migrated between on-site and cloud-based platforms using whatever replication techniques the physical/virtual appliance offers.
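
Where the appliance's own replication is not an option, the same mobility can be scripted at the object level. A hedged sketch, again with placeholder names, copying one object from an on-premises S3-compatible store to a public cloud bucket.

```python
import boto3

# Placeholder endpoint for the on-premises store; the cloud client
# uses the provider's default endpoint.
on_prem = boto3.client("s3",
                       endpoint_url="https://objectstore.internal.example.com")
cloud = boto3.client("s3")

def copy_object(bucket, key):
    # Reads the whole object into memory; large objects would need
    # streaming or multipart transfer instead.
    body = on_prem.get_object(Bucket=bucket, Key=key)["Body"].read()
    cloud.put_object(Bucket=bucket, Key=key, Body=body)

# copy_object("archive-bucket", "2017/backup.tar")
```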

This means we are likely to see hardware-only storage being reserved for high-end performance requirements or niche applications such as the mainframe.

Software-defined storage deployments will continue to increase as suppliers improve commodity hardware support. Ultimately, this can only benefit the customer, with an embarrassment of choice in available systems.

The most difficult transition will be for the traditional hardware suppliers because they have to adapt to software-based licensing models and a different way of selling.
