Opinion

Shared storage model faces challenges from virtual servers

Antony Adshead, UK Bureau Chief

The landscape of computing is changing rapidly, largely driven by the move towards virtual servers, and that’s bringing change to storage too.

Perhaps mere use of the word “change” understates what’s happening here. The advent of server virtualisation, led by the juggernaut that is VMware, has really stirred up the entire IT environment and could bring seismic shifts for storage.

In a relatively short space of time -- say, four or five years -- the needs of virtualisation first drove consolidation onto the traditional storage array and then moved swiftly on to sow the seeds of its downfall.

Once upon a time, the world was a simple server-centric one. Applications ran on physical servers, and if they were important applications, they had their own dedicated compute resources. Storage was directly attached and all lived in one tin box.

Then came server virtualisation. The logical business of a server application was liberated from its former physical surroundings, and many servers could run in one physical machine.

This development drove the uptake of shared storage. While putting many virtual servers in one place greatly improved the utilisation of server compute resources, it also pushed existing direct-attached storage (DAS) beyond practical limits. Having many virtual servers in a few machines meant a shared pool of (usually SAN) storage was needed to serve data with the required I/O, throughput and scalability.

But as the implementation and use of virtual machines, and latterly virtual desktops, have taken root, shortcomings have become apparent in existing approaches to delivering shared storage. The cause: the extremely random I/O profile that results from placing multiple virtual machines in a few physical servers. In particular, write requests can pile up, and the resulting random access pattern punishes hard drive seek times if nothing is done about it.
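To picture the problem, here is a minimal, purely illustrative Python sketch -- the VM names, block ranges and stream lengths are invented for the example -- of how several virtual machines, each writing sequentially within its own virtual disk, present the shared array with what looks like a random stream of block addresses.

```python
# Illustrative sketch: sequential per-VM writes become a near-random
# stream of block addresses once they are interleaved at a shared array.
import random

# Hypothetical layout: each VM's virtual disk occupies its own region
# of the shared LUN (base offsets are invented for the example).
vm_regions = {"vm-a": 0, "vm-b": 1_000_000, "vm-c": 2_000_000}

# Each VM writes blocks 0, 1, 2, ... sequentially within its own disk.
per_vm_streams = {vm: iter(range(8)) for vm in vm_regions}

interleaved = []
while per_vm_streams:
    vm = random.choice(list(per_vm_streams))  # whichever VM issues I/O next
    try:
        block = next(per_vm_streams[vm])
        interleaved.append(vm_regions[vm] + block)  # address seen by the array
    except StopIteration:
        del per_vm_streams[vm]

print(interleaved)  # jumps between distant regions -> long seeks on spinning disk
```

Each guest sees a tidy sequential workload, but the array sees the interleaved result, which is why seek-bound spinning disk struggles.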

The I/O issue is a big thing for storage, and we are seeing a range of responses, from throwing solid-state media at the problem to new storage architectures that go beyond the traditional controller/disk array approach. These responses show the potential for the SAN/NAS array as we know it to become a thing of the past.

Server-side flash, for example, is one way in which the array is sidestepped to provide better performance for virtual machine and desktop I/O. Here a wedge of solid-state media sits in the server itself, and in cases such as Fusion-io's ioMemory Virtual Storage Layer software, the added flash is integrated with the server's own memory.

Of course, it’s also possible to put flash into a traditional array -- and most vendors now allow that -- or even to have an entire array of solid-state drives. These approaches pose no real threat to the traditional array architecture.

There are, however, a number of emerging storage hardware architectures that do break the traditional mould.

Perhaps the most radical is Nutanix and its Complete Cluster, which provides compute resources for VMware hypervisors along with a Google-style, massively distributed approach to storage. What you get are discrete appliances that include server, storage software controller, solid-state drives and hard drives in a 4U unit that can be clustered to create a grid-style file system. It's like coming full circle back to direct-attached storage, but this time with clustering that allows for huge scalability and ditches traditional shared storage, its associated fabric and its cost.

Another approach that appeared this year came from Nimble Storage, which combines solid-state and SATA drives with data deduplication. The company claims to solve I/O issues and handle backup all in one box. Flash caching allows for super-rapid read I/O, nonvolatile RAM (NVRAM) buffers random writes that are then written sequentially and striped to SATA drives, and dedupe provides for extremely efficient use of spinning disk capacity.
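As a rough illustration of that write path -- a minimal sketch, not Nimble's actual implementation, with the class name and stripe size invented -- random writes can be absorbed into a small nonvolatile buffer and then flushed to spinning disk as one large sequential stripe:

```python
# Illustrative sketch: random writes are absorbed in a nonvolatile buffer
# and later written out as one large sequential stripe on spinning disk.
class CoalescingWriteBuffer:
    def __init__(self, stripe_size=4):
        self.stripe_size = stripe_size
        self.nvram = []      # stands in for the NVRAM write buffer
        self.stripes = []    # stands in for sequential stripes on SATA disk

    def write(self, logical_block, data):
        self.nvram.append((logical_block, data))   # acknowledged immediately
        if len(self.nvram) >= self.stripe_size:
            self.flush()

    def flush(self):
        # One large sequential write replaces many small random ones.
        self.stripes.append(list(self.nvram))
        self.nvram.clear()

buf = CoalescingWriteBuffer()
for block in (9015, 12, 44871, 310):               # scattered logical addresses
    buf.write(block, b"data")
print(len(buf.stripes), "sequential stripe(s) written")
```

The point of the design is that the spinning disks only ever see large sequential writes, which is the access pattern they are good at.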

Meanwhile, Nexenta tackles the virtual machine/desktop random I/O challenge through its use of Sun's ZFS file system. ZFS allows for three levels of cache -- memory, SSD and spinning disk -- and sequentialises random writes from the first two tiers before committing them to spinning disk. True, Nexenta is a storage array, but it's a software product that will work with commodity hard drives. That could pose a very real threat to the traditional storage array vendor model, as we have discussed previously.
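The read side of such a tiered design can be pictured as a simple tier walk. The sketch below is a generic illustration of memory, SSD and disk tiers -- the names and contents are invented -- rather than ZFS's actual caching code:

```python
# Illustrative sketch of a three-tier read path: check RAM first,
# then SSD cache, and only go to spinning disk on a double miss.
ram_cache = {"blk-1": b"hot data"}          # fastest, smallest tier
ssd_cache = {"blk-2": b"warm data"}         # larger, slower than RAM
spinning_disk = {"blk-1": b"hot data",
                 "blk-2": b"warm data",
                 "blk-3": b"cold data"}     # authoritative copy of everything

def read(block):
    if block in ram_cache:
        return ram_cache[block], "memory hit"
    if block in ssd_cache:
        ram_cache[block] = ssd_cache[block]  # promote on access
        return ssd_cache[block], "SSD hit"
    data = spinning_disk[block]              # slowest path: a disk seek
    ssd_cache[block] = data                  # warm the SSD tier for next time
    return data, "disk read"

for blk in ("blk-1", "blk-2", "blk-3"):
    print(blk, read(blk)[1])
```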

The traditional storage array is also under attack from the virtualisation hypervisor.

Virsto, for example, provides a virtual storage appliance that lives in the hypervisor and controls storage on vanilla RAID arrays. Its software also tackles the random I/O challenge, creating a log in front of physical storage that smooths out the write process. It's an example of storage intelligence moving from the array into the server.
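A hedged sketch of the general idea -- not Virsto's actual design, and with invented names -- is an append-only log that turns every write into a sequential append, plus a map that remembers where each logical block now lives so reads can still find it:

```python
# Illustrative sketch: a write log in front of physical storage.
# Every write becomes a sequential append; a block map tracks where
# the latest copy of each logical block lives in the log.
class WriteLog:
    def __init__(self):
        self.log = []        # append-only region on the backing array
        self.block_map = {}  # logical block -> position in the log

    def write(self, logical_block, data):
        self.log.append(data)                       # sequential append
        self.block_map[logical_block] = len(self.log) - 1

    def read(self, logical_block):
        return self.log[self.block_map[logical_block]]

log = WriteLog()
log.write(7120, b"A")     # scattered logical addresses...
log.write(3, b"B")
log.write(998244, b"C")
print(log.read(3))        # ...but the log itself was written sequentially
```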

Perhaps the largest slew of such examples comes from VMware, which this year introduced a range of new storage management features in its vSphere 5 environment. These enhancements include APIs that can detect, read the characteristics of and manage storage devices from the server; the ability to move virtual servers automatically according to (among other things) storage I/O and disk capacity; the ability to lock VMs to specific storage; the ability to set LUNs from the hypervisor; and the ability to reclaim disk space from deleted VMs.

While the likes of Nutanix and Nexenta do demonstrate radical alternative approaches to the way storage hardware is configured and sold, they are up against mighty and well-entrenched products and marketing machines from leading array makers like EMC, NetApp, Hewlett-Packard, IBM, Hitachi Data Systems and Dell.

The big boys can soak up these kinds of punches with ease for the time being. Perhaps the bigger threat in the medium term is from the migration of storage intelligence to the hypervisor. The brain of the virtual server infrastructure is in some ways the ideal location from which to control storage functions. If VMware’s efforts to date are anything to go by, we could see more storage functions taken over by the hypervisor in the years to come, especially as other virtual server vendors such as Microsoft and Red Hat gain traction in the market.

Antony Adshead is Bureau Chief for SearchStorage.co.UK.


This was first published in December 2011

 
