By Ian Murphy
Despite not making the same kind of splash as server virtualisation, storage virtualisation has been gaining some significant traction in the UK. The technology can help storage managers save on equipment costs, help them boost capacity utilisation in their data centres, and simplify the management of heterogeneous storage environments. Earlier this year, the SearchStorage.co.UK Purchasing Intentions survey showed that 54% of respondents had virtualised at least part of their installed storage.
Rene Millman, a senior research analyst at Gartner Technology and Service Provider Research, believes storage virtualisation should be a priority for IT shops faced with power and cooling challenges, simply because the technology translates to fewer disks and less hardware.
"Anyone who needs to get the most out of their storage investment, irrespective of company size, needs to look at storage virtualisation," Millman said.
Beyond fewer physical devices and lower power costs, benefits such as hardware independence, responsiveness to new storage demands and vastly simplified management make storage virtualisation an attractive proposition for any data centre.
What is storage virtualisation?
All virtualisation -- server, storage or network -- is about abstracting the underlying physical components and their functions into an apparently combined resource. You no longer need to think about the device, just take advantage of the common pool of storage that virtualisation offers.
With storage virtualisation, a number of arrays or disk drives can be combined into a single large pool from which you allocate space. The virtualisation software manages the physical locations and placement of the data.
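To make the pooling idea concrete, here is a minimal sketch in Python. It is not any vendor's API; the class and device names are invented for illustration. It shows how a virtualisation layer can present several physical devices as one pool and decide where an allocated volume physically lives, spanning devices when no single one has enough free space.

```python
# Hypothetical sketch of storage pooling: physical devices are abstracted
# into one pool, and the virtualisation layer tracks data placement.

class StoragePool:
    def __init__(self, devices):
        # devices: mapping of device name -> free capacity in GB
        self.free = dict(devices)
        self.placements = {}  # volume name -> list of (device, GB) extents

    def total_free(self):
        return sum(self.free.values())

    def allocate(self, volume, size_gb):
        """Carve a logical volume out of the pool, spanning devices if needed."""
        if size_gb > self.total_free():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for dev, cap in self.free.items():
            if remaining == 0:
                break
            take = min(cap, remaining)
            if take:
                extents.append((dev, take))
                self.free[dev] -= take
                remaining -= take
        self.placements[volume] = extents
        return extents

pool = StoragePool({"array_a": 1000, "array_b": 500})
pool.allocate("app_vol", 1200)  # transparently spans both arrays
print(pool.total_free())        # 300 GB left in the pool
```

The point of the abstraction is in the last two lines: the caller asks for 1,200 GB and never needs to know that the request was satisfied across two separate arrays.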
Storage virtualisation products usually operate in one of three locations in the storage stack -- in the fabric, in the array controller or as a discrete appliance. Of these three approaches, the last is the most common.
Appliance-based approach reduces hardware, improves resilience
One user that went for the appliance-based approach is Oxford University Computing Services, which implemented SAN/iQ from LeftHand Networks when it needed to consolidate two existing computer groups, each running 14 TB, into a single location using virtualisation. While LeftHand Networks -- now owned by Hewlett-Packard (HP) -- doesn't support heterogeneous storage in the way Hitachi Data Systems arrays and IBM's SAN Volume Controller (SVC) do, HP calls it a virtual system because LeftHand nodes can be clustered.
"We were approached by two different groups who wanted advice around the use of virtualisation to improve the way they worked," said Jon Hutchings, senior systems engineer at Oxford University Computing Services. "Each had its own legacy hardware and storage, and was supporting a wide range of applications and data. What they wanted to do was reduce the amount of hardware they had and improve their resilience."
The two groups had approximately 200 servers with direct-attached storage (DAS) at multiple physical locations in Oxford.
Initial discussions focused on reducing systems to a single rack for each department, but this created a single point of failure. "During the discussions," Hutchings said, "we realised that each team had access to its own data centre. This opened up the possibility of cross-site failover using two-way replication of volumes using storage virtualisation."
The decision was to use two LeftHand Networks SAN/iQ iSCSI storage-area network (SAN) nodes running on HP ProLiant DL320s servers in each data centre, with the whole infrastructure managed as a single SAN. Although initial requirements were for just 8 TB of storage, a decision was taken to provision 14 TB to account for future growth.
Both data centres are connected to the network through multiple routes. These include the university's own backbone and two 10 Gbps private fibres, each of which takes a different route under the city for resilience. To reinforce that resilience, Spanning Tree Protocol (STP) has been deployed on the backbone so that traffic is automatically rerouted if an active link fails. Using private fibres also ensures low latency, without which it would be impossible to do real-time replication of data.
That use of multiple data centres and routing has already proven its value to the university. A recent outage caused by overheated hardware shut down the main data centre. As a result, all user requests were automatically routed to the backup data centre with no calls from any end user about a delay or problems. Once the problem was resolved, the primary data centre was brought back online and data was resynchronised, again with no impact on end users.
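The pattern the university relied on can be sketched in a few lines of Python. This is a hedged toy model, not the SAN/iQ implementation: the `Site` and `MirroredVolume` names are invented. It shows why the failover was invisible to users: writes are mirrored synchronously to every online site, reads route to whichever site is up, and the recovered site resynchronises from the one that stayed online.

```python
# Toy model of two-way synchronous replication with cross-site failover.

class Site:
    def __init__(self, name):
        self.name, self.online, self.data = name, True, {}

class MirroredVolume:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def write(self, key, value):
        # Synchronous two-way replication: commit to every online site.
        targets = [s for s in (self.primary, self.backup) if s.online]
        if not targets:
            raise RuntimeError("no site available")
        for site in targets:
            site.data[key] = value

    def read(self, key):
        # Requests route automatically to whichever site is up.
        site = self.primary if self.primary.online else self.backup
        return site.data[key]

    def resync(self, authoritative):
        # After an outage, the recovered site catches up from the site
        # that stayed online and holds the current data.
        other = self.backup if authoritative is self.primary else self.primary
        other.data.update(authoritative.data)

main, backup = Site("main"), Site("backup")
vol = MirroredVolume(main, backup)
vol.write("report", "v1")
main.online = False           # main data centre overheats and shuts down
vol.write("report", "v2")     # users keep working against the backup
main.online = True
vol.resync(authoritative=backup)  # primary catches up with no user impact
```

The design choice that makes this work is synchronous mirroring over a low-latency link: because the backup already holds current data at the moment of failure, failover needs no restore step, and resynchronisation only has to replay what the surviving site accepted during the outage.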
Alongside the operational efficiencies, there have been significant management efficiencies. "Allocating additional storage was previously a major task that we can now do in just minutes," Hutchings explained. "With a complete view of all our storage as a single pool, we just point applications at the storage they're going to use."
The success of this project has influenced future IT planning at Oxford University. According to Hutchings, "although the original work was done on a project basis, the experience we gained is being worked into our own infrastructure. Having tasted the benefits of a big pool that can be allocated as required is driving how we pre-engineer our storage."
Virtualisation helps provision storage, simplifies storage management
Mike Duxbury, senior networking specialist at Volkswagen Financial Services, also decided on storage virtualisation when looking for a solution to replicate data between two sites. Volkswagen Financial Services manages two data centres in Milton Keynes that are 700 metres apart. They are connected over 1 Gbps fibre and a microwave link, allowing data to move across either route should there be a problem with the other.
Initially, Volkswagen Financial Services was using continuous data protection (CDP) to mirror data across the two sites to meet compliance and other regulatory requirements, and to support a range of hardware, software and bespoke applications. Although the data was mirrored between the two sites, rebuilding hardware and restoring data was a wholly manual process. "We were over-provisioning storage and spending a lot of time managing our systems," Duxbury said, "so we needed an alternative."
That alternative came along when Volkswagen Financial Services decided to deploy DataCore Software's SANmelody product on two new HP servers. It also bought two HP MSA30 storage subsystems and reused its existing HP MSA20 Serial ATA (SATA) enclosures.
The company experienced several immediate benefits. "We no longer buy any storage when we buy new servers," Duxbury said. "Everything is provisioned from the central pool." In addition, shortly after going live, Volkswagen Financial Services had a server failure and then a power outage that outlasted the UPS capacity. In both cases, Duxbury said, the system recovered with no loss of data and no impact on users.
Other benefits Duxbury points to include the ability to back up virtual machines from the SAN without taking them offline, vastly simplified storage management and much higher performance from servers and storage. There has also been a significant cost benefit from the use of a storage pool rather than local disk.
Looking toward the future, Volkswagen Financial Services sees the virtualised SAN as part of its business continuity planning and an essential element in delivering internal service-level agreements.