There are lessons to be learned from early adopters of server virtualisation. Some organisations found, for example, that they needed to revamp existing data storage environments to improve the performance of their virtual machines.
Others discovered their backup systems became less efficient, licensing costs rose significantly or that promised reductions in management overheads simply didn't materialise. We'll look at some of the causes behind these issues and what can be done to resolve them.
Table of contents
Why virtualise servers in the first place?
The need for shared storage
The implications of virtualisation on storage
The implications of virtualisation on backup
Specialised virtualisation backup tools
The chief benefit of virtualising server infrastructures lies in the ability to improve resource utilisation, raising it from as low as 5% to as high as 60% to 70%. This is done by sharing a single physical machine's resources between multiple virtual machines (VMs), each running as a discrete operating system instance.
This makes it possible to consolidate numerous small physical servers, often running a single application, into fewer larger servers running several applications, all of which are able to use available spare capacity. The issue of resource utilisation is important because lightly loaded servers use only slightly less power than fully loaded ones -- and electricity costs are a chief concern for many UK organisations.
Underutilised servers also take up a lot of precious room in overcrowded data centres. This scenario led the Royal Borough of Windsor and Maidenhead to go down the server consolidation and virtualisation route using EMC's VMware when it found it was running out of space to house new machines. It was also keen to conform to government targets for local authorities to reduce their carbon footprint year-on-year.
One of the first things organisations must consider when initiating a server virtualisation project is a move to shared storage. While organisations may get away with direct-attached storage (DAS) for small installations for development and testing, the larger a virtualised production environment becomes, the less sustainable such an approach gets.
"We found having disparate disk systems was inefficient because while we experienced [data storage] capacity problems with some applications, disks were underutilised with others. But we couldn't reassign capacity as everything was siloed," said Shakeel Khan, PC and network support manager at Young & Co.'s Brewery.
Implementing a storage-area network (SAN) is vital to take advantage of tools such as VMware's VMotion. Such offerings make it possible to move VMs to a second host in real time without incurring downtime if a problem occurs with the primary host or it needs to be taken down for maintenance. But because all of the virtual machines are stored as disk images on the SAN, each physical server has to be able to see them to know when and where to allocate the required spare processing capacity.
When purchasing a SAN, it's important to think about sizing in relation to performance load. After virtualising servers, many organisations find that their applications run more slowly, but fail to understand that the problem relates to the frequency of requests VMs make to the storage infrastructure.
"The physical disks in a SAN can only spin and process a certain number of I/Os per second," explained Gary Collins, co-founder and chief information officer at IT services provider Intercept, which specialises in virtualisation technology. "But virtualised servers generate so many [I/Os] that the storage architecture often can't keep up, and this affects performance," he said.
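Collins' point can be made concrete with a back-of-envelope sizing check: compare the aggregate I/O demand of the planned VMs against what the SAN's spindles can deliver. This is only a sketch -- the per-disk IOPS figures, VM profiles and function names below are illustrative assumptions, not figures from the article or any vendor's specification.

```python
# Rough IOPS sizing sketch. Per-disk IOPS figures are rule-of-thumb
# assumptions for illustration; real values depend on the array.
DISK_IOPS = {"7.2k SATA": 80, "10k SAS": 140, "15k SAS": 180}

def san_iops_capacity(disk_type: str, disk_count: int) -> int:
    """Theoretical aggregate IOPS for a striped set of identical disks."""
    return DISK_IOPS[disk_type] * disk_count

def vm_iops_demand(vm_profiles: list[int]) -> int:
    """Sum of measured per-VM IOPS profiles (e.g. from a monitoring tool)."""
    return sum(vm_profiles)

capacity = san_iops_capacity("10k SAS", 24)         # a 24-spindle array
demand = vm_iops_demand([250, 400, 120, 600, 300])  # five busy VMs
print(f"capacity={capacity}, demand={demand}, headroom={capacity - demand}")
```

If headroom is thin or negative, performance will suffer exactly as Collins describes -- which is why profiling before procurement matters.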
As a result, prior to procurement, he recommends using workload analysis and planning tools such as Novell's PlateSpin to create a usage profile of how existing physical servers use memory, CPUs, disk and network bandwidth, as well as what capacity is likely to be required in a virtualised world.
A key benefit of this approach, said Keith Clark, head of ICT at the Royal Borough of Windsor and Maidenhead, which employed Intercept's services to help the borough virtualise its server environment, was that it facilitated capacity planning.
This activity, in turn, "gave us an idea how much money could be saved, which created part of our business case," Clark said. "It also helped ensure the storage environment was big enough to cope without going over the top so we didn't end up wasting money." The Council purchased two 5 TB Dell EqualLogic iSCSI SANs, one of which runs at its new remote disaster recovery (DR) site.
Planning around storage volumes is another thing to bear in mind. Having multiple virtual machines, all with heavy workloads that attempt to access the same volume, will degrade performance. Therefore, thought needs to be put into mixing VMs with heavy and light workloads to balance the situation.
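One simple way to think about that balancing exercise is as a greedy placement problem: assign the heaviest VMs first, always to the currently least-loaded volume, so busy and quiet workloads end up mixed. The sketch below assumes per-VM IOPS profiles are already known; the names and figures are invented for illustration.

```python
# Greedy sketch: spread VMs across volumes so heavy and light workloads
# mix, rather than all heavy VMs contending for the same volume.
def balance_vms(vms: dict[str, int], volume_count: int) -> list[list[str]]:
    volumes: list[list[str]] = [[] for _ in range(volume_count)]
    loads = [0] * volume_count
    # Place heaviest workloads first, each on the least-loaded volume.
    for name, iops in sorted(vms.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))
        volumes[i].append(name)
        loads[i] += iops
    return volumes

vms = {"sql01": 600, "bi": 500, "mail": 400, "file": 300, "web01": 120, "dev": 50}
print(balance_vms(vms, 2))
```

A real placement would also weigh capacity, failure domains and application affinity, but the principle -- don't let heavy workloads pile onto one volume -- is the same.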
Another area where the impact of virtualisation tends to be underestimated is backup. In the physical world, an agent is usually installed onto a server's operating system (OS) to back up applications and data to disk or tape. In a virtualised environment, however, the VM is a complete logical environment that includes an OS, application and data, and is treated as if it were a physical server.
Although many organisations continue down the familiar route of simply installing a backup agent on each of their VMs -- mostly because it provides granular, application-specific recovery -- there are drawbacks to this approach. First, if the VM fails, it may be necessary to recreate and reconfigure the system from scratch before the backup can be restored because only the applications and data, rather than the OS, are backed up.
A second problem involves resource contention. Backup activities require high levels of server processing power, which may compromise the performance of the individual virtual machine being backed up, as well as the others running on the same hardware. It's therefore important to leave some unused server resources available to cope with the situation. Inadequate bandwidth can also lead to network bottlenecks.
A third challenge, according to Tom Brand, senior consultant at IT services company Morse, is that if multiple virtual machines are backed up to the same volume at the same time, "you could end up overwriting one with another or recover the wrong ones to the wrong place." Because all of these issues can lead to an increase in the administrative overhead required, "everything has to be planned carefully from the word go," he warns.
Yet another consideration relates to licensing costs. Although it was usual practice to purchase a backup licence per physical server, options in the virtual world range from buying a licence for each VM to procuring an enterprise licence for each virtual host such as VMware's ESX, which covers all of the virtual machines running on it. This scenario can work out either less or more expensive depending on the environment, and should be looked at on a case-by-case basis.
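The case-by-case nature of that comparison comes down to simple arithmetic: a per-host licence wins once enough VMs run on the host. The prices below are hypothetical placeholders, not real vendor list prices.

```python
# Hypothetical break-even between per-VM backup licences and a single
# per-host enterprise licence. Prices are illustrative assumptions.
PER_VM_LICENCE = 300.0
PER_HOST_LICENCE = 2500.0

def cheaper_option(vms_per_host: int) -> str:
    """Return which licensing model costs less for one host."""
    per_vm_total = PER_VM_LICENCE * vms_per_host
    return "per-host" if PER_HOST_LICENCE < per_vm_total else "per-VM"

print(cheaper_option(5))   # few VMs per host favours per-VM licences
print(cheaper_option(12))  # dense consolidation favours the host licence
```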
Where costs can really rack up, however, is at the storage-area network level. Replication and snapshot licence fees are usually charged on a per-GB basis; however, VMs comprise not just data but also OSs, which can significantly increase data volumes and, therefore, price.
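The effect of including the OS in every image is easy to quantify. The per-GB fee and sizes below are assumptions made up for the sketch, not figures from the article.

```python
# Illustrative per-GB replication licence cost: data-only backup vs
# whole-VM images that also carry the operating system.
PRICE_PER_GB = 0.50  # hypothetical licence fee per replicated GB

def replication_cost(gb_per_vm: float, vm_count: int) -> float:
    return gb_per_vm * vm_count * PRICE_PER_GB

data_only = replication_cost(40, 17)        # 40 GB of application data per VM
full_image = replication_cost(40 + 20, 17)  # plus roughly 20 GB of OS per image
print(f"data-only: {data_only}, full image: {full_image}")
```

Even with these modest assumptions, carrying the OS in every image adds half as much again to the per-GB bill.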
Another option is to use specialised backup tools that have been optimised to work in a virtualised environment. Such offerings minimise resource contention at the host and VM level and make it possible to restore the entire virtual machine instance or upload complete storage snapshots to newly created ones, enabling storage professionals to 'clone' virtual servers on demand.
The downside of such an approach is that in a virtual world it's usually necessary to restore an entire snapshot for recovery purposes, even if only one file is corrupted or lost.
Another consideration is that more capacity is likely to be required at the SAN level because, regardless of how much data has changed between each virtual snapshot being taken, VMs are always backed up in their entirety.
This means that such snapshots continue to use the full backup window and consume the same amount of disk or tape space unless additional tools such as data deduplication are employed to back up only the information that has been altered or added to. That's something that can add to the cost.
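The mechanism behind deduplication can be sketched in a few lines: each block of a snapshot is fingerprinted, and only blocks not already in the store consume new space. This is a toy model to show the principle, not how any particular product implements it.

```python
import hashlib

# Fingerprints of every block already stored.
store: set[str] = set()

def dedup_bytes_needed(snapshot_blocks: list[bytes]) -> int:
    """Return how many bytes of new storage this snapshot actually needs."""
    new = 0
    for block in snapshot_blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store.add(digest)
            new += len(block)
    return new

# First snapshot: everything is new.
print(dedup_bytes_needed([b"os-image", b"app-binaries", b"data-v1"]))
# Second snapshot: the OS and application blocks are unchanged,
# so only the modified data block consumes new space.
print(dedup_bytes_needed([b"os-image", b"app-binaries", b"data-v2"]))
```

This is why, without deduplication, repeated whole-VM snapshots keep consuming the full amount of disk or tape even when little has changed.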
One organisation that has gone down the virtualised tool route is Young & Co.'s Brewery. After consolidating 12 physical servers down to two Hewlett-Packard (HP) ProLiant DL580 quad-core machines running 17 virtual machines and introducing a Hitachi Data Systems 9570 SAN in 2004, it deployed Vizioncore's vRanger Pro backup and restore software in early 2007.
"Our systems were becoming more and more 24/7 because we'd introduced flexible working, and our 125 pubs needed to work until 1 am," Young & Co.'s Brewery's Khan explained. "The backup window was shrinking all the time, so we wanted a tool that could do online snapshots and enable us to back up and restore our VMs quickly and efficiently."
While the company had used Symantec's Veritas NetBackup to handle its physical servers, there were two downsides to continuing this approach. "We needed a client licence for each VM, which meant our costs were increasing. And because we were still reliant on tapes, our backup and restore was still stretched due to the speed of the tapes," Khan said.
The new offering, however, proved cheaper because it is licensed on a per-processor basis. It has also enabled Young & Co.'s Brewery to back up its data to disk and to mirror it to an offsite vaulting service without the need for human intervention.
"Our processes have become more streamlined as we don't have to rely on someone changing tapes or managing the system as closely, which has cut down drastically on admin," Khan said. "Recovery has also gone down from eight [hours] to between three and four hours, and restores can be run in parallel, whereas previously with tape contention, we could only do one at a time."
But Roger Bearpark, assistant head of ICT at Hillingdon Council, which runs a Compellent SAN, believes the real secret to success is planning. "You can't look at any system in isolation. It has to be treated as part of an integrated infrastructure because everything has a knock-on effect on everything else. It's about taking a holistic approach and seeing where everything fits together," he said.