Indeed, the recession is a key driver in attempts to sweat data centre hardware and extract further value from currently underutilised processors by means of virtualisation. Research conducted by Quocirca in 2008 found a strong correlation between the need to deal with data centre power and floorspace constraints and interest in the solutions virtualisation offers.
VMworld Europe 2009, a three-day event at the Palais des Festivals in Cannes next week, will highlight many of the newest virtualisation strategies. This is a good time for storage professionals to get up-to-date information on virtualisation and its potential to help their shops maximise investments.
One of the touted benefits of virtualisation is faster deployment of a new virtual server image onto an existing physical server. Virtualisation vendors -- who clearly have an agenda to push -- generally point to time savings on the order of several weeks when provisioning new images compared to the deployment time and effort required to create a new physical server. While such claims rest on a lot of assumptions, such as the existence of streamlined and automated change approval processes for new images, there is no doubt that provisioning new server images is, comparatively speaking, a vastly more agile process in the virtual world. But such agility can present a new raft of problems.
Making it easier to create new images means that more server images will be created. Developers and testers will create new images to isolate their nascent applications from the production workload and the work of their fellow coders. Additional production processing environments may have to be brought online to cope with peak load conditions and cyclic variations in load. Images may also be moved from one server to another to balance workload or to ensure absolute isolation between various images.
Unchecked, the ability to rapidly provision new images may quickly lead to a veritable kitchen drawer full of server images – a condition known as virtual server sprawl. In the decades since Wintel and Unix servers entered the data centre, the IT department has often struggled to stay abreast of the status and existence of the many physical machines they have deployed, a problem that is only amplified by virtual server sprawl.
IT departments should ensure they have the means to keep track of which images have been created, who created them, their purpose and how often they are being used. Ideally, provision should be made to automatically detect infrequently used images and offload them from the physical machine, following a tiered hierarchical storage management (HSM) approach. Such an approach should include the ability to trigger a graceful shutdown of the image, as well as migration of a copy of the image's file structure to nearline and then offline media, with eventual permanent deletion. All of this should be done in a fashion that notifies the image creator of the current status and allows them to easily resurrect the virtual machine up until such time as it is permanently deleted.
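To make that lifecycle concrete, here is a minimal sketch of such a tiered demotion policy. The tier names, idle thresholds and image fields are illustrative assumptions, not any vendor's actual API:

```python
from datetime import datetime, timedelta

# Tiers an image moves through as it sits idle; thresholds (in days idle)
# are invented for illustration and would be set by local policy.
TIERS = ["online", "nearline", "offline", "deleted"]
IDLE_THRESHOLDS = {"online": 30, "nearline": 90, "offline": 365}

class VMImage:
    def __init__(self, name, owner, last_used):
        self.name = name
        self.owner = owner
        self.last_used = last_used
        self.tier = "online"

def demote_idle_images(images, now):
    """Move images idle past their tier's threshold one tier down,
    returning notifications so the owner can resurrect the image
    before it reaches permanent deletion."""
    notifications = []
    for img in images:
        threshold = IDLE_THRESHOLDS.get(img.tier)
        if threshold is None:  # already deleted: nothing to do
            continue
        idle_days = (now - img.last_used).days
        if idle_days > threshold:
            next_tier = TIERS[TIERS.index(img.tier) + 1]
            notifications.append(
                f"{img.owner}: image '{img.name}' idle {idle_days} days, "
                f"moving {img.tier} -> {next_tier}"
            )
            # A real implementation would trigger a graceful shutdown and
            # migrate the image's file structure here before retiering.
            img.tier = next_tier
    return notifications
```

Run periodically, each pass demotes only one tier at a time, so an image passes through nearline and offline stages -- with owner notifications at each step -- before anything is permanently deleted.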
It is important to ensure that new images are automatically brought into existing information security, information governance and disaster recovery (DR) regimes. An image you are unaware of will be an image that is not being backed up, and that may represent a potential security exposure due to inappropriate access controls, or lax system and application patching. It is also necessary to keep track of the software licences each image consumes to ensure you do not exceed your total allowable count.
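The licence-counting part of that regime amounts to comparing what is installed across all discovered images against what you are entitled to run. A small sketch, with invented product names and counts:

```python
from collections import Counter

def licence_overruns(image_software, entitlements):
    """Given a mapping of image name -> list of installed products and a
    mapping of product -> allowed instance count, return the products
    whose installed count exceeds the entitlement, as
    product -> (installed, allowed)."""
    installed = Counter()
    for products in image_software.values():
        installed.update(products)
    return {
        product: (count, entitlements.get(product, 0))
        for product, count in installed.items()
        if count > entitlements.get(product, 0)
    }
```

An unknown image shows up here as an unexplained jump in a product's installed count, which is exactly the kind of exposure the paragraph above warns about.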
In addition, check that the deployment of many virtual servers does not cause a bottleneck in the physical infrastructure. For example, with all of the images on a physical server sharing the same network connection, care must be taken not to overload the network interface card (NIC) or the local-area network (LAN) connection with too much concurrent network traffic. Such a problem might occur if all of the virtual servers sharing a physical machine attempt to perform an application backup at the same time, resulting in a heavy network load or a shortage of drive units if backing up to tape.
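One simple mitigation for the backup collision described above is to stagger start times across the backup window for the virtual servers sharing a host. A sketch, with the window length and server names assumed for illustration:

```python
from datetime import datetime, timedelta

def stagger_backups(vms, window_start, window_minutes):
    """Spread backup start times evenly across the backup window so the
    co-resident virtual servers do not all hit the shared NIC (or the
    tape drives) at once. Returns vm name -> scheduled start time."""
    if not vms:
        return {}
    step = timedelta(minutes=window_minutes / len(vms))
    return {vm: window_start + i * step for i, vm in enumerate(vms)}
```

Four virtual servers in a two-hour window would start 30 minutes apart; whether that spacing is enough depends on how long each backup actually runs against the shared link.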
Getting the most out of server and storage virtualisation requires planning and consideration of existing policy and processes. At VMworld Europe 2009, you should challenge virtualisation technology providers to architect a solution that takes the entire environment into consideration rather than simply the world of their own product installation. Virtualisation provides real benefits if done correctly but, like every other technology, spawns a real set of problems if not carefully managed.
Simon Perry will be attending VMworld Europe 2009 from 23-26 February, and will further explore the server and storage virtualisation issues raised there on SearchStorageUK after the event.