With a virtual infrastructure, it's a jungle out there

Why virtualisation projects fail, and a rundown of what else is to come from this UK analyst. The first in a series of monthly articles, on what to do when your infrastructure has gone virtual, from market watcher Quocirca.

If 2009 was the year organisations started taking virtualisation seriously, 2010 is going to be the year when pan-data centre virtualisation becomes mainstream. Driven by the need to save energy, space and human resources, organisations whose physical IT platforms are rapidly becoming uncompetitive at the business level - through inflexibility, lack of resilience and high capital and operating costs - will need to adopt virtualisation to remain competitive in the market.

However, although virtualisation at the server, storage and network level is pretty well proven, Quocirca has found that many virtualisation projects are out of control and are seen by the business to be failing. This series of articles will identify the areas where Quocirca has seen users struggle with virtualisation, and will provide pointers to help data centre managers avoid the major pitfalls.

Ten areas that Quocirca will expand on over the coming months are:

1) Getting the architecture right
Virtualisation is not just about making everything into a single resource pool: it's about virtualising where it makes sense, using existing and new assets in the best possible way. Heterogeneity is possible - different workloads need different hardware.

2) Keeping the data centre dynamic
Next generation data centres need to be able to grow and shrink with the needs of the organisation. Building a 100,000 square foot data centre where only 40,000 square feet are to be used for the first few years does not make economic sense - nor does building one of 40,000 square feet when 45,000 will be required in a year's time - unless you can look at renting out the excess space as platform as a service (PaaS) or colocation data centre space.
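The overbuilding argument can be made concrete with some simple arithmetic. The build and carrying costs per square foot below are purely illustrative assumptions, not Quocirca figures; the point is only that idle floor space sinks both capital and ongoing cost.

```python
# Illustrative data centre sizing arithmetic. Both cost figures are
# hypothetical assumptions chosen for the example, not real market rates.
BUILD_COST_PER_SQFT = 1_000   # assumed capital cost per built square foot
CARRY_COST_PER_SQFT = 50      # assumed annual power/cooling/maintenance per built sq ft

def idle_space_cost(built_sqft, used_sqft, years):
    """Capital plus carrying cost sunk into unused floor space."""
    idle = built_sqft - used_sqft
    return idle * BUILD_COST_PER_SQFT + idle * CARRY_COST_PER_SQFT * years

# Building 100,000 sq ft but using only 40,000 for three years
# leaves 60,000 sq ft of floor space earning nothing.
print(idle_space_cost(100_000, 40_000, 3))
```

Renting out that excess as colocation space turns the idle term from a pure cost into revenue, which is the trade-off the paragraph above describes.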

3) The big enemy: the application
Hypervisors make server virtualisation easy. TCP/IP makes the network inherently virtual. Storage virtualisation is proven. However, many enterprise applications are still ill-architected for true virtualisation: the flexibility of running multiple small instances of an application, of being able to roll it out across an estate of unknown physical hardware, and the capability to use only the functionality you need are still some way from being supported by most vendors.

4) Keeping control of virtual images
One of the biggest issues with virtualisation is that it is so easy. A developer can create a new image of an operating system, application server, application and so on, spin it up in seconds, carry out some work and then spin the image back down again. However, the image still needs to be maintained - patches, upgrades, security and so on all need to be applied, and if you have thousands of images, then this can become a major ongoing task.
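One practical starting point is a regular audit that flags powered-down images overdue for patch review. The sketch below is a minimal illustration with an assumed inventory format and an assumed 30-day threshold; a real estate would pull this data from the hypervisor's management API rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Minimal sketch of image-sprawl auditing. The inventory records and the
# 30-day staleness threshold are assumptions for illustration only.
STALE_AFTER = timedelta(days=30)

inventory = [
    {"name": "dev-appserver-01", "last_booted": datetime(2010, 1, 4), "patched": False},
    {"name": "test-db-02", "last_booted": datetime(2009, 9, 15), "patched": False},
]

def audit(images, now):
    """Return names of unpatched images not booted within the threshold."""
    return [img["name"] for img in images
            if not img["patched"] and now - img["last_booted"] > STALE_AFTER]

print(audit(inventory, datetime(2010, 2, 1)))
```

Even a crude report like this turns "thousands of images" from an unknown liability into a worklist.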

5) Maintaining control of software licensing
The same virtual images that may need continuous patching and upgrading are also eating up licences. An image that has been used once, but has not been deconstructed, is still using up an operating system licence, as well as an application server and application licence. Dynamic licence management is required, using libraries and check in/check out capabilities.
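The check-in/check-out model described above can be sketched as a simple licence pool: an image draws a seat when it spins up and returns it when it spins down. Class and method names, and the pool size, are illustrative assumptions rather than any vendor's actual licensing API.

```python
# Sketch of a check-in/check-out licence library, per the model above.
# All names and seat counts here are hypothetical.
class LicencePool:
    def __init__(self, product, seats):
        self.product = product
        self.free = seats
        self.checked_out = set()

    def check_out(self, image_id):
        """Grant a licence seat to a virtual image as it spins up."""
        if self.free == 0:
            raise RuntimeError(f"No {self.product} licences left")
        self.free -= 1
        self.checked_out.add(image_id)

    def check_in(self, image_id):
        """Reclaim the seat when the image spins down."""
        if image_id in self.checked_out:
            self.checked_out.remove(image_id)
            self.free += 1

pool = LicencePool("OS", seats=2)
pool.check_out("img-001")
pool.check_out("img-002")
pool.check_in("img-001")   # spinning an image down frees its seat...
pool.check_out("img-003")  # ...so a new image can reuse it
```

Without the check-in step, the dormant image would hold its seat indefinitely - exactly the silent licence leak the paragraph above warns about.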

6) Optimising energy usage
Virtualisation should lower immediate energy needs just through server consolidation. However, there is more that can be done to drive energy usage much lower. Higher data centre temperatures are viable, targeted cooling and technologies such as hot aisle/cold aisle and heat pumping can all help in driving energy costs down.

7) Separating data and power pathways
Unstructured cabling in any data centre is bad practice, and in the virtualised data centre becomes a real issue due to the increased density of mission critical assets. By structuring the cabling, not only are air pathways maintained and cleaning made easier, but data transmission issues caused by running power too close to poorly shielded or unshielded data cabling can be avoided.

8) The dark data centre
Data centres are for machines and for bits and bytes. They should not be designed for humans, and the majority of data centres should be running in an essentially lights out environment, with automated monitoring raising events to administrators outside of the data centre itself.
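At its simplest, lights-out monitoring means polling sensors and turning any out-of-range reading into an event for someone outside the room. The thresholds below are illustrative assumptions, and a real system would deliver alerts over a paging or management channel rather than returning a list.

```python
# Minimal sketch of lights-out monitoring. Sensor names and allowed
# bands are assumptions for illustration.
THRESHOLDS = {"inlet_temp_c": (18, 27), "humidity_pct": (20, 80)}

def check(readings):
    """Return alert strings for any reading outside its allowed band."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not low <= value <= high:
            alerts.append(f"{sensor}={value} outside {low}-{high}")
    return alerts

print(check({"inlet_temp_c": 31, "humidity_pct": 45}))
```

The point is that no human needs to be on the floor: the readings travel out, not the administrators in.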

9) The death of the application
Massive applications should be seen as the dinosaurs they are rapidly becoming, and organisations should be looking at far more dynamic composite applications built from aggregated functionality. Web services and SOA help this - and cloud computing will move things even further forward.

10) The hybrid cloud
Much has been said about cloud, and much has been misunderstood. Cloud is an important part of the future, but it is not a replacement for the general data centre - it is an adjunct. Understanding this will help to create a scalable, responsive and highly competitive IT platform for the future.

The next article will look at how to approach the general architecture of a virtual data centre - the use of tiering, the role of virtualisation in business continuity, and how best to drive optimum utilisation.

Clive Longbottom is a Service Director at UK analyst Quocirca Ltd and a Contributor to SearchVirtualDataCentre.co.uk