As more businesses move to virtualising their datacentre infrastructure, they will start to experience the limitations of their existing hardware, IT administration tools, and procurement and licensing models.
On the hardware side, storage arrays may not offer the throughput required by highly virtualised architectures, so IT departments need to consider storage optimisation.
Server-side flash, for example, is one way to sidestep a storage array and provide better performance for virtual machine and desktop I/O: solid-state media is installed in the server itself, integrating the added flash with the server's own memory.
Similarly, while it is clearly more efficient to deploy workloads in a highly virtualised datacentre, this puts extra strain on the network infrastructure.
Looking at the next wave of virtualisation, Forrester principal analyst Galen Schreck warns that networks have to scale to very large numbers of ports; in fact, they need an entirely new architecture. In Forrester's TechRadar For Enterprise Architecture Professionals: Infrastructure Virtualization, Q2 2011 report he notes: “Networks need to adapt to the increase in application-to-application (also called east-west) traffic that runs counter to traditional designs that favour client-to-server communications (or north-south traffic). Chassis virtualisation allows firms to build networks that are simpler and more predictable by reducing the number of tiers required to aggregate traffic into a small number of core switches. In addition, management is greatly simplified because switches behave as if they are a part of a much larger virtual switch.”
Not owning physical assets may also cause problems: IT becomes more of a service to the business, governed by service-level agreements (SLAs).
While cloud computing has a strong business case, and virtualisation is key to underpinning it, Debra Lilley, chair of the UKOUG, says: “Simpler licensing is needed, and it also needs to match the cloud consumption model.” Lilley believes IT needs to shift to a utility-based model, with tools that not only manage the IT but also measure usage to support utility pricing.
“The virtualisation suppliers need to work with service providers and large enterprises who are running private and public clouds to ensure their management solutions work seamlessly together, and that the pricing models match the consumption models,” she adds.
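The utility model Lilley describes implies metering consumption and pricing it per unit, much like an electricity bill. A minimal sketch of that idea might look like the following (the resource names and per-unit rates are purely illustrative, not any supplier's actual pricing or API):

```python
from dataclasses import dataclass

# Hypothetical per-unit rates; in practice these would come from the
# service provider's price list and match its consumption model.
RATES = {
    "vcpu_hours": 0.05,       # price per virtual CPU hour
    "gb_ram_hours": 0.01,     # price per GB of RAM per hour
    "gb_storage_days": 0.002, # price per GB of storage per day
}

@dataclass
class UsageRecord:
    """Metered consumption for one tenant over a billing period."""
    vcpu_hours: float
    gb_ram_hours: float
    gb_storage_days: float

def bill(usage: UsageRecord) -> float:
    """Price metered usage at per-unit rates, as a utility model would."""
    total = (usage.vcpu_hours * RATES["vcpu_hours"]
             + usage.gb_ram_hours * RATES["gb_ram_hours"]
             + usage.gb_storage_days * RATES["gb_storage_days"])
    return round(total, 2)

# Example: one VM with 1 vCPU and 4GB RAM running for a 30-day month,
# plus 100GB of storage held for the month.
monthly_charge = bill(UsageRecord(vcpu_hours=720,
                                  gb_ram_hours=2880,
                                  gb_storage_days=3000))
```

The point of the sketch is that billing becomes a pure function of measured use, which is why Lilley argues the management tools must meter consumption as well as manage the infrastructure.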
Ray Titcombe, chairman of the IBM CUA, believes licensing is improving: “The marketplace in all things virtualisation has come a long way from the original days of outrageous and silly licence situations. I think licensing is now at a realistic place, and terms and conditions for VMware and others enable customers to purchase sensibly and expand their licence inventory at reasonable levels.”
However, in Titcombe's experience, it is still too complex to design and implement resilient virtual desktop environments. “Managing daily operations still seems a 'black art', favouring costly consulting rather than local informed technical management. I know of several companies that suffer outages with VDI facilities, and error log messages are too basic or non-existent to indicate problems.”
He warns that the situation with virtualisation management is similar to how server/disk storage systems were 10 years ago: “Some maturity is needed,” he says.
For instance, Titcombe says hardware integration from companies like NetApp and IBM, Dell and HP servers is quite advanced, but these products still have basic flaws that require expensive technical engineers to 'remedy'. He says: “In 2012 this should not be required - sophisticated plug and play should be here.” But since the server makers are also selling their own storage products, there is little motivation to make it too 'easy'.
Is virtualisation suitable for all workloads?
Clive Longbottom, founder of analyst Quocirca, notes: “Whereas some low-end, non-mission critical services can be easily moved over to a virtualised environment, many organisations baulk at the prospect of moving enterprise applications such as SAP or Oracle over onto virtual hardware, either due to the perceived complexity and possible impact on the organisation, or purely down to a feeling of needing to wait until it has been proven elsewhere.”