Server virtualisation is at a key turning point in its evolution. Now that its value has been proven on commodity and test-and-development servers hosting less critical utility applications, the technology can tackle higher-priority, I/O-intensive applications. However, there are some myths around virtualisation that need to be dispelled before organisations can move forward and gain real return on investment (ROI).
Myth #1: Virtualisation guarantees performance
One of the biggest misconceptions is that virtualisation assures enhanced performance. As virtualisation projects typically involve procurement of new hardware, it is true that many apps get a performance boost simply by being moved from an older, slower physical server into a virtual machine (VM) hosted on a newer, faster, larger server, particularly if this new server is a dedicated, purpose-built virtualisation appliance.
By sharing large servers, it's also true that each VM can take advantage of any excess host capacity that might be available. However, virtualisation in itself does not guarantee application performance. Overloaded host servers still deliver poor performance to every resident VM, and simply increasing reservations to create an overhead buffer reduces the sharing opportunity - from a cost perspective, this is as bad as dedicating physical resources to non-virtualised applications.
Resource pooling doesn't guarantee performance
Unreserved server capacity is doled out to applications according to a "share" setting. Because an application's effective allocation changes over time with the dynamic competition from other applications, shares should be monitored and analysed. Hungry applications can also draw on unused capacity reserved by other VMs (via the same share mechanism), but apps that run beyond their total entitlements are at high performance risk, as that extra capacity is never guaranteed.
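To see why a share-based entitlement is not a guarantee, consider this simplified sketch of proportional allocation (not any hypervisor vendor's exact algorithm; the VM names and figures are invented for illustration):

```python
# Sketch of proportional, share-based allocation: each VM is guaranteed
# its reservation, and the unreserved host capacity is split between VMs
# in proportion to their share settings.

def entitlements(host_mhz, vms):
    """vms: dict of name -> (reservation_mhz, shares)."""
    reserved = sum(res for res, _ in vms.values())
    spare = max(host_mhz - reserved, 0)
    total_shares = sum(sh for _, sh in vms.values())
    return {name: res + spare * sh / total_shares
            for name, (res, sh) in vms.items()}

# Hypothetical 10 GHz host with three VMs (invented numbers).
vms = {"web": (2000, 2000), "db": (3000, 1000), "batch": (0, 1000)}
print(entitlements(10000, vms))
# {'web': 4500.0, 'db': 4250.0, 'batch': 1250.0}
```

Add a fourth VM, or raise another VM's shares, and every existing entitlement above its reservation shrinks: only the reservation itself was ever guaranteed.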
Dynamic resource scheduling doesn't guarantee performance
Some performance problems can be alleviated by load-balancing VMs across a cluster. In many virtualisation projects, these cluster-level features are relied on to fix performance hotspots automatically, albeit on scheduling intervals of an hour or more. However:
- A cluster of servers used as a total resource pool may not be big enough for all clients at peak times.
- The scheduler may simply move a problem from one host to the next.
- The performance problem may not be solved by more server capacity, especially if it's really an I/O bottleneck.
- Even an hour of bad performance can be disastrous.
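The second bullet above can be seen in a toy model of an interval-based balancer (an invented sketch, not how any real scheduler works; host and VM names are hypothetical): moving the busiest VM to the least-loaded host only helps if that host has genuine headroom.

```python
# Toy interval-based balancer: migrate the busiest VM off the hottest
# host onto the least-loaded host.

def rebalance(hosts):
    """hosts: dict of host -> {vm_name: load_pct}; mutated in place."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    hot = max(load, key=load.get)
    cool = min(load, key=load.get)
    if hot == cool or not hosts[hot]:
        return None  # nothing to move
    vm = max(hosts[hot], key=hosts[hot].get)  # busiest VM on hot host
    hosts[cool][vm] = hosts[hot].pop(vm)
    return vm, hot, cool

# Both hosts are already busy: the move relieves esx1 but saturates esx2.
cluster = {"esx1": {"db": 70, "web": 25}, "esx2": {"app": 85}}
print(rebalance(cluster))             # ('db', 'esx1', 'esx2')
print(sum(cluster["esx2"].values()))  # 155 -- the hotspot has simply moved
```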
Myth #2: Virtualised IT silos can be managed in isolation
Another common misconception is that virtualising each IT silo makes it easier to manage each as an independent utility. There is no doubt that application owners find it much easier to deal with a standardised VM. However, the abstraction presented by virtualisation represents a huge challenge for cross-domain system management. If you can't see through each layer, it becomes difficult to find unintentional resource contention.
Bottlenecks are often hidden under multiple virtualisation layers
Finding performance bottlenecks buried under layers of cross-domain "wiring" (e.g. applications to servers to storage to SANs to arrays to disks) requires cross-domain data path visibility and end-to-end contention analysis. Many resource management tools can provide logical or physical connection mapping to the next IT domain up or down, but not usually with over-arching performance analysis across neighbouring domains.
Optimisation requires peeling back the virtual "curtains"
Cross-domain performance optimisation requires:
- Upstream visibility of client load profiles and service requirements.
- Downstream visibility into nested technology layers mapped to their physical performance characteristics.
- Analysis of non-linear performance delivery under load and the relevant cost-performance trade-offs.
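One way to picture the downstream-mapping requirement is as a walk through the layered "wiring" (a schematic sketch with invented component names, not any real tool's data model): resolving which physical resources a VM ultimately contends on means following the whole chain, and two VMs that look independent at the top can converge lower down.

```python
# Schematic cross-domain map: each layer points to the layer below it.
# Walking the chain shows which VMs silently share physical resources.
topology = {
    "vm-web":      "datastore-1",
    "vm-db":       "datastore-1",   # unintentional sharing with vm-web
    "datastore-1": "lun-7",
    "lun-7":       "array-A",
    "array-A":     "raid-group-3",
}

def data_path(node):
    """Follow the topology map from a node down to the physical layer."""
    path = [node]
    while path[-1] in topology:
        path.append(topology[path[-1]])
    return path

print(data_path("vm-web"))
# ['vm-web', 'datastore-1', 'lun-7', 'array-A', 'raid-group-3']
```

Here vm-web and vm-db share every layer below the datastore, so a burst of I/O from one degrades the other even though no server-level metric links them.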
Myth #3: Virtualised IT can be managed separately from the physical infrastructure
Sooner or later, a virtualised enterprise data centre will develop contention through the sharing of SAN-based storage between physically and virtually hosted applications. Sharing, both intentional and unintentional, can cause problems that are hard to diagnose, particularly if no over-arching intelligence is available.
Physical and virtual storage must be managed together
The reality in most data centres is that some applications are in transition: some run on physical servers, some are completely virtualised, and some have modules both physically and virtually hosted. For applications that require SAN-based storage performance, it's also critical to be able to manage end-to-end I/O performance.
Expert capacity planning is more important than ever
As performance-sensitive apps migrate into virtual environments with pooled and shared resources, capacity planning becomes even more critical. The goal is to maximise cost savings while still ensuring that sufficient resources are allocated to each application.
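That trade-off can be reduced to a minimal headroom check (invented numbers and a crude model; real planning would use measured demand percentiles rather than raw peaks): consolidating onto fewer hosts saves money only while the combined peak demand, plus a safety buffer, still fits within capacity.

```python
# Sketch of a peak-demand headroom check for consolidation planning.

def fits(host_capacity_mhz, peak_demands_mhz, buffer_pct=20):
    """True if the VMs' combined peak demand leaves the buffer free."""
    usable = host_capacity_mhz * (1 - buffer_pct / 100)
    return sum(peak_demands_mhz) <= usable

peaks = [3000, 2500, 1800]          # per-VM peak MHz (hypothetical)
print(fits(10000, peaks))           # True:  7300 <= 8000
print(fits(10000, peaks + [1200]))  # False: 8500 >  8000
```

The fourth VM pushes the host past its buffer: packing it on anyway would maximise the cost saving but put every tenant's peak-time performance at risk.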
So, on the whole, whilst performance in the data centre can be enhanced through virtualisation, it is vital to understand that this is not guaranteed without careful planning and management of the virtual environment. Performance within virtualised data centres requires a virtualisation-specific management solution that can assure virtual infrastructure performance, determine optimal load requirements and provide visibility across both virtual and physical environments.
Rupert Collier is the virtualisation product manager at distributor COMPUTERLINKS and a contributor to SearchVirtualDataCentre.co.uk.