Datacentre management: Taking a Rubik’s Cube approach to hypervisor fine-tuning

In this guest post, Frank Denneman, Chief Technologist at PernixData, advises operators to apply Rubik's Cube-solving strategies to datacentre management problems.

When attempting to solve a Rubik’s cube most people pick a colour and complete one face of the cube before moving on to the next. While this approach is fun, it is ultimately doomed to fail, because addressing the needs of one side of the cube causes the remaining five to be thrown into chaos.

The components of a virtual datacentre are similarly intertwined, as isolated changes in one part of the IT infrastructure can have massive implications elsewhere. For example, a network change might result in bad SAN behaviour. Or the introduction of a new virtual machine might impact other physical and virtual workloads residing on a shared storage array.

The shared components of the virtual datacentre are fundamental to its success, as they allow operating systems and applications to be decoupled from physical hardware. This lets workloads move around dynamically, so they can be paired with the available resources.

This provides better ROI and arguably benefits the quest for business continuity. But it’s this dynamism that makes it so difficult to solve the problems when they arise.

Tackling datacentre management problems

Due to the sheer complexity of today’s datacentre, troubleshooting is typically done per layer. This is an interesting challenge in the world of virtual datacentres, where more virtual machines and workloads are introduced daily, with varying behaviour, activity and intensity.

While context is critical for virtual machine troubleshooting, it is very hard to attain because the hypervisor absorbs useful information from the layers below it.

Furthermore, applications running on top of the hypervisor are very dynamic, which makes traditional monitoring and troubleshooting methods inadequate. You need to take a step back and ask, “Are my current monitoring and management tools providing an answer to a single side of the cube, or are they providing enough perspective to solve the whole puzzle?”

The only way to solve all aspects of a datacentre management problem is to use big data analytics, which has been transforming how other industries operate for years.

Wal-Mart and Target, for example, are able to correlate many data points to accurately predict customer behaviour. Similarly, bridges are equipped with sensors and big data analytics to identify changes in heat signatures, vibrations and structural stress to prevent mechanical and structural failures. With this in mind, IT should use the power of big data analytics to improve results in their own datacentres.

Applying big data analytics inside the hypervisor taps into the vast collection of data present there, with insight into application, storage and other infrastructure elements. You can create a context-rich environment that provides an in-depth understanding of the layers above and below the hypervisor.

For example, you can get unprecedented insight into workloads generated by virtual machines, and how they impact lower-level infrastructure, such as storage arrays. You can distinguish workloads from one another, and understand how the infrastructure reacts to each of them.
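As a minimal sketch of what distinguishing workloads from hypervisor telemetry might look like, the snippet below classifies virtual machines by their I/O profile. All VM names, metrics and thresholds here are invented for illustration; a real analytics engine would correlate far richer data than this.

```python
# Hypothetical per-VM telemetry as the hypervisor might expose it:
# (average I/O size in KB, fraction of operations that are reads).
# These names and numbers are illustrative, not from any real system.
samples = {
    "vm-oltp":   (8, 0.70),    # small, read-heavy I/O
    "vm-backup": (256, 0.05),  # large, write-heavy I/O
    "vm-web":    (4, 0.90),
    "vm-etl":    (512, 0.10),
}

def classify(io_size_kb, read_ratio):
    """Crude rule-based split: large sequential writers vs small random readers."""
    if io_size_kb >= 64 and read_ratio < 0.5:
        return "sequential/write-heavy"
    return "random/read-heavy"

# Build a workload profile per VM from the telemetry.
profiles = {vm: classify(*metrics) for vm, metrics in samples.items()}

for vm, profile in sorted(profiles.items()):
    print(f"{vm}: {profile}")
```

Even this toy rule shows the principle: once per-VM metrics are visible at the hypervisor layer, workloads that would look identical from the storage array's perspective can be told apart and handled accordingly.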

This, in turn, helps developers optimise their code for the infrastructure, which then lets infrastructure teams optimise their systems as needed.

With big data analytics inside the hypervisor, everyone wins. You can view your datacentre in a holistic fashion, instead of solving individual problems one at a time.
