Everything you ever wanted to know about virtualisation

Danny Bradbury

Only a few years ago, getting one operating system to run another was an excruciating task. The original approach, called emulation, enabled Windows to run Linux, for example, but the overhead of recreating an entire hardware platform in software was prohibitive.

Virtualisation has evolved as an alternative to emulation, and the hypervisor - a small segment of code designed to share physical resources between a number of logical virtual machines - is considered to be the most efficient way of doing it. The hypervisor will take contended resources such as interrupt controllers and network cards, and present a synthetic version of them to each of the guest operating systems running in their own virtual machines.


What is a hypervisor?

The definition of a hypervisor can be confusing if it is not pinned down. Some commentators describe two types of hypervisor: type 1, which runs directly on the hardware and supports a number of guest operating systems, and type 2, which runs on top of a conventional operating system that itself runs on the hardware.

This type 2 hypervisor then supports guest operating systems of its own. Stretching the term this far muddies it; the latter is better called simply virtualisation software. For the purposes of this article, the only hypervisor is the type that runs on the bare metal.

Why use one? IBM asked itself that in the late 60s, when it began using hypervisor technology in its mainframe equipment. Mainframes had long been large enough to run multiple software programs alongside each other, but they were also focused on reliability: they supported mission-critical systems and could not be allowed to fail. Isolating each system with a hypervisor made the whole machine more reliable.

Gradually, as other architectures emerged and became more capable, introducing virtualisation to those platforms was a logical step. "If you look at Power from IBM or Sparc from Sun, you see virtualisation appearing on those systems first," says Mike Neil, general manager of virtualisation strategy in Microsoft's Windows Server division.

X86 was the last architecture to benefit from hypervisors, not least because it was not designed with them in mind. But 64-bit computing, combined with multicore processors and a general increase in performance, eventually made virtualisation inevitable there too. "I don't need a quad-core 10GB machine to run my DNS server," quips Neil.

However, hypervisor vendors faced significant challenges making the technology work on the x86 platform. To understand why, we need to examine a hypervisor's underlying functions, and these revolve around the physical resources that it allocates.

A hypervisor handles interrupts between the guest operating systems and the CPU, schedules CPU time among those guests, allocates cores to virtual machines, manages devices and allocates memory.
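
To make that division of labour concrete, here is a toy sketch in C of the round-robin style of CPU scheduling described above. All of the names are illustrative; no real hypervisor exposes an interface this simple.

    /* Illustrative sketch of round-robin vCPU scheduling.
     * All names here are invented, not any real hypervisor's API. */
    #include <stdio.h>

    #define NUM_VCPUS 3
    #define TIME_SLICES 6

    struct vcpu {
        int id;
        int runnable;   /* 1 if the guest has work to do */
    };

    int main(void)
    {
        struct vcpu vcpus[NUM_VCPUS] = { {0, 1}, {1, 1}, {2, 1} };
        int current = 0;

        /* Each iteration models one timer tick: the hypervisor preempts
         * the running guest and hands the physical core to the next
         * runnable virtual CPU. */
        for (int tick = 0; tick < TIME_SLICES; tick++) {
            printf("tick %d: vCPU %d gets the core\n", tick, vcpus[current].id);
            do {
                current = (current + 1) % NUM_VCPUS;
            } while (!vcpus[current].runnable);
        }
        return 0;
    }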

How it works

Because a guest operating system no longer speaks directly to the CPU, the hypervisor must act as the intermediary whenever a guest has an interrupt for the processor. "Here is something to think about: scheduling," says Johan Fornaeus of hypervisor company Wind River.

When a guest wants to interrupt the processor, and when the processor has a response to that interrupt, the hypervisor must manage the delivery of those messages. Because guest operating systems compete for processor time, not all guests will be running all the time. The hypervisor must store the response from an interrupt until the guest is operational again.
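
A minimal sketch in C of that buffering, with hypothetical structures and names, might look like this: an interrupt that arrives for a descheduled guest is queued, and delivered when the scheduler puts the guest back on a core.

    /* Illustrative sketch: holding an interrupt for a guest that is not
     * currently scheduled. Structures and names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_PENDING 8

    struct vcpu {
        int id;
        bool running;
        int pending[MAX_PENDING];  /* queued interrupt vectors */
        int npending;
    };

    /* Called when hardware finishes an I/O request on the guest's behalf. */
    static void post_interrupt(struct vcpu *v, int vector)
    {
        if (v->running) {
            printf("vCPU %d running: inject vector %d now\n", v->id, vector);
        } else if (v->npending < MAX_PENDING) {
            v->pending[v->npending++] = vector;  /* hold until rescheduled */
        }
    }

    /* Called when the scheduler puts the guest back on a core. */
    static void resume_vcpu(struct vcpu *v)
    {
        v->running = true;
        for (int i = 0; i < v->npending; i++)
            printf("vCPU %d resumed: delivering held vector %d\n",
                   v->id, v->pending[i]);
        v->npending = 0;
    }

    int main(void)
    {
        struct vcpu guest = { .id = 1, .running = false };
        post_interrupt(&guest, 33);  /* disk completion while descheduled */
        resume_vcpu(&guest);         /* delivery happens here */
        return 0;
    }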

The hypervisor must also coordinate the memory in which all of this takes place. Because guest operating systems do not know of each other, they will all assume that they have access to the same portions of physical memory.

In reality, the hypervisor must mediate between them all, translating between the address space that each guest operating system thinks it is accessing and the real physical address space.
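
As a rough illustration, assuming a hypothetical per-guest lookup table, the translation amounts to swapping the guest's page frame number for a real machine frame number while keeping the offset within the page:

    /* Illustrative sketch: each guest believes its "physical" memory
     * starts at zero; the hypervisor maps those guest pages onto
     * distinct machine pages. Table contents are invented. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define GUEST_PAGES 4

    /* Per-guest table: guest page frame number -> machine frame number. */
    static const uint64_t p2m[GUEST_PAGES] = { 0x100, 0x2a7, 0x093, 0x1ff };

    static uint64_t guest_to_machine(uint64_t gpa)
    {
        uint64_t gfn = gpa >> PAGE_SHIFT;          /* guest frame number */
        uint64_t offset = gpa & ((1 << PAGE_SHIFT) - 1);
        return (p2m[gfn] << PAGE_SHIFT) | offset;  /* real machine address */
    }

    int main(void)
    {
        uint64_t gpa = 0x2004;  /* guest thinks this is physical 0x2004 */
        printf("guest 0x%llx -> machine 0x%llx\n",
               (unsigned long long)gpa,
               (unsigned long long)guest_to_machine(gpa));
        return 0;
    }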

Finally, an operating system might think it has access to a generic network card, but in reality, the hypervisor may be translating those access calls to a particular device driver. And of course, access to that device must be shared between the different guest operating systems.
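
The pattern can be sketched as a simple dispatch: the guest touches what it believes is a generic network card register, the access traps to the hypervisor, and the hypervisor forwards it to the real driver. Everything in this C fragment is illustrative.

    /* Illustrative sketch: a synthetic NIC register write forwarded to
     * whichever physical driver owns the hardware. All names invented. */
    #include <stdio.h>
    #include <stdint.h>

    struct nic_driver {
        const char *name;
        void (*transmit)(const char *owner, uint32_t value);
    };

    static void real_nic_tx(const char *owner, uint32_t value)
    {
        printf("[host driver] tx doorbell 0x%x on behalf of %s\n",
               value, owner);
    }

    static struct nic_driver host_nic = { "host-nic", real_nic_tx };

    /* Trap handler: guest touched the synthetic device's TX register. */
    static void synthetic_nic_write(const char *guest, uint32_t value)
    {
        /* The shared physical card is multiplexed between guests here. */
        host_nic.transmit(guest, value);
    }

    int main(void)
    {
        synthetic_nic_write("guest-A", 0x1);
        synthetic_nic_write("guest-B", 0x1);
        return 0;
    }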

The x86 architecture did not support such activities natively. Instead, it left the hypervisor to handle it all in software, which created a computing overhead. "It was hard to implement virtualisation software, especially the virtualisation of the CPU," explains Mike Neil. "The hypervisor had to wrest control from a virtual machine and give that slice of processing time to another virtual machine."

Chip architectures

It took Intel, with its VT technology, and AMD, with its AMD-V extensions to the x86 architecture, to fix that. They gave the hypervisor a privileged status, making it easier for the thin piece of code to adopt the role of supervisor, interpreting instructions from the guest operating systems without passing them blindly through to the processor.

This becomes important if, for example, a guest operating system issues a ring zero-level instruction such as HLT, which would normally tell the processor to idle. In a virtualised environment the processor will probably still be working on other tasks, for other systems, so the hypervisor must make the instruction local to the guest operating system that issued it, so that it does not affect the whole computer.
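
A toy version of that trap-and-emulate pattern, with invented names rather than Intel's or AMD's actual interfaces, might look like this: the HLT exits to the hypervisor, which idles only the virtual CPU concerned and hands the physical core to another guest.

    /* Illustrative sketch of trap-and-emulate for HLT. Names are
     * invented, not the real VT or AMD-V programming interface. */
    #include <stdio.h>
    #include <stdbool.h>

    enum exit_reason { EXIT_HLT, EXIT_IO, EXIT_OTHER };

    struct vcpu { int id; bool halted; };

    static void schedule_next(struct vcpu *other)
    {
        printf("physical core handed to vCPU %d\n", other->id);
    }

    static void handle_vmexit(struct vcpu *v, enum exit_reason why,
                              struct vcpu *next)
    {
        switch (why) {
        case EXIT_HLT:
            v->halted = true;   /* idle is local to this guest only */
            schedule_next(next);
            break;
        default:
            /* other exits (I/O, page faults, ...) handled elsewhere */
            break;
        }
    }

    int main(void)
    {
        struct vcpu a = { 0, false }, b = { 1, false };
        handle_vmexit(&a, EXIT_HLT, &b);  /* guest A executed HLT */
        printf("vCPU %d halted=%d, machine still busy\n", a.id, a.halted);
        return 0;
    }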

Roger Klorese, senior director of technical marketing for XEN at Citrix, explains that chip suppliers are starting to make their hardware extensions richer. For example, memory management used to be done purely in hypervisor software, but nested page tables let the hardware maintain the mapping between guest "physical" memory and real machine memory, which the hypervisor would previously have maintained itself.

This leaves the guest operating system to handle the virtual memory that it has been allocated, as an operating system would normally do.
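
The following sketch shows the two-stage walk in miniature, using flat arrays as stand-in page tables: stage one is the guest's own table (virtual to guest-physical), stage two the hardware-maintained nested table (guest-physical to machine). Both tables are invented for illustration.

    /* Illustrative sketch of the two-stage walk nested page tables
     * perform in hardware. Flat arrays stand in for page tables. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGES 4

    static const uint64_t guest_pt[PAGES]  = { 2, 0, 3, 1 };  /* stage 1 */
    static const uint64_t nested_pt[PAGES] = { 7, 5, 6, 4 };  /* stage 2 */

    static uint64_t walk(uint64_t guest_virtual_page)
    {
        uint64_t guest_physical = guest_pt[guest_virtual_page];
        return nested_pt[guest_physical];  /* machine page, no exit to
                                              the hypervisor needed */
    }

    int main(void)
    {
        for (uint64_t p = 0; p < PAGES; p++)
            printf("guest virtual page %llu -> machine page %llu\n",
                   (unsigned long long)p, (unsigned long long)walk(p));
        return 0;
    }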

"And we are also seeing the beginnings of chips shipping with I/O support. That requires some awareness on the I/O card's side as well," Roger Klorese says.

Paravirtualisation

Another approach to handling the guest operating system is paravirtualisation. In this scenario, the guest operating system is modified to become aware of the hypervisor, enabling it to omit certain instructions that could cause the hypervisor problems.
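
The idea can be sketched as follows; the hypercall here is a made-up stub, and real interfaces such as Xen's differ considerably in detail. Instead of issuing a privileged instruction that the hypervisor would have to trap, the modified guest kernel simply calls the hypervisor.

    /* Illustrative sketch of paravirtualisation: the guest kernel calls
     * the hypervisor directly rather than issuing a privileged
     * instruction that must be trapped. The hypercall is a stub. */
    #include <stdio.h>

    /* Stand-in for a hypervisor entry point. */
    static void hypercall_set_timer(unsigned long deadline_ns)
    {
        printf("hypervisor: guest requested timer at %lu ns\n", deadline_ns);
    }

    /* An unmodified kernel would write a privileged timer register and
     * get trapped; the paravirtualised path avoids the trap entirely. */
    static void guest_arm_timer(unsigned long deadline_ns)
    {
        hypercall_set_timer(deadline_ns);
    }

    int main(void)
    {
        guest_arm_timer(1000000);
        return 0;
    }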

XEN originally worked only with modified versions of the Linux operating system, until the processor vendors released chips with virtualisation extensions.

Microsoft has also included an element of paravirtualisation in its own system. Windows Server 2008 includes what the company calls "enlightenment", enabling the operating system to talk directly to the hypervisor, issuing commands designed to make it easier for the hypervisor to control its resources.

In this "enlightened" state, Windows Server 2008 can request things like reassignment of processor cores and reallocation of memory. This can improve efficiency because the operating system can give up memory that it does not need, for example, says Neil.
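
A heavily simplified sketch of that memory give-back, with an entirely hypothetical hypercall rather than Microsoft's actual enlightenment interface, might look like this:

    /* Illustrative sketch: an aware guest returns pages it no longer
     * needs so the hypervisor can reallocate them. The hypercall and
     * page numbers are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    static void hypercall_release_pages(const uint64_t *pfns, int n)
    {
        printf("hypervisor reclaimed %d pages starting at pfn 0x%llx\n",
               n, (unsigned long long)pfns[0]);
    }

    int main(void)
    {
        /* Guest-side: pages its allocator has freed and will not reuse. */
        uint64_t unused[] = { 0x210, 0x211, 0x212 };
        hypercall_release_pages(unused, 3);
        return 0;
    }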

The problem with old-school architectures

Why can't all this simply be done with a microkernel? Developed years ago, microkernel-based operating systems stripped down the privileged kernel part of the software, moving many functions out into separate components that the kernel then co-ordinates.

Ostensibly, this creates a similar situation to the hypervisor, with a small 'shim' that marshals communications between these components, and between the components and the processor. However, as experts from embedded hypervisor company VirtualLogix explain, there are significant differences between the two.

Microkernels are not suited to running meaningful applications, say its experts. Instead, they must be extended with higher-level APIs to run UNIX-like applications. Whereas hypervisors know only about the virtual machines and the guest operating systems they are managing, microkernels take it upon themselves to handle tasks, threads and memory contexts, impinging on ground normally occupied by the guest operating system.

"Hypervisors are non-intrusive. We don't want to change the guest operating system," explains J.P. LeBlanc, vice-president of marketing and business development for the company.

Nevertheless, there is still a place for microkernel-based systems in embedded environments, says J.P. LeBlanc. "My phone owes me 40 seconds every day," he says, referring to the boot process for smart phones with monolithic kernels, which have to load all their components before they can do anything.

Things are different with a microkernel, he explains. "We bring things up in two seconds, and the other stuff comes up in the background." So, while microkernels may not make good virtual machine managers, they are useful in situations where equipment needs to be nimble and quick to respond.

The management question

One element that must be addressed with a hypervisor is management. Not only must the virtual environment be prepared during the pre-deployment phase, but it also needs to be monitored and managed once it is operating. This can be done with various levels of sophistication, but the holy grail is to have virtual machines started and stopped dynamically according to the system load.

XEN's Klorese says that in an ideal world, this would be linked to an application SLA, so that if performance dropped dangerously close to the threshold, more guest operating systems could be started automatically in more virtual machines, all of it managed by the hypervisor.

In practice, this requires active involvement from systems vendors who would need to provide an end-to-end view of an application's performance from the server to the client.
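
As a thought experiment, the idealised control loop Klorese describes might look something like the following sketch, in which every function and threshold is a stand-in:

    /* Illustrative sketch of an SLA-driven scaling loop: watch
     * application latency and request another VM as it nears the
     * threshold. All values and functions are invented. */
    #include <stdio.h>

    #define SLA_LIMIT_MS 200.0
    #define HEADROOM     0.9   /* act at 90% of the SLA threshold */

    static double measured_latency_ms(int sample)  /* stub metric source */
    {
        double samples[] = { 120.0, 150.0, 185.0, 140.0 };
        return samples[sample];
    }

    static void start_vm(void) { printf("management: starting another VM\n"); }

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            double latency = measured_latency_ms(i);
            printf("latency %.0f ms\n", latency);
            if (latency > SLA_LIMIT_MS * HEADROOM)
                start_vm();  /* scale out before the SLA is breached */
        }
        return 0;
    }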

Nevertheless, Scott Drummonds, group manager for performance marketing at VMware, says that the company is making moves in this direction. It acquired a company called B-hive and has renamed its product AppSpeed. Now in public beta, AppSpeed measures application latency using a multi-tiered client/server architecture operated from outside the virtual machines.

The product will measure the time taken to get a transaction from the front end through to the database and back, for example. "These don't just map to user experiences - these are the user experiences," he says.

Both the microkernel and the hypervisor are long-established technologies (Apple's OS X is partly based on a microkernel, for example). Nevertheless, they are suited to different tasks. As the hypervisor evolves, and as the x86 architecture evolves alongside it, we will see an increasing level of performance and sophistication in the way that it operates.
