Squeeze more out of your CPUs


Virtualisation systems aim to reallocate resources according to shifting business needs, maximising CPU usage and saving companies time and money. But if the systems are to work, separate departments must be prepared to pool their computing resources, writes Danny Bradbury.

When the open systems movement started at the turn of the 1990s, many people thought it would usher in a new era of interoperability. Industry standards would level the playing field, they predicted, making it possible to plug anything into anything and have it work with only the minimum of fuss.

But while systems from different suppliers can be made to work together in a way that was unfeasible in the supplier-dependent era, it is still far from easy. Managing the systems is even more difficult, because IT experts must accommodate the idiosyncrasies of each platform and device.

Suppliers hope to solve that problem by introducing a new product category: systems virtualisation. In a virtualised system, a layer of middleware is introduced between the systems administrator and the disparate resources that make up the infrastructure; the administrator gives instructions to the middleware, asking it for specific resources as and when the need arises. The middleware deals with the necessary devices, reallocating storage space, CPU power and other necessary elements under the covers.
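
In crude terms, the middleware acts as a broker: the administrator asks for capacity, and the broker decides which physical devices will supply it. The sketch below, written in Python, is purely illustrative; the class and method names are invented rather than taken from any supplier's product, but it captures the idea of requesting resources without touching the underlying devices.

# Illustrative sketch only: a toy resource broker standing in for the
# virtualisation middleware described above. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_gb: int

class ResourceBroker:
    def __init__(self, hosts):
        self.hosts = hosts

    def allocate(self, cpus_needed, gb_needed):
        """Pick any host with enough spare CPU and storage and claim it."""
        for host in self.hosts:
            if host.free_cpus >= cpus_needed and host.free_gb >= gb_needed:
                host.free_cpus -= cpus_needed
                host.free_gb -= gb_needed
                return host.name      # the caller never deals with the device directly
        raise RuntimeError("no host can satisfy the request")

broker = ResourceBroker([Host("blade-1", 8, 500), Host("blade-2", 4, 200)])
print(broker.allocate(cpus_needed=2, gb_needed=100))   # "blade-1"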

Mainframes have always used virtualisation techniques to ensure that the maximum resource is available to the business at any given time. The virtualisation products now starting to enter the non-mainframe market can be broken down into three components: storage, processing power and network resources. Many companies are concentrating solely on the storage part of the story, while others, such as Canadian firm Inkra, are focusing on networking. Few firms are attempting to bring all three components together; but there are some, Sun Microsystems, Hewlett-Packard and IBM among them.

One of the main benefits of bringing systems virtualisation to the non-mainframe market is economic. With many CPUs and storage systems under-used, it makes sense to reallocate resources dynamically according to the need of particular applications or lines of business. If, for example, a technical support application is barely ticking over, some of its CPU time could be given to an accounting application being heavily used by a finance department coming up to its reporting period. Should the situation reverse, however, the CPU allocation would need to be adjusted yet again. Doing this manually would be too time-consuming.
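
As a rough illustration of the arithmetic involved, a rebalancing routine might shift CPU share from a lightly loaded application to a heavily loaded one along the following lines. The Python below is a hypothetical sketch: the application names, shares and thresholds are invented for the example, not drawn from any particular product.

# Hypothetical sketch: move CPU share from lightly loaded applications to
# heavily loaded ones. Names, shares and thresholds are invented.
shares = {"tech_support": 40, "accounting": 40, "web": 20}      # % of total CPU
load = {"tech_support": 0.05, "accounting": 0.95, "web": 0.50}  # utilisation of own share

def rebalance(shares, load, step=10, low=0.2, high=0.8):
    donors = [a for a in shares if load[a] < low and shares[a] > step]
    needy = [a for a in shares if load[a] > high]
    for donor, taker in zip(donors, needy):
        shares[donor] -= step
        shares[taker] += step        # adjusted automatically, not by hand
    return shares

print(rebalance(shares, load))
# {'tech_support': 30, 'accounting': 50, 'web': 20}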

Theoretically, using system resources more efficiently in this way could help systems administrators squeeze more applications on to less equipment.

John Mills, technical marketing manager at Sun Microsystems, stresses the value of "sweating your assets" this way. With the company's N1 virtualisation initiative - announced last February but so far unaccompanied by any products - he hopes to raise average CPU usage to 85% from a figure that can, he says, be as low as 15%. HP puts the average CPU usage figure at 35%.

There are three main phases to the N1 strategy. The first, originally scheduled for late last year but now likely to happen in the first quarter of 2003, will enable customers to marshal various parts of their systems infrastructure into a single set of resources. The second phase, also due to start this year, will enable systems managers to hook business processes into the front-end middleware. Sun points to electronic banking as an example of a business service that can be described as a single entity to the N1 infrastructure, and served accordingly.

The third phase of the initiative focuses on automating service-led systems management using policies. Business-level policies can be set to govern the allocation of resources to specific business services. A practical example might be switching priorities for different services to customers depending on the system load - so, an investment bank providing free quotes to visitors on its website as a means of generating business might want to throttle back the CPU time allocated to such quotes as the number of transactions initiated by paying customers increases.
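
A policy of that kind can be expressed very simply. The sketch below is hypothetical and is not based on any N1 interface, but it shows the shape of a business-level rule driving a resource decision: as the rate of paying transactions climbs, the CPU share granted to free quotes falls.

# Hypothetical policy rule: scale back the CPU share given to free stock
# quotes as paying transactions increase. Thresholds are invented.
def quote_cpu_share(paying_tx_per_sec, max_share=30, min_share=5):
    if paying_tx_per_sec < 100:
        return max_share             # quiet period: be generous to visitors
    if paying_tx_per_sec > 1000:
        return min_share             # peak trading: protect paying customers
    # scale linearly between the two thresholds
    span = (paying_tx_per_sec - 100) / 900
    return round(max_share - span * (max_share - min_share))

for tx in (50, 500, 2000):
    print(tx, quote_cpu_share(tx))   # 50 -> 30, 500 -> 19, 2000 -> 5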

Blade servers figure heavily in discussions about systems virtualisation. The technology, which began in the telecommunications sector, is relatively new to the mid-range server market: multiple servers on "blades" are slotted into a very dense chassis, creating a low-footprint array of servers connected by a high-bandwidth internal backplane. Sun Microsystems is scheduled to ship its blade servers early this year, following a delay to the original ship date.

Egenera, a relatively young company specialising in virtualisation, makes blades the centrepiece of its product offering. The company developed its blade server and associated Processing Area Network (Pan) Manager virtualisation software from scratch, which gives it an edge over larger players, according to European services director John Warnants. He believes the workload of up to 1,000 traditional Intel servers can be consolidated into two of his Bladeframe servers, occupying just one square metre.

The virtualisation part of the system works by rendering blades stateless, enabling the Pan Manager to allocate everything from MAC addresses to storage connections and operating system images on the fly. Each blade assumes its resources are local, but they are actually provided over a fast-switched fabric.
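
Because the blade itself holds no identity, bringing one into service amounts to handing it a bundle of settings. The following is a hypothetical sketch of what such an assignment might look like; the field names are illustrative and do not reflect Egenera's actual Pan Manager interface.

# Hypothetical sketch of giving a stateless blade its identity at deployment
# time. Field names are illustrative, not Egenera's Pan Manager API.
from dataclasses import dataclass

@dataclass
class BladeProfile:
    mac_address: str     # network identity, assigned on the fly
    storage_lun: str     # connection to shared storage over the switched fabric
    os_image: str        # operating system image to boot from

def deploy(blade_id: int, profile: BladeProfile):
    # A real system would programme the fabric here; this just reports the assignment.
    print(f"blade {blade_id}: MAC={profile.mac_address}, "
          f"LUN={profile.storage_lun}, image={profile.os_image}")

deploy(7, BladeProfile("02:00:00:aa:bb:07", "lun-accounts-01", "linux-web-image"))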

The case for virtualisation sounds unassailable but, in practice, there are challenges both suppliers and customers need to consider. Perhaps one of the most significant issues is interoperability. Because the product category is so new, no standards have yet been defined for connecting different suppliers' equipment with virtualisation middleware. Consequently, Mills says his company will open up the N1 software infrastructure to third-party suppliers, enabling them to hook into it.

Egenera's Warnants is less bullish. "Once we've built one of these virtual servers, it looks like just another server on the network," he says. "But what we don't do is use our management software to virtualise non-Egenera hardware. That's another generation of this technology that is hugely more complex."

Standards are, at least, being developed in the storage virtualisation sector. Last August, the Storage Networking Industry Association launched the Storage Management Initiative in a bid to turn a specification submitted by several suppliers into a universal standard for storage virtualisation middleware.

The standard, called Bluefin, was submitted by companies including Sun, EMC, Dell Computer and Hitachi Data Systems and will go some way towards easing the concerns of customers. But companies buying virtualisation products in the processor sharing or network resource space will find themselves on their own when it comes to making decisions about interoperability. Everything depends on the supplier's own APIs and its ability to get other suppliers to work with its system, or to write its own drivers for other companies' devices.

And for customers wanting to virtualise their systems, there are two more, related problems - internal accounting and politics. "Server hugging" by particular departments unwilling to free up their own computing resources could create problems for companies wanting to implement virtualisation software. Business managers may have to be convinced that security and reliability will not suffer as a result of throwing computing resources into a central pool. One way to get around this is by hard-partitioning certain critical applications.

But even if such resistance is overcome, internal accounting procedures may present a challenge. IT departments often need to log the use of computing resources so usage can be charged back to particular departments, and virtualisation systems need to support this chargeback process.

Most virtualisation suppliers are aware of the internal accounting challenge and offer facilities for logging resource usage. HP, which has been selling system virtualisation software since March 2002, is among them. Its Utility Data Centre (UDC), first announced in November 2001, stems from HP's long-standing goal of providing computing power on tap according to variable customer need. The UDC works with equipment from multiple suppliers thanks to HP's internal driver development effort, and this year the company plans to publish a driver software development kit so that customers and third-party suppliers can write their own device drivers for the UDC.
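
Once usage is logged per department, the chargeback itself is simple arithmetic. The example below is a minimal, hypothetical illustration; the departments, hours and internal rate are invented.

# Hypothetical chargeback calculation from a resource-usage log.
# Departments, CPU-hours and the internal rate are invented.
usage_log = [
    ("finance", 120.0),   # CPU-hours consumed this month
    ("support", 35.5),
    ("finance", 40.0),
]
RATE_PER_CPU_HOUR = 0.12  # internal charge, in pounds

charges = {}
for dept, cpu_hours in usage_log:
    charges[dept] = charges.get(dept, 0.0) + cpu_hours * RATE_PER_CPU_HOUR

for dept, cost in sorted(charges.items()):
    print(f"{dept}: £{cost:.2f}")
# finance: £19.20
# support: £4.26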

Bringing up the rear is IBM, which is still researching its ambitious autonomic computing initiative. The initiative mixes system virtualisation facilities with software that uses biological computing algorithms to create self-governing infrastructures. In addition to redirecting workloads to different sets of system resources, the company wants to make the interface between the user and the computing system easier. It wants to abstract the user's interaction with the system even further than its competitors, letting business managers instruct the back-end infrastructure in plain English.

In the company's ideal world, a user of the IBM system would be able to tell a computer to "watch my competitors and adjust pricing and supply for competitive advantage".

For now, however, even the suppliers of the less ambitious virtualisation systems have their work cut out. IT departments need to be convinced that the systems will work reliably. The danger is that if you abstract your systems management through a piece of smart middleware and the middleware fails, you could end up with more administrative overhead than you started with.

Given the relative immaturity of many virtualisation systems on the market today, it will be a while before the growth curve for such systems gets steep enough to contribute significantly to most suppliers' bottom lines.

Sanger Institute goes virtual    

Many hands make light work, according to Phil Butcher, head of IT at the Wellcome Trust Sanger Institute. The institute has conducted some of the most significant research into the human genome to date, but to do so it has required heavy computing resources.  The institute maintains a network of 1,100 server nodes, each of which can be used to help process number-crunching jobs submitted by end-users.

Server virtualisation was vital to make the most efficient use of the computing infrastructure and minimise the administration overhead, according to Butcher.  

He has been using Platform Computing's LSF workload management software to help him administer his systems. "We have gone from clustering to distributed resource management to virtualisation, but the trick's the same," he says. "You want to ensure that you put your workload on to a number of systems because that is why you get the efficiencies."

Using the Platform software, Butcher distributes computing jobs automatically to servers across the network. However, he stops short of fragmenting a single job across multiple nodes, which is a fundamental tenet of grid computing, a technology using virtualisation techniques. He does fragment databases so he can distribute relevant parts of them across the 1,100-node farm for pattern matching purposes.
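
Stripped of the Platform-specific detail, the principle is a queue of self-contained jobs handed out to whichever node is free, with no single job split across machines. The sketch below is a generic illustration in Python, not LSF itself; node and job names are invented.

# Generic illustration of farming whole jobs out to available nodes.
# This is not Platform LSF; node and job names are invented.
from queue import Queue

jobs = Queue()
for chunk in ("genome-part-01", "genome-part-02", "genome-part-03", "genome-part-04"):
    jobs.put(chunk)                  # each job is a self-contained piece of work

nodes = ["node-001", "node-002"]

schedule = []
while not jobs.empty():
    for node in nodes:
        if jobs.empty():
            break
        schedule.append((node, jobs.get()))   # a whole job goes to one node; none is split

print(schedule)
# [('node-001', 'genome-part-01'), ('node-002', 'genome-part-02'),
#  ('node-001', 'genome-part-03'), ('node-002', 'genome-part-04')]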

Policy-based administration of his virtual computing resources is a key part of Butcher's strategy. "If you have a facility for people to put 10,000 jobs on the system, you can soon end up creating your own denial of service," he says. The policy facilities within the Platform software enable him to set rules so that end-users get a fair share of the resources. 
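
A fair-share rule of that sort boils down to a quota check before each job is accepted. The sketch below is a generic, hypothetical example rather than the actual Platform policy syntax; the user names and limit are invented.

# Hypothetical fair-share check: cap running jobs per user so that one person
# submitting 10,000 jobs cannot starve everyone else. The limit is invented.
running = {"alice": 180, "bob": 12}
MAX_JOBS_PER_USER = 200

def admit(user: str) -> bool:
    """Accept a new job only if the user is under quota; otherwise hold it back."""
    if running.get(user, 0) >= MAX_JOBS_PER_USER:
        return False                 # the scheduler would queue or delay the job instead
    running[user] = running.get(user, 0) + 1
    return True

print(admit("alice"), admit("bob"))  # True True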

Although internal accounting was not on the institute's agenda, Butcher nevertheless keeps logs of how much CPU time each group has used, because he needs to allocate resources according to the importance of each scientific project.


