Feature

Datacentres move towards virtualised future

Although it started life as a back-office support function, the datacentre has been transformed from what many saw as a "glorified computerised filing cabinet" into a major consumer of resources. The big question now is how to rationalise it. There are several business and technical options.

Roughly 2% of global carbon dioxide emissions are generated by the IT industry, according to Gartner research from September 2007. Worse still, 23% of that share could be blamed on the power needed to run and cool servers in datacentres. Until fairly recently, power consumption had never really been an issue. But suddenly we appear to have reached a tipping point, socially at least.

"These days companies can be assessed on their corporate social responsibility," says Una Du Noyer, head of infrastructure engineering at consultant Capgemini. "Judgements on reputational risk are likely to seriously affect a company's share price."

As datacentres have grown organically, in response to increasing demands to manage ever more voluminous and eclectic data, they have typically become an inefficient mish-mash of hybrid technologies that often span several generations. These consume all kinds of expensive resources, including office space, manpower, equipment and electricity. A combination of business and technology solutions is being mooted to combat the problems created by the datacentre's hunger for ever more expensive (and scarce) resources.

The manpower problem, for example, could be tackled by a business strategy, such as outsourcing, or a technical solution, such as autonomic computing, in which datacentres manage themselves.

The power problem has several possible solutions. If the price of electricity is the issue, outsourcing datacentre functions to countries where power is cheaper would help. "India is a popular destination for obvious reasons," says Roy Illsley, senior research analyst at the Butler Group. "The US has cheap power, and its reduced manufacturing sector means it often has the resources to house a datacentre."

On the other hand, if the datacentre's electricity consumption is an environmental concern serious enough to merit action, there are technical remedies. A raft of new technologies is emerging that promise faster, more energy-efficient computing - from solid state disc (SSD) and 64-bit computing at the server level, through to new-generation mainframe technology, such as IBM's recently announced z10.

But the development that overshadows all other datacentre advances is virtualisation. This decoupling of software from the underlying hardware complements every other advance in datacentre technology, says Dennis Szubert, principal analyst at market researcher Quocirca.

"Virtualisation will have the biggest impact of any technology on datacentre computing over the next few years, as there are so many good things that result," he says. "From consolidation, to automation, to green computing - virtualisation has its finger in all these pies."

By partitioning each physical server into a number of virtual servers, datacentre managers can halve the number of machines they need to run, so they require less office space and consume less power. In October 2007, Gartner research predicted that virtualisation would be the top strategic technology for 2008.
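
The consolidation arithmetic behind that claim is simple. As a back-of-the-envelope sketch (the server counts and utilisation figures below are hypothetical, not drawn from the article), the number of hosts needed falls sharply as lightly loaded servers are stacked onto shared hardware:

```python
import math

def hosts_needed(n_servers, avg_utilisation, host_capacity=0.7):
    """Estimate physical hosts required after consolidation,
    keeping each host at about 70% load for headroom."""
    total_load = n_servers * avg_utilisation
    return max(1, math.ceil(total_load / host_capacity))

# 200 lightly loaded servers averaging 10% utilisation collapse
# onto about 29 hosts - well beyond merely halving the estate.
print(hosts_needed(200, 0.10))  # -> 29
```

In practice the achievable ratio depends on peak (not average) loads and on leaving enough headroom for failover, which is why the article's more conservative "halving" figure is the one usually quoted.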

"The widely accepted figure for the current number of servers virtualised today is 10%, but we think it may be more like 24%," says Szubert. "This is a trend that will really take off. The main server suppliers are all shipping servers with VMWare pre-installed in flash memory."

Virtualisation would be a huge step towards efficiency for many datacentres, but few datacentre managers have been able to document every move and change to their computing infrastructure as it has grown organically, so there are many ghost machines out there that are not actually needed and run no applications. This is why rationalisation must precede virtualisation. Rationalisation tools, such as Tideway's Foundation, which map out the IT terrain and identify redundant hardware that can be removed, may prove every bit as significant as virtualisation.
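
Foundation is a commercial discovery product and its internals are not described here, but the principle of a rationalisation pass can be sketched: take an inventory of machines, cross-reference it against observed activity, and flag the ghosts. The inventory format and the 30-day threshold below are illustrative assumptions only:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: (hostname, running applications, last activity).
inventory = [
    ("web-01", 4, datetime(2008, 3, 28)),
    ("legacy-db", 0, datetime(2007, 6, 2)),
    ("batch-07", 0, datetime(2008, 1, 15)),
]

# Flag machines running no applications with no recent activity.
cutoff = datetime(2008, 4, 1) - timedelta(days=30)
ghosts = [host for host, apps, last_seen in inventory
          if apps == 0 and last_seen < cutoff]

print("Decommissioning candidates:", ghosts)
# -> Decommissioning candidates: ['legacy-db', 'batch-07']
```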

As the de facto method of server consolidation, virtualisation can claim responsibility for some dramatic power savings. As part of its virtualisation strategy, BT recently reduced 700 racks of servers to 40, cutting power consumption from 2.1MW to 240kW, not including cooling and power distribution.
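
Taking those figures at face value, the scale of the saving is easy to check:

```python
# Reduction implied by the BT figures quoted above.
racks_before, racks_after = 700, 40
power_before_kw, power_after_kw = 2100, 240  # 2.1MW down to 240kW

print(f"Racks cut by {100 * (1 - racks_after / racks_before):.0f}%")        # -> 94%
print(f"Power cut by {100 * (1 - power_after_kw / power_before_kw):.0f}%")  # -> 89%
```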

The rise of virtualisation will, of course, bring its own management problems. As creating a new server becomes a moment's work, virtual servers may mushroom in number - if the unmanaged growth of every other kind of IT is any precedent - in which case the management of virtual servers may become the next pain point, says Szubert. But that day may be some way ahead.

Meanwhile, faster and more energy-efficient technology is set to appear on the market. SSD flash-based data storage is not only fast, but it meets the two overriding requirements of any datacentre manager - it takes up less space and consumes less power than the previous generation of machines. Because it has no moving parts, SSD hardware is faster and more reliable. It is more shock resistant and need not be kept in the rarefied atmosphere that traditional datacentre equipment requires - equipment that can be crashed by smoke or dust particles. It also creates less heat than an old-fashioned hard disc drive, so it consumes less energy while processing and needs less energy to cool it. The only real drawback at the moment is cost.

Ian Osborne, project director at grid computing trade association Intellect, predicts that although the benefits of virtualisation could extend to storage and networking, big challenges lie ahead. These hidden obstacles, and the ability to overcome them, could shape the datacentre's future. Moving running applications across physical servers and dynamically starting new servers will be pretty radical steps, says Osborne. "The needs of a flexible, virtualised computing infrastructure are a little more challenging when it comes to assuring service delivery."
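
The "moving running applications" Osborne describes is what hypervisor suppliers call live migration. A minimal sketch using the libvirt Python bindings - the host names and guest name are hypothetical, and shared storage between the two hosts is assumed:

```python
import libvirt

# Connect to the source and destination hypervisors (hypothetical hosts).
src = libvirt.open("qemu+ssh://host-a/system")
dst = libvirt.open("qemu+ssh://host-b/system")

dom = src.lookupByName("app-server-01")  # hypothetical guest name

# VIR_MIGRATE_LIVE copies the guest's memory across while it keeps
# running, so the application stays up during the move.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```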

In old-fashioned data silos, where resources were static, it was easy to test how a datacentre could cope under extreme loads. The licensing and configuration of software applications were relatively simple too, with control maintained by physical components. In the new, virtualised, flexible datacentre, such matters will be a lot more complex.

Grid computing goes hand in hand with consolidation. By standardising equipment, consolidating servers, storage and applications, and automating management, businesses can see a rapid return on investment in grid computing technologies. Cross-industry efforts to build a unified enterprise grid infrastructure that can outperform traditional SMP systems at lower cost continue to trundle along. Dell, Oracle, EMC and Intel linked up to form Project MegaGrid, which offers a single, validated set of deployment best practices for grid computing.

Meanwhile, some datacentres have serious Big Iron legacy issues. The public sector has particularly difficult problems here, says Capgemini's Du Noyer. "Everyone wants to consolidate, but there's a massive backlog of kit to get rid of, and it runs applications that are not documented, written in ancient languages that nobody knows how to transcribe."

The process of bolting web front ends onto mainframes to access their data continues apace, 10 years after it was mooted as a temporary measure. Back then, it was presumed that, by now, all mainframe data would have been converted.

Mainframes still have inherent advantages, which is why non-western economies such as India, China and Brazil are increasingly using them, says the Butler Group's Illsley. "They have got a small footprint and they are great for the green economy," he adds. New machines, such as IBM's z10, are built for virtualisation too.

So will Big Iron die in the datacentre? "Almost certainly not," says Quocirca's Szubert. "It is still at the heart of many of the world's top enterprises. And it is not certain it will die any time soon. But, in a growing market, it is not growing."

With enterprise servers and cheaper x86 servers providing the flexibility mainframes lack, the jury is still out on which operating systems best cater for this new, flexible environment. While the notion of an all-Windows datacentre in a company of any size would have seemed absurd to all but Microsoft die-hards a few years ago, recent releases of the Datacenter editions of Windows Server have provided a solid foundation for business workloads.

Meanwhile, Unix has stagnated, says Szubert. "Most of the new growth is in Linux installations," he points out.

Virtualisation will eventually pave the way for remotely controlled autonomic computing, says Illsley, but don't expect anything from IBM and the rest of the autonomic remote management camp for at least half a decade. "It still requires an enormous leap of faith," he adds.

But virtualisation still has some way to go before it enables autonomic computing. Matt McCormack, systems consultant at researcher IDC, says, "The majority of storage infrastructures were not designed for virtualisation and they do not have the input-output capabilities."

Gartner analyst Rakesh Kumar adds, "The first two generations of datacentre designs are not appropriate for current and future needs. New datacentre designs need to be based on flexibility and high levels of monitoring, and to incorporate a mixture of power and cooling technologies alongside virtualisation and management tools."

Kumar says new datacentres should be less static and more like an "agile, living organism that evolves as the server and infrastructure changes".

That's easy for him to say. Whether virtualisation delivers a "living organism" remains to be seen. Gartner confidently predicts that by 2010, all PC and server shipments will be virtualised. But analysts have been wrong before.


This was first published in April 2008
