Pay-as-you-go computing

Think of on-demand computing and you might think of application service providers or grid computing. But there is a new contender on the block, as Mark Vernon finds out

Today's server infrastructures are a mess. So says a new report from Forrester entitled The New Computing Utility. The report says, "Look around your company. The aftermath of the e-business boom isn't pretty." But although the report goes on to point out that network administrators are not getting any money this year to tidy up, the deeper point is that mere server aggregation would not solve the problem anyway.

Servers are required to handle huge peaks and troughs in demand. But catering for occasional peaks, as companies must, is expensive and inefficient. So Forrester turns to a different idea, one that is gaining favour in many circles: computing as a utility. In the same way that you plug into mains electricity and are billed for what you use, utility computing offers computer capacity on demand and - the critical second condition - payment only for what is actually used.

It sounds like a good idea. It would be economical, efficient and effective. However, whilst proponents point in support to a number of similar delivery models already available, such as application service providers (ASPs) and grid computing, utility computing proper is some way off. Indeed, sceptics wonder if it will ever truly materialise.

"I have doubts whether it can ever be done," says Michael Hjalsted, marketing director at Unisys. He points to the fact that utility computing relies on a universal agreement of standards, ubiquitous broadband and other kinds of high speed connectivity such as Infiniband and Gigabit Ethernet, and the end of platform dependency which IT suppliers have a great interest in preserving.

Customer demand
"That is why you hear people saying we can do utility computing, but you have to buy it from us," Hjalsted says. It is not that he does not recognise the benefits. But the only force he can see that could realise utility computing in time is demand. "Pressure from customers would bring utility computing about, if anything," he says. But even then he believes it will be five years before utility computing proper appears.

However, Hjalsted's voice is relatively rare amongst IT suppliers, most of which are pushing utility computing, even if they admit it is not quite with us yet. For that reason there is still some jostling for thought leadership.

As mentioned previously, a number of IT suppliers are talking about utility computing as a development of ASPs or as part of their grid computing initiatives.

"In some respects, utility computing could be viewed as a traditional bureau service or ASP depending on your viewpoint - end-users buy time on an external supercomputer to run some particular problem," says Bill McMillan, senior technical consultant at Platform Computing.

"But from the user point of view, this approach is not transparent because it is clear that they are running an external system. What you really want, from a supplier viewpoint, is to have a pool of resources that are automatically and transparently allocated to people when they need them and to be able to have detailed visibility into the use and trends of these resources for accounting and capacity planning."

That is not possible now. Indeed, many companies are sceptical of the ASP model. Which is where grid computing - companies looking at sharing resources within organisations - comes in. "Two or more groups within the same organisation can share the extra capacity, while maintaining autonomy, as required with the appropriate chargeback and controls," McMillan says. "The same idea can be applied to geographically dispersed sites, so while the US sleeps, the UK can make use of its computer power."
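To make the chargeback idea concrete, here is a minimal sketch of a shared pool that bills each group only for the capacity it actually consumes. It assumes nothing about Platform Computing's own software: the class, the group names, the rate and the idea of metering in CPU hours are all invented purely for illustration.

```python
from collections import defaultdict

class SharedComputePool:
    """Toy model of an enterprise grid pool with per-group chargeback.

    Hypothetical illustration only: group names, rates and 'CPU hours'
    as the metering unit are invented for this sketch.
    """

    def __init__(self, total_cpu_hours, rate_per_cpu_hour):
        self.capacity = total_cpu_hours
        self.rate = rate_per_cpu_hour
        self.allocated = 0.0
        self.usage = defaultdict(float)   # group -> CPU hours consumed

    def request(self, group, cpu_hours):
        """Allocate spare capacity to a group if any is available."""
        if self.allocated + cpu_hours > self.capacity:
            return False                  # pool exhausted, request refused
        self.allocated += cpu_hours
        self.usage[group] += cpu_hours
        return True

    def release(self, cpu_hours):
        """Return capacity to the pool once a job finishes."""
        self.allocated = max(0.0, self.allocated - cpu_hours)

    def chargeback_report(self):
        """Bill each group only for what it actually used."""
        return {group: hours * self.rate for group, hours in self.usage.items()}

# Example: two groups share one pool; each pays only for what it consumes.
pool = SharedComputePool(total_cpu_hours=1000, rate_per_cpu_hour=0.50)
pool.request("uk-risk-team", 300)
pool.request("us-analytics", 150)
print(pool.chargeback_report())   # {'uk-risk-team': 150.0, 'us-analytics': 75.0}
```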

A growing number of global companies are making use of this enterprise grid computing to deliver computer power as a utility to their global workforce.

A slightly different line is taken by Sun, which is focusing instead on a shared-risk concept associated with supplying computer services. This is different because instead of saying utility computing is about satisfying immediate demands - plugging into the electricity supply as and when needed - it says that supplying business computing is no longer a short-term transaction but needs to be seen from the perspective of longer-term relationships.

So, whilst utility computing gives customers the option to pay for the computing power they need, "utility computing requires suppliers to back up their proposed solutions by being prepared to take a short-term hit on profitability to help the customer and build a long-term relationship," explains Mark Lewis, datacentre server product marketing manager at Sun Microsystems.

Neil Brooks, UK marketing manager, Interliant, agrees. "The utility supplier has invested in highly reliable systems, networks and processes to ensure that the end-user can rely on their IT systems to be available for use at any time," he says.

Forrester, for its part, has a different term again - fabric computing, defined as a computing model that provides utility-like processing power on demand using a non-proprietary high-speed network. But regardless of terminology, Forrester also makes the point that utility computing will not be with us for some time, though as Galen Schreck, the report's author, says, "The voyage begins today."

So what steps should companies be taking if they want to set off on this trip? Firstly, Schreck says, when renewing servers or purchasing new ones, make the most of the technologies available today that mimic utility computing ideals. For example, whilst many servers are designed to be scaled horizontally - adding new instances as demand requires - doing so is expensive and labour intensive. In the case of storage, firms should stop buying direct-attached storage in favour of networked storage, which allows them to virtualise storage resources alongside processing capacity rather than conflating the two - making way for utility services when they appear.

By late 2003, Schreck predicts some suppliers will start shifting fabric computing-based boxes, but they will still be dedicated to particular applications. Not until 2006 will fabric computing start to look generic, and even that is not utility computing proper. Hjalsted's scepticism for now seems about right.

Why do organisations need utility computing?
"When companies were experiencing or preparing for a substantial period of growth, their approach was to throw technology at the issue and not to worry about the cost," says Bernard Tomlin of HP Consulting.

"Unfortunately, although they projected further growth, it did not always materialise. The repercussion is that many now feel they were burnt by that provision of overcapacity. It is insane and many organisations know it and feel how much it hurts, but to date they haven't had any other way to do it," he says.

"The point is they need to be able to respond much more dynamically to changes in their business environment. However, today, over-provisioning is not an option. You cannot have maybe 14 Web servers hanging around just in case, when all you need most of the time is eight.

"The same principle of unnecessary redundancy applies to disk and network capacity too. Utility computing allows disk, server and network capacity to be added from anywhere within your data centre in just a few seconds. It allows resources to be borrowed from applications that only run in the day and use them for other systems at night.

"However, current IT architectures are too rigid for this kind of flexible reallocation. Most network resources are dedicated to specific applications or functions. That is why a new computing model is needed, one that reconfigures resources on the hoof.

"Utility Computing is thus a way of linking computers over a network to pool processing power, and like an electricity generator, provide the resources an organisation needs as it needs it."

Why utility computing is not an ASP
Utility computing is not to be confused with ASPs or grid computing, explains Wendy Currie, professor and director of the Centre for Strategic Information Systems (CSIS) in the Department of Information Systems and Computing at Brunel University.

"Utility computing offers firms the benefit of paying for their software applications on a pay-as-you-go basis rather than the traditional software license plus maintenance contract," she says. "The early ASP market failed because it tried to lock customers into signing long-term, three-year deals which went against the utility computing model. If utility computing is to succeed, the customer will only sign up for short-term deals, such as paying for cable TV."

However, if it happens, utility computing will appeal. "The business benefits are about paying for software applications on a flexible rather than a fixed model," Currie says. "Suppliers need to address the fact that many small- and medium-sized enterprises do not have experience or a history of IT outsourcing. They therefore need to sell services which offer real benefits, not just collaboration tools on a hosted model." She points to three key areas which suppliers need to concentrate on if utility computing is to be made to work:


  • Scale - how many customers can they practically target?
  • Scope - what types of applications should they offer under the model?
  • Integration - how can applications be integrated across business functions?


"The early ASPs failed to address this and ended up offering not one-to-many applications but same-for-all," says Currie.

How would utility computing work?
In the same way that it does not matter to the user which generator produced their electricity, utility computing aims to free processing power, or access to applications, from any particular computer. Data processing or applications need to be unlocked from any physical machine and become part of a network. What is more, because this network is too expensive for any one firm, or even one service provider, to maintain - since it is not optimally utilised all the time - the supply of services to the network has to be shared, with payment made only when services are used.
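A rough sketch of that "plug in and pay for what you use" model might look like the following: a job is dispatched to whichever provider on the shared network has spare capacity, and the customer is metered per unit consumed. The provider names, prices and capacities are invented for illustration.

```python
# Toy dispatcher for a shared network of providers, billed per use.
providers = [
    {"name": "provider-a", "free_cpu_hours": 120, "price_per_hour": 0.40},
    {"name": "provider-b", "free_cpu_hours": 60,  "price_per_hour": 0.35},
]

def run_job(cpu_hours):
    """Send a job to the cheapest provider that can fit it, and meter the cost."""
    candidates = [p for p in providers if p["free_cpu_hours"] >= cpu_hours]
    if not candidates:
        raise RuntimeError("no spare capacity anywhere on the network")
    chosen = min(candidates, key=lambda p: p["price_per_hour"])
    chosen["free_cpu_hours"] -= cpu_hours
    return chosen["name"], cpu_hours * chosen["price_per_hour"]

print(run_job(50))    # ('provider-b', 17.5) - the user never sees which box ran it
print(run_job(100))   # ('provider-a', 40.0) - provider-b no longer has the capacity
```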

As Tosca Colangeli, worldwide director of e-sourcing solutions at IBM Global Services, explains, the development of key technology areas makes this possible: the virtualisation of resources, massive network bandwidth and radically distributed architectures. Developments in virtualisation and storage technology provide the best examples to date. "Over the past three decades, mainframes have hosted thousands of virtual servers, while providing extremely high levels of availability," Colangeli explains. "Sophisticated workload management techniques allow virtual machines to be created on demand, guaranteeing each machine a certain amount of resources and allowing it to use free cycles beyond that level, if any are available, thus saving much 'white space' - unused cycles procured for peaks." The result is that an entire server farm of virtual Linux machines, for example, can fit into a single box, saving on electricity, network hardware, space, and most significantly, maintenance costs.
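The guarantee-plus-free-cycles behaviour Colangeli describes can be sketched as a simple two-pass allocator: honour each virtual machine's guaranteed share first, then let busy machines soak up whatever "white space" remains. This is only an illustration of the principle, not IBM's workload manager; the virtual machine names and numbers are made up.

```python
def distribute_cycles(total_cycles, vms):
    """Give each VM its guaranteed share, then hand out any spare cycles
    to VMs that still have demand, in proportion to that unmet demand."""
    allocation = {}
    spare = total_cycles
    # First pass: honour every guarantee (capped by actual demand).
    for name, (guarantee, demand) in vms.items():
        granted = min(guarantee, demand)
        allocation[name] = granted
        spare -= granted
    # Second pass: let busy VMs soak up the 'white space' left over.
    unmet = {name: demand - allocation[name]
             for name, (guarantee, demand) in vms.items()
             if demand > allocation[name]}
    total_unmet = sum(unmet.values())
    for name, shortfall in unmet.items():
        extra = min(shortfall, spare * shortfall / total_unmet) if total_unmet else 0
        allocation[name] += extra
    return allocation

# vm -> (guaranteed cycles, current demand)
vms = {"linux-web": (30, 50), "linux-db": (40, 40), "linux-batch": (20, 10)}
print(distribute_cycles(100, vms))
# linux-web gets its 30 guaranteed cycles plus the 20 spare ones left over.
```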

"Likewise, very large storage systems are now delivered with built-in availability functions normally not found in discrete server storage," Colangeli says. These devices can be accessed through different networking technologies, such as storage area networks and IP networks. "Associated storage management software allows hundreds of servers to be dynamically allocated storage 'volumes' on these systems, providing users with fast provisioning of additional space, aggregated storage capacity planning - rather than doing it on a per server basis - and greatly reduced operational costs."

If, and it is a big if, this model were generalised across a whole infrastructure, utility computing would be the result.


This was first published in June 2002

 
