The three stages of grid computing

The story of computing is no stranger to the tale of the exotic turning into the mundane. Furthermore, technology that was developed for the specialist applications needed by scientists has repeatedly migrated into the world of corporate computing, writes Julia Vowler.

"Think of parallel processing and reduced instruction set computing," reminds John Barr of Sun Microsystems. "Once they were weird and wacky. Now, they are a general part of our technology."

What will be next to move across to corporate IT? "The most obvious technology is grid computing," says Barr.

Grid computing - or metacomputing or utility computing as it is sometimes known - works on the premise that "spare" capacity and resources can be siphoned off as required, rather like drawing electricity off a grid.

"Fundamentally, [grid computing] enables a virtual organisation to use distributed computing resources in a fairly ambiguous way," says Barr.

"The nirvana would be for you to go to a device, access a portal to run your application 'somewhere' and get billed for it on your credit card."

Before the nirvana of a national computing grid can be achieved, users can implement three stages of grid computing: local, campus and global. All three stages exploit spare cycles, or unused capacity, on existing machines, but the range of tasks that can be performed varies.

The lowest stage is a cluster, such as all the desktops on a network. This collection of resources is owned by one single department and the task is to match the workload to the available power.

The next stage is a campus or site. This, says Barr, is bigger than the cluster, but no more complex. You will, however, need tools to manage the workload and agree inter-departmental use of spare capacity.

The final stage is to go global, where a task could potentially run anywhere in the world, which means that data has to be safely transported and securely authenticated.

Computer farms are unexceptional within the world of technical computing, not just because technical users find it easier to adopt novel technology, but because the jobs they are processing tend to be computationally intensive.

Most importantly, these tasks are easily divisible into standalone units of work that can be farmed out and completed remotely using spare capacity on a machine, overnight, for example.
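The pattern Barr describes can be sketched in a few lines. Below is a minimal, hypothetical Python illustration of farming out independent units of work to a pool of workers; on a real grid the workers would be spare machines rather than local threads, and the `run_job` function is a stand-in for an actual compute-intensive task.

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(n):
    """Stand-in for a self-contained, compute-intensive unit of work."""
    return sum(i * i for i in range(n))

def farm_out(jobs):
    """Dispatch independent jobs to whichever workers are free
    and collect the results. On a grid, each job could run on a
    different machine with spare capacity."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_job, jobs))

results = farm_out([1_000, 2_000, 3_000])
print(results)
```

Because each job carries everything it needs and shares no state with the others, the scheduler is free to run them anywhere, which is exactly what makes such workloads grid-friendly.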

Barr acknowledges that business applications may be less suited to grid computing of this nature.

"Technical computing usually requires you to run 'this job with this data' - or 100 jobs simultaneously - and the process of moving jobs around [the grid] is straightforward. But the business environment is more complex - you have many interlinked applications and operations that are dependent on large databases attached to one particular machine," he says.

Nevertheless, given the current global economic downturn, corporate IT is increasingly looking to contain costs, or better still reduce them, so the challenge to get the most out of existing capacity is very real. And the amount of unused capacity in corporate IT could be considerable.

"The typical utilisation of technical computing is high, but in commercial IT it can be as low as 30% utilisation," argues Barr.

"The flexible use of resources can really drive down costs. One user I know got a three-fold increase in throughput on what he already had, because he was using unused cycles."

Barr argues that if companies want to manage grid computing, certainly at cluster level, the necessary tools are already in place. At the global level, however, companies will probably have to use an open source tool kit.

In the realm of corporate IT the idea of grid computing is still novel, but it may be the next technology to move out of technical computing into the business world.
