Companies including Compaq, Hewlett-Packard, SGI and Platform Computing have come together to develop a standard set of application programming interfaces (APIs) for distributed resource management - the administration of processing tasks across different machines.
The companies behind the initiative - which could be described as distributed processing or "grown-up" clustering - essentially want to take control of the billions of idle CPU cycles across corporate networks and the rest of the Internet.
For example, on desktop computers, the CPU spends a large amount of time doing nothing: waiting for the user to return from a coffee break, or even waiting between keystrokes as a document is typed.
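Cycle-scavenging systems exploit exactly this: a background agent polls the machine's activity and only runs borrowed work while the machine looks idle. A minimal Unix-only sketch (the threshold is an arbitrary illustrative value, not taken from any real scavenger):

```python
import os

IDLE_THRESHOLD = 0.25  # arbitrary cutoff for "this machine is idle"

def machine_is_idle(threshold: float = IDLE_THRESHOLD) -> bool:
    """Treat the machine as idle when its 1-minute load average is low.

    os.getloadavg() is only available on Unix-like systems.
    """
    one_minute_load, _, _ = os.getloadavg()
    return one_minute_load < threshold

# A scavenging agent would poll this in a loop and pause borrowed
# work the moment the owner's own processes push the load back up.
print(machine_is_idle())
```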
Projects such as SETI@home, which uses volunteers' spare clock cycles to analyse radio data from space in the search for extraterrestrial life, have already explored this concept and attracted a great deal of media attention. Now, the NPI wants to make it more manageable in the corporate context.
Bill McMillan, technical product manager at Platform Computing, says the NPI API offers two main benefits. Firstly, it will enable companies to tap into unused computer power and save money by making more efficient use of the processing capacity they already own.
"A good example is a company using a supercomputer to process a job," he says. "The job will run in half a day, but it will stay in a queue for five days before it executes. You could run the same application on departmental servers or desktops, and it may take longer to run, but it won't queue for as long, so it will be sent back more quickly."
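McMillan's trade-off reduces to simple arithmetic: what matters to the user is total turnaround time, not raw run time. The supercomputer figures below come from his example; the departmental-server figures are illustrative assumptions:

```python
def turnaround(queue_days: float, run_days: float) -> float:
    """Total days from job submission to getting the results back."""
    return queue_days + run_days

# McMillan's supercomputer example: half a day to run, five days queued.
supercomputer = turnaround(queue_days=5.0, run_days=0.5)

# Assumed departmental-server figures: four times slower to run,
# but almost no queue.
departmental = turnaround(queue_days=0.5, run_days=2.0)

print(supercomputer)  # 5.5 days
print(departmental)   # 2.5 days
```

Even with a machine that takes four times as long to execute the job, the results come back more than twice as fast.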
Perhaps the most exciting benefit is that the NPI API will enable companies to accomplish things that were impossible before, by drawing on more computing power than was previously available. Tasks such as genome research and more realistic real-world simulations, for example, will be easier to tackle, McMillan believes.
There are still many obstacles for the initiative to overcome before it finally produces a commercially usable API - planned for release before autumn 2001. For one thing, it needs to decide how the infrastructure will co-ordinate diverse machines across a network. This will be no mean feat, given that different members of the group will probably want to contribute disparate technology models to the mix.
McMillan wants a layered technology that will tackle different aspects of the task - say, one level to handle latency issues across different types of network and another to handle inter-application communication.
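In code, that layering usually means each level depends only on an abstract interface to the level below it. A hypothetical sketch of the two layers McMillan mentions (all class and method names here are invented for illustration, not part of any NPI specification):

```python
from abc import ABC, abstractmethod

class TransportLayer(ABC):
    """Lower layer: hides latency and bandwidth differences
    between network types behind a single send() interface."""

    @abstractmethod
    def send(self, host: str, payload: bytes) -> None: ...

class LoopbackTransport(TransportLayer):
    """Trivial in-memory transport, useful for testing the layering."""

    def __init__(self) -> None:
        self.delivered: list[tuple[str, bytes]] = []

    def send(self, host: str, payload: bytes) -> None:
        self.delivered.append((host, payload))

class MessagingLayer:
    """Upper layer: inter-application communication, built on
    whichever transport it is handed - it never deals with the network."""

    def __init__(self, transport: TransportLayer) -> None:
        self.transport = transport

    def send_message(self, host: str, text: str) -> None:
        self.transport.send(host, text.encode("utf-8"))

transport = LoopbackTransport()
MessagingLayer(transport).send_message("node-1", "hello")
print(transport.delivered)  # [('node-1', b'hello')]
```

The point of the split is that a member company could swap in its own transport implementation without touching the messaging level above it.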
McMillan is quick to play down any potential overlap with existing pervasive computing technologies, such as Sun Microsystems' Jini, or Microsoft's new .net initiative. "It is entirely possible that any framework we come up with could be implemented entirely in Jini," he says. "As long as you write to the API, how you implement it will be largely irrelevant."
This also applies to application suppliers, he says, who will hopefully be able to write to the API rather than dealing with low-level issues. The hope is that this will make it easier for suppliers to adapt their software to support the standard than it is for them to write for traditional clustering systems.
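From an application supplier's point of view, "writing to the API" might look something like the following sketch: the application describes its job, and the resource manager decides where it runs. Everything here - class names, the least-loaded dispatch policy, the load bookkeeping - is a hypothetical illustration, not the actual NPI API:

```python
from dataclasses import dataclass

@dataclass
class Job:
    command: str
    cpu_hours: float

class ResourceManager:
    """Illustrative stand-in for a distributed resource manager."""

    def __init__(self, hosts: dict[str, float]) -> None:
        # Host name -> current load estimate (0.0 means idle).
        self.hosts = hosts

    def submit(self, job: Job) -> str:
        """Dispatch the job to the least-loaded host; return its name."""
        host = min(self.hosts, key=self.hosts.get)
        self.hosts[host] += job.cpu_hours  # crude load bookkeeping
        return host

rm = ResourceManager({"desktop-17": 0.1, "server-a": 0.8})
print(rm.submit(Job("render_frame", cpu_hours=2.0)))  # desktop-17
```

The application never names a machine or opens a socket; that is precisely the low-level detail the standard is meant to absorb.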
Although the initial target audience for software supporting the standard will doubtless be IT departments wanting to maximise their internal efficiency, it also has the potential to create new markets beyond the firewall.
Imagine, for example, a scenario in which consumers with DSL links could sell their processor cycles to CPU-hungry corporates with lots of numbers to crunch. This, of course, would require a robust security model as part of the API suite. That model is not available yet, but it is something the NPI group is already beginning to address.