Dell announced its high-performance computing cluster technology in February. The company is bundling its PowerEdge servers with specialised clustering and message-passing software from MPI Software Technology and Paralogic.
The bundles come configured with 16 to 128 processors and can be tied together to build large systems for use in scientific research, financial modelling and data-intensive business applications.
Paris-based Compagnie Generale de Geophysique (CGG), for example, used Dell's technology to build a 3,072-processor cluster that analyses seismic data for oil companies.
Derrick Deaton, an executive vice-president at CGG, said the cluster was delivering performance that is comparable to the throughput of special-purpose supercomputers, at roughly a quarter of the cost.
"The fact that you can tie so many [servers] together so inexpensively allows you to generally get the same processing power," Deaton said.
Sandia National Laboratories in the US is using a 128-node Dell cluster to try to simulate the impact of nuclear weapons.
Beyond the lower cost, such clusters can typically be built and put into use much more quickly than traditional supercomputers, which is a big advantage, said Milt Clauser, a principal member of Sandia's technical staff.
The increasing availability of open-source clustering and parallel computing software and the growing power of Intel processors are making high-performance clusters increasingly feasible, said Dan Kusnetzky, an analyst at IDC.
IBM offers tools that help users assemble large Intel-based clusters, Kusnetzky said. Hewlett-Packard plans to harness commodity servers internally to offer utility-like computing services to customers, he added. But thus far, Dell is offering the most formal programme for building such clusters, Kusnetzky said.