Dell announced its high-performance computing cluster technology in February. The company is bundling its PowerEdge servers with specialised clustering and message-passing software from MPI Software and Paralogic.
The bundles come configured with 16 to 128 processors and can be tied together to build very large systems for use in scientific research, financial modelling and data-intensive business applications.
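Message-passing middleware such as MPI lets a coordinator scatter work across many processors, have each compute on its own slice of the data, and gather the partial results. As a minimal conceptual sketch of that scatter/compute/gather pattern (a real MPI job runs across separate cluster nodes; this stand-in uses Python's multiprocessing on a single host, and the chunking scheme and function names are illustrative, not Dell's actual software):

```python
# Sketch of the message-passing model behind cluster middleware such
# as MPI: scatter data to workers, compute in parallel, gather and
# combine the partial results.
from multiprocessing import Pool

def analyze_chunk(chunk):
    # Stand-in for per-node work (e.g. processing one slice of a
    # seismic data set); here we simply sum the values.
    return sum(chunk)

def run_cluster_job(data, n_workers=4):
    # Scatter: divide the data set into one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    # Compute in parallel, then gather and reduce the partial results.
    with Pool(n_workers) as pool:
        partials = pool.map(analyze_chunk, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(run_cluster_job(list(range(1000))))  # 499500
```

In a genuine MPI program the same roles are played by calls such as `MPI_Scatter` and `MPI_Reduce`, with each rank running on a different machine in the cluster.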
For example, Paris-based Compagnie Generale de Geophysique (CGG) used Dell's technology to build a 3,072-processor cluster that analyses seismic data for oil companies.
Derrick Deaton, an executive vice-president at CGG's Houston office, would not disclose pricing information. But he said the cluster is delivering performance that is comparable to the throughput of special-purpose supercomputers, at roughly a quarter of the cost.
"The fact that you can tie so many [servers] together so inexpensively allows you to get generally the same processing power," Deaton said.
For the same reasons, Sandia National Laboratories is using a 128-node Dell cluster to try to simulate the impact of nuclear weapons.
Apart from the lower costs, the fact that such clusters can typically be built and put into use much more quickly than traditional supercomputers is a big advantage, said Milt Clauser, a principal member of Sandia's technical staff.
But Deaton said users should be aware that the clusters can occupy considerably more space than supercomputers and generate a lot of heat.
The increasing availability of open-source clustering and parallel computing software and the growing power of Intel processors are making high-performance clusters increasingly feasible, said Dan Kusnetzky, an analyst at IDC.
IBM offers tools that help users assemble large Intel-based clusters, while Hewlett-Packard plans to harness commodity servers internally to offer utility-like computing services to customers. However, Dell is so far offering the most formal programme for building the clusters, according to Kusnetzky.