Research from Merrill Lynch analyst Thomas Kraemer claims that TPC-C benchmarks from the Transaction Processing Performance Council, an independent and influential benchmarking body, show HP's Superdome to be comparatively tepid. IBM's pSeries 680 performed at 221,000 transactions per minute (tpmC) while Superdome managed only 197,000tpmC.
"It did not beat the high-end servers that IBM introduced a year ago," the research stated. "And it was more expensive than Sun's on a price-performance basis."
Terry Walden, HP's mission-critical marketing manager, replied, "Merrill Lynch has cocked up and demonstrated that it does not understand benchmarking."
Walden explained that the TPC-C figures are based on disparate databases and differently configured systems, and said the results cannot be taken at face value.
"It is not comparing apples with apples, and the report completely ignores the SAP ATO [assemble to order] results which position Superdome as the world's fastest server [within the test environment] - almost twice as performant as the p680 in a test that IBM rates as a good example of a real-world application," he said.
The whole area of benchmarking is a minefield of variables, and it takes time for companies to tune their systems to gain the best result. In TPC-C tests, the p680 originally clocked 105,000tpmC - just over half as fast as Superdome - and HP reckons that, given time to tune the hardware and software to the same degree as the second IBM machine test, Superdome will render figures of at least 300,000tpmC.
In many ways, HP has only itself to blame because the TPC-C tests are so well known and are probably the most-quoted figures when it comes to "benchmarketing" - using the results as "scientific" proof of a system's superiority over what has gone before.
SAP ATO is the ERP company's assemble-to-order benchmark, based on commonly used functions in its applications. It covers steps from order entry through production, delivery, workflow, invoicing and product costing.
The Transaction Processing Performance Council's TPC-C measures the performance and scalability of online transaction processing (OLTP) systems. It tests a wide range of database functions, including enquiry, update and queued mini-batch transactions, for a simulated order entry and distribution environment. Results are used to compare database performance as well as hardware.
This was first published in January 2001