Supercomputers running up against the barriers of Moore's Law

The next generation of supercomputer will require a radical re-think, as computer design begins to run up against the limits of Moore's law.

Research is underway around the world to develop supercomputers containing millions of processors that will run at exascale speeds, capable of processing a million trillion calculations a second.

They will open up the potential for breakthroughs in oil and gas exploration, fusion power, aircraft design, and genetic research, say experts.

But technology to develop these superfast machines is rapidly running up against the limits of physics, a meeting of computer specialists in Hamburg heard.

William Harrod, director of research for high performance computing at the US Department of Energy, said it was no longer possible to rely on Moore's law to create faster supercomputers.

“We are at a point in the history of computers where a very substantial change has to occur. Historically Moore's law has meant that speed has doubled every two years,” he told the meeting. “We can no longer continue to double the performance and think that we will get to exascale.”

Although major IT suppliers, such as Intel and IBM, are developing technology to boost supercomputer power, they are unlikely to develop genuinely practical exascale computers without government intervention, he claimed.

“Industry is not going to go there. They are going to find an easier path than to make the significant architectural changes required,” said Harrod.

One of the biggest barriers is the rising cost of power as supercomputers rely on a growing number of processors to perform faster calculations.

If current power consumption rates continue, the cost of running an exascale supercomputer will far exceed the cost of building it, and few datacentres have access to the power resources needed.

The US Department of Energy (DoE) argues that a practical exascale computer should consume no more than 20 megawatts (MW) of electricity – the power level of a large datacentre.

The target is “audacious”, said Harrod, but even a machine running at that level would have an electricity bill running into tens of millions of dollars a year.

“20MW is an important target – that is 20 million dollars a year. That is a lot of money. 400MW is 400 million dollars a year. That is not going to happen. One gigawatt, that is really not going to happen,” he said.
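Harrod's figures work out at roughly $1m per megawatt-year. A quick sanity check of that arithmetic, assuming an illustrative industrial electricity rate of about $0.114 per kilowatt-hour (the rate is an assumption, not a figure from the article):

```python
# Rough annual electricity cost for a machine drawing constant power.
# The $/kWh rate below is an illustrative assumption, not from the article.
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
RATE_USD_PER_KWH = 0.114         # assumed industrial electricity rate

def annual_power_cost(megawatts):
    """Annual cost in US dollars of drawing `megawatts` continuously."""
    kwh_per_year = megawatts * 1000 * HOURS_PER_YEAR
    return kwh_per_year * RATE_USD_PER_KWH

print(round(annual_power_cost(20) / 1e6))   # roughly 20 ($ million a year)
print(round(annual_power_cost(400) / 1e6))  # roughly 400 ($ million a year)
```

At that assumed rate, a 20MW machine run flat-out costs close to the $20m a year Harrod cites, and a 400MW machine an order of magnitude more.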

Suppliers are tackling the problem by building computers with a larger number of processors operating at lower voltages, but this will inevitably create reliability problems, Harrod warned.

“The systems hardware is going to be less reliable than we have today. For many reasons, including reducing the voltage, but also because the number of parts is huge,” he said.

The biggest challenge will be developing software for exascale computers. It will need a radical change of approach, said Harrod.

“One of the things that needs to be changed is the software stack. It has evolved over the last 30 to 50 years based on assumption of a single core,” he said.

US Department of Energy goals for exascale computers

  • 500 to 1000 times more performance than today’s fastest computers
  • 20MW power requirement
  • Maximum of 500 cabinets
  • Highly commercial off-the-shelf technology
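An exaflop is 10^18 floating-point operations per second; dividing that by the DoE's 20MW ceiling gives the energy-efficiency target implied by the goals above (a back-of-the-envelope sketch, not a figure stated in the article):

```python
# Energy efficiency implied by the DoE exascale goals above.
EXAFLOPS = 1e18          # one million trillion calculations per second
POWER_BUDGET_W = 20e6    # the 20MW power ceiling, in watts

flops_per_watt = EXAFLOPS / POWER_BUDGET_W
print(flops_per_watt / 1e9)  # 50.0 - i.e. 50 gigaflops per watt
```

For comparison, this is the kind of efficiency leap that makes Harrod's call for "significant architectural changes" rather than incremental evolution concrete: the budget allows only 20 nanojoules per billion operations.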

But with governments and research laboratories aiming to have the first exascale devices in production by 2020, the industry is running out of time to persuade suppliers to take up the challenge.

“I really think we are in a position to reinvent computing. It's really too easy to say to a computer vendor, just evolve what you are doing, but that really does not achieve the goals,” he said.

The DoE is pushing for designs based on commercial off-the-shelf technology, which could be more widely used in industry and research labs, and could more easily make use of existing software, he said.

But the DoE acknowledges that suppliers are unlikely to make radical changes to their computer architecture without government cash. The department is funding the development of an exascale prototype. Similar research programmes are underway in Europe, China and Japan.
