A processor that will scale from a data-centre server to a smartphone, cutting energy consumption and silicon real estate while delivering much greater work throughput: that is the goal of researchers at Microsoft's joint venture with the supercomputing centre in Barcelona, Spain.
Working with colleagues at Microsoft Labs' Redmond headquarters and in Israel, the researchers aim to apply vector processing technology, once the preserve of supercomputers such as those built by Cray, to commercial applications such as making data centres and mobile handsets run more efficiently.
Timothy Hayes and Oscar Palomar, from the supercomputing centre, said the aim of the energy-efficient composable vector processor project was to build a device that uses very simple processors to handle multiple streams of data in parallel, and that reconfigures itself more or less on the fly in response to the workload it receives.
The technique is called grid or tiled computing, and draws on some of the concepts of the reduced instruction set computing (RISC) of the mid-1980s. It also uses innovative programming so that a single instruction can initiate an array of complex processes, including calling up more processors when needed.
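To make the single-instruction idea concrete, here is a toy sketch in Python (not the project's actual instruction set, which has not been published): a scalar loop issues one operation per element, while a single "vector" operation conceptually sweeps over a whole block of lanes at once, which the hardware could execute in parallel.

```python
# Hypothetical illustration only: contrasting scalar execution (one
# instruction per element) with a vector instruction that covers many
# elements, or "lanes", in one go.

def scalar_add(a, b):
    # One add operation issued per element pair.
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
    return out

def vector_add(a, b, lanes=4):
    # A single "vector add" conceptually processes `lanes` elements at
    # once; real hardware would run each chunk's lanes in parallel.
    out = []
    for i in range(0, len(a), lanes):
        chunk = [x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes])]
        out.extend(chunk)
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
assert scalar_add(a, b) == vector_add(a, b)
```

The two functions produce identical results; the point of vector hardware is that the second form needs far fewer instruction fetches and decodes per element, which is where the energy saving comes from.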
The group was set up in December last year. It is currently using a simulator to identify the most common instructions and data streams, which could then be built into hardware.
In parallel, the researchers are working on scheduling algorithms to allocate work efficiently and to recombine results from processes accurately.
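The article gives no detail of those scheduling algorithms, but the shape of the problem is the classic fork-join pattern. The following sketch, which assumes nothing about the project's actual scheduler, splits a workload into chunks, dispatches them to parallel workers, and recombines the partial results in their original order.

```python
# A minimal fork-join sketch: split the work, process chunks in
# parallel, then recombine partial results accurately (in order).
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a data-parallel kernel; here, squaring each element.
    return [x * x for x in chunk]

def schedule(data, workers=4):
    # Round-robin split: worker w gets elements w, w+workers, w+2*workers...
    chunks = [data[w::workers] for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Recombine: interleave the partial results back into original order.
    out = [0] * len(data)
    for w, part in enumerate(partials):
        out[w::workers] = part
    return out

assert schedule([1, 2, 3, 4, 5]) == [1, 4, 9, 16, 25]
```

The recombination step matters as much as the split: a scheduler that cannot merge results back into the order the program expects would produce wrong answers, which is presumably why the researchers treat accurate recombination as a first-class problem.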
Hayes said the move to multicore processors had brought benefits in terms of reduced energy consumption per CPU cycle. But multicore chips such as the current Intel family still carry a heavy overhead of system calls to work properly, he said, making them inefficient in terms of work throughput and hence energy consumption.
The grid processor they envisage will be much more effective in its use of silicon real estate, reconfiguring itself to avoid on-chip "hot spots". This will let it run cooler while getting through more work per unit of time.
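How a chip might steer work away from hot spots is not spelled out in the article; one simple policy (offered here purely as an illustration, not the project's design) is a greedy placement that routes each new task to the coolest tile, spreading heat evenly across the die.

```python
# A toy hot-spot-avoidance sketch: greedy thermal-aware placement.
# Each incoming task is assigned to the tile with the least
# accumulated heat, so no single region of the chip runs hot.

def place_tasks(task_costs, n_tiles=4):
    heat = [0.0] * n_tiles        # accumulated heat per tile
    placement = []
    for cost in task_costs:       # cost ~ heat the task will generate
        tile = heat.index(min(heat))  # greedy: coolest tile wins
        heat[tile] += cost
        placement.append(tile)
    return placement

# Eight equal tasks spread evenly across four tiles.
assert place_tasks([1.0] * 8) == [0, 1, 2, 3, 0, 1, 2, 3]
```

Even this naive policy shows the trade-off the researchers face: balancing heat evenly competes with keeping related tasks close together on the grid, and a real scheduler must weigh both.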
The project is at such an early stage that the discussion and results are still in the public domain, said Hayes and Palomar.
If their theories work, data centres will run smaller, cooler, faster and cheaper, and so will smartphones and network control equipment. In fact, any application that requires fast processing of multiple streams of data would benefit.
But don't hold your breath. Key decisions on how to implement the instructions, whether in software or hardware, are pending and will not be taken soon.