Intel may only have been playing catch-up with the rest of the market at its Developer Forum this year, but the company's future direction still provides some insights into how hardware manufacturers and users will be utilising processors in the coming years for desktop PCs and departmental and high-end servers.
Intel is following an industry trend towards smarter processors with better power consumption and performance. It has also vowed to move most of its products to 64-bit capabilities, from the server to the desktop.
Intel's move to 64-bit computing is now well under way, and users who have not already embraced the technology will be nudged in that direction as Intel joins AMD's 64-bit push.
From next year, 64-bit will be the rule rather than the exception. Companies such as Sun Microsystems and IBM have been pushing 64-bit with their own high-end processors for some time, but now that the commodity players have made the switch, the market will surely reach its tipping point.
"We have already seen customers taking advantage of what this offers," said John King, enterprise server manager at Hewlett-Packard. "Customers are always screaming out for more memory."
The ability to address larger memory spaces has always been the key success factor for 64-bit computing. It makes large in-memory databases possible, meaning that companies can process data more quickly because they do not have to pull it from the hard disc. On the other hand, how many users are large enough to take advantage of this?
Even those that are not will find themselves running 32-bit applications on 64-bit equipment. The fact that the functionality is there does not mean you have to use it, and unlike the Itanium processor, which runs 32-bit applications poorly, the new collection of 64-bit processors from Intel (along with AMD's 64-bit Opteron) handles 32-bit code with minimal overhead.
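The memory ceiling King alludes to is simple arithmetic. As a rough illustration (not from the article), the gap between what a 32-bit and a 64-bit address can reach looks like this:

```python
# Back-of-the-envelope arithmetic showing why 64-bit addressing matters
# for in-memory databases: a 32-bit pointer tops out at 4GiB, while
# a 64-bit address space removes that ceiling for all practical purposes.

GIB = 2 ** 30  # bytes in one gibibyte

addressable_32bit = 2 ** 32  # bytes a 32-bit address can reach
addressable_64bit = 2 ** 64  # bytes a 64-bit address can reach

print(addressable_32bit // GIB)  # 4 GiB - too small for a large database
print(addressable_64bit // GIB)  # 17179869184 GiB, i.e. 16 EiB
```

In practice operating systems reserved part of that 4GiB for themselves, so a 32-bit database process often saw even less, which is why "customers screaming out for more memory" hit the wall well before the theoretical limit.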
On the desktop, the benefits of 64-bit computing are less pronounced, said Mark Quirk, Microsoft's UK head of technology. "Always with 64-bit, the initial gains come where you add more memory," he said. "So for your average desktop user today, my view is that they will not see much difference."
The benefit of putting two processor cores on one piece of silicon is that you can increase the performance of the processor without having to pump more electricity through the silicon.
Intel has faced increasing challenges putting more power through processors with decreasing component sizes. Its move to 65nm designs creates transistors just three times the size of a polio virus. "Multicore enables us to deliver continued performance without the power penalties that we saw with the gigahertz approach," said Intel chief executive Paul Otellini.
However, In-Stat research analyst Jim McGregor said, "Are end-users going to see more performance out of it? On maybe one or two applications, but overall, no." Quirk said there would still be some benefit. Servers generally run more multithreaded applications, which is where dual-core processors can excel.
Desktop application suppliers will need to work hard to build multithreading into their software to take full advantage of the new chips. Nevertheless, Quirk said the scheduler in Windows XP would divide processes between the cores so that, for example, a desktop running a virus scan could offload that task to one processor while serving user requirements with the other.
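A minimal sketch of the pattern Quirk describes, using Python's standard threading module (the task names are hypothetical stand-ins; on a dual-core machine the OS scheduler is free to place each thread on a different core):

```python
import threading

def background_scan(results):
    """Stand-in for a long-running task such as a virus scan."""
    # On a dual-core chip the scheduler can place this thread on the
    # second core, leaving the first free for interactive work.
    results["scan"] = sum(i * i for i in range(100_000))

def serve_user(results):
    """Stand-in for the interactive workload the user actually sees."""
    results["served"] = ["request-%d" % i for i in range(3)]

results = {}
scan = threading.Thread(target=background_scan, args=(results,))
scan.start()            # scan proceeds in the background...
serve_user(results)     # ...while the main thread serves the user
scan.join()             # wait for the scan before reading its result
print(sorted(results))  # ['scan', 'served']
```

One caveat worth noting: in CPython the global interpreter lock prevents two pure-Python threads from running CPU-bound work truly in parallel, so a real scanner would use native threads or separate processes; the sketch only illustrates how the OS-level division of labour works.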
Users may not be able to expect double the performance from a dual-core processor, but the performance-per-watt ratio is likely to increase, because the processors are much more power-efficient. Intel, like AMD, is concentrating on reducing power consumption among processors.
AMD launched the dual-core version of its Opteron processor in April this year, and Intel shipped some early dual-core designs with its Pentium D and Pentium Extreme units. These have already been adopted by Dell in some servers, while some other pioneering manufacturers have included them in high-end desktops.
However, the design of these early dual-core chips is far from perfect. Rather than putting two processors on a single die, Intel went for the quick and easy solution, which Steve Pawlowski, chief technology officer of Intel's digital enterprise group, said helped to introduce the processor quickly.
In this design, which will be mirrored in the Presler chip designed to supersede the Pentium D, the processing cores do not communicate with each other inside the chip; instead, core-to-core communication happens off the chip via an 800MHz front-side bus, reducing efficiency. Pawlowski said that future dual-core designs would put both cores on a single die, which should make core-to-core communication much more efficient.
Further ahead, both Intel and AMD expect to produce multicore processors, shipping quad-core units in 2007, and probably moving to larger numbers after that.
But more cores are not the only thing being folded into silicon in future generations. Intel has been heavily pushing its "*T" technologies - functions that have traditionally been managed in software but which it wants to put directly into the processing hardware.
The most significant of these technologies is virtualisation, said Brian Gammage, vice-president at analyst firm Gartner. Companies such as VMware have offered software-based virtualisation for a while, but Intel is building its virtualisation technology, called Vanderpool, directly into the processor, introducing a layer of abstraction between the operating system and hardware.
Microsoft plans to support virtualisation technology with its own virtualisation middleware, known as Hypervisor, which will ship as part of Vista Server in 2007. However, Microsoft has not yet said much about what it will do on the client, and Gammage thinks that client-side in-chip virtualisation technology could be disruptive.
Gammage said: "I think Vanderpool opens up this space between hardware and software. I can see renewed interest in developing more tools for these machines."
Gammage saw scope for such tools on the server, less so on the client. But then, one could see a single PC with multiple operating environments for work and home, for example, and perhaps another one for remote support staff to use when things go wrong.
In the meantime, AMD is also preparing to ship its rival virtualisation technology, Pacifica. This will represent an interesting challenge for software suppliers because the two technologies will not conform to any standard. Hypervisor suppliers may find themselves having to write their software to support both technologies.
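Because Vanderpool and Pacifica are exposed through different, vendor-specific feature flags, even telling which one a machine offers takes per-vendor logic. A minimal sketch, assuming Linux-style /proc/cpuinfo text, where Intel's extension is advertised as the `vmx` flag and AMD's as `svm` (the function name is illustrative):

```python
def detect_virtualisation(cpuinfo_text):
    """Report which hardware virtualisation extension the CPU flags advertise."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # A flags line looks like: "flags : fpu vme de pse ... vmx"
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel Vanderpool (VT-x)"
    if "svm" in flags:
        return "AMD Pacifica (SVM)"
    return "none detected"

sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx\n"
print(detect_virtualisation(sample))  # Intel Vanderpool (VT-x)
```

A hypervisor that wants to run on both platforms must carry two such code paths all the way down, since the instruction sets behind the flags (VMX and SVM) differ as well, which is exactly the burden on suppliers the article describes.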
But where does this leave Itanium, Intel's 64-bit "big iron" processor, developed over an eight-year period with Hewlett-Packard? Intel added 64-bit extensions to its Xeon processors some time ago, and its announcement that most of its next-generation micro-architecture will be native 64-bit and dual-core appears to narrow the gap between these processors and Itanium.
IBM thinks so. The company, which offers both Itanium and Power processors, has decided to support only Xeon processors with its X3 chipset, designed to string up to 32 processors together in an SMP configuration. With dual-core technology coming down the pipe, and with Intel's hyperthreading technology potentially dividing each core into a further two virtual processors, this gives a theoretical ceiling of 128 virtual processors for each X3 system - and that is before we even get into quad-core chips.
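The 128-processor ceiling follows directly from the multipliers quoted above:

```python
sockets = 32          # processors the X3 chipset can string together
cores_per_socket = 2  # dual-core chips
threads_per_core = 2  # hyperthreading splits each core into two virtual CPUs

virtual_processors = sockets * cores_per_socket * threads_per_core
print(virtual_processors)  # 128
```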
The increased power of the next-generation Intel chips makes the "build out" proposition, where companies string lots of commodity processors together, attractive when compared to the idea of using smaller numbers of ultra-high-performance processors like the Itanium, said McGregor.
Although AMD and Intel may be moving in the same direction as companies such as Sun and IBM with dual-core and 64-bit, there are some areas where they are not following other players - at least for now.
Richard Barrington, head of government affairs at supplier Sun, said the company was taking a task-oriented approach to its chip design. Niagara, the next-generation chip architecture, is planned for introduction this year in a four-core configuration, with eight-core due next year.
"Rather than doing one chip that does everything, this chip is good for web stuff, for standard I/O type things," he said. Conversely, the Rock family of processors, shipping much later, is designed for number crunching and floating-point operations.
For the time being, both AMD's and Intel's chips are task-neutral - one lump of silicon supposedly fits all. But hardware manufacturers use the different characteristics of the two companies' architectures to suit different tasks anyway. The AMD design, with its processor and memory tied closely together, suits applications where the workload can be placed close to the processor, said Tikiri Wanduragala, IBM regional eServer and storage consultant. "We targeted AMD towards high-performance computing," he said.
However, AMD's high-speed inter-chip communication technology, HyperTransport, makes it less appropriate to string the chips together using the X3 chipset. Consequently, the X3 supports only Intel's Xeon.
With power consumption, dual-core, and 64-bit computing underpinning developments in microprocessors, silicon has become exciting again. As companies begin making their chips smarter as well as more powerful using in-chip technology, these new platforms will hopefully deliver tangible benefits in terms of performance, computing density and manageability.