Intel and AMD multicore chips promise more consistent performance, argues Clive Longbottom
The virtualisation capabilities that Intel is building into its chips will allow each processor core to act as an entirely separate system, or even allow each core to be divided into multiple separate systems.
Unlike existing methods, such as dedicating memory space to an application, virtualisation will allow application stacks to be built in which each application runs in its own controlled space. A crash within any virtual space will have no impact on the other spaces. This also provides a greater degree of performance consistency.
With each application running in a dedicated space, only underlying and dependent processes can affect the application.
The desktop is often a critical area, and yet it is dependent on many single points of failure. Many of these can be dealt with through redundancy, such as dual power supplies and Raid (redundant array of independent discs) systems. But without virtualisation it is expensive and difficult to configure multiple CPUs at the desktop to provide true disaster tolerance.
By steplocking the virtual spaces (see below), should one fall over, the other can carry on as if nothing has happened. This is like Raid 1 mirroring, where data is duplicated across two physical drives. It adds nothing to performance, but it provides a disaster-tolerant desktop that may be of greater financial benefit to the company than raw processing power.
Today's desktops run a raft of processes that consume CPU cycles, such as operating system services, anti-virus, management agents and indexing tools. Each of these can place variable demands on a system, and some tend to peak just when power is needed most. Offloading these processes to guaranteed spaces can ensure consistent performance.
A lot of these processes may not even require a full operating system to underlie them. In many cases, such as personal firewalls, it would actually be better if the process ran closer to the silicon. Being able to partition multicore processors to run just a Java Virtual Machine (JVM), for example, could bring massive overall performance and security benefits.
Many management functions can also be offloaded. Moving processes that are continuously polled, or that must feed data back to a central environment at regular intervals, into their own space is very useful. It can stop most of those annoying "hour-glass" moments when the computer suddenly slows down.
In the future, processors with many more cores could allow cores to be rapidly provisioned and deprovisioned. Process stacks would run either on an operating system or on a silicon-based JVM to react to a user's needs. Companies could begin to look at bringing clients into a corporate grid network: under-utilised machines could donate cores to enterprise needs, even if only for short periods.
Not all of this is feasible yet, but Intel and AMD are moving towards these capabilities. For many, the benefit will simply be greater performance. For some, it will be consistent performance or more resilient desktops. Looking to the future, multicore virtualisation may be the means to a far more flexible environment, where enterprise processes can more easily call on available excess desktop processing power, without affecting performance.
Clive Longbottom is senior director at analyst firm Quocirca
What is steplocking?
Steplocking - more widely known as lockstep execution - is a technology used in high-availability servers, where each CPU runs the same task in parallel and the system checks that each one has carried out the function and produced the same outcome. If the results differ, the system determines by majority vote which CPUs are right, and the "faulty" CPU is either disregarded or taken offline, with the other CPUs taking over.
Every "step" taken is "locked" into place once checked, so state is always maintained and high availability is provided. The more CPUs there are, the better the steplocking: with only two, a disagreement can be detected but not resolved, while three or more allow a majority vote to identify the faulty unit.
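The voting step described above can be sketched in a few lines of Python. This is a minimal illustration, not real firmware: the `steplock` function and its callable "replicas" are invented for this example, standing in for redundant CPUs executing the same instruction.

```python
from collections import Counter

def steplock(replicas, *args):
    """Run the same step on every replica CPU and vote on the outcome.

    Returns the majority result and the indices of any replicas whose
    output disagreed - the "faulty" CPUs to be taken offline.
    """
    results = [cpu(*args) for cpu in replicas]
    majority, _ = Counter(results).most_common(1)[0]
    faulty = [i for i, r in enumerate(results) if r != majority]
    return majority, faulty

# Three replicas of the same addition step; the third is faulty.
good = lambda a, b: a + b
bad = lambda a, b: a + b + 1

result, faulty = steplock([good, good, bad], 2, 3)
# result == 5; faulty == [2], so the outvoted CPU is disregarded
```

With only two replicas the `faulty` list would be ambiguous on a disagreement, which is why lockstep systems typically use three or more units.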
This was first published in November 2005