What are the fundamentals of 64-bit computing that your company needs to be aware of? Why should you consider moving away from tried-and-trusted 32-bit computing and what are the key steps in making such a migration? Cath Everett investigates
While commercial 64-bit systems have been around since the mid-1990s in the shape of high-end but expensive RISC-based UNIX machines, it is only in the past year or so that commodity offerings have started making inroads into the market.
Such machines are ideally suited to undertaking huge calculations on large data sets, as is the case with data warehousing, computer-aided design or large enterprise resource planning (ERP) applications, all areas that have traditionally deployed 64-bit systems.
But with the advent of more affordable true 64-bit processor technology - chips that can run 64-bit operating systems and 64-bit applications - and of stepping-stone solutions, such as 64-bit technology that is backwards-compatible and can run native 32-bit applications, interest in 64-bit computing is becoming more widespread.
These systems have proved particularly popular among small and medium-sized enterprises (SMEs) that have one or two compute-intensive packages, but do not see the need to invest in an entire 64-bit architecture.
Performance-hungry applications
Brad Day, an analyst at Forrester Research, says, “For performance-hungry applications such as heavy databases, data warehouses and media applications such as graphics and movies, 64-bit computing will rule, while 32-bit systems will be relegated to office productivity applications.”
He also expects commodity 64-bit servers, which generally run the Linux and Windows operating systems, to replace RISC UNIX machines at the edge of the network to perform such tasks as proxy serving, caching and web serving.
But as these commodity offerings - which are usually based on between two and eight processors - move up the food chain, the more expensive eight- to 64-way, 64-bit RISC Unix machines will push increasingly into the traditional mainframe realm.
“By mid-2006, we believe that 10%-20% of the mainframe installed base will migrate to high-end Unix - traditional mainframe feature functions such as virtualisation have cross-pollinated and are now available here, while acquisition and maintenance costs are still an issue for many mainframe customers,” Day explains.
For the moment, however, the main inhibitor to migrating to commodity 64-bit systems is a lack of widespread commercial applications.
Gary Hein, research director at the Burton Group, says, “Maturity and application support are the two biggest problems and it’s a bit of a chicken-and-egg situation. Hardware manufacturers don’t want to invest in 64-bit hardware designs unless there are applications and hence customer demand, yet application developers are reluctant to make investments unless there is a viable market to sell their solutions.”
Although the situation has improved with the arrival of the backwards-compatible stepping-stone technologies, Hein believes that companies like yours are most likely in evaluation rather than purchasing mode as you wait for the technology and the market to mature.
This means that uptake is unlikely to be high for the next couple of years, although 64-bit systems will continue to appeal to the small part of the market that has hit a performance and scalability ceiling with its 32-bit systems.
Nonetheless, says Hein, in large organisations that have used 64-bit for some time, there is a “big move towards Linux as a replacement for the traditional Unix software/hardware combination, because customers see a migration path without needing to do significant application redesign or user retraining”.
The driver is that IBM’s OpenPower systems or Intel Itanium-based offerings running Linux can be between five and 10 times cheaper than traditional RISC Unix offerings, depending on the application.
Map out requirements
But Hein warns against going for a wholesale migration to 64-bit systems for its own sake. “I’d recommend mapping out requirements to solutions; 64-bit application support is still in its infancy, so a large migration won’t provide that many benefits,” he says.
As a result, you should evaluate where you have a critical need for the technology and establish what viable software and hardware alternatives are available in that specific context.
It is not essential to adopt a big-bang approach. For example, it is perfectly feasible to purchase 64-bit hardware when upgrading but to run existing 32-bit packages in emulation mode - Intel’s 64-bit Itanium can run 32-bit software in this way - although in some instances they may actually run more slowly than on 32-bit hardware. Acquiring stepping-stone technology is another viable alternative.
After having invested in a 64-bit machine, however, you have the option to either buy new 64-bit packages and databases or to port your 32-bit bespoke applications over to the new environment at a later stage.
Hein concludes, “My recommendation is not to get wrapped up in 64-bit computing as a magic solution for all your performance problems. Consider it for certain data- or database-intensive applications where you’re coming into architectural limitations. But think about other alternatives, such as clustering, to push back the performance barrier.”