Big iron's battle of the boxes

Despite predictions to the contrary, the mainframe is still motoring on. Danny Bradbury examines where using the big machines pays off, and where applications are best migrated to clusters of smaller systems

Predictions in the IT business are two a penny, but good predictions are as rare as hen's teeth. In 1991, for example, a marketing manager at Informix predicted that the mainframe would be dead by 2000. In 2004, revenues for IBM's zSeries mainframe business grew by 15%. Not bad for a dead product category.

Then in 2005, IBM (which incidentally bought Informix's ailing business in 2001) launched the 38-processor Z9, its biggest mainframe yet. So much for predictions.

Still, the mainframe market has its fair share of challenges. One problem with mainframes and the applications that run on them is that they are often relatively old. So are the people who keep them working, says Patrick Pochard, who runs Computer Associates' mainframe centre of excellence in the Czech Republic.

"Most of the mainframe experts today are in their 50s. The percentage of people younger than that is very small," he says. "This is not good because even if you provide great hardware, systems and products, if you do not have the right people to maintain it and keep it up and running, you are in trouble."

Skills are not the only problem facing mainframe users. Suppliers contend that mission-critical applications should stay on machines with high reliability and strong performance, but this is changing. Mainframe and minicomputer applications are increasingly devolving to high-end multiprocessor Unix boxes or clusters of Windows machines.

For example, Swiss air traffic control firm Skyguide recently migrated its radar data processing software from a Data General MV9800 minicomputer to a Windows server. It does not get much more mission critical than that.

The MV9800 was already considered obsolete when the company installed it in 1995, according to Philippe Chauffoureaux, Skyguide's radar data processing group leader. "The air traffic control world back then was quite conservative," he says.

The turning point for Skyguide came when support for the MV9800 from EMC (which owns Data General) was due to expire.

This created big problems for Skyguide, as rewriting the software would have taken too long and commercial off-the-shelf software was going to be too expensive. So Chauffoureaux's team decided to migrate the code instead.

"PCs benefit from the mass market effect," Chauffoureaux says. "Companies like Asus or Intel sell millions of pieces of hardware so they are tested well. But if you take specialised hardware, it is often being used by only 100 to 1,000 customers."

Mainframe suppliers are working hard to protect their market. For example, most of them these days run open operating systems such as Linux, and can do so in multiple virtual partitions.

Chander Khanna, vice-president and general manager of the ClearPath solutions and services business at Unisys, says the company's mainframes now run Java applications alongside Windows and Linux, which can run on separate processors in the mainframe chassis.

But such provisions are not enough for some of Unisys' customers. For example, Express Newspapers recently abandoned its Unisys mainframe for Intel-based Windows servers.

Simon Cohen, Express Newspapers' product development manager, says his team had already pulled many applications off the mainframe onto other equipment. They finally ended the lease on the mainframe after migrating an eight-year-old Cobol-based prepress processing application onto an open platform.

"The saving that we have made is on the support and service from Unisys. That paid for the migration and we saved money to boot," says Cohen. The migration, including all of the Windows hardware, cost £300,000.

"It gets to a point where there is so little running on the thing that you cannot justify it to the board," he says. The remaining savings went on new equipment, including Apple Macs.

Development opportunities have soared since the firm moved away from the mainframe platform, says Cohen. "Running [the system] on Windows has given it a new lease of life, mainly because of the database that is behind it," he says.

The firm originally used a flat file database but moved to SQL Server's relational structure. The initial plan was to run the migrated system for four years at the most and then redevelop, but now things may change. "It has become a lot easier to modify it and write new parts," Cohen says.

This type of migration is known as "lift and shift", according to Mike Gilbert, director of product strategy at Micro Focus, the migration tools supplier Cohen chose. Lift and shift takes existing legacy code and ports it to an open platform such as Windows for recompilation.

Generally the code has been written in a version of a language such as Cobol or PL/1 with proprietary extensions added by the mainframe supplier, a problem the migration tools are designed to handle.
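As a rough illustration of the kind of pre-migration analysis such tools automate, the toy Java program below scans Cobol source for supplier-specific keywords and flags the lines a port would need to review. The keyword list is entirely hypothetical, standing in for whatever dialect rules a real migration tool ships with.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Toy pre-migration scan: flag lines of Cobol source that use
// supplier-specific extensions so they can be reviewed before a
// "lift and shift" recompile on the target platform. The keyword
// list is hypothetical, not any real tool's rule set.
public class DialectScan {
    private static final List<String> PROPRIETARY = List.of(
            "EXAMINE", "TRANSFORM", "CONVERTING"); // hypothetical examples

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        for (int i = 0; i < lines.size(); i++) {
            String upper = lines.get(i).toUpperCase();
            for (String keyword : PROPRIETARY) {
                if (upper.contains(keyword)) {
                    System.out.printf("line %d: uses %s, needs review%n",
                            i + 1, keyword);
                }
            }
        }
    }
}
```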

Even so, migration is not always plain sailing. At Skyguide, Chauffoureaux used the services of migration consultancy Transoft, a Micro Focus partner. Transoft took the legacy code and delivered new code to be tested and recompiled by Skyguide's internal team.

The Ada and Fortran migration went according to plan, but the assembly language (second-generation mnemonic code that speaks more directly to the hardware) was more problematic.

"They rewrote 90% of what was delivered because the quality of the port was not at the level we would expect in a multi-level environment," says Chauffoureaux.

The problem lay with the multithreaded, multiprocessor nature of the original hardware configuration, and with assembler code that could have been more expertly written in the first place. With a single-processor box it would have been much easier, says Chauffoureaux.
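The trap is a general one: code that rarely misbehaves when threads cannot truly run in parallel can lose data reliably on a multiprocessor. This minimal Java sketch, which is not Skyguide's assembler but the same class of bug, shows two threads corrupting a shared counter:

```java
// Minimal sketch of the general problem: an unsynchronised counter
// that rarely misbehaves on a single processor, where threads merely
// interleave, but reliably loses updates when two threads run
// simultaneously on a multiprocessor.
public class RaceDemo {
    static int counter = 0; // shared, unprotected state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // Expected 200000; under true parallelism the result is usually
        // lower because concurrent increments overwrite each other.
        System.out.println("counter = " + counter);
    }
}
```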

The team spent around six months rewriting the assembler code. Such unforeseen problems can turn a project from "lift and shift" into "lift and sift".

Nevertheless, the benefits for Skyguide of moving away from a minicomputer environment were impressive. For one thing, says Chauffoureaux, prior to the migration it was impossible to run separate systems for development and testing. These days the company runs multiple copies of the same system for training, simulation and testing purposes.

"That is something we could not do in the past because we could not buy new MV9800s," he says.

But what options exist for companies that want to modernise their mainframe applications without moving away from their chosen platform? One approach is to take the existing application and wrap it with code that lets you create new interfaces for end-users.

Doing it this way also lets you present mainframe applications as services in a service-oriented architecture, or as object-oriented systems. However, be prepared for the cost of the middleware and the middle-tier software development to escalate.

One company taking this approach is Allianz Cornhill Insurance. Mainframe operations manager Keith Walker still uses a Z900, a mainframe a couple of generations behind IBM's Z9 machine.

"A significant proportion of the company's business is operated from the mainframe," he says. "The reason for that is reliability, scalability and to some extent cost effectiveness of the platform when looked at in an efficiency context."

People often forget to factor ongoing support into costs when migrating to mid-range and PC-based systems, says Walker.

Walker manages the mainframe, which runs line-of-business applications for the insurance market, using Computer Associates' tools for tasks like job scheduling. It has experienced 100% uptime in the past 12 months, and has only been down twice in the past four years.

"You do not have to move these applications to provide a 21st century front end," says Walker.

Allianz Cornhill Insurance is implementing web-based front ends for its mainframe applications, which will replace the traditional green screen interface. It is using IBM's Websphere MQ middleware to transform the applications. Pricey? "Yes, but it is horses for courses," says Walker.
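A minimal sketch of the wrapping pattern using the base WebSphere MQ classes for Java: the new front end puts a request message on a queue that the unchanged mainframe application services, so no green screen is involved. The queue manager name, queue name and payload below are hypothetical, and connection details and the reply-to queue are omitted for brevity.

```java
import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

// Sketch of the wrapping pattern: a new front end sends a request to
// the unchanged mainframe application over WebSphere MQ instead of
// driving its green screen. All names are hypothetical; real code
// would also set connection details and use a reply-to queue.
public class LegacyQuoteClient {
    public static void main(String[] args)
            throws MQException, java.io.IOException {
        MQQueueManager qmgr = new MQQueueManager("INS.QMGR"); // hypothetical
        MQQueue requests = qmgr.accessQueue("QUOTE.REQUEST",  // hypothetical
                MQC.MQOO_OUTPUT);
        MQMessage msg = new MQMessage();
        msg.writeString("POLICY=MOTOR;POSTCODE=GU1 1AA"); // made-up payload
        requests.put(msg); // the legacy app consumes this and replies
        requests.close();
        qmgr.disconnect();
    }
}
```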

One of the most important developments for Walker has been the change in software licensing models on the mainframe. To bring them more into line with the utility computing movement in the open systems market, mainframe software suppliers have been reconfiguring licence arrangements to charge customers for the computing power they actually use.

"Some suppliers are going towards the 'pay for what you use' model and Computer Associates is one of them," says Walker, adding that he may only be using 50% of the mainframe's capacity at any one time.

"The new pricing model uses a tool to measure how you use each piece of software and there is a scale of charges based on that."

This makes the mainframe proposition more cost-effective for Walker: even including depreciated capital expenditure, the hardware amounts to just 20% of the system's total running cost, so software charges are where the savings really count.

Mainframe hardware and software suppliers may be trying to shake up their business models to help fight a rearguard action against increasingly powerful mid-range boxes, but they have their work cut out.

IBM's zSeries revenues may be up on last year, but while the proportion of its customer base running very high-end, high-mips applications is likely to grow, the sub-1,000 mips market will continue to erode from the bottom up.

For an increasing number of smaller customers, the best things really do come in small packages.

Case study: IMS cleans up with a mainframe and clustered PC combination

For some, neither a mainframe nor a PC server can completely satisfy their computing needs. For Terry Kelly, director of production support for IMS, a combination of mainframe and clustered PCs and Unix boxes offers the best performance.

IMS collects and cleans medical data from 29,000 suppliers at 225,000 global sites. The data, which arrives in thousands of unique formats, must be correlated and formatted for each customer's system.

The IMS system uses a mainframe for much of its transaction processing (it handles one billion transactions each month), but it also runs farms of Windows and Unix servers. When necessary, it pulls some jobs off the mainframe onto these farms for initial processing before pushing them back to the mainframe.

This is because sometimes the cost of processing a batch of data on a non-mainframe cluster can be lower than doing it on the bigger box.

"Where data volumes are small enough for an NT or Unix solution to handle that first stage of processing, we can take it on there," says Kelly.

Similarly, if the jobs require manual processing, it is easier to create a workstation graphical user interface to the mid-range operating systems than to the mainframe, he says.

Job scheduling is an important part of making the platforms work together. A job often needs to be processed on both the mainframe and a Windows or Unix cluster at different points, and in different locations, so Kelly needs to automate it as much as possible.

He uses ESP, a job scheduling system from Cybermation, which automates the transfer of batch data between these resources.
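ESP's own interface is proprietary, so the Java sketch below is only a toy illustration of the flow Kelly describes: each batch step names the platform it must run on, and a scheduler dispatches the steps in dependency order, moving the work between mainframe and cluster. Step names and platforms are invented for the example.

```java
import java.util.List;

// Toy illustration of the cross-platform batch flow Kelly describes,
// not Cybermation ESP's actual interface: each step names the platform
// it must run on, and a step is released only when its predecessor has
// finished, wherever that predecessor ran.
public class BatchChain {
    record Step(String name, String platform) {}

    public static void main(String[] args) {
        List<Step> chain = List.of(
                new Step("receive supplier feed", "unix-cluster"),
                new Step("first-stage cleaning",  "windows-farm"),
                new Step("correlate and load",    "mainframe"),
                new Step("format for customer",   "mainframe"));

        for (Step step : chain) {
            // A real scheduler would submit remotely, track completion and
            // move the batch data between platforms; here we just log it.
            System.out.printf("dispatch '%s' to %s%n",
                    step.name(), step.platform());
        }
    }
}
```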

The company is unlikely to abandon its mainframe in the immediate future. As the main transaction processing workhorse and the repository for a huge 79Tbyte datastore, it sits at the centre of the IT infrastructure.

But IMS' use of open systems to complement its mainframe shows that sometimes the two worlds can co-exist.
