Feature

When mid range met mainframe

Prof Martin Healey ponders the threat to the mainframe

There was a time when computers were efficient. They had to be. Hardware was so expensive that software had to make the best possible use of it. Today hardware, in terms of price/performance, is incredibly cheap, a huge advance, but one with a few negatives. In particular, efficient software has become a thing of the past, except in dedicated devices, notably battery-powered ones such as mobile phones/computers.

Freed from hardware constraints, software has become excessively multi-functional. There are still specially designed software systems embedded in products such as routers and terminal controllers, but the main emphasis is on operating systems that try to be all things to all people.

The main culprits are Windows and Unix. Today both are targeted at the graphical workstation and the server alike. Unix is also multi-user! Yet the requirements of a workstation (single-user, graphics-dominated) and a server (up-time, database, communications, multi-user, but no graphics) are very different. Indeed, the combination of a Windows client and an OS/390 server would have serious attractions if only Windows were more reliable.

With Windows 2000, NT gets closer to Unix, although it still doesn't scale, is unproven, is single-user and has security shortcomings. But even more than Unix it pursues the vision of encompassing both client and server in one operating system. It is a jack of all trades and, as such, not especially good at either role: far too big and complex for the client, and too encumbered with GUI details for a server.

Nevertheless, the combination of Windows 2000 or Unix with a top-end PC, say one with four Intel processors and many gigabytes of disk, is a very potent machine, despite software inefficiency. Such a machine has the power of a mainframe of not so many years ago! Soon there will be much faster Intel processors, 64-bit options and practical support for more than four processors. In particular, if Linux is employed, such a machine will cost a fraction of the price of current Unix machines from IBM, Sun or HP. It is inevitable that the bread-and-butter mid-range systems will become dominated by top-end PC/server products running Linux, with quite a lot of Windows 2000 systems as well.

However, the technical advances are not exclusive to the PC sector and Intel: IBM, HP et al will continue to produce ever more powerful products themselves. It follows that as PC technology pushes up into the traditional mid-range market, advances in mid-range Unix systems (and possibly specialised versions of NT from Unisys and others) will attack the traditional mainframe market. Indeed, this is the objective of HP and Sun, but also, inadvertently, of IBM with the RS/6000 and AS/400, both of which will impact its own S/390 market.

But a mainframe is not just a bigger computer with more processing power than a Unix system. Unix is a time-shared system, well suited to interactive and server functions, whereas the mainframe architectures have evolved to meet high-performance transaction processing requirements. Thus the attention to up-time and to transaction services (eg Cics/DB2) is very well developed. In particular, while some Unix machines can get within touching distance of the processing power of a mainframe, they are hopelessly outperformed in I/O bandwidth by S/390 machines, albeit at a cost (although, again, the total cost of ownership is usually lower even if the initial cost is higher). But the real advantage of a mainframe is its inherent support for batch processing, which is essential to so many business applications. The Job Control processors in OS/390 far outclass anything available for Unix or NT.

The key to a mainframe's ability to support such complex mixtures of transactions, time-sharing, batch and database work lies in partitioning. Logical partitions (LPARs) allow separate subsystems to be run, each optimised for a specific task: fastest response for transactions, maximum throughput for batch, best I/O for the database. There is no overall compromise, as there is in Unix. Indeed, LPARs can support multiple similar subsystems, such as two Cics partitions, one for production and one for development and testing.
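To make the idea concrete, here is a purely illustrative sketch in Python of how one machine's capacity might be carved into independently tuned partitions. The partition names, weights and priorities are invented for the example, and the code bears no relation to real LPAR or HMC configuration; it simply models the principle that each subsystem gets its own share and its own tuning, rather than one compromise setting for the whole machine.

# Illustrative model of logical partitioning: one physical machine's
# capacity is divided into independent partitions, each tuned for one job.
# All names and figures are invented; this is not real LPAR/HMC syntax.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str         # hypothetical partition name
    purpose: str      # what the subsystem is optimised for
    cpu_weight: int   # relative share of processor capacity
    io_priority: int  # 1 = highest claim on I/O channels

machine_capacity = 100  # notional processor units for the whole machine

partitions = [
    Partition("CICSPROD", "fastest transaction response", cpu_weight=40, io_priority=2),
    Partition("CICSTEST", "development and testing",      cpu_weight=10, io_priority=4),
    Partition("BATCH",    "maximum batch throughput",     cpu_weight=30, io_priority=3),
    Partition("DB2",      "best database I/O",            cpu_weight=20, io_priority=1),
]

total_weight = sum(p.cpu_weight for p in partitions)
for p in partitions:
    share = machine_capacity * p.cpu_weight / total_weight
    print(f"{p.name:8} {share:5.1f} units  (I/O priority {p.io_priority})  {p.purpose}")

Running this simply prints the notional shares, but it captures the point: the production Cics, test Cics, batch and database workloads each get their own allocation and their own priorities, which is precisely what a single general-purpose Unix or NT image does not offer.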

It is not surprising, therefore, that much of the current development on the bigger 'mid-range' systems, both Unix and OS/400, is focused on introducing logical partitioning à la mainframe. These facilities have a long way to go before they become competitive with OS/390, but it is a beginning. The question then is: what will happen to the mainframes to keep them ahead of the game?



This was first published in August 2000

 
