Feature

Power and control: a history of the operating system

Operating systems are something we take for granted, but they have not always been a feature of computing. In the early days, they did not exist at all. Instead, early programmers merely loaded cards representing a job into memory and retrieved the results. But as computers became more complex, it became necessary to have a layer between the programs and the hardware that could administer the interactions of one with the other. The fundamental job of an operating system has always been the same: it enables applications and their users to interact with the various hardware components including the hard disc, network, graphics card and memory.

The function may have stayed the same, but the method of operation changed radically in the early years. IBM, which had developed more than 20 separate families of general-purpose operating systems by 1980, gradually added new features over time as it recognised shortcomings.

OS/360, announced in 1964, was the first operating system that allowed users to share hardware resources including memory, offline storage and I/O devices. It did this using "tasks", which would request memory from the system to carry out their activities before releasing it back to the pool. In this way, OS/360 introduced operating system processes, a concept that remains fundamental to operating system design today. In modern systems, processes are handled by the kernel, which is the core part of the operating system that co-ordinates system resources and inter-process communication. In Windows, for example, the kernel runs processes that can be interrupted by one another, providing the illusion of a system that is doing lots of different things at once. In fact, it is doing lots of very small things very quickly, in sequence.
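
As a rough illustration of that time-slicing idea, the sketch below simulates round-robin scheduling in ordinary user-space C. The task names, the amounts of work and the size of the quantum are invented for the example, and a real kernel pre-empts processes using hardware timer interrupts rather than a polite loop.

```c
/* Illustrative sketch only: a user-space simulation of round-robin
 * time-slicing, not how any real kernel schedules processes. */
#include <stdio.h>

#define QUANTUM 3   /* "time slice" each task gets per turn */

struct task {
    const char *name;
    int remaining;          /* units of work still to do */
};

int main(void)
{
    struct task tasks[] = {
        { "compiler", 7 },
        { "editor",   4 },
        { "spooler",  5 },
    };
    int n = sizeof tasks / sizeof tasks[0];
    int runnable = n;

    /* Cycle through the runnable tasks, letting each run for at most
     * QUANTUM units before it is pre-empted.  Done quickly enough,
     * this strict sequence looks like simultaneous execution. */
    while (runnable > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0)
                continue;
            int slice = tasks[i].remaining < QUANTUM
                      ? tasks[i].remaining : QUANTUM;
            tasks[i].remaining -= slice;
            printf("ran %-8s for %d unit(s), %d left\n",
                   tasks[i].name, slice, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                runnable--;
        }
    }
    return 0;
}
```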

IBM also introduced memory paging, a technique that helps reduce contention among processes vying for limited amounts of memory in the system. Today, paging is a means for the operating system to make memory look bigger than it really is. Virtual memory systems can swap areas of memory out to an external store when they are not immediately needed, so that other processes can use the RAM for something more pressing. Then, when the original process needs the page data again, it is brought back into RAM.
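
The principle can be sketched with a toy model: a small "RAM" backed by a larger "disk", with pages swapped in on demand and an older page pushed out to make room. The sizes, page contents and simple first-in, first-out eviction choice below are invented for the illustration; real virtual memory is enforced by the processor's memory-management unit rather than by an array lookup.

```c
/* Toy model of demand paging: NPAGES virtual pages backed by a smaller
 * "RAM" of NFRAMES frames and a larger "disk".  Purely illustrative. */
#include <stdio.h>

#define NPAGES  8                   /* virtual pages */
#define NFRAMES 3                   /* physical frames ("RAM") */

static int disk[NPAGES];            /* backing store, one slot per page */
static int ram[NFRAMES];            /* physical memory */
static int frame_of[NPAGES];        /* page -> frame, or -1 if swapped out */
static int page_in_frame[NFRAMES];  /* frame -> page, or -1 if free */
static int next_victim;             /* simple FIFO replacement pointer */

/* Return the frame holding `page`, swapping it in on a "page fault". */
static int access_page(int page)
{
    if (frame_of[page] >= 0)
        return frame_of[page];              /* hit: already resident */

    int f = next_victim;                    /* fault: reuse a frame */
    next_victim = (next_victim + 1) % NFRAMES;

    int old = page_in_frame[f];
    if (old >= 0) {                         /* write the evicted page out */
        disk[old] = ram[f];
        frame_of[old] = -1;
        printf("page %d swapped out of frame %d\n", old, f);
    }
    ram[f] = disk[page];                    /* bring the wanted page in */
    frame_of[page] = f;
    page_in_frame[f] = page;
    printf("page %d swapped into frame %d\n", page, f);
    return f;
}

int main(void)
{
    for (int p = 0; p < NPAGES; p++) { disk[p] = p * 100; frame_of[p] = -1; }
    for (int f = 0; f < NFRAMES; f++) page_in_frame[f] = -1;

    int sequence[] = { 0, 1, 2, 0, 3, 0, 4, 1 };
    for (int i = 0; i < 8; i++) {
        int f = access_page(sequence[i]);
        printf("  read page %d -> value %d\n", sequence[i], ram[f]);
    }
    return 0;
}
```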

The DEC PDP-8, launched in 1965 and generally regarded as the first commercially successful minicomputer, was followed by a host of competitors. The minicomputer era also heralded the development of Unix. Dennis Ritchie and Ken Thompson, part of the AT&T Bell Labs team originally developing the Multics mainframe operating system, decided to develop Unix when the company pulled out of the Multics project in 1969.

They envisioned Unix as a "more congenial" communal operating system that would be able to serve multiple users via computing terminals rather than punch cards, and initially developed a version for the PDP-7. After a more fully fledged version was developed for the PDP-11, the operating system was rewritten in the C programming language - a bold move, because the conventional wisdom said that operating systems written in high-level languages would be slow. Recrafting Unix in C made it possible to port the system relatively easily to hundreds of other machines, turning it from a PDP-specific system into a hardware-independent one. With this, the standardisation of minicomputer operating systems was set in motion.

Rich Wolski, a professor in the computer science department at the University of California, Santa Barbara, says that the perceived role of the operating system changed during the eighties. "Originally, they were there to provide an environment that arbitrated between the needs of multiple users," he says. Operating systems developed in the context of mainframe computing, when scant resources had to support multiple users, and did so using round-robin time-sharing techniques. The development of the PC in 1981 changed all that. Operating system methodologies and structures were suddenly needed for a different purpose that did not include user arbitration. "We went from this notion of providing secure, fair access to complex devices and hardware to one of providing convenient access to complex hardware for a single user," he says.

Since then, OS companies have spent their time tweaking operating systems on both the desktop PC and the server. Alan Freeland, technical sales manager in IBM's Systems and Technology Group, breaks down the drivers for operating system development into three key areas: resilience, efficiency, and usability. "The feedback I am getting is that customers are having to do more and more with fewer people," he says.

"In the past, where they may have done a lot of capacity-planning work themselves today more of these systems have to look after themselves." To this end, the company has been working on operating system enhancements such as autonomics, which can help a system recover from critical errors and hardware faults.

The company has also developed self-tuning operating systems that can learn the characteristics of their workload, he says. System z - which has its origins in OS/360 - is able to learn about the processes it is carrying out, so that it can marshal them more effectively and make the best possible use of the hardware.

On the client side, Microsoft Windows operating group director, John Curran, says that the operating system has had to change dramatically as it moved into the internet world. Windows was originally a standalone operating system, and Microsoft failed to see the internet coming. "Today, we are in an always-on world," he says. "That fundamentally requires the operating system architecture to work very differently. Security becomes an important part of the equation, along with uptime and reliability."

We have seen some tweaking and poking around the architectural edges to increase security. For example, Microsoft introduced PatchGuard, a mechanism that protects the kernel from unauthorised hooks by third-party software. But under the hood, Windows on the desktop has remained largely the same. It used much the same code as previous versions, which is why an army of programmers had to go in and clean it, removing security bugs and loopholes from the source code over a period of years before Vista was released.

Vista still uses a kernel based on its predecessor, Windows NT, which has largely monolithic properties, running all of the core systems processes in kernel mode, which is privileged over user mode. This gives it efficiency, and the ability to maintain a certain level of security, as long as no one finds their way around the kernel patch protection mechanism. However, it has also been described as a hybrid kernel, because there are some emulation subsystems running in user mode.

However, there have been several successful attempts over the years to develop operating systems with true microkernels. In this configuration, only the bare bones of the kernel - the part responsible for basic address space management and inter-process communication - runs in kernel mode. Everything else, including device drivers and file system management, runs in user mode.

"Mach used this idea," says Wolski. "Rich Rashid felt like the separation of components into dedicated services was the right way to go." Rashid and the other Mach developers produced a very thin kernel, designed as a replacement for the traditional kernel used in Unix. Wolski also worked on his own microkernel at Lawrence Livermore National Laboratory. "We focused on making that microkernel as minimal as possible. Mach basically consists of message passing and ports," he says.

The idea of an operating system with components that can be swapped in and out is an appealing one. Microsoft has gone some way towards producing a componentised operating system with Windows Server 2008, which can be installed using different profiles. Using it purely as a file server, for example, you could leave out the parts of the system that a file server does not need.

It can also be installed in the "server core" configuration, says Microsoft's Windows Server product manager, Gareth Hall. "Server core is Windows without the user interface, without Media Player, without Windows Explorer and without the .NET Framework," he says. "When you turn it on, you get a command line." The idea is that a system like that will be cheaper to run, because it will not need as many patches. "It is for infrastructure roles only. You cannot install SQL Server, SAP, or any of those things."

Today, things have changed yet again. The role of large, central computing resource that used to be provided by the mainframe is increasingly provided by the cloud. Cloud computing - as popularised by services such as Amazon's Elastic Compute Cloud (EC2) - is designed to provide on-demand computing capacity across the internet, delivering hundreds or even thousands of server instances on the fly.

"So now, the large provider can maintain the infrastructure, and you can be given a tool that lets you customise your footprint on that hardware," says Wolski. This is one area in which virtualisation is dramatically changing the nature of the operating system. "The key thing that virtualisation enables you to make your piece of the mainframe look like you want it to look," he says. "So in the cloud, I can run Windows, and you can run Linux."

Advocates of virtualisation say that it could substantially change the way that operating systems work. The introduction of the hypervisor, which sits underneath the operating system and directly atop the processor, already calls in some cases for extensions to the operating system.

"Typically, the operating system is what schedules the processor, and runs in control of that process, at what we call ring zero," says Reza Malekzadeh, senior director, products and marketing for EMEA at VMware. "We take that spot. We will install first as an application but then drop something at the kernel level so that we can take control of the processor and do arbitrage and allocation."

Paravirtualisation is the concept of a virtualised operating system that knows it is virtualised, and makes allowances for that in the way that it interacts with the hypervisor. While operating systems don't necessarily have to know that they are virtualised, their performance on top of a hypervisor can be enhanced with extensions. Microsoft's hypervisor supports both models, with optimised extensions for Windows and Suse Linux.

Ultimately, hypervisor virtualisation could lead to operating systems that are pared down to perform a minimum set of functions. "We will see operating systems dedicated to just running database applications, but I also think we will see people go further than that, having applications running with just a very thin layer of operating system on top of the hypervisor," says Ian Pratt, XenSource co-founder and Xen project leader at Citrix. He cites one company which ported a Java virtual machine to run on top of the Xen hypervisor, with no operating system present at all.

Wolski says that this ability to customise how an environment will look to the end-user will accelerate that trend. "Operating system abstractions will need to support this customisation, with modularity, multiple interfaces, and self-verification," he says. "If I plug a module in, I would want it to figure out whether it could work on its own."

Linux could be a good candidate for such a modular, virtualised system. Although it uses a monolithic kernel, it employs loadable kernel modules that make the system easy to customise and pare down. In the future, whether with a whole new operating system or an adaptation of a suitable existing one, we could see operating systems looking markedly different to the ones we use today.
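
That modularity is visible even in the smallest possible loadable module. The conventional hello-world example below does nothing but register functions to run when the module is inserted and removed; building it requires the kernel headers and a small out-of-tree makefile (not shown), after which it can be loaded with insmod and unloaded with rmmod.

```c
/* Minimal loadable kernel module: the mechanism that lets Linux's
 * monolithic kernel be extended, customised and pared down at run time. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello-world example of a loadable kernel module");

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                   /* a non-zero return would abort the load */
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```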






This was first published in May 2008

 
