Is Vista the last monolithic release?

The complexity of operating systems has analysts predicting the end of major releases, with a move towards regular, more modular deployments

Vista is finally here, but Microsoft's journey to this point has been painful. Does the company really want to do that again? Some industry watchers, such as Gartner, are arguing that operating systems are now so complex that the cycle of major releases is at an end.

Instead, they predict that we will see a series of regular functional updates to the operating system, accompanied by the occasional release of kernel code. Is the operating system going to become more modular in the future?

Linux has been a relatively modular operating system for years. The system has hundreds of packages that can be either compiled directly with the kernel's source code or installed on top of the compiled kernel.

Because of the sheer number of Linux packages, package management is a crucial part of many Linux distributions such as Debian, says Ian Murdock, chief technology officer of the newly formed industry group, the Linux Foundation.

Package management programs work out which packages are needed to support the system a user has chosen, then find them online and install them automatically.
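To make that mechanism concrete, the sketch below shows the dependency-resolution step in miniature, using a hypothetical four-package catalogue written in C: asking for one package installs everything it depends on first. The package names and catalogue are invented for illustration, and real tools such as Debian's APT also handle versions, conflicts and the downloads themselves.

    /* A minimal sketch of dependency resolution over an invented catalogue;
     * real package managers work against online repository indexes and
     * handle versions, conflicts and downloads as well. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_DEPS 4

    struct pkg {
        const char *name;
        const char *deps[MAX_DEPS];   /* names of required packages */
    };

    /* Hypothetical catalogue, for illustration only. */
    static const struct pkg catalogue[] = {
        { "web-server",  { "ssl-lib", "logging" } },
        { "ssl-lib",     { "crypto" } },
        { "crypto",      { 0 } },
        { "logging",     { 0 } },
    };
    static const int npkgs = sizeof(catalogue) / sizeof(catalogue[0]);

    static int handled[16];           /* marks catalogue entries already resolved */

    static const struct pkg *find(const char *name)
    {
        for (int i = 0; i < npkgs; i++)
            if (strcmp(catalogue[i].name, name) == 0)
                return &catalogue[i];
        return 0;
    }

    /* Install a package's dependencies first, then the package itself. */
    static void install(const char *name)
    {
        const struct pkg *p = find(name);
        if (!p)
            return;
        int idx = (int)(p - catalogue);
        if (handled[idx])
            return;                   /* already resolved on an earlier branch */
        handled[idx] = 1;
        for (int i = 0; i < MAX_DEPS && p->deps[i]; i++)
            install(p->deps[i]);
        printf("installing %s\n", p->name);
    }

    int main(void)
    {
        install("web-server");        /* the user asks for one package ... */
        return 0;                     /* ... the resolver pulls in the rest */
    }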

Unix users have long supported component-based deployments, says Andrew Josey, director for certification within the Open Group. "Most Unix systems have supported packaging since the early 1990s. Unix suppliers have historically been able to create cut down system-specific configurations - think phone systems, cash registers etc."

Such packages generally operate outside the kernel. The other option with Linux, and some Unix systems, is to use dynamically loadable components - typically device drivers - that run directly as part of the kernel. In the Linux world, loadable kernel modules perform this task. AIX allows for dynamic kernel extensions, as does Solaris.
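On Linux, the canonical example of such a dynamically loadable component is the hello-world module sketched below: it is built out of tree against the kernel headers and inserted into the running kernel, at which point its init function runs in kernel context. The module name and messages are illustrative; the entry-point and licence macros are the standard kernel module interfaces.

    /* Skeleton of a minimal loadable kernel module (the classic hello world);
     * it is built against the kernel headers and loaded with insmod. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded into the running kernel\n");
        return 0;                     /* 0 tells the kernel the load succeeded */
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);          /* called on insmod */
    module_exit(hello_exit);          /* called on rmmod */

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal loadable kernel module sketch");

Loading the built module with insmod and removing it with rmmod adds and removes the code without rebooting or recompiling the kernel, which is what makes this route attractive for device drivers.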

But this still leaves the operating system with a monolithic kernel, says Jim Craig, software marketing manager at Sun. "I would say that Linux and Solaris are both monolithic when compared to microkernel operating systems," he says. In contrast, microkernel-based systems pull most, if not all, basic operating system functions out of the kernel, leaving just a simple messaging facility that lets the components communicate.

Researchers maintain that there are significant benefits to be gained from carving up more of the kernel to create an increasingly component-based operating system. Julie McCann, a senior lecturer in computing at Imperial College, London, led a research effort into microkernel-based operating systems between 1994 and 2000 in her previous position at City University.

"It was the only way that operating systems could go as they got more complex, because component based software development was establishing itself as a way of retracting the complexity of large systems," says McCann. Her team developed Go, a system that ended up with a tiny sliver of kernel that purely coordinated communications between the other components.

The more granular a system's core operating code is, the smaller its code base can become, depending on how implementers configure it. This becomes particularly important when dealing with specialist systems such as point of sale terminals, industrial controllers and even IP handsets.

The best way to secure an IP handset that does not need, say, the SNMP protocol is to strip SNMP out of the software altogether, rather than simply turning it off. The same goes for file systems and other operating system components. Not only does this reduce the complexity of the code, it also reduces the software footprint, making it easier to squeeze the code onto a smaller system.
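In build terms, stripping a feature out means it never reaches the binary at all, rather than sitting there switched off. The fragment below sketches the idea with an invented CONFIG_SNMP flag, in the style of kernel configuration options: when the flag is not defined, the SNMP agent contributes no code to the image.

    /* Sketch of compile-time stripping, using an invented CONFIG_SNMP flag
     * in the style of Kconfig options. */
    #include <stdio.h>

    #ifdef CONFIG_SNMP
    static void snmp_agent_start(void)
    {
        printf("snmp: agent listening on udp/161\n");
    }
    #else
    /* Compiled-out build: the call collapses to nothing, so there is no
     * agent to attack or misconfigure, and no code taking up space. */
    static inline void snmp_agent_start(void) { }
    #endif

    int main(void)
    {
        snmp_agent_start();           /* present only if CONFIG_SNMP was set */
        printf("handset: core services up\n");
        return 0;
    }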

As an open source system, Linux can be compiled with as much stripped out as possible. Not all Unix suppliers want to take that direction, however. "I do not think it is what we are attempting to do with AIX," says Nick Davis, Linux UK leader at IBM.

"What we are trying to do with AIX is add more functionality to it, so that it can scale better and serve its market, which we see at the high end." Davis also points to the complexity of re-engineering a monolithic kernel in an operating system that directly targets mission-critical users.

Craig makes the same point. Sun made Solaris an open source system in 2005, creating the conditions for re-engineering the kernel, but the most dramatic thing that the community has done with the system is to introduce some of the package management tools in Debian into a new distribution of Solaris.

"The important thing here is that because we have long been a major supplier of operating systems to large, commercial organisations, we cannot afford to mess around with things in the same way as other operating systems," says Craig.

Microsoft has been doing its best. The company, which ships the embodiment of a monolithic operating system on the desktop, targets both high-performance computing and the embedded device market with derivatives of the same operating system.

Microsoft has tried to introduce component-based architectures where it thinks they can do the most good. The XP Embedded operating system can be installed according to a custom configuration, and the XP Embedded SP2 Feature Pack for 2007, released in December, has more than 10,000 components that can be mixed and matched depending on the target platform.

However, the company is still struggling, says Mike Silver, vice-president of research at Gartner. "A client recently was talking to someone trying to use XP Embedded who tried to strip out everything they could, and they still only reduced the footprint of what they needed to support by about 20%, which was not nearly as much as they were hoping for," he says.

Microsoft is also making only limited progress in fragmenting its larger operating systems. A great deal is made of the profiling capabilities in Longhorn, the next server version under development, which will allow the system to be deployed in one of four configurations, including file and print server, and Active Directory server. But this still leaves a largely slab-like implementation of Windows on the server.

In some ways, Windows server is going backwards from a componentisation perspective. For example, in early versions of Windows NT the Win32 subsystem ran in user mode, essentially separated from the kernel. It has since moved into kernel mode, purely for performance reasons, says Mark Quirk, head of technology for Microsoft UK's developer and platform business.

But the performance issues traditionally associated with highly component-based operating systems are becoming increasingly irrelevant, says McCann. "Not only can you get a reasonable performance out of a fully component-based operating system, but also we proved that you could speed the system up."

Performance is not the only challenge to operating system suppliers, however. Backwards-compatibility is an issue that academic researchers do not have to contend with. The lengthy delays that preceded the release of Windows Vista were due to the complexity of the operating system.

Microsoft chose to retain the Win32 code base rather than releasing a .net-only version of the system and extending support for XP, which would have essentially created a dual-track market for Windows. Consequently, it had to deal with the security and compatibility issues that came as baggage with Win32.

This cannot last forever, says Silver. He believes that Vista will be the last major launch of the Windows operating system. In the future, Gartner anticipates a more modular system in which functional updates will be provided to users, sitting atop a core base layer of code.

What would this look like? Instead of a big bang Windows release cycle, we would have the odd low-key kernel release interspersed with a series of small pops.

We are already starting to see this happen at the application level, as new versions of Media Player appear independently of major operating system releases. The question is how much the company would pull out of the kernel and put into functional updates.

Such predictions leave Murdock cold. "I think it is complex enough that if I want to buy Vista today, I need to choose from six different editions," he says.

That is an interesting comment from someone in the Linux community, which supports well over a hundred distributions of the same source code. Still, Windows users are used to safe, comfortable version control. Making the move to a more modular system would entail huge changes in the way people buy and use the software.

This is what makes Quirk most sceptical. Forcing corporate users to deal with version control for components that used to be integral parts of the operating system could dramatically increase management overheads. "This is one of the benefits that Microsoft tried to bring with its operating systems. We do not want people to be in that situation," he says, highlighting version complexity as one of the key problems with Linux.

But regardless of complexity, it is difficult to dismiss the opportunities such a situation would present to Microsoft, which could then repackage Windows into two key components: the base layer, and an exclusive stream of low-level feature enhancements provided through a Software Assurance agreement.

Given the lack of significant operating system updates between the introduction of Software Assurance licences and Vista, this would help to restore customers' faith in the licensing scheme, says Gartner.

It would also help to regulate the time between operating system upgrades, which have a direct effect on the firm's bottom line.

Assuming that Microsoft adopted this route, how would it re-engineer what is still a slab-like kernel to support such a vision? Silver predicts that virtualisation will be the key. By introducing a virtual layer between the hardware and the operating system, the company could partition services on user systems, implementing them as needed.

"A service partition dedicated to systems management, a security function, and at least one partition for a user application. They could do something that is appliance-like - perhaps putting media functions into their own virtual machine," says Silver.

To do this, Microsoft would have to ship its own hypervisor to retain sovereignty in a key technical area, he says. Gartner anticipates that a Microsoft-developed hypervisor - which would provide the crucial layer between the hardware and the operating system - will happen in 2008 or 2009, possibly as part of a service pack for Vista.

Quirk is sceptical, citing Bill Gates, who has predicted that Vienna will be the next major release of Windows. On the other hand, a lot changed between the company's initial demonstration of Vista in October 2003 and its subsequent release, including the death of its WinFS filing system.

WinFS was originally meant to ship with the operating system, and then as a separate bolt-on. Last summer, Microsoft killed it as an operating system component altogether, and the surviving functionality was moved to the company's SQL group.

As operating systems become more bloated and virtualisation becomes more mainstream, pulling components out of the kernel and running them as services seems like a way to escape from increasing complexity. Microsoft already has research and development groups examining component-based operation.

How much of that exploratory coding makes it out of the labs and into the box remains to be seen, but with the likes of Google subtly shifting computing culture towards web-based services and software distribution, the stars are aligning for a fundamental shift in operating system architecture.
