Vista is finally here, but Microsoft’s journey to this point has been infamously painful. Does the company really want to do that again? Some industry watchers, such as Gartner, argue that operating systems are now so complex that the cycle of major releases is at an end.
Instead, they predict that we will see a series of regular functional updates to the operating system, accompanied by the occasional release of kernel code. Is the operating system going to become more modular in the future?
Linux has been a relatively modular operating system for years. The system has hundreds of packages that can be either compiled directly with the kernel’s source code or installed on top of a binary kernel. Because of the sheer number of Linux packages, package management is a crucial part of many Linux distributions such as Debian, explains Ian Murdock, chief technology officer of the newly-formed industry group, the Linux Foundation.
These systems will work out which packages are needed to support the package chosen by a user, and will then go and find them online, installing them automatically.
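The resolution step these tools perform can be sketched as a simple graph walk. This is a toy model with invented package names; real managers such as Debian’s apt also handle versions, conflicts, cycles and the actual downloads:

```python
# Toy dependency resolver: given a package's dependency graph,
# compute the full install set in dependency-first order.
DEPS = {                      # hypothetical package metadata
    "mediaplayer": ["codecs", "gui"],
    "codecs": ["libc"],
    "gui": ["libc"],
    "libc": [],
}

def install_order(pkg, seen=None):
    """Return packages to install, dependencies before dependents."""
    if seen is None:
        seen = []
    for dep in DEPS[pkg]:
        install_order(dep, seen)
    if pkg not in seen:
        seen.append(pkg)
    return seen

print(install_order("mediaplayer"))  # ['libc', 'codecs', 'gui', 'mediaplayer']
```

Asking for `mediaplayer` pulls in `codecs`, `gui` and the shared `libc` exactly once, in an order that is safe to install.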
Unix aficionados have long supported component-based deployments, argues Andrew Josey, director for certification within the Open Group. “Most Unix systems have supported packaging since the early 1990s,” he says. “Unix vendors have historically been able to create cut down solution-specific configurations (think phone systems, cash registers etc).”
Such packages generally operate outside the kernel. The other option with Linux, and some Unix systems, is to use dynamically loadable components (typically device drivers) that run directly as part of the kernel. In the Linux world, loadable kernel modules perform this task. AIX allows for dynamic kernel extensions, as does Solaris.
But this still leaves the operating system with a monolithic kernel, admits Jim Craig, software marketing manager at Sun. “I’d say that Linux and Solaris are both monolithic when compared to microkernel operating systems,” he says. In contrast, microkernel-based systems pull most if not all basic operating system functions out of the kernel, leaving just a simple messaging layer to enable the components to communicate.
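That division of labour can be illustrated with a toy model (all names here are invented for illustration): the “kernel” below does nothing but register components and route messages between them, while the actual services live entirely outside it.

```python
# Toy microkernel: the kernel only registers components and routes
# messages between them; all real services run as ordinary components.
class Kernel:
    def __init__(self):
        self._components = {}

    def register(self, name, handler):
        self._components[name] = handler

    def send(self, target, message):
        # The kernel's sole job: deliver the message to the target.
        return self._components[target](message)

kernel = Kernel()

# A file-system "server" implemented as an ordinary component.
files = {"/etc/motd": "hello"}
kernel.register("fs", lambda msg: files.get(msg["path"], ""))

# Any other component reaches the file system only via the kernel.
print(kernel.send("fs", {"path": "/etc/motd"}))  # hello
```

Replacing or removing the file-system component needs no change to the kernel itself, which is the property component-based designs are after.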
Researchers maintain that there are significant benefits to be gained from carving up more of the kernel to create an increasingly component-based operating system. Julie McCann, a senior lecturer in computing at Imperial College, led a research effort into microkernel-based operating systems between 1994 and 2000 in her previous position at City University.
“It’s the only way that operating systems could go as they got more complex, because component-based software development was establishing itself as a way of retracting the complexity of large systems,” says McCann. Her team developed Go, a system that ended up with a tiny sliver of kernel that purely co-ordinated communications between the other components.
The more granular a system’s core operating code, the smaller its code base can become, depending on how implementers configure it. This becomes particularly important when dealing with specialist systems such as point of sale terminals, industrial controllers and even IP handsets.
The best way to secure an IP handset that doesn’t need, say, the SNMP protocol to work, is to strip SNMP out of the software altogether, rather than simply turning it off. The same goes for file systems and other operating system components. Not only does this reduce the complexity of the code, it also reduces the software footprint, making it easier to squeeze the code onto a smaller system.
As an open source system, Linux can be compiled with as much stripped out as possible. Not all Unix vendors want to take that direction, however. “I don’t think it’s what we’re attempting to do with AIX,” says Nick Davis, Linux UK leader at IBM. “What we’re trying to do with AIX is add more functionality to it, so that it can scale better and serve its market, which we see at the high end.” In short, the firm doesn’t want an AIX-powered wristwatch or router. It has Linux for that.
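In Linux’s case, that stripping happens at build time through the kernel configuration. A fragment of a `.config` file (the option names are real kernel symbols, though the exact set varies by kernel version) might compile out whole subsystems for, say, a point of sale build:

```
# Illustrative Linux kernel .config fragment: subsystems a
# point-of-sale build might compile out entirely.
CONFIG_EMBEDDED=y
# CONFIG_SOUND is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_WIRELESS is not set
```

Anything marked “is not set” never makes it into the resulting kernel image at all.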
Davis also points to the complexity of re-engineering a monolithic kernel in an operating system that directly targets mission-critical users. Sun’s Craig makes the same point. The company made Solaris an open source system in 2005, creating the conditions for re-engineering the kernel, but the most dramatic thing that the community has done with the system is to introduce some of Debian’s package management tools into a new distribution of Solaris.
Someone else produced a version of Solaris running on a USB key. But none of this activity tinkers much with the kernel.
“The important thing here is that because we have long been a major supplier of operating systems to large, commercial organisations, we can't afford to mess around with things in the same way as other operating systems,” says Craig.
Microsoft has been doing its best. The company, which ships the embodiment of a monolithic operating system on the desktop, targets both high-performance computing and the embedded device market with derivatives of the same operating system.
The company has tried to introduce component-based architectures where it thinks they can do the most good. The XP Embedded operating system can be installed according to a custom configuration. The XP Embedded SP2 Feature Pack 2007, released in December, has over 10,000 components that can be mixed and matched depending on the target platform.
However, the company is still struggling, says Mike Silver, VP of research at Gartner. “A client recently was talking to someone trying to use XP Embedded who tried to strip out everything they could, and they still only reduced the footprint of what they needed to support by about 20%, which wasn’t nearly as much as they were hoping for,” he says.
The company is also making only limited progress in fragmenting its larger operating systems. Executives make a great deal of the profiling capabilities in Longhorn Server, which will allow the system to be deployed in one of four configurations, including file and print server, and Active Directory server. But this still leaves a largely slab-like implementation of Windows on the server.
In some ways, Windows server is going backwards from a componentisation perspective. For example, in Windows NT 4.0, the Win32 libraries ran in user mode, essentially separating them from the kernel. Today, they are back running in kernel mode, purely for performance reasons, says Mark Quirk, head of technology for Microsoft UK’s developer and platform business.
But the performance issues traditionally associated with highly component-based operating systems are becoming increasingly irrelevant, argues McCann. “It reminds me of the early days where they said you couldn’t program an operating system with high-level languages. You had to do it in assembly language,” she says, arguing that this changed. “Not only can you get a reasonable performance out of a fully-component-based operating system, but also we proved that you could speed the system up.”
Performance isn’t the only challenge for operating system vendors, however; backwards compatibility is an issue that academic researchers don’t have to contend with. The lengthy delays that preceded the release of Windows Vista were down to the complexity of the operating system: Microsoft chose to retain the Win32 code base rather than release a .Net-only version of the system and extend support for XP, which would essentially have created a dual-track market for Windows.
Consequently, it had to deal with the security and compatibility issues that came as baggage with Win32.
This can’t last forever, says Silver. He believes that Vista will be the last major launch of the Windows OS. In the future, Gartner anticipates a more modular system in which functional updates will be provided to users, sitting atop a core base layer of code. What would this look like? Instead of a big bang Windows release cycle, we would instead have the odd low-key kernel release interspersed with a series of small pops.
We’re already starting to see this happen at the application level, as new versions of Media Player appear independently of major operating system releases. The question is how much the company would pull out of the kernel and put into functional updates.
Such predictions leave Murdock cold. “I think it’s complex enough that if I want to buy Vista today, I need to choose from six different editions,” he says. That’s an interesting comment from someone in the Linux community, which supports well over a hundred distributions of the same source code. Still, Windows users are used to safe, comfortable version control. Making the move to a more modular system would entail huge changes in the way that this user base consumes software.
This is what makes Microsoft’s Quirk most skeptical. Forcing corporate users to deal with version control for components that used to be integral parts of the operating system could dramatically increase management overheads. “This is one of the benefits that Microsoft tried to bring with its operating systems. We won’t want people to be in that situation,” he says, highlighting version complexity as one of the key problems with Linux.
But regardless of customer complexity, it is difficult to dismiss the opportunities such a situation would present to Microsoft, which could then repackage Windows into two key components: the base layer, and an exclusive stream of low-level feature enhancements provided through a Software Assurance agreement. Given the lack of significant operating system updates between the introduction of Software Assurance licences and Vista, this would help to restore customers’ faith in the licensing scheme, says Gartner.
It would also help to regulate the time between operating system upgrades, which have a direct effect on the firm’s bottom line.
Assuming that Microsoft adopted this route, how would it re-engineer what is still a slab-like kernel to support such a vision? Silver predicts that virtualisation will be the key. By introducing a virtual layer between the hardware and the operating system, the company could partition services on user systems, implementing them as needed.
Silver predicts “a service partition dedicated to systems management, a security function, and at least one partition for a user application. They could do something that’s appliance like – perhaps putting media functions into their own virtual machine.”
To do this, Microsoft would have to ship its own hypervisor, he says, to retain sovereignty in a key technical area. Gartner anticipates that a Microsoft-developed hypervisor – which would provide the crucial layer between the hardware and the operating system – will happen in 2008 or 2009, possibly as part of a service pack for Vista.
Microsoft’s Quirk is skeptical, citing Bill Gates’s prediction of Vienna, the next major release of Windows. On the other hand, an awful lot changed between the company’s initial demonstration of Vista in October 2003 and its subsequent release, including the death of its WinFS filing system.
This was originally meant to ship with the operating system, and then as a separate bolt-on. Last summer, the company killed it as an operating system component altogether, and the surviving functionality was moved to its SQL group.
As operating systems become more bloated and virtualisation becomes more mainstream, pulling components out of the kernel and running them as services seems like a way to escape from increasing complexity. Microsoft already has research and development groups examining component-based operation.
How much of that exploratory coding makes it out of the labs and into the box remains to be seen, but with the likes of Google subtly shifting computing culture towards web-based services and software distribution, the stars are aligning for a fundamental shift in operating system architecture.