
Future of the server operating system

We look at the evolution of the server operating system, and how the next generation is moving into the cloud

Microsoft’s new Windows Server 2016 operating system (OS) is just being launched. Linux is celebrating its 25th birthday. IBM has its mainframe operating system and its Power operating system, Oracle has Solaris – and those are just a few of the OSs that still abound in the market. But what is the role of an OS in the modern world?

Going back to the early days of servers, the stack required to get a computer up and running was pretty simple – a basic input/output system (BIOS) to get the hardware started, followed by an operating system to provision basic services, followed by an application to carry out the actual work.

Let’s focus on Intel-architected systems, as other chip architectures have slightly different approaches. A means of initiating the hardware is still required. The BIOS has moved on to the unified extensible firmware interface (UEFI), but it is still a core link in the chain of getting a server up and running. Without some sort of base-level process, the hardware would not be set up correctly to support anything else that was to be layered on it.

However, there is now also a hypervisor, such as ESX, Hyper-V or KVM, initiated to provide virtualisation – and an OS is still installed on top of the hypervisor in one way or another.
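
As a rough illustration – not anything prescribed by the stack itself – the following Python sketch, assuming a Linux host, shows how a machine reports those two lowest layers: whether it booted via UEFI or legacy BIOS, and whether the CPU and kernel are ready to run a KVM-style hypervisor.

    import os

    def firmware_interface() -> str:
        # On Linux, /sys/firmware/efi only exists after a UEFI boot
        return "UEFI" if os.path.isdir("/sys/firmware/efi") else "legacy BIOS"

    def kvm_ready() -> bool:
        # Intel VT-x shows up as the 'vmx' CPU flag, AMD-V as 'svm';
        # /dev/kvm appears once the KVM kernel module has been loaded
        with open("/proc/cpuinfo") as f:
            flags = f.read()
        return ("vmx" in flags or "svm" in flags) and os.path.exists("/dev/kvm")

    if __name__ == "__main__":
        print("Firmware interface:", firmware_interface())
        print("Ready for KVM:", kvm_ready())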

Then there may be an application server, middleware, microservices, virtual machines, containers, databases, layered security, a full cloud platform (such as OpenStack) – or we may still just be putting a good old application, as we did in the past, on top of what is now a far more complex platform.

So just what is the role of the OS now? In the past, it had several key capabilities. It created the basic interfaces between the server and the other systems around it, such as storage, networks and peripherals. It provided libraries of capabilities, such as modem settings, base-level drivers for various items of equipment (such as small computer system interface, or SCSI, peripherals), and so on.

But as the whole computer ecosystem grew, the size of the OS ballooned as it introduced new functionality to try to manage that whole environment – while removing virtually nothing as people moved on and stopped using things like modems.

Complex stack

As part of a complex stack, is the OS becoming just another link in a chain that has become too complex – and is possibly the weakest link in that chain? As the world moves towards abstracted, more software-defined virtual platforms such as cloud, should we have a directly implementable cloud operating system rather than a pairing of a base OS with a cloud platform?

Look at some of the pared-back OSs around – some, such as RIOT, Contiki and eCos, are stripped back to fit into embedded devices; others, such as RancherOS, Project Atomic and CoreOS, aim to be thin-layer platforms for running containers; others still, such as VMware’s Photon, present themselves as a new type of ultra-lightweight OS.

There are promising signs of bloated OSs starting to shrink. Windows Server 2016 is moving towards being more of a cloud-based platform, borrowing much from the public Azure cloud platform while also introducing the concept of Nano Server – a stripped-back version of the Windows Server OS that is tuned for running containers.
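
To make the container-host idea concrete, here is a minimal, hedged sketch in Python: it assumes a Docker-compatible runtime is installed (on Nano Server the base image would be a Windows one rather than alpine) and simply launches a throwaway container – essentially the only kind of workload such a stripped-back OS is expected to run.

    import subprocess

    # Launch a short-lived container; 'alpine' is just an example image
    result = subprocess.run(
        ["docker", "run", "--rm", "alpine", "uname", "-a"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())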

Many of the Linux variants around are pretty much becoming bootstraps for OpenStack systems or, as previously mentioned, thin-layer container facilitators.

This is all good progress – but more can be done. In Quocirca’s view, the next generation of OSs will be cloud platforms.

A compressed stack improves performance: virtual machines and containers can talk more directly to the underlying platform without going through multiple layers of abstraction.

Management becomes easier – rather than having to manage several different layers, a more unified platform enables a simpler environment in which root cause analysis of problems is easier to carry out. Patching and updating are simpler, and functions can be standardised across and through the platform. Updates to a single layer are less likely to break an existing function at a different level in the overall stack.

After all, a cloud platform is the ultimate abstraction layer. It needs to be hardware agnostic, not just at server level, but also at network and storage resource level. Hardware has become more standardised – it has had to, with the increasing need for a common infrastructure platform.

By working against a more standardised hardware environment, the pools of resources required for cloud computing are easier to put in place – bypassing the complexities of the OS makes more sense than trying to ensure that the cloud platform understands all the idiosyncrasies of all the different possible operating systems.

Greater resilience

By having hardware assets that can boot themselves just to a point where a cloud OS can take over, the whole concept of provisioning and running a cloud platform becomes easier. Greater resilience and availability can be built in as those “weak links” in the stack are removed.

It also leads to greater standardisation and fidelity in managing workloads. If the fundamental means of dealing with the hardware is as direct as possible, then the abstraction capabilities of a “software-defined datacentre” become closer to reality.

There is still a need for intelligence at the hardware level. As Quocirca wrote about almost two years ago, a “hardware-assisted software-defined” approach makes sense, but it does require hardware to operate at a high-function, highly standardised level.

So, let’s forget about the OS and focus on where the next-generation platform needs to go. It must have the capabilities to work against “bare metal” – hardware that has just enough intelligence to boot from cold and offer itself as available resources to whatever is going to be layered upon it.

The hardware may have its own intelligence built in at the firmware level – it may even have some side-loaded software that enables it to make intelligent decisions around, for example, root cause analysis of problems or to carry out a degree of overall systems management.
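
As a hedged sketch of what hardware “offering itself as available resources” can look like in practice today, the following Python snippet uses the openstacksdk library to list bare-metal nodes registered with OpenStack’s Ironic service; “mycloud” is a placeholder for an entry in a clouds.yaml file, and the approach assumes the bare-metal service is deployed.

    import openstack

    # Connect using credentials defined under 'mycloud' in clouds.yaml (placeholder name)
    conn = openstack.connect(cloud="mycloud")

    # Each bare-metal node has booted just far enough to be claimed by the cloud platform
    for node in conn.baremetal.nodes():
        print(node.name, node.power_state, node.provision_state)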

Cloud platform

But let’s put the main platform intelligence into the cloud platform itself. Let’s create a fabric, rather than a complex stack – a fabric that works as closely as possible against the hardware resources. Such a fabric cuts through the latency-inducing, performance-sapping layers of the complex stacks that have grown up around an approach that kept embracing new ideas without ever shedding the old baggage that has been with us since the dawn of modern computing.

So what does all of this mean to the main players in the private cloud environment? Well, there are few enough of them – but their power will grow as the need for hybrid clouds expands.

For OpenStack, let’s look to a platform that uses a minimal Linux-type kernel with the KVM hypervisor built in, stripped right down to act as little more than a bootloader for the OpenStack environment itself.
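
The practical effect is that workloads are described to the cloud platform rather than to a guest OS installation. As a hedged sketch using the openstacksdk Python library – with “mycloud”, “cirros”, “m1.small” and “private” all standing in as placeholder names:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

    # Look up the building blocks by name (all placeholders)
    image = conn.compute.find_image("cirros")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Ask the cloud platform for a running instance; no OS install step is involved
    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)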

The same applies for the private version of “Azure” – let’s get rid of the need for a full Windows Server installation and instead focus on what a private “Azure” needs as a bare minimum, such as using Nano Server as the main means of getting up and running.

Emphasis on OpenStack

VMware seems to have shifted focus away from its own proprietary private cloud technology these days, with more of an emphasis on OpenStack (although its recent announcement of a technology partnership with AWS around a VMware cloud platform may undermine this).

Pairing ESX with Photon could position VMware (and, by extension, Dell EMC) as a new platform player – one that has little in the way of old baggage to carry, with more of a clean sheet on which to present its vision for a new hardware/software platform that retains only a bare shadow of what we now recognise as an OS.

Server-based computing has not changed that significantly over the years – it is now time that someone grasped this nettle and dealt with the issues created by not moving with the times rapidly enough.

Clive Longbottom is founder of Quocirca.
