Containers are a serious and emerging contender as a method of application delivery. Although they are by no means in use universally yet, most enterprises have deployed containers somewhere or are investigating their capabilities.
Their advantages centre on the ability to abstract everything needed to run applications away from the hardware, and because very large numbers of container instances can be created and run on demand, they are supremely scalable.
Of course, container clusters and orchestration are quite often run in virtual server environments, but they don’t have to be. They can also run directly on bare-metal servers. In this article, we’ll look at bare metal vs virtual machines (VMs) and where to run containers.
Running containers: bare metal or VM?
Containers are a form of virtualisation, but one in which the application and all the dependencies needed for its execution run on top of the host server operating system, with only the container runtime engine between the two. In a virtualised server environment, by contrast, a hypervisor runs on the host hardware (or on a host operating system), each guest gets its own OS, and applications run inside those guest environments.
Most questions around whether to run containers on VMs or on bare metal derive from this basic fact.
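That shared-kernel architecture can be seen directly from the command line. The following is a sketch, not drawn from the article’s sources, and assumes Docker is installed with its daemon running:

```shell
# The host's kernel version.
uname -r

# A container has no kernel of its own: the process inside reports the
# host's kernel, because the container runtime only isolates processes
# on top of it rather than booting a separate OS.
docker run --rm alpine uname -r

# A VM, by contrast, boots its own guest kernel and would report that instead.
```

On a Linux host the two commands print the same kernel version; on Docker Desktop they differ only because Docker there quietly runs the containers inside a small Linux VM.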
Key decisions: Performance vs convenience, perhaps cost too
The decision on where to deploy container infrastructure pivots on performance requirements vs convenience. It is far more convenient to run container orchestrators and their nodes on VMs, but you will lose out somewhat on performance. Having said that, if you want the performance benefits of bare metal, you probably need to be running your own on-premise environment and be prepared to do the work that makes up for the convenience a hypervisor environment brings.
Cost can also come into it. Because bare-metal servers can run a lightweight Linux OS (such as CoreOS or its descendants), they avoid much of the cost of hypervisor licensing. Of course, that also means they miss out on the advanced functionality virtualisation environments provide.
Benefits and penalties of virtualisation
Inserting a virtualisation layer between the hardware and the container environment means adding software to the stack, which brings benefits as well as penalties.
In a virtualisation environment, the hypervisor brings a lot of functionality and allows for maximised hardware utilisation.
Key benefits here are that workloads can be migrated between hosts easily, even when those hosts don’t share the same underlying OS. That is especially useful for containers, which are prized for their portability between locations but are dependent on the OS they were built for. A given virtualisation landscape provides a consistent software environment in which to run containerised applications even where the host OS differs.
Read more on containers and virtualisation
- Kubernetes vs VMware: Drive the choice with IT architecture. It isn’t easy to classify Kubernetes as the superior product for managing VMs and containers compared to vSphere 7. The decision depends on how widely admins use containers.
- Containers vs VMs: What are the key differences? Containers have rapidly come into focus as a popular option for deploying applications, but they have limitations and are fundamentally different from VMs.
But at the same time, the characteristics of virtualisation that bring benefits also exact penalties, rooted in the fact that the physical resources simply have to do more computing to service the added layers of abstraction.
That is most clearly visible in the performance difference between containers that run on bare metal and in virtualised environments. Benchmarking tests carried out by Stratoscale found that containers on bare metal performed 25% to 30% better than in VMs, because of the performance overhead of virtualisation.
Meanwhile, VM environments tend to hold on to resources, such as storage allocated at startup, that remain provisioned whether used or not. Diamanti, which provides a Kubernetes platform aimed at bare-metal and cloud use, claims resource utilisation can be as low as 15% in virtualised environments and that its platform can cut hardware use five-fold.
VMware, with its Tanzu Kubernetes platform, has made efforts to mitigate the overheads of virtualisation, but bare metal retains an inherent performance advantage.
Bare metal downsides
Having said all that, there are downsides to running containers on bare metal.
Key among these is that container environments are OS-dependent, so one built for Linux will only run on Linux, for example. That potentially limits migration and may work against you in the cloud, where bare-metal instances are scarce and, where you can find them, cost more. Given that one of the key advantages of containers is the ability to migrate workloads between on-premise sites and the cloud, that’s not good news.
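That OS dependence is recorded in every container image, and can be inspected directly. A sketch, assuming Docker is installed; the `alpine` image stands in for any Linux-built workload:

```shell
# Fetch a small Linux image.
docker pull --quiet alpine

# Every image is built for a specific OS and CPU architecture; a Linux
# image can only ever execute against a Linux kernel.
docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine

# On macOS or Windows, Docker supplies that kernel by quietly running a
# Linux VM - exactly the virtualisation layer a bare-metal deployment avoids.
```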
Bare metal container deployments will also lack features that come with virtualisation software layers, such as rollback and snapshots.
Deploying containers to bare metal can also make it more difficult to mitigate risk through redundancy. VMs allow you to split container nodes between them, whereas nodes installed on bare metal are likely to be fewer in number, less portable and less easily shared.
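With fewer physical nodes to play with, orchestrator scheduling rules become the main redundancy tool. The manifest below is a hedged sketch of one such rule in Kubernetes (a topology spread constraint); the `web` workload name and `nginx:alpine` image are hypothetical, and it assumes `kubectl` pointed at a running cluster:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      # Spread replicas across physical hosts so that losing one
      # bare-metal node does not take out every copy of the workload.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels: { app: web }
      containers:
      - name: web
        image: nginx:alpine
EOF
```

On a three-node bare-metal cluster this places one replica per host; on a larger virtualised estate the same rule works unchanged, which is part of what makes the VM route more forgiving.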