DevOps and containers: How to get connected

We look at how to apply a DevOps approach to a container-based microservices architecture

Enterprise app development is in the middle of a transition from the traditional world of the waterfall model to a more agile approach that emphasises rapid iteration and continuous delivery – the so-called DevOps model.

A key technology that has become strongly associated with DevOps is containers, or at least the Docker model of containers, whereby code is deployed into separate sandboxed execution environments, each with its own resource limits. Decomposing an application into discrete components in this way means each one can be deployed, maintained and scaled independently as conditions demand. This approach is referred to as microservices.

Network connectivity

However, a major factor in the success of a microservices architecture is how all the components communicate with each other, and with shared resources such as databases or storage services. Because the DevOps approach means containers and their workloads are continually being spun up and later retired, ensuring that each one has the network connectivity it requires can be a significant challenge.

In a container deployment, the containers could be hosted on clusters of physical servers, or on virtual machine (VM) instances running on those clusters of servers. The hosts all have their own network connection and need to map this to each container they are hosting. This means allocating IP addresses and applying security policies to lock down myriad container instances.
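To make the address-allocation side of this concrete, the sketch below uses Python’s standard ipaddress module to carve a per-host subnet out of a larger container address block and hand out addresses to new containers. The address ranges and host names are illustrative assumptions rather than the behaviour of any particular platform.

```python
import ipaddress

# Illustrative assumption: the cluster owns 10.244.0.0/16 for containers,
# and each host is given its own /24 slice to allocate from.
CLUSTER_BLOCK = ipaddress.ip_network("10.244.0.0/16")
per_host_subnets = list(CLUSTER_BLOCK.subnets(new_prefix=24))

class HostIPAM:
    """Hands out container IP addresses from the /24 assigned to one host."""

    def __init__(self, subnet: ipaddress.IPv4Network):
        self.subnet = subnet
        self._free = iter(subnet.hosts())

    def allocate(self) -> ipaddress.IPv4Address:
        return next(self._free)  # raises StopIteration when the subnet is exhausted

# Hypothetical hosts, each taking the next available /24.
host_a = HostIPAM(per_host_subnets[0])   # 10.244.0.0/24
host_b = HostIPAM(per_host_subnets[1])   # 10.244.1.0/24

print(host_a.allocate())   # 10.244.0.1
print(host_a.allocate())   # 10.244.0.2
print(host_b.allocate())   # 10.244.1.1
```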

When Docker developed its container platform, it came up with four modes of network support: Bridge mode, Host mode, Container mode and no networking. Bridge mode is the default and creates a virtual network linking all containers on the host to its external network port. Host mode links a container into the host’s network stack, so that all network interfaces on the host will be accessible to the container. Container mode reuses the network namespace of an existing container, while no networking gives each container its own network stack but does not configure any interfaces, allowing users to set up their own custom configuration.
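As a rough illustration of those modes, the following sketch uses the Docker SDK for Python to start containers under each of them. The image and container names are placeholders, and a local Docker daemon is assumed to be available.

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Bridge mode (the default): the container joins the host's "bridge" network.
web = client.containers.run("nginx:alpine", name="web",
                            network="bridge", detach=True)

# Host mode: the container shares the host's network stack directly.
agent = client.containers.run("busybox", command="sleep 3600", name="agent",
                              network_mode="host", detach=True)

# Container mode: reuse the network namespace of an existing container.
sidecar = client.containers.run("busybox", command="sleep 3600", name="sidecar",
                                network_mode=f"container:{web.id}", detach=True)

# No networking: the container gets its own network stack but no interfaces
# are configured, leaving any custom setup to the user.
isolated = client.containers.run("busybox", command="sleep 3600", name="isolated",
                                 network_mode="none", detach=True)
```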

Already, the virtual Ethernet bridge that Docker implements to interconnect containers means we are dealing with a simple kind of software-defined networking (SDN). This support is fine if you are running standalone workloads or all your containers run on just a single host, but for workloads operating at any kind of scale, your application is likely to require containers running on clusters of servers, many of which will need to communicate with containers running on another host.  

This is where it starts to get complicated, because your containers need to know the ports and IP addresses to access those containers running on other hosts. Your IT engineers could configure this manually, but that would be too slow and complex in a DevOps environment. What is needed is some mechanism that can automate the configuration of network interfaces for containers as they are created and deployed.

This type of automation is referred to as orchestration, so it is no surprise that real-world deployments of containers rely on an orchestration tool of some kind. The industry largely seems to have settled on Kubernetes, which originated out of Google but is now an open source project overseen by the Cloud Native Computing Foundation (CNCF). Alternatives include Docker’s Swarm and Apache Mesos.
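As a small example of what that automation looks like in practice, the hedged sketch below uses the official Kubernetes Python client to scale a hypothetical Deployment named web to five replicas; Kubernetes then schedules the additional pods and wires up their networking without any manual interface configuration.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
apps = client.AppsV1Api()

# Hypothetical Deployment "web" in the "default" namespace.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Each new pod receives its own IP address and DNS entry from the cluster's
# network plugin and DNS service; nothing is configured by hand.
```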

One side-effect of Kubernetes’ rise to dominance is that it has also popularised the container network interface (CNI), a standard application programming interface (API) for configuring the network layer, which has consequently found its way into other platforms. CNI was originally developed by CoreOS, the firm behind the rkt container runtime. Docker developed its own container network model (CNM), but also supports CNI.

CNI is essentially a way to plug in network services that handle tasks such as IP address management (IPAM) and, in the case of Kubernetes, enforce the network policies that govern how groups of containers are allowed to communicate with each other.
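To illustrate the kind of policy a CNI plugin may be asked to enforce, here is a hedged sketch, again using the Kubernetes Python client, that creates a NetworkPolicy allowing only pods labelled app=frontend to reach pods labelled app=backend on TCP port 8080. The labels, namespace and port are assumptions made for the example.

```python
from kubernetes import client, config

config.load_kube_config()
netv1 = client.NetworkingV1Api()

# Only pods labelled app=frontend may reach app=backend pods on TCP 8080.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
        )],
    ),
)

netv1.create_namespaced_network_policy(namespace="default", body=policy)
```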

One such platform with support for CNI is VMware’s NSX, an SDN system that can create and manage multiple virtual networks that operate over the existing physical Ethernet infrastructure, essentially using it as a packet-forwarding backplane to carry virtual network traffic. These virtual networks can each have their own IP address ranges and other characteristics.

NSX actually comes in two flavours. The original was developed to integrate closely with VMware’s vSphere hypervisor and is also known as NSX-V. A second version, NSX-T, has been developed to operate with other hypervisors, such as KVM, which is key for supporting Linux and the OpenStack cloud framework.

Both versions operate by acting as a virtual switch that directly handles traffic between VMs or containers running on the same node. If the destination is a VM or container running on a different node, the virtual network packet is encapsulated into a standard Ethernet packet and sent across the network to the virtual switch running on the node in question.
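The forwarding decision itself can be sketched in a few lines. The deliberately simplified Python model below is not how NSX is implemented (real overlays use encapsulation formats such as VXLAN or Geneve and a central controller), but it captures the logic described above: deliver locally if the destination lives on the same node, otherwise encapsulate and send to the node that hosts it.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str          # container IP on the virtual network
    dst: str
    payload: bytes

@dataclass
class Encapsulated:
    outer_src: str    # physical (underlay) address of the sending node
    outer_dst: str    # physical address of the node hosting the destination
    inner: Packet

class VirtualSwitch:
    def __init__(self, node_ip: str, local_containers: set[str],
                 location_table: dict[str, str]):
        self.node_ip = node_ip
        self.local = local_containers      # container IPs hosted on this node
        self.locations = location_table    # container IP -> node IP elsewhere

    def forward(self, pkt: Packet):
        if pkt.dst in self.local:
            return pkt                      # deliver directly on this node
        # Otherwise wrap the virtual-network packet for transport across the
        # physical network to the virtual switch on the destination node.
        return Encapsulated(self.node_ip, self.locations[pkt.dst], pkt)

# Hypothetical topology: two containers on node 192.168.1.10, one elsewhere.
vswitch = VirtualSwitch("192.168.1.10",
                        {"10.244.0.2", "10.244.0.3"},
                        {"10.244.1.2": "192.168.1.11"})
print(vswitch.forward(Packet("10.244.0.2", "10.244.0.3", b"hi")))   # local delivery
print(vswitch.forward(Packet("10.244.0.2", "10.244.1.2", b"hi")))   # encapsulated
```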

Other platforms have similar capabilities. Windows has featured built-in SDN functions since Windows Server 2016, including Hyper-V Network Virtualisation (HNV) running on each node and a central Network Controller management service, typically operated from a cluster of three Hyper-V VMs.

In OpenStack, the Neutron networking service provides an API that allows users to set up and define network connectivity. It comprises a Neutron server plus agents that run on the server nodes, with plugins that allow Neutron to interface with network platforms such as Open vSwitch, Midokura Midonet, Juniper OpenContrail and Cisco’s NX-OS, plus VMware’s NSX. OpenStack supports Kubernetes and other container orchestration tools via its Magnum service, while Neutron is also CNI compliant.
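For a flavour of that API, the hedged sketch below uses the openstacksdk library to ask Neutron for a new virtual network and subnet. The cloud name, network name and address range are illustrative assumptions.

```python
import openstack

# Assumes a cloud named "mycloud" is defined in clouds.yaml; the network
# name and CIDR below are illustrative.
conn = openstack.connect(cloud="mycloud")

# Ask Neutron for a new virtual network and a subnet within it.
net = conn.network.create_network(name="container-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="container-subnet",
    ip_version=4,
    cidr="10.10.0.0/24",
)

print(net.id, subnet.cidr)
```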

Security is another important factor in container deployments, especially when operating at any kind of scale. As with physical networks, there is a need for firewalls and other security policies to safeguard applications from any attack that somehow manages to get inside the infrastructure. In practice, however, traffic between containers is often not secured at all.

The complexity of the virtual networks that interconnect the containers means that, once again, the orchestration layer or the SDN layer is best placed to provide oversight of the traffic flowing between containers.

This has not been lost on SDN suppliers. VMware, for example, is pitching NSX for securing infrastructure as much as for virtualising the network. This is because it effectively offers a distributed firewall running in each compute node, able to filter traffic within the infrastructure as easily as packets coming in from outside.

Lightweight components

Getting back to microservices, a key feature of this architecture is that the components are lightweight and typically implement a single function, but link together to deliver the full application functionality. This means that the code in a container needs to be able to discover other available services and where to find them on the virtual network.

There are various ways of achieving this, but one of the most common is to use a DNS server that enables services to be located by name rather than by IP address. This allows IP addresses to change, as they will do if container instances are continually being started up and retired. In fact, Docker has implemented an embedded DNS server for just this purpose since Docker 1.10, while Kubernetes clusters provide DNS-based service discovery as standard, with CoreDNS reaching general availability in version 1.11.
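A minimal sketch of name-based discovery is shown below, assuming a hypothetical service called orders in a shop namespace on a Kubernetes cluster; on a user-defined Docker network the bare container name resolves in the same way. The caller never needs to know which container instances currently sit behind the name.

```python
import socket

# Hypothetical service name; inside a Kubernetes cluster a Service called
# "orders" in the "shop" namespace is typically resolvable under this name
# via the cluster's embedded DNS.
SERVICE_NAME = "orders.shop.svc.cluster.local"

def resolve(service: str, port: int = 80) -> list[str]:
    """Return the current set of IP addresses behind a service name."""
    infos = socket.getaddrinfo(service, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Because discovery is by name, the IPs can change as container instances
# are started up and retired without the calling code needing to change.
print(resolve(SERVICE_NAME))
```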

Another factor is load balancing. A microservice architecture is intended to enable an application to scale by spawning another instance of a particular function, but the infrastructure then needs to ensure traffic is routed to every one of those instances. Kubernetes implements a basic form of load balancing in its kube-proxy module that performs round-robin forwarding of packets to copies of the same service. But this may not prove adequate, and organisations may need to implement load balancing using other software-based tools, such as HAProxy, NGINX or Istio.
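The round-robin behaviour is simple to picture. The sketch below spreads successive requests across a static list of placeholder endpoints, in the spirit of what kube-proxy does for copies of the same service; in a real cluster the endpoint list would come from service discovery, and dedicated load balancers such as HAProxy, NGINX or Istio add health checking and richer routing on top.

```python
import itertools

# Illustrative endpoints for three replicas of the same service; in a real
# cluster these would come from service discovery rather than a static list.
ENDPOINTS = ["10.244.0.5:8080", "10.244.1.7:8080", "10.244.2.3:8080"]

# A minimal round-robin chooser: successive requests are spread evenly
# across the available replicas.
_rotation = itertools.cycle(ENDPOINTS)

def pick_backend() -> str:
    return next(_rotation)

for _ in range(6):
    print(pick_backend())
# Prints the three endpoints in turn, then repeats.
```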

Can of worms

So network support for containers is a bit of a can of worms, which gets more complex as container-based DevOps deployments get more ambitious. Therefore, many organisations are turning to ready-made platforms that integrate most or all of the capabilities needed to stand up container-based infrastructure.  

These range from container-supporting platform-as-a-service (PaaS) products, such as Red Hat’s OpenShift, to platforms such as VMware’s Pivotal Container Service (PKS). The latter includes NSX-T and Kubernetes, plus a tool called BOSH that provides monitoring and lifecycle management for the entire platform. However, PKS focuses on container operations and does not provide much in the way of developer tools, whereas OpenShift provides continuous integration support for DevOps out of the box.

Whether you choose to integrate it yourself or go for a platform that takes away much of the complexity, a network that can be configured and reconfigured under software control is likely to be an essential part of any container and microservices strategy.  
