As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the trends, dynamics and challenges faced by organisations now migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is a logical computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
This post is written by Steve Judd in his role as senior solutions architect at Jetstack — the company’s technology helps businesses to build and operate modern cloud native infrastructure with Kubernetes.
Judd writes as follows…
Migrating enterprise IT infrastructure to benefit from the distributed nature of containerised applications carries a set of challenges that apply irrespective of whether the application was built and hosted in a legacy environment. However, there are some specific challenges in migrating legacy apps designed to run on stable physical or virtual machines. This article looks at these challenges and suggests some approaches to help.
Almost all real-world applications need to maintain state, including containerised ones. But containers need different approaches to storing state compared with legacy applications hosted on physical or virtual machines. Legacy applications often store session state in memory, whereas containerised apps rarely do; instead they ‘outsource’ it to highly available data stores, such as Redis.
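The pattern can be sketched as follows. This is a minimal illustration, not production code: a plain dict stands in for the external store so the example is runnable, while the comments note where a highly available store such as Redis (whose client exposes the same get/set shape) would be substituted.

```python
import json
from typing import Optional


class SessionStore:
    """Session state kept outside the container, so any replica can serve
    any request. In production the backend would be a highly available
    store such as Redis; a dict stands in here so the sketch runs."""

    def __init__(self, backend=None):
        self._backend = backend if backend is not None else {}

    def save(self, session_id: str, state: dict) -> None:
        # Serialise so the store only ever holds opaque strings.
        self._backend[f"session:{session_id}"] = json.dumps(state)

    def load(self, session_id: str) -> Optional[dict]:
        raw = self._backend.get(f"session:{session_id}")
        return json.loads(raw) if raw is not None else None


store = SessionStore()
store.save("abc123", {"user": "jsmith", "basket": ["sku-1"]})
print(store.load("abc123")["user"])  # any container replica sees the same state
```

Because no state lives inside the container, a replica can be killed and replaced at any time without losing sessions — which is exactly what a scheduler such as Kubernetes assumes it can do.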
Containers, like physical and virtual servers, rely on machine identities – i.e. digital certificates and cryptographic keys – to ensure the servers or clusters in your network are legitimate and authorised. The distributed nature of container architectures will undoubtedly drive an increase in the volume of short-lived certificates needed to maintain a high level of protection. A machine identity management solution should therefore be used to ensure containers are protected and aligned with security best practices.
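At container scale, the core job such a solution automates is deciding, continuously, whether each certificate is due for renewal. A minimal sketch of that check (the seven-day window is an illustrative policy, not a standard):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def needs_rotation(not_after: datetime,
                   window: timedelta = timedelta(days=7),
                   now: Optional[datetime] = None) -> bool:
    """Return True when a certificate is expired or inside the renewal
    window. With short-lived certificates this check fires constantly,
    which is why automated machine identity management (for example
    cert-manager in Kubernetes) matters far more than it did for a
    handful of long-lived server certificates."""
    now = now or datetime.now(timezone.utc)
    return not_after - now <= window
```

Run against each workload certificate’s notAfter date, this is the trigger for automated reissuance rather than a manual renewal ticket.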
Integration & connectivity
Legacy applications tend not to be API-driven in the way container applications are, so ensuring secure communications between containerised and non-containerised apps can be challenging. Cloud-native environments increasingly use mTLS, via a service mesh, to enforce security for internal workloads, and this can be operationally demanding for applications that were not designed with API access in mind.
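What the mesh enforces on a workload’s behalf can be shown in miniature with Python’s standard ssl module: a server context that requires the peer to present a valid certificate is the essence of mTLS. The file paths are hypothetical placeholders; in a real mesh a sidecar proxy (for example Envoy) does this transparently, so the application never touches the certificates.

```python
import ssl
from typing import Optional


def mtls_server_context(certfile: Optional[str] = None,
                        keyfile: Optional[str] = None,
                        ca_bundle: Optional[str] = None) -> ssl.SSLContext:
    """Server-side TLS context that *requires* a client certificate --
    the defining property of mTLS. Paths are hypothetical; a service
    mesh sidecar normally handles all of this outside the app."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid cert
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)  # this workload's own identity
    if ca_bundle:
        ctx.load_verify_locations(ca_bundle)    # CA that signs workload certs
    return ctx
```

A legacy application that speaks plain TCP to a fixed port has no natural place to slot this in — which is exactly the operational burden the paragraph above describes.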
Legacy applications often log differently from containerised ones: they typically write to multiple log files hosted on the same server, sometimes in unstructured formats. In the container world these are considered anti-patterns and need to be replaced with structured writes to standard output, which are then collected and pushed to a centralised logging service.
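The container-friendly pattern is small enough to show in full: one JSON object per line, written only to stdout, for a log collector (Fluentd and similar agents are common choices) to ship onwards. The logger name and message are illustrative.

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line; a node-level collector ships it to
    the centralised logging service. No local log files are written."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler(sys.stdout)  # stdout only -- never files
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("order placed")
```

Because the container never owns its log files, logs survive the container being killed and rescheduled — consistent with the ephemerality discussed next.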
Fundamentally, containers are intended to be ephemeral and container native apps are designed with this in mind: startup times are short and storage of state is outsourced. Container resources exist across the environment, rather than within the application itself. Legacy apps are rarely designed in this way, so this is something that needs to be addressed when considering the right approach to containerisation.
This ephemeral nature is often what drives the enterprise to embrace DevOps approaches to operations management. In this sense, adopting the twelve-factor best practices is highly recommended. However, legacy apps do not usually follow these practices and are often unable to take full advantage of the purpose-built tooling and patterns for container building, testing, configuration and deployment.
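One of the twelve factors — configuration in the environment rather than baked into the artefact — illustrates the gap concretely: the same container image can then run unchanged in dev, staging and production. A minimal sketch, with illustrative variable names:

```python
import os


def load_config(env=os.environ) -> dict:
    """Twelve-factor style configuration: read everything from environment
    variables (with development defaults), never from files baked into
    the image. Variable names here are illustrative, not a standard."""
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "worker_count": int(env.get("WORKERS", "2")),
    }
```

A legacy app that reads a hand-edited config file from a fixed path on its server is the opposite of this, and retrofitting the pattern is usually part of any refactoring effort.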
Scaling efficiency is of course one of the key advantages of containerised apps, which are designed to scale horizontally quickly, easily and, often, without manual intervention. Legacy applications tend to be far less tolerant of scaling, so a containerisation strategy will need to judge how much overhead is required to manage those legacy applications that cannot scale in the same way.
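To make “scale horizontally without manual intervention” concrete, this is the core rule the Kubernetes Horizontal Pod Autoscaler applies — desired replicas = ceil(current replicas × current metric ÷ target metric), clamped to configured bounds:

```python
import math


def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """The Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric), clamped to
    [min_replicas, max_replicas]. It assumes replicas are stateless and
    interchangeable -- exactly what many legacy apps are not."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))


# e.g. 4 replicas at 90% average CPU against a 60% target -> scale to 6
print(desired_replicas(4, 90, 60))  # prints 6
```

The assumption buried in the formula — that any replica can absorb any share of the load — is precisely why legacy apps with in-memory state or licensing-per-server constraints cannot simply be dropped behind an autoscaler.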
Approaches for migrating legacy applications
Given the desire to adopt containerised infrastructure and benefit from scale, there are three strategies to consider when migrating legacy applications and workloads into a container environment such as Kubernetes.
First, a “lift and shift” approach takes existing apps, packages them into container images and deploys them. Refactoring is minimal, done only to ensure the container starts and runs successfully. Although this is usually straightforward to achieve, and there are tools available that can help, it does not address all the challenges outlined above. So, for anything other than a simple legacy application, some measure of ‘fettling’ will be required, and the result may still not be optimised for container environments.
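In practice, the lift-and-shift step is often little more than a short Dockerfile wrapping the existing build artefact. A hedged sketch, assuming a hypothetical legacy Java application packaged as a jar (image tag and file names are illustrative):

```dockerfile
# Minimal lift-and-shift: wrap the existing artefact as-is in an image.
# The application itself is unchanged, so none of the challenges above
# (state, logging, scaling) are addressed by this step alone.
FROM eclipse-temurin:17-jre
WORKDIR /opt/app
COPY legacy-app.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "legacy-app.jar"]
```

This gets the app scheduled by Kubernetes, but its in-memory sessions, file-based logs and scaling limits come along with it — hence the ‘fettling’.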
Second, a “refactoring” approach optimises the legacy application for containerisation by following the twelve-factor principles mentioned previously. This addresses most of the above challenges by ensuring the application is designed to work well within a container platform (such as Kubernetes). This approach can also work well as a precursor to wider adoption of DevOps practices.
Third, a “rewrite” approach aims to replace a legacy application with a new cloud-native one. This typically works on a selective basis, targeting specific applications for migration.
Of course, as well as these three approaches, the option exists to leave an application hosted in its present legacy environment. This is the right choice when the risk to the business from altering its current state is too high.
That said, one advantage of Kubernetes is that it comes with a wide range of tooling and support designed specifically to help migrate legacy applications to cloud-native, containerised infrastructure.
Whatever the particular technical challenge with a legacy application, it’s likely a similar challenge has been addressed elsewhere and an open source solution may exist to help solve it. Taking advantage of the ecosystem of open source tools from Kubernetes means enterprises can often easily test and experiment with the various migration approaches and quickly determine the right path forward for them.