As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly now examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is defined as a ‘logical’ computing environment where code is engineered to allow a guest application to run in a state where it is abstracted away from the underlying host system’s hardware and software infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
Walker writes as follows…
Containerisation brings massive efficiencies to the enterprise tech stack, but it can also add complexity to the way we deploy and manage services and applications. What was once a few instances of a monolithic database application has become an army of individual services, each requiring administrative/operational attention to make sure they are healthy. As we scale our systems and applications, the operational complexity of managing multiple containerised services grows exponentially.
Over the past few years, we’ve seen the emergence of ‘orchestration’ tools designed to help automate these services. But while tools such as Kubernetes have the potential to bring major efficiencies, the stovepipe architecture of legacy relational databases presents an impedance mismatch with its distributed scale-out architecture and, so, adoption has stalled as a result. We’re at an inflection point where developers need to start thinking about Kubernetes as an extension of their development team and their database.
The legacy challenge
Simply put, Kubernetes allows companies to easily scale services up and down and ensures those services keep running in their desired state.
It consists of two major components: a control plane and pods. If the control plane recognises a pod is down, it spins up a replacement elsewhere. If it senses there are more pods present than are commissioned, it spins them down accordingly. Ultimately, it automates the total management of the instances associated with a service.
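The reconciliation idea described above can be sketched in a few lines. This is a minimal, illustrative toy in Python, not real Kubernetes code: the function names and pod representation are assumptions for the example, which simply compares desired state against observed state and spins pods up or down to match.

```python
# Toy sketch of a Kubernetes-style reconciliation loop (illustrative only).
# The control plane compares the desired replica count against the pods it
# can observe, drops unhealthy ones, and spins pods up or down to match.

def reconcile(desired_replicas, running_pods):
    """Return the pod set after one reconciliation pass."""
    pods = [p for p in running_pods if p["healthy"]]  # failed pods are dropped
    while len(pods) < desired_replicas:               # too few: spin one up
        pods.append({"name": f"pod-{len(pods)}", "healthy": True})
    while len(pods) > desired_replicas:               # too many: spin one down
        pods.pop()
    return pods

# One pod has crashed; the commissioned (desired) state is 3 replicas.
state = [{"name": "pod-0", "healthy": True},
         {"name": "pod-1", "healthy": False},
         {"name": "pod-2", "healthy": True}]
print(len(reconcile(3, state)))  # the pass restores 3 healthy pods
```

Real controllers run this loop continuously, which is what makes the management "total" and hands-off.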
Most Kubernetes deployments need a database and will run it alongside the distributed services in a traditional stovepipe implementation – a term used to describe software that typically scales as a single unit with hardware as the key factor.
The promise of a distributed architecture is that it can scale easily, by simply adding new instances of the containerised software to the system. This can also deliver some next-level resiliency as each unit can function individually and alone.
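To make the scaling claim concrete, here is a hedged Python sketch (all names and the per-instance capacity figure are hypothetical): total capacity grows simply by adding container instances, and losing one instance only removes its share of capacity rather than taking the service down.

```python
# Illustrative sketch of horizontal scale-out (hypothetical numbers).
REQUESTS_PER_INSTANCE = 100  # assumed capacity of one containerised instance

def cluster_capacity(instances):
    """Total requests/sec the healthy instances can serve."""
    return sum(REQUESTS_PER_INSTANCE for i in instances if i["up"])

instances = [{"id": n, "up": True} for n in range(4)]
print(cluster_capacity(instances))        # 400

instances.append({"id": 4, "up": True})   # scale out: just add an instance
print(cluster_capacity(instances))        # 500

instances[0]["up"] = False                # one instance fails...
print(cluster_capacity(instances))        # 400 -- the rest keep serving
```

Contrast this with a stovepipe system, where capacity is tied to one unit of hardware and a failure of that unit is a failure of the whole service.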
Stovepipe spews bottleneck
Our legacy databases were simply not built to be distributed and no matter how much they are modified, they cannot escape their stovepipe past.
Managing a legacy database on modern infrastructure like Kubernetes is difficult, and many simply choose to run it alongside this scale-out environment. This often creates a bottleneck, or worse, a single point of failure for the application. Running a NoSQL database on Kubernetes is better aligned, but you will still experience transactional consistency issues.
The stovepipe architecture of our legacy relational databases contradicts the distributed scale-out architecture of Kubernetes, as they weren’t built with the same architectural primitives. Kubernetes promises scale and resilience, but this type of antiquated architecture and deployment requires manual intervention, voiding the value of automated orchestration.
So how should we move forwards?
Your apps are containerised, your databases must be too
Ultimately, the database should look and feel like a traditional database, while simultaneously taking advantage of all the benefits of cloud infrastructure. Like pieces of a car, you can replace certain parts but the rest of the car stays the same. This is the type of architecture that developers and enterprises need to think about to develop, deploy and maintain software containers efficiently.
As we’ve moved further into cloud adoption, ‘cloud-native’ has emerged as an approach to the development and delivery of applications in cloud environments. It favours the shift towards as-a-service consumption and takes direct advantage of the scale, resilience and availability of these resources.
Cloud-native services typically will be:
- Self-contained and non-reliant on other services.
- Designed to scale autonomously and survive any failure.
- Reusable across multiple applications.
This has the look and feel of a single server but is broken into containers, orchestrated by Kubernetes (K8s), which ensures it is always running. It empowers developers to decide where and when to break their application into different pieces that can scale independently.
With the shift to as-a-service consumption, developers need to think about orchestration as an extension of the development team. It’s a paradigm shift in responsibility, as developers must refocus on optimising workloads instead of maintaining an always-on service.
Don’t go halfway
Without addressing this issue with the database, you may only get a fraction of the value containers and orchestration offer. We’ve reached an inflection point: you’re either in, or out. There’s no in-between that maximises value.
We’ve seen great momentum in Kubernetes adoption, but it was originally designed for stateless workloads, and adoption has been held back as a result. The real push will occur as we bring data-intensive workloads onto Kubernetes.
Developers and enterprises need to adjust from the old, monolithic ways of software development and delivery. With customers becoming increasingly global, businesses need to shift to a distributed mindset and think about tools like Kubernetes as the building blocks for a global application.