Containerisation in the enterprise - Couchbase: Standardisation keeps a lid on container sprawl
As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the modern trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
This post is written by Anil Kumar in his position as director of product management at Couchbase — Kumar is an authority on containers to the extent that he wrote one of the ‘best Kubernetes books of all time’, according to BookAuthority.
Kumar writes as follows…
It’s widely accepted that organisations have accelerated their digital transformation plans, whether in reaction to Covid-19, or by accepting that digitalisation is key to remaining competitive. Either way, finding a way to automate and scale workloads while reducing costs is more important than ever. This explains why containers have become so popular: the ability to build applications once then run them anywhere has been a boon to enterprise agility. According to Gartner, by 2022 more than 75 percent of global organisations will be running containerised applications in production, up from 30 percent in 2020.
Yet containers can still present teething problems that prevent organisations from making the most of them, and that eat up too much of development teams’ time — which might be why 40 percent of CIOs say their development teams are behind schedule on their current projects.
Containers & databases don’t always mix
Chief among these teething problems is container complexity.
While containers are highly composable, deploying, managing, maintaining and orchestrating multiple container clusters — each containing hundreds of containers — while ensuring reliability and connectivity can be incredibly complicated. Organisations will typically deploy and manage their container clusters independently, and yet IT is still expected to understand the function and health of each individual container. This complexity not only increases developers’ workloads, but also means more time and money invested in maintenance, negating many of the agility benefits containers offer in the first place.
Many see orchestration tools such as Kubernetes as the answer, and to an extent they are.
Kubernetes goes some way to easing the burden of managing various container clusters, while its ease of use minimises the time taken to get containers up and running. Yet for all its benefits, orchestrating containers remains a complicated business. For instance, what happens when key services such as a database are added to containerised applications? Either each containerised application gets its own dedicated data services, creating data silos, or multiple containerised applications must all access centralised databases to establish a single source of truth — a hugely complex environment.
Then there’s the issue of container sprawl.
Closely related is the issue of how far services can scale. In a container sprawl, an instance under heavy demand might scale out of control, demanding the same of the databases that feed it and ultimately resulting in a hefty bill. In other words, peak performance may be assured, but it comes at a heavy cost.
So then, how do we move forward and use containers productively and effectively? The answer is standardisation.
Setting the standard
That’s easy to say as a short sharp statement, so how does containerisation standardisation work in practice?
In practice, standardisation involves processes such as automatic scaling for stateless and stateful services — placing pre-configured thresholds on each container so it can scale out or back in, maintaining performance without incurring unexpected costs. Similarly, metering usage provides additional control, giving IT a handle on what is being used, when it is being used and who is using it. Support for a service mesh is another ingredient, allowing IT to manage a distributed microservices infrastructure, regardless of size.
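As a concrete sketch of what a pre-configured scaling threshold looks like, a Kubernetes HorizontalPodAutoscaler can cap how far a service scales out. The deployment name `web-app`, the replica bounds and the 70 percent CPU target below are illustrative assumptions, not values from any particular deployment:

```yaml
# Illustrative only: autoscale a hypothetical "web-app" deployment
# between 2 and 10 replicas, targeting 70% average CPU utilisation.
# The maxReplicas ceiling is the pre-configured threshold that stops
# a busy service scaling out of control and running up the bill.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Applying the same manifest conventions across every cluster is one small, practical instance of the standardisation described above: the thresholds become policy rather than per-team improvisation.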
Maintaining one standard across the development, test, pre-production and production environments can substantially reduce complexity and put IT in greater control.
Database services are an example of how this looks in practice. In principle, the entire development architecture within a container ecosystem can be standardised, with a single target architecture for both stateless and stateful applications. This architecture can even be managed on the same platform. Ideally, IT will be able to choose whichever cloud infrastructure – or any other infrastructure – they wish, ensuring they can standardise across their entire organisation, instead of being locked into specific infrastructures by different service providers.
Similarly, security and networking should be standardised so that management is as simple as possible and the possibility of errors or risks falling through the gaps is reduced. By standardising their developer environment and infrastructure along these lines, IT teams can cut down on complexity and costs, while adding flexibility and control.
Ultimately, the accelerating pace of digital transformation means containerisation will keep growing. As containers hit the mainstream, any success stories we see are likely to have featured a high degree of standardisation, since standardising the container environment so it works as efficiently as possible will go a long way to taming container complexity and placing control back where it belongs: with the IT team.
Anil Kumar is director of product management at Couchbase — his career spans more than 15 years building software products, including enterprise software, mobile services and voice and video services. Before joining Couchbase, he spent several years working at Microsoft Redmond in the Entertainment, Windows and Windows Live divisions.