The Computer Weekly Developer Network is deep in container-land: we’re Kubed into Kubernetes and we’re about as hybrid multi-cloud cloud-natively virtualised into abstraction Nirvana as it’s possible to be.
Attempting to take us ‘one louder’ into this maelstrom of a datacenter-driven world and wrap us tighter in the data fabric of delight (Ed – enough already, we get the picture) this month is Anil Kumar in his role as director of product management at Couchbase.
Kumar takes us on a tour of what he sees as the key issues in this space – as he looks for the K-shaped silver bullet that allows this new breed of technologies to be implemented without the natural consequence of increased complexity – and writes as follows…
Kubernetes goes some way to easing the burden of managing various container clusters, while its ease-of-use minimises the time taken to get containers up and running. Yet for all its benefits, orchestrating containers remains a complicated business.
For instance, what happens when adding key services such as a database to containerised applications? Either each containerised application gets its own dedicated data services, creating data silos, or multiple containerised applications must all access centralised databases to establish a single source of truth, creating a hugely complex environment.
Similarly, there is the issue of how far services can scale. In a container sprawl, an instance under heavy demand might scale out of control, demanding the same of the databases that feed it and ultimately resulting in a hefty bill. In other words, while peak performance may be assured, it comes at a heavy cost.
No silver bullet
Business and IT leaders are well aware that there’s no such thing as a silver bullet. In one sense, Kubernetes is like any other hot new technology innovation: incredibly powerful on the one hand, but capable of creating more problems than it solves on the other, if implemented without the proper consideration first.
Chief among these problems is complexity: a confusing mix of deployment strategies, each with its own pros and cons.
Take microservices deployments with database management, for example — you can pick from shared microservices with a centralised database, a database per microservice, or even microservices deployed on a different Kubernetes platform from the database. Each option presents complications around security, networking and performance – it’s remarkably difficult for IT leaders to find the right way forward.
In practice, standardisation involves processes such as automatic scaling for stateless and stateful services – placing pre-configured thresholds on each container so it can scale out or back in, maintaining performance without unexpected costs.
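Pre-configured thresholds of this kind are typically expressed in Kubernetes as a HorizontalPodAutoscaler. A minimal sketch follows — the deployment name `my-app`, the replica bounds and the CPU target are all illustrative, not taken from any particular product:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # illustrative deployment to be scaled
  minReplicas: 2          # floor: never scale in below this
  maxReplicas: 10         # ceiling: caps runaway scale-out, and so the bill
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out once average CPU exceeds 70%
```

The `maxReplicas` ceiling is the part that addresses the cost concern above: demand spikes can still be absorbed, but only up to a limit the business has agreed to in advance.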
Similarly, simply metering usage provides additional control, giving IT a handle on what’s being used, when it’s being used and who is using it. Support for a service mesh is another ingredient, allowing IT to manage a distributed microservices infrastructure, regardless of size.
Orchestration consternation & frustration
Kubernetes has become wildly popular for good reason. Yet while it goes some way towards easing the burden of orchestrating containers, it can make things even more complicated than before.
Kubernetes environments start off simply enough, but as new services are added here and there, they can quickly get out of control. It’s akin to a delivery business procuring a fleet of vehicles: it may have one policy in place to manage them, but what if their requirements are different? What if some cost more to run than others? Keeping tabs on the individual requirements of every vehicle can be a complicated business.
The same principle applies to Kubernetes: it may provide a single policy for managing containers, but how can you be sure your various add-ons aren’t operating outside your control? How can you make sure that instances under heavy demand aren’t going to scale up without you knowing about it? Without full-stack visibility into every service being deployed, and metering of its resource usage, performance and costs can spiral out of control. This lack of visibility and predictability may be too much to bear for those operating on fine margins.
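One common guardrail here is to give every container explicit resource requests and limits, so that an instance under heavy demand cannot quietly consume a whole node before anyone notices. A sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod        # illustrative name
spec:
  containers:
    - name: app
      image: my-app:1.0   # illustrative image
      resources:
        requests:
          cpu: "250m"     # baseline the scheduler reserves for this container
          memory: "256Mi"
        limits:
          cpu: "500m"     # hard cap: CPU is throttled above this
          memory: "512Mi" # hard cap: the container is killed if it exceeds this
```

Requests make scheduling predictable; limits make costs predictable — together they turn the vague question “what might this container use?” into numbers that can be metered and billed.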
Database services are an example of how this looks in practice. In principle, the entire development architecture within a container ecosystem can be standardised, with a single target architecture for both stateless and stateful applications. This architecture can even be managed on the same platform. Ideally, IT will be able to choose whichever cloud infrastructure – or any other infrastructure – they wish, ensuring they can standardise across their entire organisation, instead of being locked into specific infrastructures by different service providers.
Across the stack
Whether businesses are able to make a success of containerisation depends on whether or not they take a standardised approach. This is by no means a radical proposition – standardisation is common practice elsewhere in the business because it streamlines the process of getting up and running with new suppliers. Containers are no different. Standardisation simply refers to the ability to pick the right technologies to build the stack across the entire organisation.
That will streamline the development, testing and onboarding of new applications and help enforce security standards. In other words, why opt for a mishmash of different approaches and policies when a standardised approach makes life so much easier?
Anil Kumar is director of product management at Couchbase — his career spans more than 15 years building software products, including enterprise software, mobile services and voice and video services. Before joining Couchbase, he spent several years working at Microsoft Redmond in the Entertainment, Windows and Windows Live divisions.