Containerisation in the enterprise - Red Hat: API accuracy unlocks container complexity

As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption. 

These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly now examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.

As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources. 

So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?

This post is written by Markus Eisele in his capacity as developer adoption lead for EMEA at Red Hat. 

Eisele Tweets at @myfear and writes as follows…

Containers are complicated, which means that any value they bring to an organisation has to be weighed against the costs of wrestling with their complexity. 

There’s no point to containerisation for its own sake.

There’s also no point in trying to force containerisation where there’s no commercial or engineering [team] need for it. That would be like building a Rube Goldberg machine. If you have a monolith that’s performing as needed and looks set to continue to do so, then you have no reason to consider containerising it – not every application needs to update overnight or have limitless capacity to scale.

When you should embrace containers

Containers shouldn’t be treated as a panacea, but rather as a tool with trade-offs — so here are some scenarios in which containerisation is a sufficiently useful tool to justify itself:

  • You need to be able to infinitely scale an application across clouds.
  • You need to take full advantage of your cloud computing hardware.
  • You need to make sure your DevOps teams speak the same technical language and can quickly solve problems together.
  • You need applications you can iterate on very frequently.

If any of the above are true for you, then containers are probably a good, workable solution. Generally, if you are facing problems posed by modern, hybrid and cloud-native applications, the complexity of containers is worth taking on. Beyond that, though, their use case isn’t as self-evident and the costs of reckoning with their complexity might offset any utility you gain by deploying them.

APIs are ‘key to unlocking’ container complexity


Containers rely on APIs to work together and scale up, and no matter what you do, you’ll have to make sure your APIs are well defined and as efficient as possible. The more distributed a containerised application’s workload is, the more rigorous the standards you’ll have to develop and enforce for your APIs as they grow in number. 

A clear and well-thought-through API management system will allow you to assess how APIs are being used by containers and how they should be governed, make sense of resource usage, harden against security risks… and enable you to scale up your application further. In the world of containers (and elsewhere too, to be honest) the costs of an ad hoc approach to API management tend to grow geometrically, so the best time to look at any ambiguities or prospective shortcomings in your approach is now. 
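To make the idea concrete, here is a minimal sketch of the kind of guardrail an API management layer enforces: checking that a service’s responses actually carry the fields its declared contract promises. The endpoint name, schema and payloads below are invented for illustration, not drawn from any real system.

```python
# Hypothetical declared contract: each endpoint maps to the fields a valid
# response must contain, with the expected Python type for each field.
REQUIRED_FIELDS = {
    "orders-service/v1/order": {"id": str, "status": str, "total": float},
}

def conforms(endpoint: str, payload: dict) -> bool:
    """Return True if the payload carries every declared field with the right type."""
    schema = REQUIRED_FIELDS.get(endpoint)
    if schema is None:
        return False  # undeclared endpoints fail closed
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in schema.items()
    )

print(conforms("orders-service/v1/order", {"id": "a1", "status": "paid", "total": 9.5}))  # True
print(conforms("orders-service/v1/order", {"id": "a1"}))  # False: missing fields
```

In a real estate of containers this check would live in a gateway or CI pipeline rather than application code, but the principle is the same: contracts are written down once and enforced mechanically, not remembered by individual teams.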

It’s always worth going back to the drawing board and creating (or recreating) your domain model to make sure you’ve codified the lines of communication between your APIs.

This will often be the best way to diagnose a potential or real problem in your API architecture and will highlight what should be built to counteract it.
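One way to picture “codifying the lines of communication” is a domain model that records which services are allowed to call which others, so that stray dependencies surface early. The sketch below is a deliberately simplified illustration; all service names are hypothetical.

```python
# Hypothetical domain model: each service maps to the set of services it is
# permitted to call. Anything observed outside this map is a violation.
ALLOWED_CALLS = {
    "checkout": {"payments", "inventory"},
    "payments": {"ledger"},
    "inventory": set(),
}

def undeclared_calls(observed: dict) -> list:
    """Compare observed service-to-service calls against the domain model."""
    violations = []
    for caller, callees in observed.items():
        for callee in callees:
            if callee not in ALLOWED_CALLS.get(caller, set()):
                violations.append((caller, callee))
    return sorted(violations)

# "checkout" calling "ledger" directly is not in the model, so it is flagged.
observed = {"checkout": {"payments", "ledger"}, "payments": {"ledger"}}
print(undeclared_calls(observed))  # [('checkout', 'ledger')]
```

Even a toy check like this makes the point: once the domain model is written down, diagnosing an architectural problem becomes a comparison against an explicit map rather than an archaeology exercise.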

The complexity of containerisation is a fact of life and should not be dismissed.

If you choose to containerise, you need to make sure that handling and minimising this complexity becomes a systematic concern for everyone in your team. If you avoid getting complacent about it, you can make containerisation’s complexity manageable and leverage the full benefits of a cloud-native model for development and operations.
