As businesses continue to modernise their server estate and move towards a cloud-native architecture, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
Enterprises previously virtualised their server estates in their entirety (or a point close to it), taking an almost carte blanche approach to lifting and shifting complete enterprise application architectures onto modern cloud computing hardware.
Despite the option to move essentially ephemeral computing resources and data between public, private and hybrid clouds, there was still an all-encompassing push to migrate monolithic applications in their unwieldy (and often tough to manoeuvre) original forms.
From monoliths to micro-engineering
These days, it is often more efficient to deploy an application in a container than in a full virtual machine: containers share the host operating system’s kernel, so they start faster and carry less overhead. Computer Weekly examines the trends, dynamics and challenges faced by organisations now migrating to the micro-engineering software world of containerisation.
As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.
Those essential computing resources of course include core processing power, memory, data storage and Input/Output (I/O) provisioning, plus all the modern incremental ‘new age’ functions and services such as big data analytics engine calls, AI brainpower and various forms of automation.
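To make that resource abstraction concrete, a scheduler ultimately works from a declarative statement of what each container needs. The Python sketch below is purely illustrative: the `ContainerSpec` class, its field names and the capacity figures are assumptions for the example, not any real orchestrator’s API.

```python
from dataclasses import dataclass

@dataclass
class ContainerSpec:
    """Declarative resource request for one container (illustrative only)."""
    name: str
    cpu_millicores: int   # e.g. 500 = half a CPU core
    memory_mb: int
    storage_mb: int

def fits_on_host(spec: ContainerSpec, free_cpu: int,
                 free_mem: int, free_disk: int) -> bool:
    """A scheduler's most basic job: does this container fit the
    host's spare capacity across every resource dimension?"""
    return (spec.cpu_millicores <= free_cpu
            and spec.memory_mb <= free_mem
            and spec.storage_mb <= free_disk)

web = ContainerSpec(name="web", cpu_millicores=500,
                    memory_mb=256, storage_mb=1024)
print(fits_on_host(web, free_cpu=2000, free_mem=512, free_disk=4096))  # True
```

Real schedulers layer bin-packing, affinity rules and I/O quotas on top, but the underlying contract is the same: the container declares, the host provides.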
The trade-off: it’s complicated
Although the move to containers provides greater modular composability, the trade-off is a more complex, interconnected set of computing resources that need to be managed, maintained and orchestrated. Despite the popularisation of Kubernetes and the entire ecosystem of so-called ‘observability’ technologies, knowing the health, function and wider state of every deployed container concurrently is not always straightforward.
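As a toy illustration of why fleet-wide health is a job in itself, the Python sketch below probes a set of containers concurrently and summarises their state. The container names, states and the `check` function are hypothetical stand-ins for real health endpoints reported by an agent or kubelet.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical last-reported state of each running container;
# in reality this would come from per-container health probes.
CONTAINERS = {
    "web-1": "healthy",
    "web-2": "healthy",
    "db-1": "unhealthy",
    "cache-1": "unknown",
}

def check(name: str) -> tuple:
    # Stand-in for an HTTP probe against the container's health endpoint.
    return name, CONTAINERS[name]

def cluster_health() -> dict:
    # Probe every container concurrently, then count states across the fleet.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = dict(pool.map(check, CONTAINERS))
    summary = {}
    for state in results.values():
        summary[state] = summary.get(state, 0) + 1
    return summary

print(cluster_health())  # {'healthy': 2, 'unhealthy': 1, 'unknown': 1}
```

Even this trivial version hints at the operational question: four containers are easy, but four thousand short-lived ones arriving and departing every minute are not.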
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
Some say (**Jeremy Clarkson Top Gear voice**) that stateless containers (i.e. ephemeral compute resources that neither read, analyse nor record information about their state, capacity, value and power while in use) are tough to integrate with deeply interconnected enterprise systems.
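The statelessness point fits in a few lines of Python: anything a container keeps in its own memory vanishes when the scheduler replaces it, so state has to be pushed out to an external store. The plain dict below is an assumed stand-in for something like Redis or a database; the handler names are invented for the example.

```python
# A handler that keeps state in process memory: fine on one long-lived
# server, but the count is lost whenever the container is replaced.
class StatefulCounter:
    def __init__(self):
        self.count = 0

    def handle(self) -> int:
        self.count += 1
        return self.count

# The stateless alternative: the container only computes; an external
# store (here a dict standing in for Redis or a database) remembers.
def stateless_handle(store: dict, key: str) -> int:
    store[key] = store.get(key, 0) + 1
    return store[key]

store = {}                                   # survives container replacement
first = stateless_handle(store, "hits")      # returns 1
second = stateless_handle(store, "hits")     # returns 2, even if this call
                                             # came from a different replica
```

The catch the sceptics point to is exactly this externalisation: a deeply integrated enterprise system often assumes the state lives *inside* the application, which is the opposite of what the stateless model wants.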
Some say that containers will never easily fit legacy architecture systems that fail to exhibit a natural affinity (and support) for API connections. Container ecosystems live and breathe the API dream, so connecting legacy non-API-compliant systems to them will always be tough – right?
Where the above (and other anomalies) exist, enterprise organisations may find themselves in a predicament where they run containers but must build parallel systems at some level to accommodate older legacy systems that can’t be migrated successfully to run alongside cloud-native technologies – right?
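One common halfway house is an adapter (sometimes called an anti-corruption layer) that wraps a legacy, non-API interface in something containerised services can consume. The sketch below is purely illustrative: `LegacyInventory`, its fixed-width record format and the field widths are invented for the example, not any real system.

```python
import json

class LegacyInventory:
    """Hypothetical legacy system: no API, only fixed-width record strings."""
    def fetch_record(self, sku: str) -> str:
        # Columns: SKU (8 chars), description (20 chars), quantity (6 digits).
        return f"{sku:<8}{'WIDGET':<20}{42:06d}"

class InventoryApiAdapter:
    """Anti-corruption layer: exposes the legacy record as JSON that
    containerised, API-speaking services can call."""
    def __init__(self, legacy: LegacyInventory):
        self.legacy = legacy

    def get(self, sku: str) -> str:
        raw = self.legacy.fetch_record(sku)
        return json.dumps({
            "sku": raw[0:8].strip(),
            "description": raw[8:28].strip(),
            "quantity": int(raw[28:34]),
        })

api = InventoryApiAdapter(LegacyInventory())
print(api.get("AB123"))  # {"sku": "AB123", "description": "WIDGET", "quantity": 42}
```

It works, but note the cost the article is driving at: the adapter is a parallel system in miniature, and someone now has to own, version and monitor it alongside the legacy box it fronts.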
The great cloud flexibility swindle
Partly as a result of Cloud Services Provider (CSP) contract lock-in, partly as a result of the aforementioned incompatibilities and partly as a result of security issues that prevent data travelling safely from one virtualised location to another, we find that in the real world, the boxed-up moveable feast that containers are supposed to offer is not quite that simple to pull off – right?
Containers are not some golden passport to web-scale scalability and infinite elasticity after all… we’ll just leave that comment there for dramatic (if not technical) effect – right?
DevOps can’t contain containers
Then there’s the whole challenge of how the IT department adapts its processes to support containers.
As noted here on TechBeacon by Jennifer Zaino quoting Chris Ciborowski, CEO and principal consultant at enterprise DevOps consulting firm Nebulaworks, while developers have been taking an Agile approach [and embracing the fluidity of containers] for a long time, IT Ops staff largely haven’t been thinking in the same way.
“Due to the nature of how container images are created (highly automated build, test, and release of apps and dependencies) and the velocity of container creation and scheduling (deployment), existing IT operations processes are unequipped to support container delivery beyond simple use cases,” said Ciborowski.
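Ciborowski’s “highly automated build, test, and release” can be sketched as a trivially staged pipeline. The stage functions below just print what a real pipeline would shell out to (an image build, a test suite, a registry push); the function names and the image tag are assumptions for illustration, not a real CI tool’s interface.

```python
# Each stage returns True on success; in a real pipeline these would
# invoke an image builder, a test runner and a registry push.
def build(image: str) -> bool:
    print(f"building {image}")            # stand-in for the image build
    return True

def run_tests(image: str) -> bool:
    print(f"testing {image}")             # stand-in for the test suite
    return True

def release(image: str) -> bool:
    print(f"pushing {image} to registry") # stand-in for the registry push
    return True

def run_pipeline(image: str) -> bool:
    # Stages run in order; the first failure stops the pipeline,
    # so an image that fails its tests is never released.
    for stage in (build, run_tests, release):
        if not stage(image):
            return False
    return True

print(run_pipeline("shop-frontend:1.4.2"))  # True
```

The velocity problem he describes follows directly: this loop may fire dozens of times a day per service, which is the cadence traditional change-management processes were never built for.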
… and finally, what about skills?
At the end of the day, we can’t get away from the fact that container development, management & monitoring, augmentation & enhancement… leading to eventual retirement (or repurposing), is a tough job, and not every firm will have the right in-house skills to perform these tasks. Should we all have gone to container university (aka containiversity) first?
Containers are here, but let’s consider carefully how we use them… so please, contain yourself.