EIS tech lead: infrastructure’s journey to (containerised) code

Yes, we’re still in lockdown (or a version of it) and all meetings are virtual, but the Computer Weekly Developer Network is still talking to members of the industry glitterati about containers.

Joel Yarde is technology marketing lead at EIS, a provider of digital insurance platforms.

For Yarde, containers offer a key advantage: they allow organisations to collapse most infrastructure tasks into the development organisation, so that infrastructure becomes, in effect, just another coding/DevOps task.

Therefore, he says, an organisation’s first challenge is to build containers that incorporate as many infrastructure requirements as possible (performance, scale, resource consumption and so on).

In this regard, organisations have help: there are thousands of pre-built container images available to use as the base for building your own, allowing enterprises to take advantage of external knowledge and effort in their own builds.
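
To make that concrete, here is a minimal sketch, assuming Python and the Docker SDK for Python, of building an application image on top of an official pre-built base image rather than assembling the runtime from scratch (the image tag and command below are hypothetical):

```python
# Minimal sketch: build on an official base image using the Docker SDK for Python.
import io
import docker

# Dockerfile content that reuses the official python base image instead of
# building an OS and language runtime by hand. The app command is a placeholder.
dockerfile = b"""
FROM python:3.12-slim
RUN pip install --no-cache-dir flask
CMD ["python", "-c", "print('placeholder app')"]
"""

client = docker.from_env()

# Build the image from the in-memory Dockerfile; the base layer is pulled
# from Docker Hub if it is not already cached locally.
image, build_logs = client.images.build(
    fileobj=io.BytesIO(dockerfile),
    tag="my-insurance-app:latest",
    rm=True,
)
print("Built", image.tags)
```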

The next big challenge for building containers is ensuring they are manageable, scalable and reliable once deployed. This has to be done with ‘intent’, since the act of containerising doesn’t guarantee or provide those outcomes. 

“From our experience working with global insurance providers, we create our software as deployable artifacts, baking our knowledge right into the containers so customers don’t need experience deploying the basic infrastructure and we also provide the script to enable auto-scaling. This means our customers can truly move fast and break things when adjusting to new market and regulatory requirements,” said Yarde.
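
Yarde doesn’t spell out those scripts, but as an illustrative sketch, assuming a Kubernetes cluster, the official Kubernetes Python client and a hypothetical “policy-engine” deployment, enabling auto-scaling can itself be expressed as code shipped alongside the containers:

```python
# Illustrative sketch: create a horizontal pod autoscaler for a hypothetical
# "policy-engine" deployment using the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when run inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="policy-engine"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="policy-engine"
        ),
        min_replicas=2,      # keep at least two copies for redundancy
        max_replicas=10,     # cap growth so costs stay predictable
        target_cpu_utilization_percentage=70,  # add copies above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```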

Some say that containers will never fit easily with legacy architecture systems that lack a natural affinity (and support) for API connections. Container ecosystems live and breathe the API dream, so connecting legacy, non-API-compliant systems to them will always be tough, right?

Joel Yarde, technology marketing lead at EIS.

Not at all, says Yarde.

“That problem is not a container problem, it’s a legacy system problem. Organisations and people move away from legacy due to a lack of access and interoperability. One workaround is to inject an API layer on top of the legacy systems. I recommend infrastructure teams use connecting software to bridge the gap between old and new. Even without APIs, containers can still function as a unit of infrastructure (e.g. running Apache web server),” said the EIS tech leader.
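
A minimal sketch of that API-layer workaround, assuming Python and Flask and a purely hypothetical legacy policy lookup, might look like this:

```python
# Sketch: a thin API layer in front of a legacy system so containerised
# services can reach it over HTTP. The legacy lookup is a hypothetical
# stand-in for whatever the old system actually exposes (database query,
# file export, screen-scrape and so on).
from flask import Flask, jsonify

app = Flask(__name__)

def legacy_policy_lookup(policy_id: str) -> dict:
    # Hypothetical placeholder for the call into the legacy system.
    return {"policy_id": policy_id, "status": "ACTIVE"}

@app.route("/api/policies/<policy_id>")
def get_policy(policy_id):
    # Translate the legacy response into a stable JSON contract for new services.
    return jsonify(legacy_policy_lookup(policy_id))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```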

He advises that this problem will exist whether or not containers are used. But why so?

Legacy and containerised apps can cooperate and co-exist with the right intermediary, such as messaging software, which can itself be containerised. But, says Yarde, if you cannot inject an API layer, then consider a complete overhaul of IT systems, starting by understanding the benefit the legacy system delivers and how easily it can be migrated to a new one.
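
One way to picture that messaging intermediary, assuming RabbitMQ and the pika client (the queue name and payload below are hypothetical), is a small bridge that publishes legacy events onto a queue that containerised apps then consume:

```python
# Sketch: a messaging bridge between old and new. Legacy output is published
# to a durable queue; containerised consumers read from it independently.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="legacy.claims", durable=True)

def publish_legacy_event(event: dict) -> None:
    # The legacy side (or a batch export job) calls this to hand work over.
    channel.basic_publish(
        exchange="",
        routing_key="legacy.claims",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

publish_legacy_event({"claim_id": "C-1001", "action": "review"})
connection.close()
```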

No golden passport

So we can say that containers are not some golden passport to web-scale scalability and infinite elasticity after all – right?

“You’re absolutely right, it’s not a magic wand. The barrier is your application that you are trying to scale and not the container itself. The containerised apps need to support web-scale and elasticity of scale. You can containerise anything that can run on Unix/Linux but that doesn’t mean the app is built to scale. For example, if you have an application based on a Java server, when you spin up the container and it takes you 15 mins to start up and use, no amount of containers can solve this lag problem,” said Yarde.

He thinks the key is to architect, or choose, your applications with the use of containers for scale and redundancy in mind.
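
As a small illustration of designing with that in mind, a containerised service can expose explicit liveness and readiness endpoints so an orchestrator only routes traffic to instances that have finished starting up. This is a sketch assuming Python and Flask; the warm-up step is a hypothetical stand-in for cache loading or other slow initialisation:

```python
# Sketch: liveness and readiness endpoints so copies of the app can be added
# and removed safely, without sending traffic to instances still starting up.
import threading
import time
from flask import Flask

app = Flask(__name__)
ready = False

def warm_up():
    global ready
    time.sleep(5)  # hypothetical stand-in for cache loading, connection pools etc.
    ready = True

@app.route("/healthz")
def liveness():
    return "ok", 200  # the process is up

@app.route("/readyz")
def readiness():
    return ("ready", 200) if ready else ("warming up", 503)

if __name__ == "__main__":
    threading.Thread(target=warm_up, daemon=True).start()
    app.run(host="0.0.0.0", port=8080)
```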

Containerversity?

At the end of the day, surely we also need to realise that container development, management and monitoring, augmentation and enhancement, and eventual retirement (or repurposing) is a tough job, and not every firm will have the right in-house skills to perform these tasks. Should we all have gone to container university (aka containerversity) first?

According to Yarde, we’ll need to quash the assumption that containers are easy to run for every business. They are not. However, a lot of good work has been done to help any organisation frame its container strategy.

As one example, he advises taking a look at Docker Hub, which has 166 official container images (open source and commercial software). You don’t have to reinvent the wheel when it comes to containerisation; instead, find the container image that fits your specific business purpose and shift a portion of the containerisation burden from design to curation.
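
As a rough illustration of that curation-over-design idea, assuming the Docker SDK for Python, you can search Docker Hub for official images and pull one rather than building your own:

```python
# Sketch: curate an existing official image from Docker Hub instead of
# designing an equivalent container from scratch.
import docker

client = docker.from_env()

# List official results for a search term, e.g. a database engine.
for result in client.images.search("postgres"):
    if result.get("is_official"):
        print(result["name"], "-", (result.get("description") or "")[:60])

# Pull the chosen official image for use as-is or as a base layer.
client.images.pull("postgres", tag="16")
```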

“The job of containers is to speed the process from software design to production by making it modular, programmable and repeatable. There is an element of ‘economies of scale’ as well if you’re reusing existing published containers. That allows you to take advantage of the pre-work done by infrastructure and software experts. The ideal state is to see software, infrastructure and engineering teams collapse into one, seamless unit,” concluded EIS’ Yarde.
