LzLabs CEO: app modernisation is (a form of) development

This is a guest post for the Computer Weekly Developer Network written by Mark Cresswell in his role as CEO of LzLabs.

LzLabs develops what it calls its Software Defined Mainframe (SDM) package, a tool intended to ‘liberate’ legacy applications so they run unchanged on Linux servers and cloud infrastructures.

Cresswell reminds us that ‘resting on one’s laurels’ is rarely a sound strategy for innovation. Even those wedded to the mainframe platform have accepted that its legacy applications, which have often been running business-critical functions for decades, require some modernisation to meet the needs of the business in an ever more competitive environment.

Similar to the ‘chicken or the egg’ debate, many organisations question whether to start modernising applications while they are still on the mainframe, or to move them off their legacy infrastructure first: move, then improve.

Cresswell writes as follows…

It can, of course, be tempting to look at these applications, running stably on a platform that has been ticking over consistently for years, and conclude that this is where they belong. After all, wouldn’t the process of trying to move them off just make the job harder?

On the contrary. The process of legacy modernisation should be about far more than just the application itself and the avoidance of perceived difficulty. Modernisation is a continuum and the only way to future-proof these business-critical applications is to liberate them from the constraints of their legacy environment.

Our argument, therefore, is that a ‘move first, improve second’ process is the only way to modernise mainframe applications cost-effectively, whilst mitigating risk and positioning the business’s IT systems for future innovation.

There are several reasons for this assertion:

It makes no sense whatsoever to target a software and hardware infrastructure with negligible open source activity (the mainframe itself) when the goal is modernisation. Any modernisation effort will be impaired, to the point of becoming totally impractical, without access to the wealth of open source innovation that has forever altered the trajectory of modern application development. Starting modernisation before rehosting to an infrastructure that participates in the open source movement is therefore simply a bad choice.

Mainframes that support production workloads simply do not exist within public cloud environments, and none of the major public cloud providers support the legacy implementation of those mainframes. This market situation contrasts with the highly competitive nature of x86-based public cloud systems, which has been shown to drive continual price reductions and innovation.

While public cloud deployments for modernised legacy applications may not be top of the priority list right now, it would be a brave CIO who rules them out forever. It therefore makes sense to choose a hardware architecture with a better long-term direction of travel in terms of cloud deployment.

A knot of interdependencies

Additionally, the process of modernising legacy mainframe applications in situ may start small, but because of the massive knot of interdependencies that has built up over the decades between the target application and others, the effort can grow exponentially. When changing one aspect of the target application can trigger all kinds of unintended changes and consequences, the whole exercise is rendered impractical from a risk and cost perspective.

The key, therefore, is enabling a modernised program to interact with the legacy components upon which it depends, without forcing changes to those legacy components.

Options for doing this within the legacy mainframe environment are limited and costly, whereas today’s open-systems rehosting technologies have features designed to let individual programs within an application suite be modernised without affecting their interaction with the programs yet to be modernised. Again, it is clear that the only option is to rehost to Linux before modernisation starts.

The punitive pricing models of the software suppliers that remain in the mainframe space are arguably the single greatest gripe with the platform, and have been a core motivation for migrating to open environments. These pricing constraints have a profound impact on the availability of resources for test and development. Embarking on a modernisation exercise that runs on a legacy mainframe will place extra demands on those already constrained resources.

No such constraints on testing and development exist on x86 Linux environments, making the platform far more cost-effective for modernisation.

Modernisation is (a form of) app dev

Finally, it’s important to remember that modernisation is just a form of application development.

The fact is that, beyond some graphical tooling, legacy mainframes benefit from none of the DevOps toolchain facilities or containerisation responsible for much of the agility in modern application development. Ongoing development executed within a legacy mainframe environment will move at a glacial pace compared to distributed alternatives. Consequently, any modernisation project will move significantly faster once the applications have been rehosted to a Linux environment.

The reasons to move the applications before starting the modernisation process are overwhelming. Once applications have been rehosted to an x86 Linux environment, modernisation becomes far easier.

Some might argue that the effort to move the workload can itself be a barrier. Indeed, rehosting mainframe applications to modern Linux environments has historically been a challenge, but with today’s containerisation techniques and the emergence of Software Defined Mainframes, the effort and risk of such a move are greatly reduced.

Cresswell: modernisation is just another form of development.
