This is a guest post for the Computer Weekly Developer Network written by Didier Durand in his capacity as VP of product management at LzLabs — the firm’s so-called ‘software defined mainframe’ product enables both Linux and cloud infrastructure to process thousands of mainframe transactions per second. It includes a faithful re-creation of the primary online, batch and database environments.
Durand writes as follows…
Modernisation is a continuum – machines need to be maintained, and businesses are in essence built upon machinery. Enabling continuous modernisation is the key to future-proofing a business, and those relying on legacy systems for core business processes risk being unable to make use of new platforms and technologies in the future.
History of the mainframe
The mainframe began life as a powerful computing platform from which big business could scale resources and enhance functions to meet customer demand. Over the past half-century, it has been cemented as a pillar of industry. Today, over 70 percent of the world’s financial transactions are processed by a mainframe application.
There has long been heated debate over whether mainframes remain viable platforms from which to run a business in a modern, competitive landscape. This article, however, is aimed at those who think an as-is ‘re-hosting’ approach to mainframe migration and modernisation sounds too good to be true – a long-sought goal that has always seemed out of reach. After all, the mainframe has been the subject of much pain (full application rewrites, discrepant results after migration, etc.) and anguish for those wishing to migrate and modernise their mainframe applications onto modern platforms and into modern languages.
While there are many reasons why one might wish to modernise their applications, there are just as many methods that have been used to enable that modernisation. The ‘re-hosting’ approach is a classic of system migration, and there are several ways of going about it.
The latest – and perhaps the fastest and most reliable – method is centred on the (once-theoretical) idea that a migration can take place with no changes to the application code, so that applications can be moved onto the new platform as quickly as possible and with minimal risk. Changes to code are often associated with system failures, because the modified code has not been tried and tested across as many scenarios as the original.
So how does it work?
This type of ‘re-hosting’, in essence, boils down to three stages:
- Packing up all of the application artifacts in their executable form on the source system
- Moving the package to the new platform
- Unpacking and running it on the new platform
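As a purely illustrative sketch – the paths, file names and helper functions here are hypothetical, not LzLabs’ actual tooling – the three stages resemble an ordinary package-and-ship workflow, with the crucial point being that the artifacts travel in their executable form, untouched:

```python
import shutil
import tarfile
from pathlib import Path

def pack(source_dir: Path, package: Path) -> Path:
    """Stage 1: pack all application artifacts in their executable form."""
    with tarfile.open(package, "w:gz") as tar:
        tar.add(source_dir, arcname="app")
    return package

def move(package: Path, target_host_dir: Path) -> Path:
    """Stage 2: move the package to the new platform (modelled here as a copy)."""
    target_host_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(package, target_host_dir))

def unpack(package: Path, run_dir: Path) -> Path:
    """Stage 3: unpack; a managed runtime would then execute the binaries as-is."""
    with tarfile.open(package) as tar:
        tar.extractall(run_dir)
    return run_dir / "app"
```

The artifacts come out of the archive byte-for-byte identical to what went in – nothing is recompiled or rewritten along the way.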
In order to achieve this, the system services provided by the mainframe’s subsystems must be re-written natively for Linux. This includes the systems for transactional processing, batch scheduling, indexed datasets, and relational and hierarchical data.
To paraphrase Carl Sagan: if you wish to faithfully recreate a mainframe environment on modern, open platforms, you must first invent the universe.
A translator may also be needed to transpose the mainframe binaries, which encode the instruction set of the mainframe architecture, into x86 instructions that Intel Xeon processors running Linux can execute. This process is called ‘Dynamic Instruction Set Architecture Translation’. The translated binaries are then able to run in a managed container.
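The idea can be illustrated with a toy translator. The opcodes below are invented for illustration – they are not real mainframe or x86 encodings, and this is not how LzLabs’ translator is built – but the pattern is representative: each source instruction is mapped to a native handler the first time it is encountered, cached, and then executed directly from the cache:

```python
from typing import Callable, Dict, List, Tuple

# A toy "instruction" is (opcode, register index, immediate operand).
Instr = Tuple[str, int, int]

def make_translator() -> Callable[[List[Instr]], List[int]]:
    # Native handlers standing in for translated x86 code fragments.
    handlers: Dict[str, Callable[[List[int], int, int], None]] = {
        "LOAD": lambda regs, r, v: regs.__setitem__(r, v),
        "ADD":  lambda regs, r, v: regs.__setitem__(r, regs[r] + v),
        "MUL":  lambda regs, r, v: regs.__setitem__(r, regs[r] * v),
    }
    cache: Dict[str, Callable[[List[int], int, int], None]] = {}

    def run(program: List[Instr]) -> List[int]:
        regs = [0] * 4  # a tiny register file
        for op, r, v in program:
            if op not in cache:       # "translate" on first encounter...
                cache[op] = handlers[op]
            cache[op](regs, r, v)     # ...then execute natively from the cache
        return regs
    return run
```

For example, `make_translator()([("LOAD", 0, 6), ("MUL", 0, 7)])` leaves 42 in register 0 – the source program’s behaviour, reproduced on the host without the source binary ever being modified.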
There are numerous tangible benefits to this approach, including:
- No problems for internal end-users or customers caused by re-writing the application code (discrepancies in results, etc.). Such problems, commonly associated with traditional migrations, often breed distrust of the new system.
- No need for retraining and no loss of productivity by end-users. They can continue with their daily tasks just as before – nothing has changed in terms of daily interactions with the application.
- It’s an efficient way to validate the migration: iso-functionality makes the testing simple to define and validate. Results are either strictly identical or they are not. The approach doesn’t leave room for subjective interpretation; it’s entirely objective.
- It enables easy automation of testing at large scale.
- It solves the first half of the modernisation problem in a short time-frame. The applications have been migrated onto a modern, long-term sustainable platform, based on very standard components, and can be operated by Linux sysadmins, who are abundant in the job market and thus far easier to recruit than mainframe sysadmins.
- The cost savings presented by modern IT platforms are achieved instantly, before any longer-term application modernisation begins – a quick and highly positive ROI.
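The iso-functional validation described above can be sketched very simply. In this hypothetical harness (the directory layout and file names are assumptions, not a real product’s test suite), each output file from the source run is compared byte-for-byte against its counterpart from the re-hosted run – identical or not, with nothing in between:

```python
import hashlib
from pathlib import Path
from typing import Dict

def digest(path: Path) -> str:
    """SHA-256 of a file's bytes; any difference at all changes the digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_iso_functionality(source_out: Path, rehosted_out: Path) -> Dict[str, bool]:
    """Compare every output file from the source run against the re-hosted run.

    The verdict per file is binary: byte-identical (True) or not (False) --
    no subjective interpretation is required.
    """
    results: Dict[str, bool] = {}
    for ref in sorted(source_out.glob("*")):
        candidate = rehosted_out / ref.name
        results[ref.name] = candidate.exists() and digest(ref) == digest(candidate)
    return results
```

Because the check is mechanical, it is trivial to run over thousands of batch outputs in a CI pipeline, which is what makes large-scale test automation practical.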
After the ‘re-hosting’ takes place, applications can begin their modernisation process, using state-of-the-art technology to rejuvenate the legacy application into a modern one – with a rich web-based interface, accessible through web services, restructured into containerised microservices, and so on. It then becomes ideal for interoperability.
A mainframe migration can be a bit like re-inventing the wheel. Traditionally, mainframe applications would have to be re-invented before any migration could take place. In an as-is ‘re-hosting’ migration, the applications can be migrated straight away, with no changes to the application code. In essence, the mainframe has been reinvented, but it is operating on modern and open infrastructure – the benefits of which can be achieved immediately.
LzLabs’ software allows the executable form of legacy customer mainframe programs to operate without changes in what the firm calls ‘contemporary’ computing environments. The software enables mainframe data to be written and read in its native formats. This new environment works without forcing recompilations of COBOL and PL/I application programs or making complex changes to the enterprise business environment.