The 'software defined mainframe' - smoke and mirrors, or reality?

It is a truth universally acknowledged that the mainframe’s future ended in 1981 when the PC was invented.  The trouble was that no-one told IBM – nor those who continued to buy mainframes and mainframe applications, and who still employ coders to put new workloads on the platform.

This ‘dead’ platform has continued to grow, and IBM has invested heavily in modernising it, adding ‘Specialty Engines’ (zIIPs and zAAPs) as well as porting Linux onto the platform.

However, there are many reasons why users would want to move workloads from the mainframe onto an alternative platform.

For some, it is purely a matter of freeing up MIPS so that the mainframe can better serve the workloads they would prefer to keep there.  For others, it is about moving workloads from what is seen as a high-cost platform to a cheaper, more commoditised one.  For still others, it is the appeal of the easier horizontal scaling models of a distributed platform.

Whatever the reason, there have been problems in moving the workloads over.  Bought applications tend to be very platform specific.  The mainframe is the bastion of hand-built code, and quite a lot of this has been running on the platform for 10 years or more.  In many cases, the original code has been lost, or the code has been modified and no change logs exist.  In other cases, the code will have been recompiled to make the most out of new compiler engines – but only the old compiler logs have been kept. 

Porting code from the mainframe to an alternative platform is fraught with danger.  As an example, consider a highly regulated, mainframe-heavy vertical such as financial services.  Rewriting an application from Cobol to, say, C# is bad enough – but the amount of regression testing needed to ensure that the new application does exactly what the old one did makes the task uneconomical.

What if the application could be taken and converted in a bit-for-bit approach, so that the existing business logic and the technical manner in which it runs could be captured and moved onto a new platform?

This is where a company just coming out of stealth mode is aiming to help.  LzLabs has developed a platform that can take an existing mainframe application and create a ‘container’ that can then be run in a distributed environment.  It still does exactly what the mainframe application did: it does not change the way the data is stored or accessed (EBCDIC data remains EBCDIC, for example).
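To illustrate why preserving the data format matters: the same text has entirely different byte values in EBCDIC and ASCII, so any conversion step risks subtly changing behaviour that downstream logic depends on.  A minimal sketch in Python, using the standard `cp037` codec as a stand-in for a common mainframe EBCDIC code page (how LzLabs actually handles this internally is not public):

```python
# Compare the byte-level representation of the same text in
# ASCII and in EBCDIC (code page 037, a common US/Canada variant).
text = "HELLO"

ascii_bytes = text.encode("ascii")   # bytes 48 45 4c 4c 4f
ebcdic_bytes = text.encode("cp037")  # bytes c8 c5 d3 d3 d6

print(ascii_bytes.hex())   # 48454c4c4f
print(ebcdic_bytes.hex())  # c8c5d3d3d6

# Collating order also differs: in EBCDIC, lowercase letters sort
# before uppercase, and digits sort after letters - the reverse of
# ASCII - so re-encoding data can silently change sort results and
# comparison logic, not just the storage format.
assert sorted("a1A") != sorted("a1A", key=lambda c: c.encode("cp037"))
```

This is why a bit-for-bit approach is attractive: leaving the encoding untouched removes a whole class of conversion and collation defects from the migration.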

It has a small proof-of-concept box that it can make available to those running mainframe apps, so they can see how it works and try out some of their own applications.  This box, based on an Intel NUC with an i7 CPU, is smaller than a hardback book, yet can run workloads as if it were a reasonably sized mainframe.  It is not aimed at being a mainframe replacement itself, obviously, but it provides a great experimentation and demonstration platform.

On top of just being able to move the workload from the mainframe to a distributed platform, those who choose to engage with LzLabs will then gain a higher degree of future-proofing.  Not because the mainframe is dying (it isn’t), but because they will be able to ride the wave of Moore’s Law and the growth in distributed computing power, whereas the improvements in mainframe power have tended to be a bit more linear.

The overall approach, which LzLabs has chosen to call the ‘software defined mainframe’ (SDM), makes a great deal of sense.  For those looking to optimise their usage of the mainframe through to those looking to move off the mainframe completely, a chat with LzLabs could be well worth it.

Of course, life will not be easy for LzLabs.  It must rapidly prove that its product not only works, but works every time; that it is unbreakable; and that its performance is at least as good as that of the platform users will be moving away from.  It will come up against the fierce loyalty of mainframe idealists.  It will come up, at some point, against the might of IBM itself.  It needs to position its product and services in terms that both the business and IT can understand and appreciate.  And it needs to find the right channels to work with, to ensure that it can hit what is a relatively contained and known market at the right time with the right messages.

The devil is in the detail, but this SDM approach looks good.  It will not be a mainframe killer – but it will allow mainframe users to be more flexible in how they utilise the platform.