According to analyst firm Gartner, by 2025, 90% of current applications will still be in use, and most will continue to receive insufficient modernisation investment. Its report, Application modernisation should be business-centric, continuous and multiplatform, warns that by 2025, technical debt will continue to compound on top of existing technical debt, consuming more than 40% of the current IT budget. A number of factors can push companies to modernise, but one that sharply focuses attention is the need to replace systems following a demerger or the sale of a business.
Legal & General (L&G) faced this issue in May 2019 when it announced the sale of its general insurance business to Allianz. Peter Jackson joined L&G in 2018 as its group director of data science and was tasked with supporting the data requirements of the business.
In-house mainframe expertise aids data transfer
The sale of the division meant L&G needed to move away from a shared technology infrastructure. This provided impetus for the insurer to rethink its data warehouse strategy. L&G wanted to reduce the time and money it was spending on multiple SAS extract, transform and load (ETL) software licences.
Although his overall strategy involved moving to an integrated approach to data management at L&G, Jackson’s most immediate challenge was how to support the data needs of L&G following the sale of its general insurance business. The data in question was a customer database, held in DB/2 on a mainframe system. L&G wanted a cut of the data that it could use for marketing to customers. “We needed to extract data from the mainframe quickly and needed to rapidly deploy an extract, transform and load tool,” says Jackson.
The tool would need to connect to DB/2 on the mainframe and extract data to a data warehouse running on top of SQL Server. A new criterion for Jackson was that he needed an ETL tool that could be picked up relatively easily. He chose WhereScape as the ETL tool for the project. For Jackson, one of the benefits of WhereScape is that it automatically produces a data warehouse as part of the ETL process.
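The basic flow of such a pipeline can be sketched as below. This is a hypothetical illustration, not WhereScape’s implementation: SQLite stands in for both the DB/2 source and the SQL Server target, and the table and column names are invented. In practice, the connections would use the appropriate DB/2 and SQL Server drivers.

```python
import sqlite3

# Minimal extract-transform-load sketch. SQLite is a stand-in for the
# DB/2 source and SQL Server target; schema names are illustrative.

def extract(source):
    """Pull raw rows from the source system."""
    return source.execute("SELECT customer_id, name, email FROM customers").fetchall()

def transform(rows):
    """Normalise names and lowercase email addresses before loading."""
    return [(cid, name.strip().title(), email.strip().lower())
            for cid, name, email in rows]

def load(target, rows):
    """Write the cleansed rows into the warehouse table."""
    target.execute("CREATE TABLE IF NOT EXISTS dw_customers "
                   "(customer_id INTEGER, name TEXT, email TEXT)")
    target.executemany("INSERT INTO dw_customers VALUES (?, ?, ?)", rows)
    target.commit()

# Simulate a source system with some untidy data.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT, email TEXT)")
source.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                   [(1, "  alice smith ", "Alice@Example.com "),
                    (2, "bob jones", "bob@example.com")])

target = sqlite3.connect(":memory:")
load(target, transform(extract(source)))
```

The value of a tool such as WhereScape is that it generates and manages this kind of plumbing, including the warehouse schema, rather than leaving each step to be hand-coded.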
Like many businesses, L&G had outsourced its IT, and the new project required an understanding of the DB/2 mainframe system and of the metadata that describes what the mainframe data is and how it is used. “There were millions of rows of data,” says Jackson. “The team had to understand SQL Server and needed quite a lot of DB/2 skills.” He decided the best approach would be to upskill the internal team to rebuild in-house mainframe and SQL Server expertise.
Given that the migration involved customer data, under the European Union’s (EU) General Data Protection Regulation (GDPR), only customer data with explicit consent could be migrated from the mainframe to the SQL Server data warehouse. Jackson says the migration project, which effectively involved taking a cut of the mainframe DB/2 data, cleansing and deduplicating that data and migrating it to SQL Server, took six months. Jackson says the project benefited from the fact that the team could test the results produced by WhereScape with the ETL the company had used previously. “Sometimes you are extracting new data for production applications with nothing to compare it to, but what we needed was very low risk because we could compare the old with the new ETL and audit the output,” he says.
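The two safeguards Jackson describes, migrating only consented records and auditing the new ETL’s output against the old one, can be sketched as follows. The record layout and field names here are invented for illustration and are not L&G’s actual schema.

```python
# Hypothetical sketch of GDPR consent filtering, deduplication and an
# old-vs-new ETL audit. Field names are illustrative, not a real schema.

def migratable(records):
    """Keep only records with explicit consent, deduplicated by email."""
    seen, out = set(), []
    for rec in records:
        key = rec["email"].strip().lower()
        if rec["consent"] and key not in seen:
            seen.add(key)
            out.append(rec)
    return out

def audit(old_output, new_output):
    """Compare the legacy ETL's output with the new tool's output."""
    old_keys = {r["email"].strip().lower() for r in old_output}
    new_keys = {r["email"].strip().lower() for r in new_output}
    return {"only_in_old": old_keys - new_keys,
            "only_in_new": new_keys - old_keys}

records = [
    {"email": "a@example.com", "consent": True},
    {"email": "A@example.com ", "consent": True},   # duplicate of the first
    {"email": "b@example.com", "consent": False},   # no consent: must not migrate
]
new = migratable(records)
# Auditing against the (simulated) legacy ETL output: empty sets on both
# sides mean the two pipelines agree.
report = audit([{"email": "a@example.com"}], new)
```

Because the old pipeline’s output is available as a reference, a discrepancy on either side of the comparison flags a defect before the new warehouse goes live, which is the low-risk property Jackson highlights.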
Modernising Active Directory
In another example, RWE needed to replace its existing IT following its demerger from Eon Energy. The separation of the business meant RWE effectively had to build an entirely new IT infrastructure from scratch to replace the old systems that were being run through a managed service by Eon Energy.
It is now building a future-ready IT backbone, including a new Microsoft Azure-based infrastructure, a desktop refresh from Windows 7 to Windows 10 and the deployment of Microsoft Office 365. The firm used Avanade for the multimillion-pound IT modernisation project, which involves setting up new IT infrastructure after separating from Eon, migrating 20,000 users to Windows 10, and supporting the acquisition of Eon’s renewable energy business.
Explaining the strategy, Edward Bouwmans, head of infrastructure at RWE, says: “Succeeding in today’s energy industry requires a high degree of speed and responsiveness. It is also important to build a flexible operating environment that can adapt quickly when organisational change is required. We need a new IT operating model, but we also need a modern, cloud-based architecture to support our future growth in a competitive energy sector. The new cloud-based workplace environment will provide the perfect foundation for building our agile enterprise and making RWE future-ready.”
Bouwmans says the project began two years ago and involved assessing whether it was worth migrating the legacy Microsoft Active Directory, which the firm had been running for years, to the new, greenfield IT environment. “It would have been a nightmare, so instead we decided to go with what was fit for purpose,” he says. “I think greenfield was the right approach because Active Directory has moved so quickly. Normally you never get a chance to change because there is no business case to upgrade Active Directory. But if you have an opportunity to upgrade, do it.”
Although it wants to move applications to the cloud and has a cloud-first strategy, RWE still requires new on-premises servers to run some of its back-end applications. “We have set up a cloud environment, and are looking to utilise the cloud as much as possible,” says Bouwmans. “But we have to split the business by a certain time and there is a deadline. Speed is of the essence.”
He says RWE has until the end of 2020 to set up its IT and related networks and security, to bring on board the renewables merger and move to Windows 10, because of Windows 7 reaching end of life. Bouwmans agrees that the tasks in hand are complex. The company is taking a big bang approach. “We shouldn’t do it like this, but we hope to prove it’s possible,” he says. “We have the tech infrastructure in place and in January 2020 we migrated our first thousand users.”
Returning to L&G, having gone through the process of understanding the mainframe data and migrating it to SQL Server, Jackson says the company is now in a good position to migrate the data onto another platform as it looks to improve performance.
Starting with the DR facility
With any application modernisation project, there are huge risks that can cause business disruption, particularly when ageing IT equipment runs mission-critical software. This is the situation facing Dominic Maidment, technology architect for European enterprise architecture at Total Gas & Power. The company, which supplies energy to businesses, has long used Oracle software, middleware such as WebLogic, the Solaris operating system, and a variety of x86 and Oracle Sparc servers.
“I joined six years ago and was amazed at the array of tech at Total Gas & Power,” says Maidment. “It had partially converged on Cisco UCS FlexPod comprising NetApp, Cisco blade servers, Cisco networking and VMware.” This setup supported the business’s gas and power line-of-business applications and offered virtual desktops for the firm’s offshore developers. However, says Maidment: “We noticed that managing this infrastructure took three different engineers. There had to be a better way.”
The opportunity to rethink the technical architecture arose when the company learnt that its disaster recovery (DR) site was going to be redeveloped as luxury flats. Building on its experience of converged infrastructure, Maidment says there was an opportunity to look at how to provide new infrastructure that could deliver performance improvements without having a major impact on the company’s production systems. “It is a wonderful opportunity and is less risky,” he says.
It chose to run DR using Nutanix hyper-converged infrastructure. Not only did this enable the DR site to run more quickly, but it also allowed the company to remove 10 different platforms and manage the IT infrastructure from a single console. Migrating some of the legacy systems onto modern IT infrastructure running on Nutanix was not a simple lift-and-shift project. Given that Nutanix is based on a different hardware architecture than some of the company’s legacy IT, Maidment says: “We had to emulate some things like the Digital Tru64 Unix database, which runs on DEC Alpha servers, using the Stromasys emulator.” By proving that the legacy application could be emulated on Nutanix, Maidment says: “We could convince the business that it was the right thing to do and would mitigate risks and provide a future [for the legacy platform].”
For the Sparc servers, Maidment says the original plan was to move all the Solaris Unix workloads onto Solaris running on x86 servers, and then migrate these to Nutanix. “But we did some research and figured out we could convert the Solaris partitioned virtual operating system environments (zones) to virtual machines and then effectively move them straight over to a Nutanix cluster,” he says. This enables the company’s Solaris-based WebLogic application servers to be migrated as virtual machines that can then run on the Nutanix platform.
For front-end applications, Maidment says the firm plans to move its software development environment over first, before a staged migration of the WebLogic production environment onto Nutanix. Back-end systems are more complex, he points out. “We have situations where we have bought the intellectual property rights for some of our applications and have started developing them bespokely in-house to match business-specific requirements,” says Maidment. “We can’t go to anyone for support, so we have to treat these applications extremely carefully as they will require the most reverse engineering effort.”
In some situations, he says, it may not be worth moving the legacy code to a new platform. But it may be possible to find other areas of the business that offer some of the functionality of the legacy application, he says. “You look at these things and ask whether it is really worth moving. In some cases, we will emulate them or maybe build something new. In some cases, we have used MuleSoft API integration to abstract the application.”
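The abstraction approach Maidment mentions, putting an API layer in front of a legacy application rather than porting it, can be sketched as a simple facade. This is a hypothetical illustration of the pattern, not MuleSoft’s product; the class and method names are invented.

```python
# Hypothetical facade sketch: a clean, modern interface wrapped around a
# legacy system, so callers never touch the legacy interface directly.

class LegacyBillingSystem:
    """Stand-in for an old application with an awkward interface."""
    def RUNBATCH(self, acct_no, op_code):
        if op_code == "BAL":
            return "00001250"  # fixed-width string, balance in pence
        raise ValueError("unknown op code")

class BillingAPI:
    """Facade the rest of the business integrates with."""
    def __init__(self, legacy):
        self._legacy = legacy

    def get_balance(self, account_id: str) -> float:
        raw = self._legacy.RUNBATCH(account_id, "BAL")
        return int(raw) / 100.0  # convert fixed-width pence to pounds

api = BillingAPI(LegacyBillingSystem())
balance = api.get_balance("ACC-42")  # → 12.5
```

Because consumers depend only on the facade, the legacy system behind it can later be emulated, rebuilt or replaced without changing the callers, which is the point of abstracting it in the first place.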
The company’s long-term goal is to standardise on Red Hat Enterprise Linux, but migrating from Solaris will require recompiling application source code. One of the issues that Total Gas & Power will need to resolve is the Red Hat enterprise support contract, says Maidment. Although Red Hat Enterprise Linux can run on the Nutanix hypervisor, AHV, that configuration is not currently covered by a Red Hat enterprise support contract. But support is available from Nutanix.
As the examples from L&G, RWE and Total Gas & Power show, application modernisation is not something that just happens, nor is it solely driven by digitisation and cloud-first strategies. “Most application modernisation initiatives happen because something kick-starts the process,” says Bola Rotibi, a principal analyst at CCS Insight. This offers an opportunity to do something different. RWE, for instance, needed to build an entirely new IT infrastructure as it separated from Eon Energy and chose a cloud-first approach based on Azure Active Directory.
The organisations Computer Weekly spoke to mitigated the risk of modernising their legacy platforms. In L&G’s case, it was able to compare the output of ETL on its mainframe-based DB/2 system with the output from WhereScape, while Total Gas & Power modernised its disaster recovery site, and used this to show the business how a legacy Tru64 database system could be emulated on hyper-converged infrastructure. “There are plenty of approaches organisations can take to modernise applications,” says Rotibi. “IT decision-makers need to do their homework, look at the risks and business value, then decide which approach works best.”