
How to apply DevOps practices to legacy IT

The one constant in legacy systems is that they can’t just be switched off. We find out how Ticketmaster has adapted its legacy IT to DevOps

When Ticketmaster was established 41 years ago, its core software was written on a Vax minicomputer. Over time, its IT estate became more complex.

The ticketing company acquired other businesses, taking on their IT, and moved with the times onto the web.

The Vax still runs bits of Ticketmaster, albeit in the form of an emulated system rather than physical hardware, but it now has a raft of supporting software connected into it.

“Over the years, we have been extracting these technologies by adding APIs [application programming interfaces] to modernise the interface to our ticketing engines and platforms,” says Justin Dean, senior vice-president for technical operations at Ticketmaster.

The company has developed a complex ecosystem of tools to support the Vax systems, and has needed to integrate the ticketing systems it has taken on through acquisitions.

The highly heterogeneous environment ranges from code developed in mod_perl, which runs its websites, through to new systems built using Java virtual machines and Tomcat (the Java application server), running in Docker.

Legacy IT can hold back organisations trying to move to a cloud-first world, especially as many core production systems were developed before the era of cloud computing.

Take Ticketmaster’s legacy Vax. The Vax system, which now runs on a software emulator, is a small part of a far bigger legacy issue, says Dean. The big legacy bottleneck is the supporting software, which makes the Vax system available to other applications, such as Java applications that abstract bits of functionality from the Vax through APIs.

Although there are no plans to make the Vax emulator work in a cloud environment, the supporting software could potentially be made cloud-ready, says Dean.
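To illustrate the pattern Dean describes – a thin, modern API in front of the legacy ticketing engine – the sketch below shows roughly what such a facade could look like in Java. The class names, endpoint and payload are hypothetical and the legacy call is simulated; it is an outline of the idea, not Ticketmaster’s actual code.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of an API facade in front of a legacy ticketing engine.
public class EventAvailabilityApi {

    // Hypothetical stand-in for the supporting software that talks to the
    // legacy platform; in reality this would call into the emulated back end.
    static class LegacyTicketingClient {
        String availabilityFor(String eventId) {
            return "{\"eventId\":\"" + eventId + "\",\"seatsAvailable\":42}";
        }
    }

    public static void main(String[] args) throws IOException {
        LegacyTicketingClient legacy = new LegacyTicketingClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Modern clients call a plain HTTP/JSON endpoint and never touch
        // the legacy system directly.
        server.createContext("/events/availability", (HttpExchange exchange) -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. id=12345
            String eventId = (query != null && query.startsWith("id="))
                    ? query.substring(3) : "unknown";
            byte[] body = legacy.availabilityFor(eventId)
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}

A facade along these lines lets newer services speak plain HTTP and JSON, while everything specific to the legacy engine sits behind a single, well-defined seam – which is also the layer that could later be made cloud-ready.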


Most of Ticketmaster’s software products are not completely new, he says, which means the company needs to integrate them with existing systems. “We wanted to get them out quickly,” says Dean. “To do that, we need to touch a lot of systems. This has driven us to DevOps.”

Moving to DevOps is a transformation that Ticketmaster has been working on since 2013. The business driver is a familiar one – to become more agile. Dean explains: “For us, it really started with DevOps. Part of our transformation was to focus on delivering business value faster and delivering more of it, and the driver was speed to market of product.”

From an IT perspective, it was necessary to rethink IT’s value, he says. “We said we would work on the things that would move the needle for a product, and not place a lot of value on output. There is less emphasis on the amount of work being done, and more on the outcome.” This has the effect of encouraging the teams to deliver software more quickly, he adds.

Greater agility is a worthy goal, but getting there is complicated by the need for new products to interface with legacy IT.

Friction slows delivery

There was a business need to move faster and the teams wanted to move faster, but there was a lot of friction preventing this, says Dean. “Every time someone needed to go outside their team to request servers, they needed to go to an operations team and so there was a huge delay,” he says.

For example, if the development team needed a new version of Apache web server, they had to put in a helpdesk request. “Given that our infrastructure and tools were fairly large, giving people access to them was considered a little too risky,” says Dean.

This prevented the teams from having autonomy, in that they still needed to rely on other teams to deploy their software. “Even if they did have access, it was too complex for any one team to learn all the systems,” he says.

Beauty in isolation

One of the uses of containers such as Docker is to isolate code. Ticketmaster has taken this further by using containers to isolate units of work.

To get around the bottleneck that prevents developers from having full control of their software, the code is containerised and then pushed into a deployment pipeline, where any additional steps needed for deployment can be carried out.

“We try to make sure there is zero friction,” says Dean. “The team does not have to get help externally, so they can move as fast or as slow as they want. On the flipside, the operational teams do not require massive support organisations, which means they can spend more time building tools that support a self-service model for software deployment and develop better platform products, instead of doing operations for teams.”
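As a rough illustration of that workflow, a Java and Tomcat service of the kind Ticketmaster runs in Docker could be containerised with a Dockerfile along these lines. The base image, artefact name and port are assumptions made for the sketch, not details of Ticketmaster’s actual pipeline.

# Minimal sketch: package a web application archive into the official Tomcat image
FROM tomcat:9-jdk11

# Clear the bundled sample applications and deploy the service as the root context
RUN rm -rf /usr/local/tomcat/webapps/*
COPY target/ticketing-service.war /usr/local/tomcat/webapps/ROOT.war

# Tomcat serves HTTP on port 8080 by default
EXPOSE 8080
CMD ["catalina.sh", "run"]

Once an image like this is built, the deployment pipeline can push it to a registry and roll it out, so the team never has to raise a helpdesk request for a new server or a new version of the web server.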

Adapting legacy to DevOps

Legacy systems such as Ticketmaster’s Vax are often managed by a centralised team of expert administrators. But Ticketmaster wanted to avoid the friction DevOps teams face when handing over the responsibility of deploying their code to an external team.

“The same DevOps principles still apply to those people who manage legacy software stacks, so the team that writes software also has operational duties,” says Dean. “Part of our DevOps transformation has been to support teams that may not have had access to the technical environments needed to deploy their software and let them operate in a DevOps fashion.”


In some cases, this means adding operational staff to the development team directly – for example, embedding the application and systems engineers who previously supported the legacy systems from the operations side.

“We really changed their mission from an operations role to a site reliability engineer role or a DevOps engineer role, where their mission is to help the team take control and ownership of their own software,” says Dean.

The idea is to get the software team in a position where it can handle the deployment and operations of the software being developed totally autonomously, even if the software is dependent on access to Ticketmaster’s legacy systems.

Once the team is self-sufficient, the operations staff can be redeployed to another DevOps team.

Dean accepts that this approach does not scale particularly well, especially if there is a limited number of operational engineers with the right set of skills. It is used only where the business benefits of training the development team in operations can be justified.

Among the risks is that the embedded operations staff become siloed, and end up only providing operations for the development team they work with. Dean’s advice to IT leaders looking at taking a similar approach to merging DevOps with legacy IT is to change the role of the embedded IT operations staff.

“Their job is to understand friction points and work on building self-service tools, opening up system access once the team has been trained, so that everyone can operate the software that is being developed,” he says.
