The global ticket distribution system operator also announced it is the first travel industry company to shut down all of its mainframes, the culmination of a technical marathon that began more than 10 years ago.
The company said it now operates 100% on open systems, which has enabled it to develop new products and features more rapidly without waiting for bespoke mainframe coding to take place.
From a cloud perspective, the Amadeus Master Pricer application represents the first step in Amadeus’s strategy to roll out a cloud-based architecture, globally distributed across both the private and public cloud.
Amadeus said it is progressively migrating its shopping applications to run on the public cloud across multiple regions. It said this would allow it to scale system capacity faster and on demand, enabling it to support peaks in travel demand.
Dietmar Fauser, senior vice-president of technology platforms and engineering at Amadeus, said: “We had the idea of moving off the mainframes in 2006 and started on the cloud in 2014.”
He said that among the key drivers was the decision by Red Hat and Google to team up to push Kubernetes in the public cloud. Fauser said this was important because Kubernetes would give Amadeus a much higher level of automation in its datacentres and make the move to the cloud possible.
“Mainframes have been the workhorse of the travel industry for decades – clearly though, the future of the industry is now in the cloud,” he said.
“I am overjoyed to be part of this shift. This transition reflects Amadeus’s relentless commitment to invest in the technology that powers better journeys. This milestone is the result of a joint effort of R&D [research and development] and operations teams delivered in a collaborative, DevOps approach.”
According to Fauser, the use of Google Cloud enables Amadeus to start using technologies such as artificial intelligence (AI), the internet of things (IoT) and machine learning, which could be used to improve the experience of people booking travel.
Fauser said Amadeus took an opportunistic approach in its move to the cloud, selecting applications that were running at high capacity in the company’s datacentres.
“The applications we chose were the ones not affected by regulated data. They were relatively modern and were being constantly rewritten,” he said.
He added that this made them ideal candidates for redeveloping as cloud-native applications, which could then be hosted in the public cloud.
Amadeus chose to use Google Cloud Platform’s (GCP’s) infrastructure-as-a-service (IaaS) offering, rather than the additional services built into the Google platform. Fauser said this was important because Amadeus did not want to be tied to a particular cloud platform. So, rather than use the database services built into the Google platform, it used Couchbase as the database for its cloud-native applications.
“We can deploy Couchbase across different infrastructures and enjoy real-time data synchronisation,” Fauser added.
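This kind of portability typically rests on keeping infrastructure details out of the application itself. The sketch below is illustrative, not Amadeus's actual configuration: the same application code resolves its Couchbase cluster from environment variables (the names `COUCHBASE_HOSTS` and `COUCHBASE_TLS` are hypothetical), so an identical build can point at an on-premise cluster or one running in the public cloud.

```python
import os

def couchbase_connection_string(env=None):
    """Build a Couchbase connection string from deployment-supplied settings.

    The variable names below are hypothetical examples; the couchbase://
    and couchbases:// (TLS) schemes are standard Couchbase connection
    string prefixes.
    """
    env = os.environ if env is None else env
    hosts = env.get("COUCHBASE_HOSTS", "localhost")
    scheme = "couchbases" if env.get("COUCHBASE_TLS") == "1" else "couchbase"
    return f"{scheme}://{hosts}"

# The same code yields different endpoints per deployment environment:
onprem = couchbase_connection_string({"COUCHBASE_HOSTS": "cb.dc1.internal"})
cloud = couchbase_connection_string(
    {"COUCHBASE_HOSTS": "cb.gcp.example.net", "COUCHBASE_TLS": "1"})
```

Because nothing in the application references a specific infrastructure, the decision of where a Couchbase cluster lives becomes a deployment-time concern rather than a code change.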
Going forward, as and when the public cloud providers start to converge their application programming interfaces (APIs), Fauser said it may be possible to use APIs from the different public cloud providers without having to rewrite applications.
He added that AWS is making some progress with this: “S3 is a good example, as it is now the standard API for everyone.”
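The practical consequence of an API becoming a de facto standard is that switching providers can reduce to switching endpoints. The sketch below is a hypothetical illustration of that idea: the on-premise endpoint is invented, and Google Cloud Storage's S3-compatible XML API endpoint is shown only as an example of a non-AWS provider speaking the S3 protocol.

```python
def s3_client_config(provider):
    """Return client settings for an S3-compatible object store.

    Only the endpoint differs between providers; the API surface the
    application codes against stays the same. Endpoint values here are
    illustrative, not Amadeus configuration.
    """
    endpoints = {
        "aws": None,  # boto3-style clients default to AWS's own endpoints
        "gcs": "https://storage.googleapis.com",  # GCS's S3-compatible XML API
        "on_prem": "https://s3.dc1.example.internal",  # hypothetical internal store
    }
    if provider not in endpoints:
        raise ValueError(f"unknown provider: {provider}")
    return {"service_name": "s3", "endpoint_url": endpoints[provider]}
```

An application built this way would only need a configuration change, not a rewrite, to move its object storage between providers that honour the S3 API.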
The approach Amadeus has taken means it can choose where to deploy its cloud-native applications. “We have a strict design principle that an application should not make any assumptions about the underlying infrastructure,” said Fauser.
“We can deploy exactly the same application in the Google cloud and on-premise in our own datacentre. It runs on Kubernetes. On-premise we use OpenStack, and in the public cloud we use Google,” he added.
However, cloud-native applications must be aware of network latency. To limit its impact, Fauser said: “We have design guidance in place which says that, by default, all local communication stays within a particular datacentre region. Data does not flow between regions, whether it is on-premise or in the Google cloud.”
The only applications that then need to be monitored for latency are those that do need to cross regions, he added.