Although growing numbers of organisations are working with artificial intelligence (AI) software in some shape or form, very few are generating significant financial benefits when rolling it out in a serious way, according to new research.
A study conducted by the MIT Sloan Management Review and management consulting firm the Boston Consulting Group revealed that as many as 57% of the 3,000 managers, executives and academics questioned were currently either piloting or deploying the technology. Some 59% had devised an AI strategy and 70% believed they understood how the software could generate business value.
Despite this situation, the report, Expanding AI’s impact with organizational learning, indicated that just one in 10 organisations were deriving significant financial value from the technology.
When exploring the reasons why, researchers found that simply getting the basics right – that is, having an appropriate strategy with the right supporting data, technology and skills in place – was not enough. Only one in five organisations gained major financial benefits that way.
Getting the basics right while also building AI systems that the business actually wanted bumped success figures up to 39%, but to truly generate financial value, the secret appeared to be threefold:
- Ensuring not only that machines could learn autonomously, but also that humans could continuously teach machines and that machines could continuously teach humans.
- Developing multiple ways for humans and machines to interact based on context.
- Introducing extensive process change in response to what has been learned across the organisation as a result of using AI.
David Semach, a partner and head of AI and automation at Infosys Consulting for Europe, the Middle East and Africa (EMEA), agrees with the researchers that satisfaction with the technology in a financial sense is often quite low, partly because organisations “are mostly still experimenting” with it. This means it tends to be deployed in pockets rather than widely across the business.
“The investment required in AI is significant, but if it’s just done in silos, you don’t gain economies of scale, you can’t take advantage of synergies and you don’t realise the cost benefits, which means it becomes a cost-prohibitive business model in many instances,” says Semach.
Another key issue here is the fact that most companies “mistakenly” concentrate on using the software to boost the efficiency of internal processes and operating procedures, rather than for generating new revenue streams.
“Where companies struggle is if they focus on process efficiencies and the bottom line because of the level of investment required,” says Semach. “But those that focus on leveraging AI to create new business and top-line growth are starting to see longer-term benefits.”
A problem, however, is that people both in IT and the business are “restricting their thinking due to their resistance to change” as well as “concerns over their own jobs and being replaced”, he adds.
“So, it’s not just about inadequate human-to-machine interaction – it’s about companies not adopting the right strategic mindset and approach. The issue is that people are not actually truly understanding what AI can enable in order to support the business strategy, business change and potential disruption for good.”
Another consideration, says Angela Eager, research director at TechMarketView, is that adopting AI involves a steep learning curve, but most UK organisations are “fairly early on in the maturity curve”.
One of the main challenges they face relates to data, and how clean, accurate and “aligned to your purposes” it is. A key issue here is that it takes time and effort to develop and train suitable data models, especially given that there are currently few tools to help – although MLOps (machine learning operations) is starting to prove its worth here.
“A big stumbling block today is how to operationalise AI, get it into production and keep it relevant once it’s in production,” says Eager.
She explains that when creating a data model, it is important to ensure the data is “fresh and appropriate” and suitably cleansed.
“But you also need to know how to train the data model, change it in-flight and manage the lifecycle once it is deployed – and not just as a one-off,” says Eager. “Data changes all the time, so you have to constantly ensure it is generating the right business outcomes, and change things quickly if it isn’t.”
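The control loop Eager describes, monitoring a deployed model's live data against its training baseline and retraining when the gap grows too large, can be sketched in a few lines. This is an illustrative outline only, not any specific MLOps product or the interviewees' own tooling; the drift metric, threshold and variable names are all assumptions.

```python
import statistics

def needs_retraining(baseline, live, drift_threshold=0.25):
    """Flag a deployed model for retraining when live data drifts.

    Compares the mean of a live feature window against the training
    baseline, scaled by the baseline's spread. A production pipeline
    would track many features and business KPIs, but the loop is the
    same: measure, compare, retrain if the gap is too large.
    """
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline) or 1.0  # guard against zero spread
    live_mean = statistics.mean(live)
    drift = abs(live_mean - base_mean) / base_std
    return drift > drift_threshold

# Training-time distribution vs. two live windows of the same feature.
baseline = [10, 11, 9, 10, 12, 10, 11, 9]
stable_window = [10, 11, 10, 9, 11]
shifted_window = [15, 16, 14, 17, 15]

print(needs_retraining(baseline, stable_window))   # stable data: no retrain
print(needs_retraining(baseline, shifted_window))  # drifted data: retrain
```

Running such a check on a schedule, rather than as a one-off at deployment, is what turns model training into the ongoing lifecycle management Eager argues is needed.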
Doing so requires not just having access to the right data, but also the right skillsets, both at the technical and more general data analysis level. This means upskilling may be needed in parallel with any technical initiatives, not least to educate business users and help them understand possible use cases.
But such activity also needs to take place as part of a wider change-management initiative to help employees address their fear of, and resistance to, transformation. Just as important is creating a centre of AI excellence, or AI function, with an enterprise-wide remit to oversee the creation of synergies between different business functions that lead to economies of scale.
“Ultimately, this isn’t just a technical project,” says Semach. “It’s about generating cultural change, which means that putting people first is absolutely key.”
Case study: World Remit
World Remit obtained a return on investment when implementing chatbot technology during the UK’s first Covid-related lockdown by focusing on solving a key customer-based challenge and working closely with the business to do it.
The digital international remittance provider found that the number of people wanting to use its services to send money abroad doubled overnight because they no longer had access to the corner stores that had traditionally handled transactions through established players, such as Western Union. This situation led to a boom in demand for online facilities among an older customer base that was not always as digitally savvy as the company’s more usual clients.
It also led to a boom in the number of calls to World Remit’s call centre, with many of the queries being simple questions about how the service worked and where it was possible to send money to and from.
As a result, the company decided not only to put a “frequently asked questions” page on its website, but also to introduce chatbot software on the “contact us” page to answer a number of prescribed questions. If customers’ queries remained unanswered, they would then be handed off to agents who could support several of them at once using live chat, leaving phone lines free to tackle more complex issues.
The ServisBot conversational AI bot was introduced in two weeks because, as Justin Sebok, the organisation’s senior product manager, points out: “It was a burning business problem we had to solve in days rather than months, so being quick and efficient was a requirement in terms of choosing a supplier.”
To get it right, the tech team initially had daily stand-up meetings with customer service leaders and the product team to review the approach being taken and ensure that a process of continual improvement could take place – although “as we move more into business-as-usual mode”, it is now more like three times a week, says Sebok.
Another consideration was learning from past mistakes. When the firm had introduced chatbot technology before, the right underlying data was not in place to answer open text queries, particularly from people whose first language was not English. This led to the software being taken out of operation.
To avoid a similar problem happening again, it was decided to adopt a menu-based approach that enabled users to navigate through a number of options. “We wanted to avoid open text because, while we want to get there in future, it does need to be approached in a really careful way,” says Sebok.
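The pattern World Remit describes, a fixed menu of prescribed questions with anything else handed off to a live agent, can be illustrated with a minimal sketch. The menu entries, answers and function names below are invented for illustration; they are not World Remit's or ServisBot's actual content or API.

```python
# Illustrative menu-driven support bot with agent handoff (assumed content).
MENU = {
    "1": ("How does the service work?",
          "Create an account, pick a destination and pay online."),
    "2": ("Where can I send money to and from?",
          "See the list of supported countries on our website."),
}

def answer(choice):
    """Return a canned answer for a menu choice, or escalate.

    Restricting input to a fixed menu avoids the open-text parsing
    that tripped up the firm's earlier chatbot; anything outside the
    menu goes straight to a live-chat agent.
    """
    if choice in MENU:
        question, reply = MENU[choice]
        return f"{question} -> {reply}"
    return "Handing you over to a live-chat agent."

print(answer("1"))
print(answer("9"))  # unlisted option: escalate to a human
```

The design trade-off is the one Sebok names: a menu caps what the bot can do, but it also caps what can go wrong, which mattered for a tool built in two weeks.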
However, the company has now reached the limits of what it can do with a menu-based approach and plans to evaluate how to take things forward over the coming year. A key aim is to integrate chatbot technology with its back-end systems, thereby enabling it to answer questions in a more sophisticated way.
Another goal is to boost customer retention rates by embedding the software in specific contexts, such as the website’s payment pages, so clients can receive appropriate advice as they go about their business.
“At this point, it’s largely been about creating efficiency by keeping a handle on costs while coping with a massive influx of transactions,” says Sebok. “But as we get more contextual and move into places where there’s more customer service friction, we will hopefully see additional revenue growth off the back of it.”
Case study: Virgin Hyperloop
Without AI technology, Virgin Hyperloop could neither have designed its new hyperloop-based transportation system nor made a case for receiving fresh tranches of funding from its investors, says Jerome Wei, the firm’s senior director of machine intelligence and analytics.
The proposed system builds on magnetic levitation, or maglev, trains, which are already incredibly fast, and makes them speedier still by running them inside vacuum tubes. Virgin Hyperloop, which is based in Los Angeles, trialled its first-ever passenger journey using the technology in November 2020 when two employees travelled the length of a 500-metre test track in 15 seconds, hitting 107 miles per hour in the process.
The goal is to have the system certified by 2025 and for the company to roll out its first commercial version by 2030. But to hit those ambitious targets, Wei and his 18-strong team are focusing less on the hardware itself and more on how it can be used in the most effective, efficient and safety-conscious way.
“The aim is to make it a demand-responsive system, so, just like Uber’s ride-sharing platform, you don’t have to wait for the scheduled arrival of a vehicle,” he says. “Instead, the system adapts to user requirements and allocates capacity as needed.”
This situation would lead to fleets of thousands of small, driverless vehicles “coordinated in an orchestra of movement”, says Wei.
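The demand-responsive allocation Wei describes, matching vehicles to riders on request rather than to a timetable, can be sketched in toy form. The greedy nearest-pod matching below is a deliberate simplification of what a real fleet coordinator would do, and all names, positions and numbers are invented for illustration.

```python
# Toy demand-responsive dispatch: assign each ride request to the
# nearest idle pod instead of running pods to a fixed timetable.
# Positions are kilometre marks along a single tube; everything here
# is an illustrative assumption, not Virgin Hyperloop's actual system.

def dispatch(requests, idle_pods):
    """Greedily match each request to the closest available pod."""
    assignments = {}
    pods = dict(idle_pods)  # rider name -> position; copied so we can pop
    for rider, position in requests.items():
        if not pods:
            break  # demand exceeds capacity; remaining riders wait
        nearest = min(pods, key=lambda p: abs(pods[p] - position))
        assignments[rider] = nearest
        pods.pop(nearest)
    return assignments

requests = {"alice": 2.0, "bob": 48.0}
idle_pods = {"pod-a": 0.0, "pod-b": 50.0}
print(dispatch(requests, idle_pods))  # alice -> pod-a, bob -> pod-b
```

Coordinating thousands of pods this way, under energy and safety constraints, is the optimisation problem Wei's team is applying AI to.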
To enable this vision, the programme of work has been split into two key areas: the first is to design physical infrastructure, such as tube tracks and stations, and the second entails optimising how trains could actually move around this infrastructure in the most energy-efficient and effective way possible, given geographical and other constraints.
As a result, the use of AI technology is vital, says Wei. “We’re trying to advance design and product maturity quickly and iteratively on timescales that would be unachievable with usual methods,” he adds. “Humans don’t have the capability to integrate so much information in such a detailed manner, aggregate it and come up with a suitable design. It’s not possible.”
His team runs simulations and analyses of big datasets on Databricks’ platform, hosted on Amazon Web Services, to assess the pros and cons of different design ideas and understand whether or not they work, based on indicators and metrics that are important to investors. Such investors include existing rail and port operators.
Therefore, another crucial objective is to use the data to “represent and convince and substantiate claims of how much value we can bring”, says Wei, adding: “It’s not a cheap endeavour as it’s not like we’re building a user-facing app. In total, it’ll cost over $400m.”
Although Wei acknowledges that AI software is not suitable for every problem, it is the best solution in this kind of scenario, where processing huge amounts of data to understand complex scenarios and real-world problems is vital.
“Humans can’t contain the scale of all of this in their minds, so it’s about using the technology that’s most appropriate to the problem you want to solve,” he says.