Artificial intelligence (AI) should be adopted to help human employees create a better customer experience, rather than remove people from the process altogether, according to BT’s head of customer insight and futures, Nicola Millard.
Speaking at the 2018 Disruption Summit in London, Millard said she wants people to think differently about the use of AI, rather than framing it as a battle between man and machine.
“We’ve set up this struggle between humans and technology, that there’s going to be a winner and a loser, but I want to reframe that,” she said.
Focusing specifically on the use of AI to provide a good customer experience, Millard said much of the technology being adopted lets customers serve themselves, giving them less reason to get in touch with a brand.
This means that when someone does get in touch, it is usually because something has gone wrong somewhere along their customer journey.
To determine what “state” the customer is in when they are calling, it is often useful to consider the channel they approached the brand through.
Millard pointed out that consumers don’t necessarily consider themselves multi-channel, but will use different channels to achieve different things, approaching a brand in either a positive, negative or neutral state. It is then up to the brand to use the right mix of AI and human support to ensure customers get the best experience from their interaction.
“The worst thing you can do on the phone channel is to say, ‘You can do this online’, because chances are [the customer] tried that, but it didn’t work and that’s why they’re calling,” said Millard.
When a customer is interacting with a brand to achieve a positive goal, such as shopping for a picnic or planning an event, it can be more appropriate to use AI such as chatbots for assistance, as the customers are more patient, have more time and may be more open to ideas that are generated as a result of their customer data.
Millard said customers do sometimes require a human to be “in the loop” to help them make a decision and ensure they aren’t overwhelmed by choice.
Similarly, with customers in a neutral state, where their goal is often to perform a task they are obligated to do, Millard said, “This is where [a] quick and easy solution comes in”, and some forms of AI may be helpful in speeding up this process.
“The problem is when customers hit a problem or a state of anger and frustration,” she added. “Customers in a crisis are hard to automate.”
It is difficult to teach AI and machine learning how to respond to customers who are in a negative state because they can become irrational, emotional and impatient.
“Customers need to talk to somebody,” Millard said. “That’s the niche for the customer service adviser.”
Learning from these different states and how customers act helps brands understand where automation is and isn’t appropriate.
“We can start to see where we need to signpost customers, where we can put automation in and where things such as bots can fit in and where they don’t,” Millard added.
Although customers have grown to accept interacting with technology more over the past 20 years, there are still some things they don’t like.
As Millard put it, “customers want an easy life”, and that easy life can be created by technology and AI through collection of customer data.
Using services known for personalisation, such as Amazon or Netflix, Millard said: “I share my data, you give me something back. It’s all about easy.”
She said firms need to “persuade customers we need that data and we use that data to their benefit”, which can then lead to a more personalised and proactive customer experience.
Most customers are happy to share their data in return for a good customer experience, allowing brands to be “proactive” in offering goods, services and interactions with particular customers.
In some cases, however, brands asking for data such as location and social media usage to offer this “proactive relationship” can come across as “creepy”. Millard gave the example of a supermarket that deduced a customer was going through a divorce by analysing their purchase data.
She said this is another instance in which humans assisted by AI can find the line between helpful and intrusive when it comes to offering this personalised service.
“Knowing it and doing something about it are two different things,” she said. “We need a human in the loop to understand what this data tells us and how we should respond.”
Humans can also add context to a situation where AI may not be able to understand the sentiment with which something is said by a customer.
For example, chatbots are becoming an increasingly popular way to interact with brands because they are easy to use and always available. But conversational language can be a barrier for chatbots and hard to automate.
“Sarcasm and algorithms do not go terribly well together,” said Millard. “We’ve learnt over the years that [AI is] not very good for complexity or complaints. Customers do weird things.”
While giving AI access to facial expressions could help it understand whether or not someone is being sarcastic and add context to interactions, customers would not always welcome this.
“Going back to the creepy factor, I’m not sure I want a bot to have access to my heartbeat or emotions to let it know how I’m feeling,” Millard said. “We need to make life easier for customers and not just put in cool automation for the sake of it.”
Rather than replacing a living customer service agent, chatbots and other AI can be better at taking some of the stress out of the lead-up to speaking with a human, who can then add value to the process.
Millard said we have to be “good parents” to bots and AI to make sure they learn how to interact properly and without bias.
“Don’t take humans away, don’t automate them out. Instead, help them and augment them using intelligence,” she said. “It’s not really Botman vs Superagent, it is Botman plus Superagent.”