
AI Cops Walk the Thin Line Between Fin Tech and Fiend Tech

Steps are being taken to ensure that AI and machine learning do not damage consumers, finds Nick Booth

They say that power corrupts and absolute power corrupts absolutely. We should bear this in mind as the City of London is being reshaped in the image of Facebook. There is a very thin line between FinTech and Fiend Tech.

Luckily, the Financial Conduct Authority (FCA) is onto this. It has put quill to paper and urged all the major bankers and brokers to investigate - with the utmost urgency - how artificial intelligence (AI) and machine learning (ML) can be used to protect the consumer.

The story they would have us believe is that this pairing of AI and ML will take down the bad guys, like a pair of brilliant maverick cybercops who don’t do things by the book but get results.

Tragically though, AI generally does do things by the book. The only limit to artificial intelligence is the imagination of the programmer who created the system. And sadly, in some cases, that is very limited indeed. In that case, artificial intelligence becomes artificial intransigence, and you keep being put into a loop and asked the same old questions.

Strato Hosting is a perfect example of this. If your bill isn’t automatically paid, it suspends your account. When you try to contact the company to pay your bill, it insists you get in touch via email. So the only method of rectifying your problem is blocked until you have rectified the problem. There is no escaping this artificial intransigence loop.

This all raises the question: are AI and ML massively - and damagingly - overrated?

Possibly, says Devang Sachdev, director of product marketing at Twilio, who is anxious that resellers respect the technology’s limitations.

“Artificial Intelligence is still in its infancy,” says Sachdev.

Twilio offers a platform that allows service providers to knit together all the various channels of communication into one cohesive unit. In reality, AI has too many limitations to be left alone, says Sachdev, so it’s insane to have a system run entirely on AI. The inflexibility of AI must be overcome by building escape hatches into every process, so that when an AI-driven interaction hits a limit - or the user’s needs require complex reasoning - everything is handed over to a human agent.

A hybrid model is the sweet spot that most businesses need to achieve, otherwise AI is more hindrance than help.
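The escape-hatch pattern Sachdev describes can be sketched as a simple routing decision per conversational turn. This is a minimal illustration, not Twilio’s API: the intent names, confidence threshold and `route_turn` function are all assumptions made for the sake of the example.

```python
# Minimal sketch of the hybrid "escape hatch" pattern: the bot handles a
# turn only while it is confident; anything else goes to a human agent.
# All names (Intent, CONFIDENCE_FLOOR, route_turn) are illustrative.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.75                          # below this, the bot should not guess
COMPLEX_INTENTS = {"complaint", "fraud_report"}  # always need complex human reasoning

@dataclass
class Intent:
    name: str
    confidence: float

def route_turn(intent: Intent) -> str:
    """Return 'bot' or 'human' for one conversational turn."""
    if intent.name in COMPLEX_INTENTS:
        return "human"          # complex reasoning: hand over immediately
    if intent.confidence < CONFIDENCE_FLOOR:
        return "human"          # escape hatch: the AI has hit its limit
    return "bot"

print(route_turn(Intent("check_balance", 0.92)))  # bot
print(route_turn(Intent("check_balance", 0.40)))  # human
print(route_turn(Intent("fraud_report", 0.99)))   # human
```

The point of the sketch is that the handover rule lives outside the AI model itself, so the system always has a guaranteed path to a person.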

Twilio is using historic human-to-human interaction and rich tagged data to train machine learning algorithms.

Malik Jenkins is a conversational designer at AI specialist LivePerson, which uses ex-actors and scriptwriters to give ‘bots’ sympathetic voices and teach them to empathise.

“Artificial Intelligence is only ever as good as the designers behind the scenes,” says Jenkins.

A bot is a long term project and a work in progress that must be constantly monitored and measured, he says.

“We’ve moved past the days of programming AI and releasing it into the world without a contingency plan or real-time supervision - we’ve seen far too many examples of that ending badly. No bot operates in a vacuum; humans are always involved to watch, train, and potentially overrule bots if they slip up,” says Jenkins.

There are more metrics to judge a bot’s performance than a human’s. Underperforming bots are now being retrained on their weakest skills, or fired and replaced if they cannot deliver a satisfactory experience.
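That monitor-and-retrain loop can be illustrated with a toy scorecard. The metric names (containment, CSAT) and thresholds here are assumptions for illustration only, not LivePerson’s actual measures.

```python
# Toy scorecard for deciding whether a bot skill needs retraining.
# Metric names and thresholds are illustrative assumptions.
def review_skill(name: str, containment: float, csat: float) -> str:
    """containment: share of conversations resolved without a human;
    csat: customer-satisfaction score on a 0-1 scale."""
    if containment < 0.5 or csat < 0.6:
        return f"{name}: retrain"   # weakest skills go back into training
    return f"{name}: keep"

print(review_skill("billing", containment=0.42, csat=0.70))  # billing: retrain
print(review_skill("returns", containment=0.81, csat=0.88))  # returns: keep
```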

Much of what is touted as artificial intelligence these days is the complete opposite - genuine stupidity - according to Sophia Warwick, solution architect at Sutherland.

“Intelligent automation starts with understanding the care demand: where is the customer in their journey and what is their chosen channel to address it? It is from this deep understanding that you can start to identify the feasibility of automation,” says Warwick.

Even when the technology is trained it should never be deployed without the ability for it - or a customer - to escalate to a human colleague at any point in the process.

Unassisted technology, which doesn’t give customers the opportunity to engage with a human, is risky.

“If a user has to change communication methods to resolve an issue, we would class that as a failed experience,” says Warwick.

Loops of failure happen when the automation hasn’t been properly thought through or when someone has attempted to automate a process that is already flawed. This is worse when the customer is not given a chance to communicate with a human, according to Warwick.

The golden rule is to never have a dead end or cyclical failure, says Warwick.
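Warwick’s golden rule - no dead ends, no cyclical failures - amounts to capping repeated failures and forcing an exit to a human. The sketch below is a hypothetical illustration of that rule; the retry cap and function names are not from any product.

```python
# Sketch of the "no cyclical failure" rule: if the same automated step
# keeps failing, escalate to a human instead of looping forever.
MAX_RETRIES = 2  # illustrative cap, not a real product setting

def handle_request(bot_step, escalate):
    """Run bot_step() until it succeeds or the retry cap is hit."""
    for _ in range(MAX_RETRIES + 1):
        if bot_step():
            return "resolved by bot"
    return escalate()            # guaranteed exit from the loop

def failing_step():
    return False                 # simulate a step that never succeeds

print(handle_request(failing_step, lambda: "escalated to human"))
print(handle_request(lambda: True, lambda: "escalated to human"))
```

Because the escalation branch is unconditional once the cap is reached, the customer can never be trapped in the artificial intransigence loop described earlier.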

Ha! Tell me about it. Or ask any customer of those firms that have left them stuck in a loop of artificial hell for all eternity.


This was last published in August 2018
