Over three-quarters (77%) of cyber security decision makers are worried about the potential for deepfake technology to be used fraudulently – with online payments and personal banking services thought to be most at risk – but just over a quarter (28%) have taken any action to defend against it.
Biometric authentication technology supplier iProov questioned over 100 security decision makers in the financial services sector in an attempt to highlight how seriously the threat of deepfakes is perceived.
A portmanteau of “deep learning” and “fake”, the term deepfake first emerged on Reddit a few years ago, and describes a form of artificial intelligence (AI) that can be used to create image, audio and video hoaxes that can be indistinguishable from the real thing.
The creation of deepfakes is still an emerging application of AI, but iProov founder and CEO Andrew Bud said it was encouraging to see that the financial services industry has acknowledged the scale of the danger, which is potentially huge in terms of fraud, although he added that the tangible measures being taken to defend against deepfakes were what really mattered.
“It’s likely that so few organisations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic,” he said.
A total of 71% of respondents said they thought their customers were at least somewhat concerned about the threat, and as deepfake technology moves increasingly into the public eye – Facebook announced new policies banning deepfakes from its platform at the beginning of January 2020 – 64% said they expected this concern to worsen.
According to iProov, as both AI and machine learning technologies have become more advanced (and crucially, more available), deepfakes have already been deployed by fraudsters in a commercial context, which could be especially concerning when considering the world of personal finance.
Read more about deepfakes
- Politicians and Hollywood stars aren't the only ones at risk: Enterprises need to understand the dangers deepfakes pose to their brands and employees. Here’s a primer.
- GANs’ ability to create realistic images and deepfakes has caused industry concern. But if you dig beyond the fear, GANs have practical applications that are overwhelmingly good.
- Many internet browsers and social media companies have been forced to take on a new responsibility to combat the dissemination of false information, but can they succeed?
Of particular concern to decision makers was the potential for deepfake images to be able to compromise facial recognition defences.
“The era in which we can believe the evidence of our own eyes is ending. Without technology to help us identify fakery, every moving and still image will, in future, become suspect,” said Bud. “That’s hard for all of us as consumers to learn, so we’re going to have to rely on really good technology to protect us.”
An earlier survey of the British public, conducted by iProov in 2019, revealed widespread ignorance of deepfake technology among consumers – 72% said they had never heard of deepfake videos, although once it was explained what they were, 65% said that deepfakes would undermine their trust in the internet.
Consumers cited identity theft as their biggest concern for how deepfakes might be misused, and 72% also said they would be more likely to use an online service that had put measures in place to mitigate their impact.