Facebook to ban deepfake videos

New policy closes some loopholes around misinformation, but seems to leave others wide open

Facebook is to remove and ban deepfake videos identified as having been created or manipulated using artificial intelligence (AI), in a reversal of its previous stance on the matter.

The changes to the social media platform’s policy were announced by its vice-president of global policy management, Monika Bickert, in a blog post.

“Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media,” she said.

“As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:

“It has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say; or

“It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

However, the new policy contains some loopholes. Notably, it does not extend to cover “content that is parody or satire, or video that has been edited solely to omit or change the order of words”.

Javvad Malik, a security awareness advocate at KnowBe4, said he did not believe Facebook’s new policy would achieve anything. “The fact that parody and satire are excluded could mean that most people could argue that any flagged video is merely intended to be satire,” he said.

“There are ways in which videos can be manipulated without the use of deepfake technology,” said Malik. “Splicing together reactions from different shots, changing the audio, or even the speed of a video, can drastically alter the message the original video intended to convey.”

This would appear to suggest that a large number of fake videos made using conventional editing techniques – including a doctored video of Labour’s Keir Starmer circulated by the Conservatives ahead of the UK’s 2019 general election, and footage that made US House Speaker Nancy Pelosi appear inebriated simply by slowing down the film – would still be permissible.

Facebook notably refused to remove the Pelosi video, even after it was circulated by prominent Republicans, although it did downgrade the clip’s visibility in news feeds and attach a link to a third-party fact checker.

At the time, a Facebook spokesperson said the platform was working hard to strike the “right balance between encouraging free expression and promoting a safe and authentic community”.

An emerging security threat

Although not yet widespread beyond proof-of-concept examples, deepfakes are regarded as a viable and emerging security threat, and a potentially disruptive weapon for those bent on spreading online misinformation – or, in an enterprise context, for conducting targeted phishing attacks.

Many commentators anticipate a surge in deepfake content in 2020, as the US gears up for its presidential election this November, which is likely to be one of the most divisive and troubled in its history.

Deepfake videos are made using two AI systems – a generator and a discriminator – that are pitted against one another. This arrangement is known as a generative adversarial network (GAN).

The generator creates a video clip that the discriminator must then identify as real or fake. Each time the discriminator correctly spots a fake, the generator learns from that failure and adjusts how it creates clips in future.

In theory, as both AIs learn from each other, the quality of the deepfake content will improve to the point at which it may not be possible for human observers to accurately determine what is real and what is not.
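This adversarial loop can be made concrete with a short sketch. The following is an illustrative toy example in Python, assuming the PyTorch library is available, which imitates a simple one-dimensional number distribution rather than video – real deepfake systems use far larger convolutional networks, but the generator-versus-discriminator training dynamic is the same. All network sizes and hyperparameters here are assumptions chosen for brevity, not taken from any production system.

# Toy GAN sketch (illustrative only): the generator learns to imitate
# samples drawn from a normal distribution centred on 4.0.
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "sample".
generator = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

# Discriminator: scores a sample as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples from N(4, 1), the distribution to imitate.
    real = torch.randn(64, 1) + 4.0
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

In this setup, the discriminator’s verdicts are exactly what the generator trains on: every pass through the loop nudges the generator’s output closer to the “real” distribution, which is the mechanism behind the steady quality improvement described above.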
