
Instagram and WhatsApp – the new tools of social media propaganda

Facebook and Twitter have been cast as the villains of the piece, but social media disinformation and propaganda are evolving in new and alarming directions, say Oxford University researchers

That social media platforms Facebook and Twitter have been the primary vectors for online disinformation and propaganda is now a matter of fact. But, three years after the twin shocks of Brexit and the election of US president Donald Trump, extremists of all stripes are turning their attention to other platforms.

These include photo-sharing application Instagram, video platform TikTok, and encrypted messenger channels such as WhatsApp, Signal and Telegram. And for Lisa-Maria Neudert, a doctoral candidate at the University of Oxford’s Oxford Internet Institute (OII), this is just another step in the evolution of propaganda – and for her, not an unexpected one.

So what is propaganda? Nothing new, of course. Its invention is commonly credited to Alexander the Great, who, according to legend, had what amounted to a personal PR team following him around to tell the people about his great deeds and acts of heroism.

Over the years, word-of-mouth propaganda morphed into chants, hymns and sermons used for social control in the medieval church. Then the invention of the printing press spurred new evolution as a more literate society seized on pamphlets and newspapers.

In the 20th century, people such as Stalin in the Soviet Union, Goebbels in Nazi Germany and Disney in the US turned propaganda into something approaching an art form using new design techniques, photo-manipulation, radio broadcasts, movies and cartoons. So it is only natural, says Neudert, that propaganda eventually emerged on the world wide web.

“I think propaganda online is so much more dangerous and impactful than it has ever been before,” she says. “It’s now data-driven, which means it is targeted and it knows a lot of things about you. It knows what you’re interested in, it knows what kind of content people click on and, with that, it has new impact potential.

“The second thing is it is geared directly towards algorithms that are designed to capture the kind of content that is attention-grabbing and amplify it and make it even bigger for humans,” says Neudert.

“Researcher Zeynep Tufekci says disinformation is a little bit like fat, sugar and salt – humans know it’s bad for them, but at the same time they want a little bit more of it. That’s what algorithms are designed to amplify.”
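
What does that amplification look like in practice? The sketch below is a minimal, invented model of engagement-based feed ranking in Python – no platform publishes its real scoring, and the weights here are made up – but it captures the property Neudert and Tufekci describe: the score rewards attention, and nothing in it measures truth.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares propagate content furthest, so weight them highest.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note that truthfulness appears nowhere in the ranking.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, comments=5, shares=2),
    Post("Outrage-bait junk story", likes=30, comments=40, shares=25),
])
print([p.text for p in feed])  # the junk story ranks first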

The OII, which describes itself as a multidisciplinary research and teaching department dedicated to the social science of the internet, set up the Computational Propaganda Research Project, or Comprop, in 2014, at the time of Russia’s annexation of the Crimean peninsula in Ukraine.

However, says Neudert – who was speaking in Dublin at a digital summit convened in September 2019 by the Irish government – the group soon realised there was more to the story than just the Kremlin’s attempts to spin an illegal occupation, and that they were dealing with an emergent global phenomenon.

Since 2014, things have moved quickly, and Comprop became one of the first organisations to analyse online data gathered around the 2016 EU referendum in the UK, and Trump’s election in the US five months later.

“We call it real-time social data science,” says Neudert. “What that means is that we’re collecting data and, two to three weeks later, we publish something – which is what researchers at Oxford like to call real time.

“We are passionate about how algorithms and automation are changing public opinion, and how they are used to manipulate it.

“I’ve been doing research on many different countries and platforms, including Facebook, Twitter, YouTube and, more recently, also on WhatsApp and chat applications, which is something really exciting, and difficult to research.”

At its heart, computational propaganda is the use of algorithms and automation to manipulate the public. This takes a number of forms, including fake news – termed junk news by Comprop – memes, photo and video manipulation, and emergent techniques such as deep fakes, spread in general by automated bots or actual humans, also known as trolls. It is not bound by ethics, logic or factual information, and that, says Neudert, is why it performs so well on social channels.
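
On the detection side, a first-pass signal for automation is simply posting tempo. The Python sketch below is illustrative only – it is not Comprop’s actual pipeline, and the 50-posts-a-day cut-off is just a rule of thumb often cited in this research area – since serious bot detection combines content, timing, network and account-metadata signals.

from datetime import datetime, timedelta

def looks_automated(timestamps: list[datetime], threshold_per_day: float = 50) -> bool:
    """Flag an account whose average posting rate exceeds the threshold."""
    if len(timestamps) < 2:
        return False
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    return len(timestamps) / max(span_days, 1 / 24) > threshold_per_day

# An account posting every ten minutes around the clock gets flagged.
burst = [datetime(2019, 5, 1) + timedelta(minutes=10 * i) for i in range(200)]
print(looks_automated(burst))  # True: roughly 144 posts per day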

Computational propaganda is also – and this is fundamental – democratised. So whereas propaganda was until recently expensive, staged and the domain of well-funded organisations such as Alex Jones’ Infowars or Steve Bannon’s Breitbart, now anybody can create content.

Shifting sands of disinformation: lessons from the EU elections

So how are the changes that Neudert identifies manifesting in reality? Earlier this year, Comprop turned its attention to the European parliamentary elections, examining social media trends in seven languages – English, French, German, Italian, Polish, Spanish and Swedish.

The team pored over more than two million tweets, and found that rates of disinformation from junk news were actually falling. Neudert explains: “In Germany and France, for example, we used to see about 20% of all information about politics being from junk news. This year it was about 10%. In other countries it was even lower – between 5% and 10% for all the different language spheres. That made us quite hopeful and positive about the state of junk news during the European elections.”

But the team then turned their attention to Facebook, comparing outlets classified as junk news with outlets classified as professional or mainstream media across metrics that will be familiar to any social team – shares, likes and popularity.

“What we found there is that, on average, the stories that were shared on Facebook that were disinformation were way more popular than stories from professional outlets,” says Neudert. “The average interactions for junk news sites on Facebook were higher by a factor of between three and six per country. That made us less hopeful.”
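
Both measurements reduce to simple arithmetic once each post record carries a source domain and interaction counts. A rough sketch, assuming a prepared list of junk-news domains (Comprop curates its own source classifications by hand; the domain names below are invented):

JUNK_DOMAINS = {"junk-tabloid.example", "outrage-news.example"}  # placeholders

def junk_share(tweets: list[dict]) -> float:
    """Fraction of link-carrying political tweets pointing at junk-news domains."""
    linked = [t for t in tweets if t.get("domain")]
    junk = [t for t in linked if t["domain"] in JUNK_DOMAINS]
    return len(junk) / len(linked) if linked else 0.0

def interaction_factor(posts: list[dict]) -> float:
    """How many times more engagement the average junk-news post attracts."""
    def avg(group: list[dict]) -> float:
        totals = [p["likes"] + p["shares"] + p["comments"] for p in group]
        return sum(totals) / len(totals) if totals else 0.0
    junk = avg([p for p in posts if p["domain"] in JUNK_DOMAINS])
    pro = avg([p for p in posts if p["domain"] not in JUNK_DOMAINS])
    return junk / pro if pro else float("inf")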

Much of the junk news material consisted of scare stories linking immigrants to criminality – a racist trope exploited by the far right for years. Some of these stories seized on the 15 April 2019 fire at the cathedral of Notre-Dame de Paris, which occurred a month before the EU election and was falsely attributed to Islamist terrorism almost as soon as the first pictures of the blaze appeared online. “This really shows how flexible disinformation is, how quickly it develops and is shared on social,” says Neudert.

Photo and video content: a new disinformation vector

The visual component of propaganda, as exemplified by the spectacular photos of the Notre-Dame blaze or, more recently, the Amazonian wildfires in Brazil, highlights the growth of disinformation on platforms such as Instagram and TikTok.

“Disinformation is becoming more and more visual – it’s images and gifs – and often that kind of content is harder to detect and very shareable,” says Neudert.

In the case of the Amazon fires, the disinformation was almost too subtle for most people to spot – including French president Emmanuel Macron, who shared a photo taken by a photographer who died in 2003, meaning the image could not have shown the 2019 fires, and actor, producer and environmentalist Leonardo DiCaprio, who shared a photo that wasn’t even of the Amazon.

Their high profiles – Macron has 4.2 million Twitter followers and 1.2 million on Instagram, DiCaprio 19.2 million on Twitter and 35.2 million on Instagram – meant that the lie spread, literally and figuratively, like wildfire.

“This was being shared by other influencers, celebrities and politicians, and it exemplifies how difficult it is to control and how easily it spreads,” says Neudert. “Often it’s not the Russian disinformation that we are concerned about when we think about elections, but disinformation in the public mainstream.”

Computer Weekly’s Facebook investigation

Earlier in 2019, Computer Weekly’s investigations editor Bill Goodwin, alongside other journalists including Carole Cadwalladr of The Observer, came into possession of a cache of internal Facebook documents via investigative reporter Duncan Campbell.

These papers were seized when the Digital, Culture, Media and Sport Committee dispatched Parliament’s serjeant-at-arms to arrest Ted Kramer, founder of Six4Three. Kramer was made to give up hundreds of legal documents disclosed in his company’s lawsuit with Facebook following the Cambridge Analytica scandal.

Drawing together leaked documents, academic research and newspaper reports, we are now able to detail multiple occasions when Facebook knowingly and deliberately exploited users’ private data and disregarded privacy.

Our reporting on Facebook over the years can be found here, while at the end of July 2019, Goodwin and Computer Weekly reporter Sebastian Klovig Skelton joined business applications editor Brian McKenna and senior editor Caroline Donnelly for a special edition of our Downtime Upload podcast to discuss their recent investigative work.

Platforms like Instagram are still little understood in the context of online propaganda, but one important emergent factor is the use of hashtags – Instagram allows users to attach up to 30 to a single picture – and the ability to follow them, so that pictures tagged “puppies” or “flowers”, or perhaps “news” or “politics”, will show up in the user’s feed from time to time.

“That content is coming from influencers and pages with big followings, and also the likes of Russia Today [which has hundreds of thousands of followers across multiple targeted accounts],” says Neudert. “There is not a whole lot of content moderation going on on Instagram yet.”
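
The propaganda angle follows directly from that mechanic: tag junk content with a benign, heavily followed hashtag and it surfaces in the feeds of everyone following that tag. A toy model of hashtag following in Python (invented for illustration, not Instagram’s actual behaviour):

MAX_HASHTAGS = 30  # Instagram's per-post limit mentioned above

def feed_for(followed: set, posts: list) -> list:
    """A post surfaces if any of its hashtags is one the user follows."""
    return [p for p in posts if followed & set(p["hashtags"][:MAX_HASHTAGS])]

posts = [
    {"caption": "Cute dog", "hashtags": ["puppies"]},
    {"caption": "Junk political meme", "hashtags": ["puppies", "politics"]},
]
print(feed_for({"puppies"}, posts))  # both surface, meme included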

This is also a source of concern on TikTok, the de facto replacement for the now-defunct Vine service beloved of the Generation Z demographic.

TikTok, which had more than a billion downloads as of February 2019, is owned by Beijing-based ByteDance, and has rapidly emerged as a key source of short-form videos. It is wildly popular not just in the West, but in markets such as India.

In its most benevolent form, TikTok content is very similar to Vine’s, with lots of quick-fire gags and memes, and the ubiquitous puppies and kittens. But, as Neudert notes, it has a very young audience that is often more susceptible to disinformation – and the people who spread it know this.

“In India, for example, we know there are conspiracy theorists and media personalities with TikTok channels using it to propagate misinformation,” she says. “In Hong Kong – where we know there is censorship going on – TikTok very prominently carries no content relating to the Hong Kong protests, so there is already censorship, filtering and disinformation from the Chinese government.”

From deep fakes to shallow fakes: the tale of Jim Acosta

The other topic of interest for Comprop when it comes to visual content, and one that is gaining traction in the mainstream media, is the rise of deep fakes – photos and videos that are generated by artificial intelligence (AI) and purport to show real situations, but are, in reality, completely fake.

“This could be, for example, a speech given by president Trump that is not actually him – but it will look like him, sound like him, and will have his demeanour,” says Neudert.

“This kind of content is being produced by a technique called generative methods, so you have an AI that is basically watching tonnes and tonnes of video of Trump, and it learns from that kind of content how to generate something that will look and sound just like him.”
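
The best-known family of such generative methods is the generative adversarial network (GAN), in which a generator learns to produce samples that a discriminator can no longer distinguish from real data. The PyTorch sketch below is a deliberately tiny stand-in – the “real” data is a synthetic two-dimensional distribution rather than hours of presidential video – but the adversarial training loop is the same basic idea.

import torch
import torch.nn as nn

latent_dim = 8   # random noise the generator starts from
data_dim = 2     # toy stand-in for video frames

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # probability the sample is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Toy "real" distribution the generator must learn to imitate.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(generator(torch.randn(5, latent_dim)))  # samples now cluster near the real data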

To the relief of everyone, no fake videos of Trump ordering a nuclear strike have yet emerged, as the AI techniques used to create them remain expensive and at the bleeding edge of technology. The more immediate concern, says Neudert, is cheaper “shallow fakes” – genuine footage that has been deceptively edited.

Back in 2018, for example, CNN’s chief White House correspondent, Jim Acosta, found himself making the headlines rather than reporting them, when a testy exchange with Trump at a press conference culminated in a White House worker trying to grab the microphone from him.

In the wake of this incident, the White House shared video footage, allegedly created by Infowars, that was cut in such a way that it seemed to show Acosta aggressively pushing the staffer away, when actually he was trying to pull the microphone back.

“It was shared by Trump himself and by all sorts of policy-makers and high-up people in the Republican Party,” says Neudert. “It was a very simple fake and it was debunked very quickly, but it still made it to the top trending topic on all sorts of channels and it led to Acosta being suspended from the White House.”

Encrypted channels: a great idea, but causing complications

As the bright light of media scrutiny, and the threat of regulation, shines down on Facebook and Twitter, online disinformation is also spreading into private and encrypted forums. It is only natural that producers of disinformation are starting to take advantage of the built-in encryption of Telegram, Signal and WhatsApp.

After all, their reasoning is exactly the same as the everyday user’s – privacy and protection from government snooping. Objectively, such people have more reason than most to be concerned about that.

“Messenger apps are, in many countries, the most important form of social media,” says Neudert. “In Germany we have 82 million people, and 43 million people active every month on WhatsApp. Messengers are also becoming more and more important in countries like India, where the mobile internet really is the internet.”

These apps are now a huge source of news for millions of people, and because they are encrypted, it is much harder to know what is going on inside them – what content is being shared most widely, by whom, and who is viewing it. But, anecdotally, threat researchers have described closed groups that were cesspools of disinformation, extremist content and hate speech.
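
The reason researchers are locked out is structural, not procedural. A minimal sketch of the principle, using the symmetric Fernet scheme from Python’s “cryptography” package as a stand-in (WhatsApp and Signal actually use the more elaborate Signal protocol): only the endpoints hold the key, so anyone in the middle – platform or researcher – sees opaque ciphertext.

from cryptography.fernet import Fernet

key = Fernet.generate_key()         # held only by sender and recipient
channel = Fernet(key)

ciphertext = channel.encrypt(b"junk news forward")
print(ciphertext)                   # what an observer in the middle sees
print(channel.decrypt(ciphertext))  # what only the recipient can recover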

Reframing the debate: whose responsibility is it anyway?

Neudert’s work highlights the importance of education in countering evolving propaganda techniques. Everybody needs to take responsibility for training themselves to spot disinformation, but crucially, parents, educators and those in the media need to shoulder more of this burden.

“One thing that I hope to see from tech reporting is not to fall for the story the social media companies are telling you, that they just need guidance,” she says. “Yes, of course they do, but it’s important to be critical about that, to ask if guidance has already been provided, and no action has followed. It is clear by now that there are examples where policy-makers have provided guidance, and companies have had difficulty following that.

“That is something I hope journalists can be critical of, and ask questions that I as a researcher cannot.”

Neudert agrees with the widespread view that government regulation of social media is desirable – not least because, she says, we are past the point where social media companies can be trusted to self-regulate, particularly in the light of the Cambridge Analytica revelations. The landscape is complex, cultural and highly politicised, so regulation needs to be finely balanced.

There is one thing that may help, she concludes, and that is reframing the debate over what social media companies actually exist for.

“Back in 2016, there was debate over whether or not social media companies should be considered publishers,” she says. “I think that is now outdated.

“Acknowledging that those platforms are now information intermediaries, commodities, that they are also networks so broad that we can’t avoid them, that can maybe help change some of our thinking.”

