
Twitter announces crackdown on QAnon accounts

In a bid to stem the spread of disinformation and prevent real-world harm, Twitter is taking action against QAnon conspiracists

Twitter is taking sweeping action on its platform to limit the reach of content linked to QAnon, a sprawling far-right conspiracy theory, in a bid to tackle online harassment and misinformation.

QAnon first cropped up in October 2017, when a 4chan user posted a series of messages to the forum. The conspiracy theory centres on the belief that a cabal of Satan-worshipping, child-murdering, deep-state operatives is plotting to undermine US president Donald Trump.

Twitter has said it will take a number of steps to stop the circulation of the conspiracy theory, including blocking associated URLs from being shared on the site and working to ensure it is not highlighting QAnon-related content in features such as trends and search.

“We’ve been clear that we will take strong enforcement action on behaviour that has the potential to lead to offline harm,” said the official Twitter Safety account. “In line with this approach, this week we are taking further action on so-called QAnon activity across the service.

“We will permanently suspend accounts tweeting about these topics that we know are engaged in violations of our multi-account policy, coordinating abuse around individual victims, or are attempting to evade a previous suspension – something we’ve seen more of in recent weeks.”

It added that these actions will be rolled out “comprehensively” throughout the week, and that QAnon activity will continue to be monitored so that the rules and enforcement approach can be updated if necessary.

“As we work at scale to protect the public conversation in the face of evolving threats, we’ll continue to lead with transparency and offer more context on our efforts,” said the Twitter Safety account.

The suspensions are expected to affect about 150,000 accounts worldwide, and more than 7,000 accounts have already been removed in recent weeks for violating Twitter’s rules on targeted harassment, said a Twitter spokesperson. The company added that it had decided to act now because of the increasing harm being caused by proponents of the conspiracy theory.

“It’s a strong decision focused on reducing the amplification tools available to harmful content,” said Jason Kint, CEO of advertising trade association Digital Content Next. “By not including the content in recommendations and trends, Twitter is reducing the velocity and reach provided by their platform, rather than eliminating the content altogether.

“Yet the most important question is what will Facebook now do? Following Twitter’s lead will bring political and PR risk and not a lot of profit – an area Facebook has consistently avoided over the years.”


In response to questions from Computer Weekly about whether it would take similar action, Facebook said it already removes accounts that violate its multi-account and recidivist policies, as well as those that repeatedly coordinate abuse against others, regardless of whether or not they are connected to QAnon.

It said it was closely monitoring activity tied to the conspiracy theory and how its policies apply.

The New York Times, however, has reported that Facebook is preparing its own crackdown on QAnon. According to two employees with knowledge of the matter, the company has been coordinating its action with Twitter and other social media firms, and plans to make an official announcement next month.

A number of Facebook advertisers suspended advertising on the platform during July 2020 over its failure to deal with “the vast proliferation of hate on its platforms”, as part of the Stop Hate for Profit campaign.

The campaign was launched on 17 June by a coalition of six US-based civil liberties organisations – the Anti-Defamation League (ADL), the National Association for the Advancement of Colored People (NAACP), Sleeping Giants, Common Sense, Free Press and Color of Change.

According to an ADL press release, the campaign “is a response to Facebook’s long history of allowing racist, violent and verifiably false content to run rampant on its platform”, and it will seek to “organise corporate and public pressure to demand Facebook stop generating revenue from hateful content”.

The ADL claimed that Facebook “is amplifying the messages of white supremacists, permitting incitement to violence, and is failing to disrupt bad actors using the platform to do harm”, while raking in $70bn a year in advertising revenue.
