Online misinformation about Covid-19 has been allowed to spread unchecked across social media platforms such as Facebook and Twitter, as the government continues to delay introducing protections to fight online harms that were first set out 15 months ago, the Digital, Culture, Media and Sport (DCMS) Select Committee has reported.
The newly published Misinformation in the Covid-19 infodemic report lays out evidence from multiple authorities on how online harms have spread like a virus during the pandemic, ranging from potentially dangerous hoax treatments and anti-vaccination conspiracy theories to racist attacks on the UK’s Asian community and lies about 5G mobile networks that have led fantasists to set fire to mobile equipment and attack telecoms engineers in the street.
“The proliferation of dangerous claims about Covid-19 has been unstoppable,” said Julian Knight, chair of the DCMS Committee. “The leaders of social media companies have failed to tackle the infodemic of misinformation. Evidence that tech companies were able to benefit from the monetisation of false information and allowed others to do so is shocking. We need robust regulation to hold these companies to account.
“The coronavirus crisis has demonstrated that without due weight of the law, social media companies have no incentive to consider a duty of care to those who use their services.”
The predecessor DCMS Committee had substantial input into the publication in April 2019 of the Online Harms White Paper, which proposed a duty of care on tech companies and an independent online harms regulator. MPs today said there were now concerns that the delayed legislation will not address the harms caused by misinformation and disinformation, branding this a serious omission “that would ignore the lessons of the Covid crisis”.
The report also argued that digital platforms operate business models that disincentivise action against misinformation while allowing bad actors to continue monetising misleading content unchecked. As a result, the public is reliant either on the “good will” of tech companies, or on the bad press they receive, to compel them to act.
The committee made four key recommendations: that the government make a final decision on the online harms regulator without delay and bring forward legislation; that Parliament be given a role in establishing what online harms actually are, rather than letting tech companies mark their own homework; that future legislation be given real teeth, such as powers to fine, disrupt business activity, and even custodial sentences; and that the government publish a media literacy strategy by September and report on the adoption of its teaching online safety guidance by summer 2021.
Read more about misinformation
- Twitter has removed thousands of accounts linked to the governments of China, Russia and Turkey that engaged in systematic operations against pro-democracy activists, political opponents and dissidents.
- In this Q&A, Penn State Health and Penn State College of Medicine CIO Cletis Earle talks Covid-19 misinformation, telehealth and next steps.
- The UK’s elections watchdog, the Electoral Commission, is calling for new powers to regulate online political advertising.
Rocio Concha, director of advocacy at consumer rights group Which?, said: “The coronavirus crisis has created the perfect breeding ground for misinformation to spread, with scammers using callous tactics to exploit people’s fears and vulnerabilities for their own financial gain.
“Regulation of online platforms is much needed, but there are a number of areas not covered by the government’s current white paper proposals where consumers are increasingly being harmed online.
“The government must ensure regulators have the right powers to tackle online harms, including where people are losing life-changing sums of money to online scams or being deceived by fake and misleading reviews and information.”