
UK online safety regime ineffective on misinformation, MPs say

A report from the Commons Science, Innovation and Technology Committee outlines how the Online Safety Act fails to deal with the algorithmic amplification of ‘legal but harmful’ misinformation

The UK’s Online Safety Act (OSA) is failing to address “algorithmically accelerated misinformation” on social media platforms, leaving the public vulnerable to a repeat of the 2024 Southport riots, MPs have warned.

Following an inquiry into online misinformation and harmful algorithms, the Commons Science, Innovation and Technology Committee (SITC) has identified “major holes” in the UK’s online safety regime when it comes to dealing with the viral spread of false or harmful content.

Highlighting the July 2024 Southport riots as an example of how “online activity can contribute to real-world violence”, the SITC warned in a report published on 11 July 2025 that while many parts of the OSA were not fully in force at the time of the unrest, “we found little evidence that they would have made a difference if they were”.

It said this was due to a mixture of factors, including weak misinformation-related measures in the act itself and the business models and opaque recommendation algorithms of social media firms.

“It’s clear that the Online Safety Act just isn’t up to scratch,” said SITC chair Chi Onwurah. “The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation.”

The five principles are public safety, free and safe expression, responsibility (for both end users and the platforms themselves), control of personal data, and transparency.

The SITC also made specific recommendations, such as creating “clear and enforceable standards” for the digital advertising ecosystem that incentivises the amplification of false information, and introducing new duties for platforms to assess and deal with misinformation-related risks. “In order to tackle amplified disinformation … the government and Ofcom should collaborate with platforms to identify and track disinformation actors, and the techniques and behaviours they use to spread adversarial and deceptive narratives online,” said MPs.

Business models and opaque algorithms

According to the SITC, social media companies have “often enabled or even encouraged” the viral spread of misinformation – and may have profited from it – as a result of their advertising and engagement-based business models.

“The advertisement-based business models of most social media companies mean that they promote engaging content, often regardless of its safety or authenticity,” MPs wrote. “This spills out across the entire internet, via the opaque, under-regulated digital advertising market, incentivising the creation of content that will perform well on social media.”

They added that while major tech companies told the committee they have no incentive to allow harmful content on their platforms, since it can damage their brands and repel advertisers, “policymaking in this space has lacked a full evidence base” because the inner workings of social media recommendation algorithms are not disclosed by the firms.

“We asked several tech companies to provide high-level representations of their recommendation algorithms to the committee, but they did not,” they said, adding that this “shortfall in transparency” makes it difficult to establish clear causal links between specific recommendations and harms.

“The technology used by social media companies should be transparent, explainable and accessible to public authorities,” they said.

The SITC added that the government should create measures to compel social media platforms to embed tools in their systems that can identify and algorithmically deprioritise fact-checked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm.

“It is vital that these measures do not censor legal free expression, but apply justified and proportionate restrictions to the spread of information to protect national security, public safety or health, or prevent disorder or crime,” said MPs.


On tackling the underlying business models that incentivise misinformation, MPs said there is a regulatory gap around digital advertising, as the focus is currently on harmful advertising content rather than “the monetisation of harmful content through advertising”.

“The government should create a new arm’s-length body – not funded by industry – to regulate and scrutinise the process of digital advertising, covering the complex and opaque automated supply chain that allows for the monetisation of harmful and misleading content,” they added. “Or, at the least, the government should extend Ofcom’s powers to explicitly cover this form of harm, and regulate based on the principle of preventing the spread of harmful or misleading content through any digital means, rather than limiting itself to specific technologies or sectors.”

While generative artificial intelligence (GenAI) played only a marginal role in the spread of misinformation before the Southport riots, the SITC expressed concern about the role it could play in a “future, similar crisis”.

MPs said GenAI’s “low cost, wide availability and rapid advances means that large volumes of convincing deceptive content can increasingly be created at scale”.

The committee said the government should therefore pass legislation covering GenAI platforms, in line with other online services that pose a high risk of producing or spreading illegal or harmful content.

“This legislation should require generative AI platforms to: provide risk assessments to Ofcom on the risks associated with different prompts and outputs, including how far they can create or spread illegal, harmful or misleading content; explain to Ofcom how the model curates content, responds to sensitive topics and what guardrails are in place to prevent content that is illegal or harmful to children; implement user safeguards such as feedback, complaints and output flagging; and prevent children from accessing inappropriate or harmful outputs.”

They added that all AI-generated content should be automatically labelled as such “with metadata and visible watermarks that cannot be removed”.
