
The UK’s Online Safety Act explained: what you need to know
In this essential guide, Computer Weekly looks at the UK’s implementation of the Online Safety Act, including controversies around age verification measures and the threat it poses to end-to-end encryption
The UK’s Online Safety Act (OSA) became law in October 2023 with the aim of enhancing online safety for all internet users, particularly children, by placing obligations on service providers that either host user-generated content or provide search engine functionality.
Under their new obligations, more than 100,000 companies – including social media platforms, online forums, messaging services and video-sharing sites – are required to proactively prevent their users from seeing illegal or harmful content. This includes assessing the risks of such content appearing on their platforms, implementing “robust” age limits for access to certain content, and quickly removing offending content when it does appear.
Failure to comply with the OSA’s measures can result in significant penalties for service providers. Online harms regulator Ofcom, for example, has the power to impose substantial fines of up to 10% of a company’s global revenue or £18m, whichever is higher, and may require payment providers or advertisers to stop working with a non-compliant platform.
Senior managers for online platforms can also face criminal liability for failing to comply with Ofcom's information requests, or for not ensuring their company adheres to child safety duties, while the regulator itself can also conduct audits and direct companies to take specific steps to improve their online safety measures.
How Ofcom would regulate the act was set out in its December 2024 Illegal Harms Codes and Guidance, which became enforceable on 17 March 2025. Under the codes, Ofcom expects any internet service that children can access (including social media networks and search engines) to carry out robust age checks, configure its algorithms to filter the most harmful content out of children’s feeds, and implement content moderation processes that ensure swift action is taken against such content.
However, since its inception, the OSA has faced a number of criticisms, including for vague and overly broad definitions of what constitutes “harmful content”, and the threat it poses to encrypted communications.
There has also been extensive debate about whether the OSA is effective in practice, particularly since age verification measures went live in late July 2025, requiring platforms to verify users’ ages before they can access certain content or sites, and in the wake of the 2024 Southport riots, in which online misinformation played a key role in the spread of violence.
Age verification measures
Since 25 July 2025, online service providers have been required to put age checks in place to ensure children are unable to access pornography, as well as content relating to self-harm, suicide or eating disorders that could be harmful to them.
The plans for “robust age checks” were outlined in Ofcom’s May 2024 draft online child safety rules, which contained more than 40 other measures tech firms would need to implement by 25 July to comply with their new legal obligations under the act.
While much of the media focus since the deadline has been on the age-gating of porn sites, the change has also affected social media firms, dating apps, live streamers and some gaming companies.
The methods these services can use to assure people’s ages vary, and include facial age estimation technologies, open banking, photo-ID matching, digital identity services and credit card checks. However, since the age gate deadline of 25 July, online searches for virtual private networks (VPNs) – which encrypt a user’s internet traffic and route it via servers in other countries, allowing them to bypass the OSA’s measures – have skyrocketed, with Proton alone reporting a 1,800% spike in daily sign-ups for its VPN service in the UK, and VPN apps topping Apple’s App Store download charts.
The Age Verification Providers Association (AVPA), on the other hand, said there has been a sharp increase in age checks in the UK since age gating was introduced, with an additional five million checks being carried out every day.
As it stands, the OSA places no restrictions on age verification providers distributing, profiling or monetising the personal data of UK residents going through verification, although Ofcom notes on its website that it may refer providers to the data regulator if it believes an age verification company has not complied with data protection law.
Some internet users have expressed frustration that the choice of which age assurance technology to use lies solely with the platform, meaning that to access its services they must hand over their sensitive personal data to whichever third party it has chosen. While these firms are subject to UK data protection law, it is unclear how the OSA’s age verification measures will interact with the Data (Use and Access) Act’s (DUAA) loosened “purpose limitation” rules, which make it easier to process data for purposes other than those it was originally collected for.
The DUAA will also narrow current protections against automated decision-making (ADM) so that they apply only to decisions that either significantly affect individuals or involve special category data, and will introduce a list of “recognised legitimate interests” – including national security, prevention of crime and safeguarding – that organisations can rely on to process data without the need to conduct legitimate interest assessments.
There are also concerns that, under the OSA, political content is being censored in the name of protecting children, with reports of Palestine-related content being placed behind age verification walls on X and Reddit. Other reported examples of legitimate speech being removed as a result of age-gating at scale include users being unable to access content related to Alcoholics Anonymous and other addiction support, medical cannabis, the war in Ukraine, and even images of historical art, such as Francisco de Goya’s 19th-century painting Saturn Devouring His Son.
Some civil society groups and academics have also expressed concern that Ofcom’s guidance on the OSA so far incentivises platforms to adopt a “bypass strategy”, whereby they moderate content more restrictively than necessary to avoid potential fines – an approach that could lead to the over-removal of legitimate speech and restrict users’ freedom of expression.
Breaking encryption
Aside from age verification, the most controversial aspect of the act is the power it gives to Ofcom to require tech firms to install “accredited technology” to monitor encrypted communications for illegal content. In essence, this would mean tech companies using software to bulk-scan messages on encrypted services (such as WhatsApp, Signal and Element) before they are encrypted, otherwise known as client-side scanning (CSS).
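As a rough illustration of how such a scheme would sit inside a messaging client, consider the minimal sketch below. It is a toy model built on stated assumptions, not any vendor’s or Ofcom-accredited design: the blocklist, the placeholder “encryption” and the reporting path are all hypothetical, and real proposals typically rely on perceptual hashing or machine learning classifiers rather than the exact digests used here.

```python
# Toy model of client-side scanning (CSS) via hash-matching.
# Everything here is a hypothetical stand-in for illustration only.
import hashlib

# Hypothetical on-device database of digests of known illegal content.
BLOCKLIST_HASHES = {
    hashlib.sha256(b"known illegal file bytes").hexdigest(),
}

def matches_blocklist(payload: bytes) -> bool:
    """Hash the plaintext payload and check it against the on-device blocklist."""
    return hashlib.sha256(payload).hexdigest() in BLOCKLIST_HASHES

def encrypt(payload: bytes) -> bytes:
    """Stand-in for real end-to-end encryption (placeholder, not cryptography)."""
    return payload[::-1]

def send_message(payload: bytes) -> str:
    # The crux of the controversy: the scan runs on the device, on the
    # plaintext, before encryption is applied, so E2EE no longer guarantees
    # that only the sender and recipient can act on a message's content.
    if matches_blocklist(payload):
        return "flagged for review"  # hypothetical reporting path
    ciphertext = encrypt(payload)
    return f"sent {len(ciphertext)} encrypted bytes"

print(send_message(b"hello"))                     # -> sent 5 encrypted bytes
print(send_message(b"known illegal file bytes"))  # -> flagged for review
```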
Implementing such measures would undermine the security and privacy of encrypted services, since the content of every message and email would be checked for illegal material. The government has consistently justified this as necessary for stopping the creation and spread of child sexual abuse material (CSAM), as well as violent crime and terrorism. Cryptographic experts, however, have repeatedly argued that measures mandating tech firms to proactively detect harmful content through client-side scanning should be abandoned.
A policy paper written by Ross Anderson, a Cambridge University professor of security engineering, and researcher Sam Gilbert in October 2022, for example, argued that using artificial intelligence (AI)-based scanning to examine the content of messages would raise an unmanageable number of false alarms and prove “unworkable”. They further claimed the technology is “technically ineffective and impractical as a means of mitigating violent online extremism and child sexual abuse material”.
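A simple worked example shows why the false alarm problem scales so badly. The figures below are illustrative assumptions chosen for the arithmetic, not numbers from Anderson and Gilbert’s paper: even a classifier that looks highly accurate in percentage terms, applied to billions of overwhelmingly innocent messages, generates far more false alarms than true ones.

```python
# Illustrative base-rate arithmetic only; all figures are assumptions.
messages_per_day = 10_000_000_000  # messages scanned daily on a large platform
prevalence = 1 / 1_000_000         # fraction of messages actually illegal
true_positive_rate = 0.95          # classifier catches 95% of illegal messages
false_positive_rate = 0.001        # wrongly flags 0.1% of innocent messages

illegal = messages_per_day * prevalence
innocent = messages_per_day - illegal

true_alarms = illegal * true_positive_rate
false_alarms = innocent * false_positive_rate

print(f"true alarms per day:  {true_alarms:,.0f}")   # 9,500
print(f"false alarms per day: {false_alarms:,.0f}")  # 9,999,990
# Under these assumptions, over 99.9% of all alarms are false:
# precision = 9,500 / (9,500 + 9,999,990), roughly 0.095%.
```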
A previous October 2021 paper from Anderson and 13 other cryptographic experts, including Bruce Schneier, argued that while client-side scanning “technically” allows for end-to-end encryption, “this is moot if the message has already been scanned for targeted content. In reality, CSS is bulk intercept, albeit automated and distributed.”
In September 2023, BCS, The Chartered Institute for IT, said the government’s proposals on end-to-end encryption were not possible without creating systemic security risks and, in effect, bugging millions of phone users.
It argued that the government is seeking to impose a technical solution to a problem that can only be solved by broader interventions from police, social workers and educators, noting that some 70% of BCS’ 70,000 members say they are not confident it is possible to have both truly secure encryption and the ability to check encrypted messages for criminal material.
The proposals also led to a backlash from encrypted messaging providers, including WhatsApp, Signal and Element, which threatened to withdraw their services from the UK if the bill became law.
As it stands, while Ofcom does have the power to compel companies to scan for child sexual abuse material in encrypted environments, it is still working on guidance for tech firms around how “accredited technologies” such as client-side scanning and hash-matching can be implemented to protect child safety online.
There are currently no “accredited technologies” that Ofcom requires companies to use, with final guidance on the matter planned for publication in spring 2026.
Online disinformation persists
Although the bill eventually received royal assent in October 2023 – four-and-a-half years after the Online Harms White Paper was published in April 2019 – its ability to deal with real-world disinformation is still an open question. In May 2025, for example, the government and Ofcom were still in disagreement over whether the act even covers misinformation.
Following its inquiry into online misinformation and harmful algorithms, the Commons Science, Innovation and Technology Committee (SITC) published a report of its findings in July 2025, outlining how the OSA fails to deal with the algorithmic amplification of “legal but harmful” misinformation.
Highlighting the July 2024 Southport riots as an example of how “online activity can contribute to real-world violence”, the SITC warned that while many parts of the OSA were not fully in force at the time of the unrest, “we found little evidence that they would have made a difference if they were”.
It said this was due to a mixture of factors, including weak misinformation-related measures in the act itself, as well as the business models and opaque recommendation algorithms of social media firms.
“It’s clear that the Online Safety Act just isn’t up to scratch,” said SITC chair Chi Onwurah. “The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality.
“Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation.”
These principles include public safety, free and safe expression, responsibility (including for both end users and the platforms themselves), control of personal data and transparency.
Development hell
While controversies around certain aspects of the act are still ongoing, its passage into law was also fraught with tension, with the legislation going through many iterations after the UK government first published its Online Harms White Paper in April 2019.
Announcing the new measures, the then prime minister Theresa May argued that internet companies “have not done enough for too long” to protect their users, particularly young people, from “legal but harmful” content.
Although this was the world’s first framework designed to hold internet companies accountable for the safety of those using their services – outlining proposals to place a statutory “duty of care” on them – the legislation did not receive royal assent until October 2023.
While the government published an initial response to its whitepaper in February 2020 and a full response in December 2020 which provided more detail on the proposals, an initial draft of the bill did not materialise until May 2021.
At this stage, the draft bill contained a number of new measures, such as specific duties for “Category 1” companies – those with the largest online presence and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter – to protect “democratically important” content and publish up-to-date assessments of their impact on freedom of expression, as well as new criminal liability for senior managers.
Further additions to the bill came in February 2022, when the government expanded the list of “priority offences” that tech companies would have to proactively prevent people from being exposed to. While terrorism and child sexual abuse were already included in the priority list, the government redrafted it to include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. As it stands, there are more than 130 priority offences outlined in the act.
In November 2022, the “legal but harmful” aspect of the bill – which had attracted strong criticism from Parliamentary committees, campaign groups and tech professionals alike – was dropped, meaning companies would no longer be obliged to remove or restrict legal content, or suspend users for posting or sharing it. Instead, the measures around “legal but harmful” content were reduced to apply only to children.
However, controversy continued – in January 2023, the then-Conservative government attempted to amend the bill so that existing immigration offences would be incorporated into the list of “priority offences”, meaning tech companies could be forced to remove videos of people crossing the English Channel “which show that activity in a positive light”. “Unlawful immigration” content is still included in the act’s list of priority offences.
Throughout this entire process, the bill attracted strong criticism. The Open Rights Group and other civil society organisations, for example, called for its complete overhaul in September 2022, on the basis that its measures threaten privacy and freedom of speech.
They specifically highlighted concerns around the act’s provisions to compel online companies to scan the content of users’ private messages, and the extensive executive powers granted to the secretary of state to define what constitutes lawful speech.
Computer Weekly’s Online Safety Act coverage
- 8 April 2019: UK introduces world’s first online safety regulations.
- 15 December 2020: UK government unveils details of upcoming online harms rules.
- 24 Feb 2021: Fact-checking experts tell House of Lords inquiry that upcoming Online Safety Bill should force internet companies to provide real-time information on suspected disinformation, and warn against over-reliance on AI-powered algorithms to moderate content.
- 18 April 2021: Government puts Facebook under pressure to stop end-to-end encryption over child abuse risks.
- 12 May 2021: UK government publishes Online Safety Bill draft.
- 24 June 2021: Campaign group set up to oppose Online Safety Bill says the duty of care is too simplistic, cedes too much power to US corporations and will, in practice, privilege the speech of journalists or politicians.
- 1 July 2021: UK government issues “safety by design” guidance for tech firms.
- 22 July 2021: House of Lords report criticises the government’s forthcoming Online Safety Bill for imposing duty of care on tech platforms to deal with ‘legal but harmful’ content, which it says threatens freedom of expression online.
- 30 July 2021: A parliamentary “super committee” made up of MPs and lords is established to scrutinise the government’s online harms approach.
- 5 November 2021: Digital secretary commits to establishing ongoing oversight of the Online Safety Bill and its implementation, and suggests the grace period on criminal liability for tech company execs should be shortened from two years to a maximum of six months once it’s enacted.
- 19 November 2021: The Safety Tech Challenge Fund winners will now develop technologies to limit the spread of child abuse material in encrypted environments, which the government has claimed will not be repurposed for other uses.
- 14 December 2021: MPs and peers release report on Online Safety Bill following five-month inquiry into the draft legislation and make a number of recommendations on how it can be improved to deal with harmful content and abuse on the internet.
- 11 January 2022: Firms working on the UK government’s Safety Tech Challenge have suggested that scanning content before encryption will help prevent the spread of child sexual abuse material – but privacy concerns remain.
- 19 January 2022: Content removal will not stop misinformation, says Royal Society.
- 24 January 2022: MPs say Online Safety Bill fails to tackle full range of harms.
- 7 February 2022: Government expands tech firms’ obligations in Online Safety Bill.
- 8 February 2022: Porn sites could be legally obliged to verify that their users are 18 or over under proposed online safety rules, in UK government’s second attempt to prevent children from accessing pornography online.
- 9 February 2022: Technology companies should introduce measures to protect children from online abuse before they are compelled to do so by law, a child safety expert has warned.
- 24 February 2022: Paid-for advertising still not covered in Online Safety Bill.
- 28 February 2022: Online Safety Bill updated to deal with anonymous abuse.
- 9 March 2022: Paid-for advertising measures included in Online Safety Bill.
- 17 March 2022: The government has introduced its long-awaited Online Safety Bill in Parliament, alongside new criminal offences and sanctions for tech company execs.
- 29 March 2022: Members of BCS, The Chartered Institute for IT, the professional body for technology professionals in the UK, warn against limiting end-to-end encryption.
- 7 July 2022: An amendment to the Online Safety Bill, currently going through Parliament, will put pressure on tech companies over end-to-end encrypted messaging services.
- 8 July 2022: Ofcom publishes Online Safety Roadmap.
- 12 August 2022: IT specialists lack confidence that legislation compelling tech firms to tackle online harms will work as intended, with only a small minority believing ‘harmful but legal’ content can be effectively and proportionately policed by internet platforms.
- 28 September 2022: Online Safety Bill needs complete overhaul, say rights groups.
- 6 October 2022: Automatic scanning of messaging services for illegal content could lead to one billion false alarms each day in Europe.
- 14 October 2022: Ross Anderson argues in a rebuttal to GCHQ experts that using artificial intelligence to scan encrypted messaging services is the wrong approach to protecting children and preventing terrorism.
- 29 November 2022: Online Safety Bill’s ‘legal but harmful’ provision will be dropped by the UK government in favour of public risk assessments, tools to help users control the content they consume, and new criminal offences around self-harm.
- 9 December 2022: MPs and online safety experts have expressed concern about encryption-breaking measures contained in the Online Safety Bill as it returns to Parliament for the first time since its passage was paused in July.
- 19 January 2023: Under UK government amendments to the Online Safety Bill, video footage that shows people crossing the Channel in a ‘positive light’ could be added to a list of illegal content that all tech platforms must proactively prevent from reaching users, while senior managers could face further criminal sanctions.
- 27 January 2023: Lords question ‘extensive’ government online safety powers.
- 18 April 2023: Tech companies and NGOs urge rewrite of Online Safety Bill to protect encrypted comms.
- 12 July 2023: Ofcom’s online safety preparedness efforts hobbled by government.
- 20 July 2023: Online Safety Bill screening measures amount to ‘prior restraint’.
- 21 July 2023: The government has introduced an amendment to the Online Safety Bill that it says will require the regulator to conduct extra scrutiny before requiring technology companies to scan encrypted messages for illegal content.
- 1 September 2023: BCS, The Chartered Institute for IT, argues the government is seeking a technical fix to terrorism and child abuse without understanding the risks and implications.
- 20 September 2023: The home secretary is calling on Meta to halt its plans to introduce encrypted messaging services on Facebook and Instagram until the company puts measures in place to detect abuse.
- 20 September 2023: Parliament passes sweeping Online Safety Bill but tech companies still concerned over encryption.
- 27 October 2023: Tech firms cite risk to end-to-end encryption as Online Safety Bill gets royal assent.
- 9 May 2024: In the draft codes, Ofcom calls on technology firms to use ‘robust’ age-checking and content moderation systems to keep harmful material away from children online.
- 8 August 2024: Ofcom issues online safety warning to firms in wake of UK riots.
- 20 November 2024: Government issues strategic priorities for online safety regulator Ofcom.
- 17 December 2024: Ofcom publishes Illegal Harms Codes of Practice.
- 27 February 2025: Representatives from social media firms said that while the scale of their platforms makes content moderation difficult, they are effectively dealing with the vast majority of misinformation.
- 17 March 2025: Online Safety Act measures come into effect.
- 2 May 2025: MPs heard differing views from the online harms regulator and the UK government about whether and how the Online Safety Act obliges platforms to deal with disinformation.
- 11 July 2025: A report from the Commons Science, Innovation and Technology Committee outlines how the Online Safety Act fails to deal with the algorithmic amplification of ‘legal but harmful’ misinformation.