Online Safety Bill screening measures amount to ‘prior restraint’

The Open Rights Group is calling on Parliament to reform the Online Safety Bill, on the basis that its content-screening measures would amount to “prior restraint” on freedom of expression

Privacy campaigners are urging Parliament to address a clause in the government’s upcoming Online Safety Bill that would force tech companies to screen people’s communications for illegal content, after a legal opinion from Matrix Chambers identified “real and significant issues” around its lawfulness.

The opinion – which was delivered to Open Rights Group (ORG) by Dan Squires KC and Emma Foubister of Matrix Chambers – found that the bill’s duty to screen user content “amounts to prior restraint”, as platforms will be required to intercept and block online communications before they have even been posted.

It also highlighted a number of issues around the use of artificial intelligence (AI) algorithms and other automated software to detect illegal content, noting “it is difficult to see how the duty could be fulfilled without” them.

In its current form, clause 9 of the bill places a duty on online platforms to prevent users from “encountering” certain “illegal content”, which encompasses a wide range of material, from terrorism and child sexual abuse to any content deemed to be assisting illegal immigration or suicide.

To achieve this, the bill will require platforms to proactively screen and block their users’ content before it is uploaded, so that others are prevented from seeing material deemed illegal.

These requirements will be accompanied by new powers for online harms regulator Ofcom, which will be able to compel firms to install “proactive” client-side scanning (CSS) and other detection technologies so they can analyse content prior to it being uploaded. These measures are of particular concern for end-to-end encrypted (E2EE) services, whose providers have argued they will undermine the privacy and security of UK citizens’ communications.
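For illustration only, the sketch below shows the basic principle behind one form of client-side scanning: content is checked on the user’s device against a list of known-bad fingerprints before it is encrypted or sent. The bill does not mandate any specific technology, and the blocklist value and function names here are hypothetical; real CSS systems typically rely on perceptual hashing or machine-learning classifiers rather than exact hashes.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of known illegal content.
# Real client-side scanning schemes are far more complex; this only
# illustrates the principle of screening content before it leaves a device.
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_before_upload(content: bytes) -> bool:
    """Return True if content may be sent, False if it is blocked.

    The check runs on the user's device, before the content is
    encrypted or transmitted -- the interception of communications
    prior to posting that the legal opinion describes.
    """
    digest = hashlib.sha256(content).hexdigest()
    return digest not in BLOCKLIST

message = b"hello"
if screen_before_upload(message):
    print("send message")   # content passes the local screen
else:
    print("blocked")        # content never leaves the device
```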

“In our view, the bill, if enacted in its current form, will represent a sea change in the way public communication and debate are regulated in this country. It risks fundamental encroachments into the rights of freedom of expression to impart and receive information,” the lawyers wrote.

“The bill will require social media platforms, through AI or other software, to screen material before it is uploaded and to block content that could be illegal. That is prior restraint on freedom of expression and it will occur through the use by private companies of proprietorial, and no doubt secret, processes.

“It will give rise to interference with freedom of expression in ways that are unforeseeable and unpredictable and through entirely opaque processes, and in ways which risk discriminating against minority religious or racial groups.”

Monica Horten, policy manager for freedom of expression at ORG, said: “As well as being potentially unlawful, these proposals threaten the free speech of millions of people in the UK. It is yet another example of the government expecting Parliament to pass a law without filling in the detail.”

Computer Weekly contacted the Department for Science, Innovation and Technology about the legal opinion, and it said: “The Online Safety Bill is compatible with the European Convention on Human Rights. This groundbreaking piece of legislation will make the UK the safest place in the world to be online by protecting children from bullying, grooming and pornography.

“Under the bill, platforms will have to tackle illegal content on their services while protecting freedom of expression, as set out in a number of clear safeguards.”

Specific issues

The opinion outlined a number of specific issues with the bill, including that forcing tech companies to make their own determinations over what is and is not illegal content introduces the risk that “a significant number of lawful posts will be censored” as a result.

It added that this is especially problematic given the highly likely use of AI algorithms and automated software to filter content at scale.

Highlighting the example of existing immigration offences being incorporated into the list of illegal content that platforms must proactively police – meaning firms could be forced to remove videos of people crossing the English Channel “which show that activity in a positive light” – the opinion noted that it is difficult to predict how an AI algorithm would decide whether content is “positive” or not.

“What if the image was posted without any text? Removing these images prior to publication, without due process to decide whether they are illegal, may have a chilling effect on a debate of public importance relating to the treatment of refugees and illegal immigration,” it said, adding that the threat of large fines and criminal sanctions for managers over failures to effectively police illegal content will also incentivise companies to err on the side of caution and block more content than they otherwise would.

It further added that there is a significant and growing body of research demonstrating that AI and other automated technologies frequently contain “inherent biases”, creating a clear risk that “screening systems will disproportionately block content relating to or posted by minority ethnic or religious groups”.

This will be compounded by an overall lack of transparency and accountability, as the bill currently places no requirements on companies to provide users with a reason why their content was removed, or even to notify them that it has happened.

“By contrast to the draconian enforcement conditions to encourage prior restraint, the provisions which enable users to challenge over-zealous action…are limited,” it said, adding that while companies will have a duty to operate complaints procedures, there is no information in the bill about the timescales within which these complaints should be addressed, and no enforcement processes for failures to adequately deal with complaints.

A previous legal opinion commissioned by Index on Censorship from Matthew Ryder KC, published in November 2022, found that technical notices issued by Ofcom – which would require private messaging services to put in place “accredited” tech to filter the content of messages – amount to state-mandated surveillance on a mass scale.

“Ofcom will have a wider remit on mass surveillance powers of UK citizens than the UK’s spy agencies, such as GCHQ (under the Investigatory Powers Act 2016),” wrote Ryder.

The surveillance powers proposed by the Online Safety Bill were unlikely to be in accordance with the law and would be open to legal challenge, he said: “Currently, this level of state surveillance would only be possible under the Investigatory Powers Act if there is a threat to national security.”

A number of technology companies offering encrypted messaging services – including WhatsApp, Signal and Threema – have also previously urged the government to make urgent changes to the bill, arguing that it threatens to undermine the privacy of encrypted communications.

The Home Office maintains that the Online Safety Bill does not represent a ban on end-to-end encryption and would not require messaging services to weaken their encryption.
