Ofcom publishes Online Safety Roadmap

The roadmap sets out how the online harms regulator will approach implementing the UK’s online safety regime, and tells tech firms to start preparing for the new rules

Online harms regulator Ofcom has published an Online Safety Roadmap, provisionally setting out its plans to implement the UK’s forthcoming internet safety regime.

The Online Safety Bill – which has passed committee stage in the House of Commons and is subject to amendment as it passes through the rest of the parliamentary process – will impose a statutory “duty of care” on technology companies that host user-generated content or allow people to communicate, meaning they would be legally obliged to proactively identify, remove and limit the spread of both illegal and “legal but harmful” content, such as child sexual abuse, terrorism and suicide material.

Failure to do so could result in fines of up to 10% of their turnover, imposed by Ofcom, which was confirmed as the online harms regulator in December 2020.

The Bill has already been through a number of changes. When it was introduced in March 2022, for example, several criminal offences were added to make senior managers liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices for audits or inspections.

At the same time, the government announced it would significantly reduce the two-year grace period on criminal liability for tech company executives, meaning they could be prosecuted for failure to comply with information requests from Ofcom within two months of the Bill becoming law.

Ofcom’s roadmap sets out how the regulator will start to establish the new regime in the first 100 days after the Bill is passed, but is subject to change as the legislation evolves further.

The roadmap noted that, upon receiving its powers, the regulator will move quickly to publish a range of material to help companies comply with their new duties, including draft codes on illegal content harms; draft guidance on illegal content risk assessments, children’s access assessments, transparency reporting and enforcement guidelines; and advice to the government on categorisation thresholds.

Targeted engagement

Ofcom will also publish a consultation on how it will determine who pays fees for online safety regulation, as well as begin its targeted engagement with the highest-risk services.

“We will consult publicly on these documents before finalising them,” it said. “Services and other interested stakeholders should therefore be prepared to start engaging with our consultation on draft codes and risk assessment guidance in Spring 2023.

“Our current expectation is that the consultation will be open for three months. Services and stakeholders can respond to the consultation in this timeframe should they wish to do so. We will also have our information gathering powers and we may use these if needed to gather evidence for our work on implementing the regime.”

It added that the first illegal content codes are likely to be issued around mid-2024, and that they will come into force 21 days later: “Companies will be required to comply with the illegal content safety duties from that point and we will have the power to take enforcement action if necessary.”

Types of service

However, Ofcom further noted that while the Bill will apply to roughly 25,000 UK-based companies, it sets different requirements for different types of service.

Category 1, for example, will be reserved for the services with the highest-risk functionalities and the highest user-to-user reach, and comes with additional transparency requirements, as well as a duty to assess risks to adults of legal but harmful content.

Category 2a services, meanwhile, are search services with the highest reach, and will have transparency and fraudulent advertising requirements, while Category 2b services are user-to-user services with potentially risky functionalities, and will therefore have additional transparency requirements but no other additional duties.

Based on the government’s January 2022 impact assessment – in which it estimated that only around 30 to 40 services will meet the threshold to be assigned a category – Ofcom said in the roadmap that it anticipates most in-scope services will not fall into these special categories.

“Every in-scope user-to-user and search service must assess the risks of harm related to illegal content and take proportionate steps to mitigate those risks,” it said.

“All services likely to be accessed by children will have to assess risks of harm to children and take proportionate steps to mitigate those risks,” said Ofcom, adding that it recognises smaller services and startups do not have the resources to manage risk in the way the biggest platforms do.

“In many cases, they will be able to use less burdensome or costly approaches to compliance. The Bill is clear that proportionality is central to the regime; each service’s chosen approach should reflect its characteristics and the risks it faces. The Bill does not necessarily require that services are able to stop all instances of harmful content or assess every item of content for their potential to cause harm – again, the duties on services are limited by what is proportionate and technically feasible.”

On how companies should deal with “legal but harmful content”, which has been a controversial aspect of the Bill, the roadmap said “services can choose whether to host content that is legal but harmful to adults, and Ofcom cannot compel them to remove it.

“Category 1 firms must assess risks associated with certain types of legal content that may be harmful to adults, have clear terms of service explaining how they handle it, and apply those terms consistently. They must also provide ‘user empowerment’ tools to enable users to reduce their likelihood of encountering this content. This does not require services to block or remove any legal content unless they choose to do so under their terms of service.”

On 6 July 2022 – the same day the roadmap was released – home secretary Priti Patel published an amendment to the Bill that would give Ofcom powers to require tech companies to develop or roll out new technologies to detect harmful content on their platforms.

The amendment would require technology companies to use their “best endeavours” to identify and prevent people from seeing child sexual abuse material posted publicly or sent privately, placing pressure on tech companies over end-to-end encrypted messaging services.

Ministers argue that end-to-end encryption makes it difficult for technology companies to see what is being posted on messaging services, although tech companies have argued that there are other ways to police child sexual abuse. “Tech firms have a responsibility not to provide safe spaces for horrendous images of child abuse to be shared online,” said digital minister Nadine Dorries. “Nor should they blind themselves to these awful crimes happening on their sites.”

Critics, however, say the technology could be subject to “scope creep” once installed on phones and computers, and could be used to monitor other types of message content, potentially opening up backdoor access to encrypted services.

“I hope Parliament has a robust and detailed debate as to whether forcing what some have called ‘bugs in your pocket’ – breaking end-to-end encryption (unsurprisingly, others argue it doesn’t) to scan your private communications – is a necessary and proportionate approach,” said technology lawyer Neil Brown.
