UK government publishes Online Safety Bill draft
Bill builds on previous commitments by the government, which has added new measures to uphold democracy and freedom of speech while making tech giants more accountable
The UK government has published a draft of its long-awaited Online Safety Bill, introducing a raft of new measures that it claims will safeguard freedom of expression online and increase the accountability of tech giants.
The government says the legislation will also help keep children safe and prevent some of the worst abuses online, including racist hate crimes.
Both Lords and MPs have previously expressed frustration about delays in the Online Safety Bill, the draft of which is being published two years after the release of the Online Harms whitepaper in April 2019.
The government gave its initial response to the whitepaper in February 2020, and its full response in December 2020.
The full response set out the general regulatory framework the government wants to take forward – such as establishing a statutory duty of care to the users of online companies, which will be legally obliged to identify, remove and limit the spread of illegal content – but the draft bill contains a number of new measures designed to further curb harmful practices online, promote accountability, and protect democratic debate.
It includes, for example, specific duties for “Category 1” companies – those with the largest online presence and high-risk features, which is likely to include Facebook, TikTok, Instagram and Twitter – to protect “democratically important” content, such as posts promoting or opposing specific political parties.
“Companies will also be forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation,” the government said in its announcement. “Policies to protect such content will need to be set out in clear and accessible terms and conditions, and firms will need to stick to them or face enforcement action from Ofcom.”
These companies will also need to conduct and publish up-to-date assessments of their impact on freedom of expression, and demonstrate that they have taken steps to mitigate any adverse effects caused by their platforms.
More generally, online companies will be required to take responsibility for tackling fraudulent user-generated content, such as “romance fraud” – whereby a victim is tricked into sending money or personal information to someone they believe is interested in a relationship – and posts promoting fake investment opportunities.
Although the government previously signalled in its full response that Ofcom would be given powers to issue General Data Protection Regulation (GDPR)-style fines to any company that fails to carry out its statutory duty of care, the draft bill includes a new criminal offence for senior managers as a deferred power.
However, while this power can be introduced down the line if tech companies do not improve their practices, it cannot be enforced until a review is conducted at least two years after the legislation has come into effect.
“Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world,” said digital secretary Oliver Dowden.
“We will protect children on the internet, crack down on racist abuse on social media and, through new measures to safeguard our liberties, create a truly democratic digital age.”
Home secretary Priti Patel added: “It’s time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties.”
The draft bill will be scrutinised by a joint committee of MPs before a final version is formally introduced to Parliament.
Fact-checking experts previously told a House of Lords committee in February 2021 that the Online Safety Bill should force internet companies to provide real-time information and updates about suspected disinformation, and further warned against an over-reliance on artificial intelligence algorithms to moderate content.
In response to the Queen’s Speech on 11 May, which said “government will lead the way in ensuring internet safety for all”, independent fact-checking charity Full Fact has called for the bill to include: a specific misinformation code of practice on actions to fulfil the new duty of care; measures to protect and enhance freedom of expression; and real-time transparency from internet companies to improve accountability.
“A year of conspiracy theories and false health advice has shown the threat that bad information poses to all our lives,” said Full Fact CEO Will Moy. “We cannot go on relying on the internet companies to make decisions on online misinformation without independent scrutiny and transparency.
“The government’s Online Safety Bill is a necessary and overdue response. But with fundamental rights at stake, it must be closely scrutinised by Parliament.
“The bill should include measures to safeguard UK democracy and counter dangerous false information, while protecting – and enhancing – freedom of expression.”
As it stands, the draft bill makes no new commitment to real-time transparency on disinformation from tech companies. Instead, companies must produce annual transparency reports on how they are dealing with a variety of issues, of which disinformation is only one.
These annual reports from companies will feed into Ofcom’s own transparency report, the first of which is due a year after an internet company makes its initial report, and which will be required at least once a year thereafter.
On 7 April 2021, the government announced that its Digital Markets Unit (DMU), which was set up to help scrutinise the dominance of tech giants in the UK economy and is based in the Competition and Markets Authority (CMA), had begun work on developing legally binding codes of conduct to prevent anti-competitive behaviour in digital markets.
Under a roadmap published by the Digital Regulation Cooperation Forum (DRCF) – which was formed in July 2020 to strengthen the working relationships between the regulators and establish a greater level of cooperation – the DMU will work alongside other UK regulators with remits over different aspects of the digital economy, including Ofcom and the Information Commissioner’s Office (ICO).