The UK government has finally published its long-awaited draft Online Safety Bill. The bill follows years of thinking and debate about how we can make the UK the safest place to go online through the regulation of user-generated content posted online. The draft bill is extraordinarily complex and will require careful and detailed pre-legislative scrutiny to ensure it delivers on its objectives.
The government and the tech sector share a common goal for regulation to create more secure online spaces for users while enabling technological innovation and upholding fundamental human rights such as free speech.
Tech companies do not want harmful or illegal content on their platforms, and society wants to feel safer going about its daily online life.
However, the tensions caused by individual and often subjective experiences, coupled with the scale of content uploaded daily, mean we cannot reasonably expect to eliminate all bad behaviour online.
What we can do is lead the way in designing a world-leading, principles-based and proportionate approach that helps businesses understand what is expected of them and directs them to take the appropriate steps to protect society from harm, now and in the future.
The scope of the draft bill is broad and, according to estimates from the Department for Digital, Culture, Media and Sport, as many as 24,000 companies may fall within the scope of this legislation. These are companies of very different size, function and purpose, with different operating models and structures.
This bill is not just about regulating companies – it is about regulating people’s behaviours online.
Many of the questions surrounding the legislation remain interlinked with how we all use technology in our day-to-day lives. Technological innovation has allowed us to work, socialise, campaign and learn online, and with this benefit has come inevitable risk.
Precise legal drafting will be required to balance fundamental freedoms with the prevention of harm – particularly where content is harmful but not illegal. The test should be whether the bill makes it easier for companies and regulators to take decisions and act quickly and decisively to counter harmful content without undermining fundamental freedoms.
Legal but harmful
Companies desperately want clear legal definitions on the extraordinarily tricky question: what is legal but harmful content? Pushing this down the tracks and leaving it to secondary legislation delays a fundamental part of this regime, creating uncertainty and holding back tech companies from taking action to remove harmful content. We need to answer these questions about definitions up front in the process, so that we can properly understand the implications of this legislation.
Certainty is needed on where the thresholds lie between different categories of companies, to help them understand whether they are likely to move between categories if they grow in size or change functionality, and how this might affect their obligations. This will be vital for the many British companies that often scale quickly and need to understand, as they grow, how their responsibilities change and what action they need to take.
Finally, some assurance on how many codes of practice companies will be required to comply with will help businesses of all sizes and types begin to put systems in place to tackle online harms once the legislation is complete.
The tech sector and the government are aiming for the same outcome: making the UK the safest place to be online. To achieve this, Parliament needs to legislate, and the tech sector needs to put that legislation into action. If we work together, we can design a world-class regime to protect children and society online – in practice as well as on paper.