
Tech can’t wait for regulation to protect children online

The general secretary of the UK's largest teachers' union explains why social media should be banned for under-16s

There is a familiar story that plays out every time another news report emerges of children being seriously harmed online. Parents are told to “take control”. Schools are asked to “do more”. Tech companies promise another round of tweaks. But this framing misses the real issue. The harm children experience on social media is not a failure of parenting or education. It is the outcome of commercial systems designed to maximise engagement at all costs.

If the tech sector genuinely prioritised child safety, we would not be facing the scale of harm that now confronts children and young people. What is happening online is not accidental, or the result of a few bad actors. It is the consequence of algorithmic recommender systems deliberately engineered to keep users scrolling. Systems optimised for profit do not suddenly behave differently because the user is a child.

This was laid bare by the findings of the Big Tech’s Little Victims Algorithm Experiment. The project, led by the National Education Union, created four fictional profiles of British 13-year-olds across TikTok, Snapchat, YouTube and Instagram to see what content children are served when they sign up for the first time. The results were shocking, but sadly not surprising to teachers. Within minutes, children were shown harmful and inappropriate content, including guns, self-harm, sexualised material and misogynistic narratives.

Harmful material in three minutes

Most alarmingly, the experiment found that children were shown a piece of concerning content for every minute spent scrolling. Harmful material appeared within just three minutes of logging on – and in some cases it was the very first thing served.

This matters because teachers are not debating online harms to children in theory - they are already dealing with the consequences. In classrooms, we see the impact of children being exposed to violent content, self-harm and suicide material, sexualised imagery, and extreme narratives pushed at scale.

One visible example is the rise of online misogyny - girls being targeted or harassed, and female staff facing open hostility. What starts on a feed becomes offline behaviour and, once embedded, becomes far harder for schools to unpick. As Louis Theroux’s recent documentary The Manosphere has brought into sharp focus, the scaling of misogynistic content, for example, is not incidental - it is by design.

So what needs to happen?

First, we need honesty about the limits of half measures. The Government has launched a national consultation on children's digital wellbeing. Ministers have also announced a six-week pilot involving 300 teenagers, in which families will trial different forms of social media restriction at home – including disabling social media apps entirely, imposing one-hour daily limits, or enforcing overnight curfews – with a control group continuing as normal, to assess the impact on children's sleep, wellbeing and school life.

This approach fundamentally misunderstands how social media platforms actually work. A partial ban that still leaves some children on social media is not a meaningful test of safety. Harmful content does not stay neatly contained on one screen. If even one child in a friendship group remains on a platform, others will still be exposed through shared videos, images and messages. When algorithms can push extreme material within minutes of account creation, tinkering with time limits or overnight blocks will not keep children safe.

Second, tech companies must be held accountable now, not later. If platforms know a user is a child – or cannot be sure they are not – the duty of care must be to prevent foreseeable harm by design, not to apologise after it happens.

Why social media for under-16s should be banned

This failure is why we are calling for a ban on social media access for under-16s. Of course, raising the age of access is not a silver bullet. It must be paired with guaranteed space in the curriculum for high-quality digital literacy, so young people develop the skills to navigate online life safely and critically.

The tech sector has had repeated warnings, mounting evidence and countless opportunities to act - and it has failed to do so. That is why Government action now matters. Raising the age of social media access to 16 is the only meaningful step that would reduce harm at scale – and every day of inaction leaves more children exposed to avoidable harm.

Read more on proposals for a UK social media ban

The UK's proposed social media ban explained - The UK government will use new legal powers to lay the groundwork for an under-16 social media ban after its consultation on children's digital wellbeing, but opponents warn the measures being considered will only treat the symptoms of the problem if they ignore the structural power of big tech

UK government consults on social media ban for under-16s - A UK government consultation launched today asks whether under-16s should be banned from social media, and age restrictions introduced for VPNs and chatbots

Governments urged to step up enforcement of big tech amid rush to ban social media for under-16s - The Council of Europe’s Commissioner for Human Rights says that European governments should consider better enforcement against big tech companies before banning children from social media
