
Protecting children as they spend years in virtual worlds

To protect children online, we must now focus on pre-emptive and robust regulation around immersive technologies

Molly Russell was just 14 when she killed herself in 2017. It was ruled that Molly had died from “an act of self-harm while suffering from depression and the negative effects of online content”.

The inquest into Molly’s death has rightly sparked a huge push for tougher social media regulation. But to effectively protect children online, we must now focus on pre-emptive and robust regulation of immersive technologies – the next, and ever-growing, risk.

Risks in virtual and immersive spaces

It’s predicted that children growing up now will spend 10 years of their lives in the metaverse, exposing them to new and greater risks of trolling, inappropriate content and grooming. Unfortunately, as with all technology, adoption by bad actors can quickly turn positive innovations into threats, and several risks around virtual and immersive spaces deserve particular attention.

There is evidence of grooming and harassment in newly launched virtual spaces, with researchers identifying abuse occurring once every seven minutes in one virtual environment. Furthermore, the immersive nature of extended reality can make negative experiences even more traumatic.

Integration of virtual spaces with extended reality – virtual, augmented and mixed reality – creates new privacy concerns, with headsets and controllers able to record biographic, biometric and situational data simultaneously. Alongside a name or username, details of eye movements, posture and the room someone is in may all be collected. One estimate suggests that two million data points can be collected during a 20-minute virtual reality session.

An overarching challenge comes from the vision of a single, interconnected and “limitless” digital world in which a dominant virtual platform emerges and users can jump freely between different platforms and experiences. This poses several questions for law enforcement and regulation in terms of legal jurisdiction around advertising standards and safeguarding. It also poses a challenge for users, and their guardians, trying to limit exposure to harmful content and unwanted interaction.

The role of government

Platforms have been self-regulating for years – and it’s not effective. Molly’s inquest has highlighted just how broken this approach is. In New Zealand, tech giants agreed to “self-regulate” to reduce harmful online content – a move that critics said dodged the alternative of government regulation.

Regulation should be combined with policing and moderation in the metaverse. Data should be used to understand what content children are exposed to, the interactions they are having and the actions they are taking in these worlds, and to drive appropriate interventions.

The UK government has a duty to protect all users online, especially children. The UK’s Online Safety Bill is still in development, and while it applies to immersive technologies, including the metaverse, it currently focuses on published user-generated content rather than activity. The bill aims to reduce content risks – where a child is exposed to inappropriate or illegal content, such as websites advocating harmful or dangerous behaviours like self-harm, suicide and anorexia – but does not currently consider how to regulate real-time contact and conduct risks.

The increased adoption of immersive technologies is likely to see traditional published content on social media decline, with real-time interactions between users becoming more prominent. Children will engage live with individuals who may seek to persuade them to take part in unhealthy or dangerous behaviours, circumventing the need for content to be published in a regulated online environment.

Online Harms Safety Centre

Achieving the most effective national response to online harms requires a rethink of the way the government, law enforcement, technology companies and third sector organisations collaborate. One viable solution is the establishment of an Online Harms Safety Centre (OHSC) to orchestrate the collective skills and capacity of organisations across the response landscape, allowing them to play to their areas of expertise as part of a wider view of the end-to-end threat.

The government should create the OHSC, but it should operate independently, replicating the models used by the National Cyber Security Centre (NCSC) and the Centre for the Protection of National Infrastructure (CPNI). The centre would benefit from having industry and third sector organisations “inside the tent”, and would act as the central coordination entity for all activity across the online harms landscape, including preventing child sexual abuse and exploitation, extremism, intolerance, self-harm and suicide-related material.

The OHSC model would provide the coherence currently lacking, creating a unified enterprise that is greater than the sum of its parts. The OHSC would be quick to establish and adaptive, running on a lean staffing model of a central secretariat and experts seconded from across the threat landscape.

Enough is enough – safety must be explicitly built into the design of extended reality services, not retrofitted after harm or misuse has occurred.


Patrick Cronin is an online safety expert at PA Consulting.
