EU rolling out measures for online safety and AI liability

The European Council has approved the Digital Services Act to protect people’s rights online, while the European Commission has announced proposals to help those negatively affected by artificial intelligence claim compensation.


The European Union (EU) is rolling out a raft of measures designed to hold technology companies more accountable for how their products and services impact end-users, including new online safety and artificial intelligence (AI) liability rules.

Regarding online safety, the European Council approved the Digital Services Act (DSA) on 4 October. The regulation is designed to protect the digital space against the spread of illegal content and to ensure the protection of users’ fundamental rights.

Initially announced by the European Commission in December 2020 alongside the Digital Markets Act (DMA), the DSA’s approval by the Council means it is now law, although most of its provisions will not take effect until 15 months after it enters into force.

“The Digital Services Act is one of the EU’s most ground-breaking horizontal regulations and I am convinced it has the potential to become the ‘gold standard’ for other regulators in the world,” said Jozef Síkela, the Czech minister for industry and trade. “By setting new standards for a safer and more accountable online environment, the DSA marks the beginning of a new relationship between online platforms and users and regulators in the European Union and beyond.”

Under the DSA, providers of intermediary services – including social media, online marketplaces, very large online platforms (VLOPs) and very large online search engines (VLOSEs) – will be subject to greater transparency requirements and will be held accountable for their role in disseminating illegal and harmful content online.

For example, the DSA will prohibit platforms from targeting advertising at minors based on their personal data; impose limits on the use of sensitive personal data – including gender, race and religion – for targeted advertising; and oblige firms to react quickly to illegal content.

However, the rules have also been designed asymmetrically, meaning larger intermediary services with significant societal impact (such as VLOPs and VLOSEs) are subject to stricter obligations. These include offering users a content recommendation system that is not based on profiling, and requiring firms to analyse the systemic risks their services create.

In the UK, the Online Safety Bill is designed to achieve similar goals, but takes a broader approach by also requiring tech companies to deal with content that is “legal but harmful”.

Under the Bill’s duty of care, tech platforms that host user-generated content or allow people to communicate will be legally obliged to proactively identify, remove and limit the spread of both illegal and “legal but harmful” content, or face fines of up to 10% of their turnover from the online harms regulator, Ofcom.

In September 2022, a coalition of civil society groups wrote to new digital minister Michelle Donelan to lay out their concerns about a number of provisions in the Online Safety Bill.

These include: the provision to compel online companies to scan the content of users’ private messages; the extensive executive powers granted to the secretary of state to define what constitutes lawful speech; and the duty it would impose on tech platforms to deal with “legal but harmful” content, which the groups said “would impose a two-tier system for freedom of expression, with extra restrictions for categories of lawful speech, simply because they appear online”.

Regarding AI, the European Commission (EC) has separately adopted two proposals: one to modernise the existing rules on the strict liability of manufacturers for defective products, from smart technology to pharmaceuticals; and another to harmonise national AI liability rules throughout the bloc, making it easier for victims of AI-related damage to claim compensation.

“The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems, establishing broader protection for victims (be it individuals or businesses), and fostering the AI sector by increasing guarantees,” said the EC.

“It will harmonise certain rules for claims outside of the scope of the Product Liability Directive, in cases in which damage is caused due to wrongful behaviour. This covers, for example, breaches of privacy, or damages caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated against in a recruitment process involving AI technology.”

It added that the directive will simplify the legal process for victims affected by AI by introducing a “presumption of causality” which, in circumstances where a relevant fault has been established and a causal link to the AI’s performance seems reasonably likely, will help address the explainability difficulties victims face in understanding and navigating complex AI systems.

It will also introduce a right of access to evidence for victims, so that they can get information from companies and suppliers in cases involving high-risk AI systems.

“We want the AI technologies to thrive in the EU,” said Věra Jourová, EC vice-president for values and transparency. “For this to happen, people need to trust digital innovations. With today’s proposal on AI civil liability we give customers tools for remedies in case of damage caused by AI, so that they have the same level of protection as with traditional technologies and we ensure legal certainty for our internal market.”

The directive will complement and give teeth to the EU’s AI Act, which has previously been criticised by civil society organisations for its lack of redress mechanisms.

In November 2021, for example, 114 civil society organisations signed an open letter calling on European institutions to amend the Act by placing more obligations on users of high-risk AI systems to facilitate greater accountability; creating mandatory accessibility requirements so that those with disabilities are able to easily obtain information about AI systems; and prohibiting the use of any system that poses an unacceptable risk to fundamental rights.

To facilitate meaningful redress, they further recommended adding two new rights to the Act for individuals – to not be subject to AI systems that pose an unacceptable risk or do not comply with the Act, and to be provided with a clear and intelligible explanation for decisions taken with the assistance of AI systems.

Digital civil rights experts and organisations have also previously told Computer Weekly that the regulatory proposal is stacked in favour of organisations – both public and private – that develop and deploy AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress.

The EC’s AI liability proposal will now need to be adopted by the European Parliament and the Council.

It is proposed that, five years after the AI Liability Directive comes into force, the EC will assess the need for no-fault liability rules for AI-related claims.
