‘Significant gaps’ in UK AI regulation, says Ada Lovelace Institute

UK government’s plans to diffuse regulatory responsibility for AI among existing regulators will mean the tech is “only partially regulated”, while its data reforms will undercut already-limited existing protections, says Ada Lovelace Institute

The UK government’s “deregulatory” data reform proposals will undermine the safe development and deployment of artificial intelligence (AI) by making “an already-poor landscape of redress and accountability” even worse, the Ada Lovelace Institute has said.

Published on 29 March, the government’s AI whitepaper outlined its “adaptable” approach to regulating AI, which it claimed will drive responsible innovation while maintaining public trust in the technology.

As part of this proposed “pro-innovation” framework, the government said it would empower existing regulators – including the Information Commissioner’s Office (ICO), the Health and Safety Executive, Equality and Human Rights Commission (EHRC) and Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.

Expected to pass some time in Autumn 2023, the Data Protection and Digital Information (DPDI) Bill will also amend the UK’s implementation of the European Union’s General Data Protection Regulation (GDPR) and Law Enforcement Directive (LED) – changes that civil society groups have previously condemned as “a wholesale deregulation of the UK data protection framework”.

In a report analysing the UK’s proposals for AI regulation, however, the Ada Lovelace Institute found that, because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.

This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.

“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” it said.

It added that independent legal analysis conducted for the Institute by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK GDPR and the Equality Act often fail to protect people from harm or give them an effective route to redress.

“This enforcement gap frequently leaves individuals dependent on court action to enforce their rights, which is costly and time-consuming, and often not an option for the most vulnerable.”

Weakened data rights

On the DPDI bill, the Institute added that, by expanding the legal bases for data collection and processing, and removing requirements such as the obligation to carry out data protection impact assessments for high-risk processing, the legislation weakens current data rights even further.

“A particularly important safeguard in the context of AI is Article 22 of the UK GDPR, which currently prohibits organisations from making decisions about individuals with ‘legal or similarly significant’ effects based solely on automated processing, with a handful of exceptions,” it said.

“The bill removes the prohibition on many types of automated decision, instead requiring data controllers to have safeguards in place, such as measures to enable an individual to contest the decision – which is, in practice, a lower level of protection.”

Alex Lawrence-Archer, a solicitor at AWO, said: “For ordinary people to be effectively protected from online harms, we need regulation, strong regulators, rights to redress and realistic avenues for those rights to be enforced.

“Our legal analysis shows that there are significant gaps which mean AI harms may not be sufficiently prevented or addressed, even as the technology that threatens to cause them becomes increasingly ubiquitous.”

Recommendations

The Institute has therefore recommended that the government “rethink” elements of the DPDI bill that would undermine people’s safety, particularly those concerning the accountability framework it sets out.

It added that the government should also review the rights and protections provided by existing legislation and, where necessary, legislate to introduce new rights and protections; and produce a “consolidated statement” of what protections people should expect when using or interacting with AI systems.

It further recommended exploring the creation of an “AI ombudsman”, noting there is a need for some sort of redress or dispute resolution mechanism for individuals affected by AI, especially in sectors where no formal mechanisms currently exist.

“Adopting an ombudsman-style model could act as a complement to other central functions the government has set out, supporting individuals in resolving their complaints, directing them to appropriate regulators where this is not possible, and providing the government and regulators with important insights into the sorts of AI harms people are experiencing, and whether they are effectively securing redress,” it said.

Given the “uniquely personal” nature of biometric data, the Institute said evidence shows there is “no widespread public acceptance of, or support for, the use of biometrics” without safeguards that the existing legal framework does not currently provide, adding that biometrics is one of several areas where “new rights and protections” may be needed to effectively govern AI.

Both Parliament and civil society have repeatedly called for new legal frameworks to govern the use of biometrics, particularly by law enforcement bodies – including a House of Lords inquiry into police use of advanced algorithmic technologies; the UK’s former biometrics commissioner, Paul Wiles; an independent legal review by Matthew Ryder QC; the EHRC; and the House of Commons Science and Technology Committee, which called for a moratorium on live facial recognition as far back as July 2019.

However, the government maintains there is “already a comprehensive framework” in place.

Elsewhere, the report calls for greater urgency from government given the “significant harms associated with AI use today, many of which are felt disproportionately by the most marginalised”.

It added that the pace with which foundation AI models are being integrated into the economy also risks scaling up these harms.

“When foundation models are used as a base for a range of applications, any errors or issues at the foundation-model level may impact any applications built on top of or ‘fine-tuned’ from that model,” it said.

“This unchecked distribution of foundation models risks compounding the challenges of embedding the AI principles in the practices of organisations deploying and using AI. Timely action from the Foundation Model Taskforce will be necessary to ensure that, as the usage of foundation models grows, these cutting-edge technologies are considered trustworthy by businesses and the public.”

The Institute has therefore recommended introducing mandatory reporting requirements for foundation model developers; running pilot projects to develop better expertise and monitoring in government; and including a more diverse range of voices at the government’s upcoming AI Safety Summit in Autumn 2023.

It added that the government’s Foundation Model Taskforce – established in June 2023 with £100m funding to lead on AI safety – could play a role in running any pilot programmes, as well as in reviewing opportunities for and barriers to the enforcement of existing law.

Further recommendations made by the Institute include “significantly” increasing the funding available to regulators (in line with calls from the EHRC); clarifying the law around AI liability, so that legal and financial liability for AI risk is distributed proportionately along AI value chains; and considering public development of AI capabilities to steer the tech towards long-term public benefit.
