The use of artificial intelligence (AI) by UK police could undermine human rights and further exacerbate existing inequalities without sufficient safeguards, supervision and caution, a House of Lords inquiry has found.
Following a 10-month investigation into the use of advanced algorithmic technologies by UK police, including facial recognition and various crime “prediction” tools, the Lords Home Affairs and Justice Committee (HAJC) described the situation as “a new Wild West” characterised by a lack of strategy, accountability and transparency from the top down.
In a report published on 30 March 2022, the HAJC said: “The use of advanced technologies in the application of the law poses a real and current risk to human rights and to the rule of law. Unless this is acknowledged and addressed, the potential benefits of using advanced technologies may be outweighed by the harm that will occur and the distrust it will create.”
In the case of “predictive policing” technologies, the HAJC noted their tendency to produce a “vicious circle” and “entrench pre-existing patterns of discrimination” because they direct police patrols to low-income, already over-policed areas based on historic arrest data.
“Due to increased police presence, it is likely that a higher proportion of the crimes committed in those areas will be detected than in those areas which are not over-policed. The data will reflect this increased detection rate as an increased crime rate, which will be fed into the tool and embed itself into the next set of predictions,” it said.
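The feedback loop the committee describes can be sketched as a toy simulation. All numbers below (true offence counts, detection rates, the starting 60/40 split in recorded crime) are illustrative assumptions, not figures from the report; the point is only to show how a historic bias in arrest data survives every subsequent round of "predictions":

```python
# Toy model: two districts with IDENTICAL true crime rates, but district A
# starts with more recorded crime because it was historically over-policed.
TRUE_OFFENCES = 100      # offences actually committed per period, per district
BASE_DETECTION = 0.05    # detection probability contributed by one patrol unit
TOTAL_PATROLS = 10

def simulate(periods=20):
    recorded = {"A": 60, "B": 40}   # biased historic arrest data
    for _ in range(periods):
        total = recorded["A"] + recorded["B"]
        # The "prediction" tool sends patrols where recorded crime is highest.
        patrols = {d: TOTAL_PATROLS * recorded[d] / total for d in recorded}
        # More patrols -> more of the (identical) true crime is detected,
        # and those detections become next period's "crime rate".
        recorded = {d: TRUE_OFFENCES * min(1.0, BASE_DETECTION * patrols[d])
                    for d in recorded}
    return recorded

print(simulate())
# District A retains a permanently inflated recorded crime rate, even though
# both districts commit exactly the same number of offences each period.
```

Because patrols are allocated in proportion to recorded crime, and detections feed straight back into the record, the initial 60/40 bias never washes out, which is precisely the "vicious circle" the committee warns about.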
On facial recognition, the other major algorithmic technology being deployed by police, the report noted it could have a chilling effect on protest, undermine privacy, and lead to discriminatory outcomes.
“While we found much enthusiasm about the potential of advanced technologies in applying the law, we did not detect a corresponding commitment to any thorough evaluation of their efficacy,” said the HAJC report.
It added that, on top of there being “no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice sphere”, the vast majority of public bodies involved in the development and deployment of these technologies lacked the expertise and resources to carry out proper evaluations of new equipment.
“As a result, we risk deploying technologies which could be unreliable, disproportionate, or simply unsuitable for the task in hand,” said the HAJC, adding the system needed “urgent streamlining and reforms to governance” because “as it stands, users are in effect making it up as they go along”.
The committee’s conclusion was in line with comments from Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School, who told the HAJC in October 2021 that policing authorities had started using new technologies “just because we can…without clear evidence” about their efficacy or impacts.
HAJC chair Baroness Hamwee, summarising an inquiry that drew on 55 written evidence submissions and 20 witness interviews, said: “We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.”
The HAJC report makes a number of recommendations on how to address the concerns raised by its inquiry. These include the establishment of a single national body to set minimum scientific standards for the use of new technologies by law enforcement bodies, to certify every new technological solution against these standards, and to regularly audit their deployment.
This national body should also be established on an independent statutory basis, have its own budget and the power to implement moratoria.
Dubious procurement practices and transparency
Regarding the procurement of new technologies, the HAJC noted a range of “dubious selling practices” stemming from a conflict of interest between police forces, which are obliged under the Public Sector Equality Duty (PSED) to consider how their policies and practices could be discriminatory, and private sector suppliers, which often want to protect their intellectual property and trade secrets.
“We heard about companies refusing to engage constructively with customers such as police forces on confidentiality grounds. [The Birmingham Law School’s] Yeung was concerned that some technology providers may invoke intellectual property rights to make ‘empty promises’ on the representativeness of training data, hiding it from its customers, external reviewers and courts,” said the report.
“The Metropolitan Police Service also told us about ‘vendors being reluctant to share information, citing reasons of commercial confidentiality’.”
In August 2020, the use of live facial recognition (LFR) technology by South Wales Police (SWP) was deemed unlawful by the Court of Appeal, in part because the force did not comply with its PSED.
It was noted in the judgment that the manufacturer in that case – Japanese biometrics firm NEC – did not divulge details of its system to SWP, meaning the force could not fully assess the technology and its impacts.
“For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable, but in our view it does not enable a public authority to discharge its own, non-delegable, duty under section 149,” said the ruling.
To deal with these and other procurement issues, the HAJC recommended that, while forces should be free to procure any tech solutions certified by the national body, extra support should be provided so they can become “proficient customers” of new technologies.
“Pre-deployment certification could, in itself, reassure them about the quality of the products they are procuring. Enhanced procurement guidelines are also needed,” it said, adding local and regional ethics committees should also be established on a statutory basis to investigate whether any given technology’s proposed and actual uses are “legitimate, necessary and proportionate”.
On the transparency front, the HAJC noted that while there were currently “no systemic obligations” on law enforcement bodies to disclose information about their use of advanced technologies, a “duty of candour” should be established, alongside a public register of police algorithms, so that regulators and the general public alike can understand exactly how new tools are being deployed.
Explicit legislation needed
Speaking to Computer Weekly, the HAJC’s Hamwee said members of the committee were “bemused and anxious” when they began to understand the scope of how advanced technologies are deployed in the justice system, and were left with “a lot of concerns” about the implications for human rights and civil liberties.
“We couldn’t work out who was responsible for what – over 30 bodies (that we identified – and we may have missed some) with some sort of role suggested that if things went wrong, it would be almost impossible to hold anyone to account,” she said. “And if things went wrong, they could go very badly wrong – you could even be convicted and imprisoned on the basis of evidence which you don’t understand and cannot challenge.”
Hamwee added that while the committee recognised that AI could bring “considerable benefits”, for example in efficiency and new ways of working, final decisions must always be taken by a human being, and new legislation is necessary to control how these technologies are used by UK police.
“I doubt any committee member thinks new laws are the answer to everything, but we do need legislation – as the basis for regulation by a national body, with a register of algorithms used in relevant tools and certification of each tool,” she said. “Readers of Computer Weekly would not be deferential to technology, but to many people it’s often a matter of ‘the computer says so’. Strict standards will mean the public can trust how the police, in particular, use advanced technologies, as they are now and as they may be in the future.”
The HAJC therefore also recommended that “the government bring forward primary legislation which embodies general principles, and which is supported by detailed regulations setting minimum standards” because “this approach would strike the right balance between concerns that an overly prescriptive law could stifle innovation and the need to ensure safe and ethical use of technologies”.
Computer Weekly contacted policing minister Kit Malthouse for comment on the inquiry’s findings, but received no response.
Malthouse previously said during a webinar on the challenges and future of policing that the acquisition and use of digital technologies would be a major priority going forward, and told the HAJC in January 2022 that the use of new technologies by police should be tested in court rather than defined by new legislation, which he argued could “stifle innovation”.
This is in line with previous government claims about police technology. For example, in response to a July 2019 Science and Technology Committee report, which called for a moratorium on police use of live facial recognition technology until a proper legal framework was in place, the government claimed in March 2021 – after a two-year delay – that there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.
Paul Wiles, the former commissioner for the retention and use of biometric material, also told the Science and Technology Committee in July 2021 that while there was currently a “general legal framework” governing the use of biometric technologies, their pervasive nature and rapid proliferation meant a more explicit legal framework was needed.
In March 2022, the Strategic Review of Policing in England and Wales confirmed the central role technology would play in policing going forward, but also warned of the need for greater ethical scrutiny to ensure public trust.
Although the review focused on policing as a whole – noting the need for “root and branch reform” to address the current crisis in public confidence – a number of its 56 recommendations dealt specifically with the role of technology.
One of the review’s recommendations was for the Home Office to bring forward legislation to introduce a duty of candour to police forces.
Read more about police technology
- A coalition of civil society groups has called on European lawmakers to use the upcoming Artificial Intelligence Act as an opportunity to ban predictive policing systems.
- Civil society groups have called for a ban on the use of live facial recognition technology amid claims that the government and the police are introducing intrusive surveillance measures without parliamentary scrutiny.
- Police forces across England and Wales are being reminded not to overlook their data protection-related compliance responsibilities when making use of the Police Digital Service’s Amazon-powered cloud platform.