
Test police algorithms in court rather than legislate, Lords told

Police algorithms and their impacts should be tested and challenged in court, rather than controlled by new laws, says policing minister

The UK’s policing minister Kit Malthouse has suggested that the use of advanced algorithmic technologies by police should be tested in court rather than defined by new legislation, which he argued could “stifle innovation”.

Addressing the House of Lords Home Affairs and Justice Committee (HAJC) on 12 January 2022, Malthouse said the use of algorithmic technologies by UK police was already controlled by a “web of legislation”, and that a “principles-based” framework was preferable to creating new legal rules because “there are always areas of nuance and circumstance which you can’t prescribe in law”.

The HAJC opened an inquiry into the use of advanced algorithmic tools by UK law enforcement in May 2021, with the stated task of examining the existing legal framework around their use, as well as any ethical issues raised by their deployments.

This includes the use of live facial-recognition (LFR) technology and other biometric technologies, as well as algorithmic crime “prediction” tools such as the Metropolitan Police Service’s (MPS) Gangs Matrix or Durham Constabulary’s Harm Assessment Risk Tool (Hart).

“Obviously all these techniques that are used, if they’re offered in evidence, have to be and will be subject to expert scrutiny and evaluation in the courtroom,” said Malthouse, adding that they would also have to be validated, with a human in the loop and testing by experts, before use.

Giving the example of transparency as a key principle, Malthouse said police were already being encouraged to share algorithms and datasets with independent experts for scrutiny, as part of the government’s commitments in its National AI Strategy.

He added that other aspects of policing, such as escalation in the use of force, are also not prescribed in law.

“There are principles set down about in what circumstances the police can use force and the same, I think, is true of the legislative framework around these areas,” he said, adding that existing legislation already sets out principles of “proportionality and reasonableness for the purposes of what they’re doing”.

Asked whether he had considered making transparency requirements on certain technologies hard law, rather than a matter of encouragement, Malthouse said “we’ll certainly have to see”, adding: “Mandating transparency across the board would be more of a principle thing than it would be a legislative thing.”

He further added: “The trouble is, once you get into legislation, you have to be quite prescriptive about in what circumstances, in what form and how. As we move into this rapidly advancing area of technological development, that to a certain extent might stifle innovation.”

Using facial-recognition tech as an example, Malthouse claimed its use by police was already subject to “quite a lot of testing and analysis”, including for racial bias, and pointed to the fact that use of the technology by South Wales Police (SWP) was found to be unlawful in court.

“All this stuff is testable in court,” he said. “As you will have seen, the live facial recognition South Wales Police used underwent judicial review, it was tested in court…and that’s how we test those uncertainties, by testing them in front of a judge.”

In response to the minister’s comments, Griff Ferris, legal and policy officer at non-governmental organisation Fair Trials – which campaigns for a fairer justice system globally – said new technologies and automated systems can have serious impacts, with many recorded instances of racial profiling and wrongful police action resulting from their use.

“Protecting people from these potential harms is far more important than innovation or commercial concerns,” he said.

“The government, and indeed Parliament, should legislate to provide proper legal safeguards in their use, or in the case of certain technologies, prohibit their use entirely, and not rely on people challenging them in court.

“Taking the government or police to court should be a last resort, not the sole way of holding them and their actions accountable, not least when litigation is extremely expensive and the majority of people will not be able to afford it.”

A conflict of interest between police and suppliers

According to the Court of Appeal judgment in the SWP case – which was handed down in August 2020 – the decision was made on the grounds that the force’s use of the technology was “not in accordance” with the claimant’s Article 8 privacy rights; that it did not conduct an appropriate Data Protection Impact Assessment (DPIA); and that it did not comply with its Public Sector Equality Duty (PSED) to consider how its policies and practices could be discriminatory.

In terms of the discriminatory impact of the technology, the ruling stated the “SWP have never sought to satisfy themselves, either directly or by way of independent verification, that the software program in this case does not have an unacceptable bias on grounds of race or sex”.

It added: “For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable but, in our view, it does not enable a public authority to discharge its own, non-delegable, duty under section 149.”

Addressing the HAJC in November 2021, the national policing chief scientific adviser at the National Police Chiefs’ Council (NPCC), Paul Taylor, highlighted the conflicting interests between police forces and their technology suppliers.

“To be able to sufficiently scrutinise these new technologies and all the permutations, to be able to 100% tell you what they are and aren’t doing, requires a level of resource that we simply do not have – the amount of testing and testing and testing one would need is just not there,” he said.

“If I turn around to industry and say, ‘I’m going to hold you responsible’, they will then want to do a level of testing that probably makes investment into that technology not viable anymore; not an interesting product for them to engage into.

“There’s a tension there [because] we don’t want to stifle the market by saying that you have to do all of this [testing], but equally, of course, we need them to do it, and so that’s a real challenge for us,” said Taylor.

He added that part of the problem is that police are often taking a fairly mature technology and trying to implement it in a policing context it was not explicitly designed for.

Malthouse’s view that it would be preferable to test new technologies through the courts also stands in contrast to that of former biometrics commissioner Paul Wiles, who told the House of Commons Science and Technology Committee in July 2021 that while there was currently a “general legal framework” governing the use of biometric technologies, their pervasive nature and rapid proliferation meant a more explicit legal framework was needed.

Wiles said the current framework governing their use had “not kept up with the development of new biometrics” and nor had “the government responded to judgments by both domestic courts and the European Court of Human Rights about the inadequacy of that current framework”.

During that same session, Malthouse commented on whether the government should legislate specifically on biometrics: “Obviously there is a framework at the moment, and that’s been adduced through the courts, but as technology advances we would like to get to a position where both the police and the public can be confident about the legislative architecture that enables the adoption of future technology.

“Whether that is required through legislation or not is a matter of discussion, but we’ve got a manifesto commitment, so no doubt we’ll be bringing forward plans before the next election.”

A questionable legal basis

Speaking to the HAJC in October 2021, Karen Yeung – an Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at Birmingham Law School – said a key issue with police deployments of new technologies is that authorities have started using them “just because we can … without clear evidence” about their efficacy or impacts.

On the trials of LFR tech conducted by the MPS specifically, Yeung said the force’s scientific methodology was “very unrigorous”, noting that because procedures were tweaked every time a trial was conducted, “we do not have a stable and rigorous set of data on the basis of these experiments”.

She added: “In those 11 trials, 500,000 faces were scanned to produce nine to 10 arrests, and many of those were individuals who were wanted for very trivial offences. All of this means the real-time location tracking of many, many hundreds of thousands of British people going about their lawful business, not bothering anyone.”
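
For scale, a back-of-the-envelope calculation (a sketch based solely on the figures Yeung cited, not on any published MPS evaluation) makes the implied arrest rate explicit:

```python
# Back-of-the-envelope check of the arrest rate implied by the figures
# Karen Yeung cited for the MPS live facial-recognition trials.
# Assumed inputs: 500,000 faces scanned across 11 trials, 9-10 arrests.

faces_scanned = 500_000
arrests_low, arrests_high = 9, 10

# Convert to a percentage of all faces scanned
rate_low = arrests_low / faces_scanned * 100
rate_high = arrests_high / faces_scanned * 100

print(f"Arrest rate: {rate_low:.4f}% to {rate_high:.4f}% of faces scanned")
# Prints: Arrest rate: 0.0018% to 0.0020% of faces scanned
```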

As with facial-recognition technology, Yeung said the development of crime prediction tools has been equally unrigorous, with historic arrest data being used as a proxy for who is likely to commit a crime.
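
A minimal illustration of that proxy problem, using hypothetical numbers rather than data from any real deployment, shows how a tool trained on arrests can mistake policing intensity for criminality:

```python
# Illustrative sketch only (hypothetical numbers, not any force's actual
# system): why arrest data is a proxy for past policing, not offending.
# Two areas have identical underlying offending, but area A is patrolled
# more heavily, so a larger share of its offences result in arrest.

true_offences = {"area_A": 100, "area_B": 100}    # identical offending
arrest_rates = {"area_A": 0.40, "area_B": 0.20}   # A is over-policed

# The "training data" a prediction tool actually sees: arrests
arrests = {area: true_offences[area] * arrest_rates[area]
           for area in true_offences}
print(arrests)  # {'area_A': 40.0, 'area_B': 20.0}

# A tool ranking areas by arrest counts flags area A as twice as risky,
# directing more patrols there and generating still more arrests: a
# feedback loop that reflects policing patterns rather than crime.
```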

In response to the Science and Technology Committee’s July 2019 report, which called for a moratorium on police use of LFR until a proper legal framework was in place, the government claimed in March 2021 – after a delay of nearly two years – that there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.

The government said this framework included police common law powers to prevent and detect crime, the Data Protection Act 2018 (DPA), the Human Rights Act 1998, the Equality Act 2010, the Police and Criminal Evidence Act 1984 (PACE), the Protection of Freedoms Act 2012 (POFA), and police forces’ own published policies.
