
Met police deploy facial-recognition technology in Oxford Circus

Police facial-recognition deployment results in three arrests, despite sustained calls from Parliament and civil society to limit the technology’s use until a specific legal framework for biometrics is in place

London police have revealed the results of their latest deployment of live facial-recognition (LFR) technology in Oxford Circus, which resulted in three arrests and roughly 15,600 people’s biometric information being scanned.

The Metropolitan Police Service (MPS) said its LFR deployment on Thursday 7 July outside Oxford Circus was part of a long-term operation to tackle serious and violent crime in the borough of Westminster.

Those arrested included a 28-year-old man wanted on a warrant for assault of an emergency worker; a 23-year-old woman wanted for possession with intent to supply Class A drugs; and a 29-year-old man wanted for possession with intent to supply Class A drugs and for failing to appear in court.

Those arrested were engaged and detained by officers following alerts from the vehicle-mounted LFR system, which enables police to identify people in real time by scanning their faces and matching them against a database of facial images, or “watchlist”, as they walk by.
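To illustrate the matching step in general terms – not the Met's actual system, whose algorithm, watchlist format and alert thresholds are not described in its public documents – the following Python sketch shows how a live face embedding might be compared against a watchlist of stored embeddings, with an alert raised only above a similarity threshold. All names, the 128-dimension embedding size and the 0.6 threshold are hypothetical, and the random vectors stand in for the output of a real face-embedding model.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two embedding vectors, in the range [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    # Return the best-scoring watchlist identity above the threshold,
    # or (None, None) if no entry scores highly enough (no alert)
    best_id, best_score = None, threshold
    for person_id, ref_embedding in watchlist.items():
        score = cosine_similarity(probe, ref_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_id else (None, None)

# Toy usage: random vectors stand in for face embeddings
rng = np.random.default_rng(0)
watchlist = {f"subject_{i}": rng.normal(size=128) for i in range(3)}
probe = watchlist["subject_1"] + rng.normal(scale=0.05, size=128)  # a near-match
print(match_against_watchlist(probe, watchlist))

In a deployed system the threshold choice drives the trade-off the article goes on to describe: a lower threshold produces more alerts, and with them more potential false matches for officers to act on.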

According to the post-deployment review document shared by the MPS, the deployment outside Oxford Circus – one of London’s busiest Tube stations – generated four match alerts, all of which it said were “true alerts”. It also estimates that the system processed the biometric information of around 15,600 people.

However, only three of the alerts led to police engaging, and subsequently arresting, people. Computer Weekly contacted the MPS for clarification about the fourth alert; the force said the LFR operators and engagement officers were unable to locate the individual within the crowd.

The last time police deployed LFR in Oxford Circus, on 28 January 2022 – the day after the UK government relaxed mask-wearing requirements – the system generated 11 match alerts, one of which it said was false, and scanned the biometric information of 12,120 people. This led to seven people being stopped by officers, and four subsequent arrests.

Commenting on the most recent deployment, Griff Ferris, a senior legal and policy officer at non-governmental organisation Fair Trials, who was present on the day, said: “The police’s operational use of facial-recognition surveillance at deployments across London over the past six years has resulted in countless people being misidentified, wrongfully stopped and searched, and even fingerprinted. It has also clearly been discriminatory, with black people often the subject of these misidentifications and stops.

“Despite this, the Metropolitan police, currently without a commissioner, in special measures, and perpetrators of repeated incidents evidencing institutional sexism and racism, are still trying to pretend this is a ‘trial’. Facial recognition is an authoritarian surveillance tool that perpetuates racist policing. It should never be used.”

In response to Computer Weekly’s questions about whether the MPS has recreated operational conditions in a controlled environment without the use of real-life custody images, it said: “The MPS has undertaken significant diligence in relation to the performance of its algorithm.” It added that part of this diligence is in continuing to test the technology in operational conditions.

“Alongside the operational deployment, the Met tested its facial-recognition algorithms with the National Physical Laboratory [NPL]. Volunteers of all ages and backgrounds walk past the facial recognition system… After this, scientific and technology experts at the NPL will review the data and produce a report on how the system works. We will make these findings public once the report has been completed,” it said.

In the “Understanding accuracy and bias” document on the MPS website, it added that algorithmic testing in controlled settings can only take the technology so far, and that “further controlled testing would not accurately reflect operational conditions, particularly the numbers of people who need to pass the LFR system in a way that is necessary to provide the Met with further assurance”.

Calls for new legislative framework for biometrics

In June 2022, the Ryder Review – an independent legal review on the use of biometric data and technologies, which primarily looked at its deployment by public authorities – found that the current legal framework governing these technologies is not fit for purpose, has not kept pace with technological advances, and does not make clear when and how biometrics can be used, or the processes that should be followed.

It also found that the current oversight arrangements are fragmented and confusing, and that the current legal position does not adequately protect individual rights or confront the very substantial invasions of personal privacy that the use of biometrics can cause.

“My independent legal review clearly shows that the current legal regime is fragmented, confused and failing to keep pace with technological advances. We urgently need an ambitious new legislative framework specific to biometrics,” said Matthew Ryder QC of Matrix Chambers, who conducted the review. “We must not allow the use of biometric data to proliferate under inadequate laws and insufficient regulation.”

Fraser Sampson, the UK’s current biometrics and surveillance camera commissioner, said in response to the Ryder Review: “If people are to have trust and confidence in the legitimate use of biometric technologies, the accountability framework needs to be comprehensive, consistent and coherent. And if we’re going to rely on the public’s implied consent, that framework will have to be much clearer.”


The lack of legislation surrounding facial recognition in particular has been a concern for a number of years. In July 2019, for example, the UK Parliament’s Science and Technology Committee published a report identifying the lack of a legal framework and calling for a moratorium on the technology’s use until one was in place.

More recently, in March 2022, the House of Lords Justice and Home Affairs Committee (JHAC) concluded an inquiry into the use of advanced algorithmic technologies by UK police, noting that new legislation would be needed to govern the police force’s general use of these technologies (including facial recognition), which it described as “a new Wild West”.

The government, however, has largely rejected the findings and recommendations of the inquiry, claiming there is already “a comprehensive network of checks and balances” in place.

While both the Ryder Review and JHAC suggested implementing moratoria on the use of LFR – at least until a new statutory framework and code of practice are in place – the government said in its response to the committee that it was “not persuaded by the suggestion”, adding: “Moratoriums are a resource heavy process which can create significant delays in the roll-out of new equipment.”

Asked by Computer Weekly whether the MPS would consider suspending its use of the technology, it cited this government response, adding: “The Met’s use of facial recognition has seen numerous individuals arrested now for violent and other serious offences. It is an operational tactic which helps keep Londoners safe, and reflects our obligations to Londoners to prevent and detect crime.”

Necessary and proportionate?

Before it can deploy facial-recognition technology, the MPS must meet a number of requirements related to necessity, proportionality and legality.

For example, the MPS’s legal mandate document – which sets out the complex patchwork of legislation the force claims allows it to deploy the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.

In response to questions about how the force decided the 7 July deployment was necessary, the MPS claimed: “The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents.”

In terms of the basis on which the deployment was deemed proportionate, it added: “The proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system.”

The LFR deployment, according to the MPS review document, contained 6,699 images in its watchlists, scanned 15,600 people’s information, and generated four alerts, leading to three arrests.

The justifications outlined to Computer Weekly by the MPS regarding necessity and proportionality are exactly the same as those provided after its last Oxford Circus LFR deployment in late January 2022.

The MPS’s Data Protection Impact Assessment (DPIA) also says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”.

In 2012, a High Court ruling found the Metropolitan Police’s retention of custody images – which are used as the primary source of watchlists – to be unlawful, with unconvicted people’s information being kept in the same way as that of people who were ultimately convicted. It also deemed the minimum six-year retention period disproportionate.

Addressing the Parliamentary Science and Technology Committee on 19 March 2019, then-biometrics commissioner Paul Wiles said there was “very poor understanding” of the retention period surrounding custody images across police forces in England and Wales.

He further noted that while both convicted and unconvicted people could apply to have their images removed, with the presumption being that police would do so if there was no good reason not to, there was “little evidence it was being carried out”.

“I’m not sure that the legal case [for retention] is strong enough, and I’m not sure that it would withstand a further court challenge,” he said.

Asked how it had resolved this issue of lawful retention, and whether it could guarantee every one of the 6,699 images in the 7 July watchlists were held lawfully, the MPS cited section 64A of the Police and Criminal Evidence Act 1984, which gives police the power to photograph people detained in custody and to retain that image.

It added that the custody images are also held in accordance with the Management of Police Information (MoPI) Authorised Professional Practice (APP) guidelines.

In July 2019, a report from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre – the first independent review of the Metropolitan Police’s LFR trials – highlighted a discernible “presumption to intervene” among police officers using the technology. In other words, officers tended to trust the system’s outcomes and engage the individuals it flagged as matching the watchlist, even when they did not.

On how it has resolved this issue, the MPS said it had implemented additional training for officers involved in facial-recognition operations.

“This input is given prior to every LFR deployment to ensure officers are aware of the current system’s capabilities. LFR is a tool that is used to help achieve the wider objectives of the policing operation, it does not replace human decision-making,” it said. “Officers are reminded during the training of the importance of making their own decisions on whether to engage with a member of the public or not.”
