
Police defend facial recognition target selection to Lords

Senior police officers confirm to Lords committee that facial recognition watchlist image selection is based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual

As UK police look to increase their adoption of live facial recognition, a House of Lords committee has heard that serious issues remain around the proportionality and necessity of the way forces are using the technology.

On 12 December 2023, the Lords Justice and Home Affairs Committee (JHAC) – which has launched a short follow-up inquiry into the use of artificial intelligence (AI) by UK police, this time looking specifically at live facial recognition (LFR) – heard from senior Metropolitan Police and South Wales Police officers about the improving accuracy of the technology, as well as how both forces are managing their deployments.  

Claiming there was a “very clear focus” on the most serious criminality, they also told the Lords about the operational benefits of LFR, which include the ability to find people they would otherwise be unable to locate, as well as its value as a preventative measure to deter criminal conduct.

At the same time, they confirmed that both forces use generic “crime categories” to determine targets for their live facial recognition deployments, bringing into question claims that their use of the technology is concentrated on specific offenders who present the greatest risk to society.

Academic Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School, challenged the proportionality and necessity of this approach during the evidence session, claiming the coercive power of the state means police must be able to justify each entry to the watchlists based on the specific circumstances involved, rather than their blanket inclusion via “crime types”.

The new inquiry follows a 10-month investigation into the use of advanced algorithmic technologies by UK police, including facial recognition and various crime “prediction” tools, which concluded with the JHAC describing the situation as “a new Wild West”, characterised by a lack of strategy, accountability and transparency from the top down.

In a report published in March 2022, the committee said: “The use of advanced technologies in the application of the law poses a real and current risk to human rights and to the rule of law.

“Unless this is acknowledged and addressed, the potential benefits of using advanced technologies may be outweighed by the harm that will occur and the distrust it will create.”

Throughout the inquiry, the JHAC heard from expert witnesses – including Yeung – that UK police are introducing new technologies with very little scrutiny or training, continuing to deploy new technologies without clear evidence about their efficacy or impacts, and have conflicting interests with their own tech suppliers.

In July 2022, however, the UK government largely rejected its findings and recommendations, claiming there was already “a comprehensive network of checks and balances”.

‘Clear focus on serious crime’

During the follow-up session, temporary deputy chief constable Mark Travis, the senior responsible officer (SRO) for facial recognition at South Wales Police, said the force had a “very clear focus” in deploying the technology to deal with “the most serious crime, the most serious vulnerability”.

Giving the example of large-scale events like football matches or concerts, he said the technology could be used “to identify people who may be coming to that venue with the intent of committing crime – that could be serious crimes such as terrorism, it could be crime against vulnerable people, against young girls and women.”

Travis added that the threshold for deployment had been kept deliberately high, with the decision to deploy ultimately taken by an officer with a rank of assistant chief constable or above, to ensure the benchmarks for necessity and proportionality are met in every instance.  

“We have deployed our facial recognition system 14 times in the last year, and … we’ve reviewed 819,943 people and had zero errors. It’s small in the number of deployments, it’s small in intrusion [to rights], and it’s high quality in terms of its accuracy”
Mark Travis, South Wales Police

It should be noted that before UK police can deploy facial recognition technology, they must ensure the deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate.

For example, the Met Police’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary, and not just desirable, to enable the Met to achieve its legitimate aim”.

Travis added that once a decision has been made to deploy LFR, the force has around 20 specialist officers trained on the equipment to determine whether or not the system is making accurate matches. If they determine a match is accurate, he said, the information is then passed to an officer outside the facial recognition van, who will then engage the member of the public.

“We have deployed our facial recognition system 14 times in the last year, and … we’ve reviewed 819,943 people and had zero errors,” he said. “It’s small in the number of deployments, it’s small in intrusion [to rights], and it’s high quality in terms of its accuracy.”

In April 2023, research commissioned by the two forces found “substantial improvement” in the accuracy of their systems when used in certain settings.

Speaking about her own force’s approach to deploying LFR, Met Police director of intelligence Lindsey Chiswick highlighted the Met’s use of the technology in Croydon on 7 December: “Croydon this year has the highest murder rate, it’s got the highest number of knife crime-related offences, and it’s got a really blossoming nighttime economy, and with that comes problems like violence against women and girls.”

She added that once this “intelligence case” has been established and the decision to deploy has been taken, a watchlist is then pulled together off the back of that intelligence picture.

“The watchlist is pulled together not based on an individual, but based on those crime types that I just talked about. It’s then taken to approval from an authorising officer. In the Met, the authorising officer is superintendent-level or above,” she said.

Chiswick added that seven people were arrested as a result of the 7 December deployment, including for rape and burglary, criminal damage, possession of Class A drugs, suspicion of fraud, failing to appear for a road traffic offence, and someone on recall to prison for robbery.

“So seven significant offences and people found who were wanted by the police, who we wouldn’t have otherwise been able to find without the technology,” she said.

She added that, in any given morning briefing, officers may see 100 images of wanted people the police are searching for. “There’s no way an officer could remember those individual faces. All this technology does is up the chances of plucking that face out of a crowd. What comes after that, at that point, is normal policing,” she said.

Chiswick revealed that over the course of the Met’s last 19 LFR deployments, there have been 26 arrests and two false alerts.

Picking up on Chiswick’s claim that once someone is identified by LFR “normal policing kicks in”, committee chair Baroness Hamwee asked whether the scale of the tech’s use would meet the tests for necessity and proportionality, given that, “by definition, it’s bigger than a couple of officers who happened to be crossing Oxford Circus”.

Chiswick said: “There’s a balance here between security and privacy. For me, I think it’s absolutely fair, and the majority of the public – when we look at public surveys, between 60 and 80% of the public – are supportive of law enforcement using the technology.”

However, when the Lords noted those were national surveys that might not reflect the views of different local communities towards policing, she added that “when you drill down into specific community groups, that support does start to drop” and that there would be community engagement pre-deployment in an attempt to quell any fears people might have.

In line with Chiswick’s previous appearance before the committee, in which she highlighted the technology’s “deterrence effect”, Travis also said LFR can “deter people coming to a location ... to cause harm to other people” and that it helps to create “a safe environment”.

He gave the example of a pop concert, where young people might be concerned about people with “an intent to young people” also attending, but where the use of LFR might deter such individuals from coming because the technology means they would be identified by police.

Delivering the promises?

According to Yeung, the technology needs to be assessed in two ways: first, whether the functional performance of the software works as claimed; and second, whether it delivers the promised benefits in real-world settings.

“What I want to focus on is the question of operational effectiveness … [and whether] it will actually deliver the promised benefit in your specific contextual circumstances,” she said. “This is where we need to keep in mind that there’s a world of difference between accurate functional performance of matching software in a stable laboratory setting, and then in a real-world setting.”

Yeung added: “While there is every reason to believe the software is getting better in terms of accuracy … there is still a massive operational challenge of converting a match alert into the lawful apprehension of a person who is wanted for genuine reasons that match the test of legality.”

To achieve this, she said officers on the ground need to be “capable of intervening in complex dynamic settings”, but warned that “the mere fact an alert has been generated does not in and of itself satisfy the common law test of reasonable suspicion”.

“So when a police officer stops a person in the street, they do so on the basis of voluntary cooperation of that person producing identification, because the mere fact of a facial recognition alert match is not enough, in my view, to satisfy the reasonable suspicion test.”

Pointing to a number of LFR-related arrests for shoplifting and minor drug offences during the Met’s early 2022 deployments, Yeung said: “There is a divergence between the claims that they only put pictures of those wanted for serious crimes on the watchlist, and the fact that in the Oxford Circus deployment alone, there were over 9,700 images.”

Responding to Yeung’s challenge on how the proportionality and necessity of including each image was decided, Chiswick said the watchlists in London are so large because there are “a lot of wanted offenders in London that we’re trying to find”.

She further explained how watchlists are selected based on the crime type categories linked to the images of people’s faces, rather than based on intelligence about specific people.

“In a liberal, democratic society, it’s essential that the coercive power of the state is used to justify every single decision about an individual. The fact that we’re categorising people but we’re not actually evaluating each specific person troubles me deeply”
Karen Yeung, Birmingham Law School

Travis also confirmed that watchlist images are selected based on categories: “It’s not an individual identification for a person for each deployment.”

Yeung pointed out that there “seems to be a bit of a mismatch” between claims that the watchlists are only populated by people wanted for serious crimes and the reality that people are selected based on the crime category attached to their image.

“In a liberal, democratic society, it’s essential that the coercive power of the state is used to justify every single decision about an individual,” she said. “The fact that we’re categorising people but we’re not actually evaluating each specific person troubles me deeply.”

Chiswick responded that whether or not something is “serious” depends on the context, and that, for example, retailers suffering from prolific shoplifting would be “serious for them”.

She added that each watchlist is deleted after each operation, and that the processing of the biometric information of people who are not matched is “fleeting, instantaneous”, as the image captured is automatically and immediately deleted.

Yeung concluded that there needs to be clear guidance on LFR, informed by human rights expertise, and called for a legislative framework to provide further clarity on how the technology can be deployed.

The day after LFR

A number of Lords also expressed concerns about the potential presented by LFR and other types of facial recognition down the line, noting it would be entirely possible to, for example, connect a network of cameras to the software so it can search through thousands of faces simultaneously across a city.

Yeung said this was already possible on a technical level, due to the pre-existing surveillance camera infrastructure in places like London, and that it would only require a high-quality internet connection to do so.

Echoing previous statements from the former biometrics commissioner, Fraser Sampson, Yeung added: “We are moving from the equivalent of line fishing to deep ocean trawling. It’s a very powerful technology. The capacity to scale up is readily malleable once the infrastructure is in place. You can see the attractions for control within that infrastructure.”

Chiswick said any future “dystopian” scenarios like this could be avoided by sticking closely to the principles of proportionality and necessity, noting that even if such a feat was technically possible, it would still have to meet this legal test.

She also noted a number of practical challenges with a network of LFR-enabled cameras, such as the high angles of existing CCTV cameras and the lack of manpower available to go through the volume of material that would be captured by such a system.

While he did not rule the possibility out, Travis added that any movement away from overt facial recognition deployments on top of vans would need to be considered very carefully, and that a level of transparency would need to be maintained in the relationship between police and the public.

The committee will now receive further written evidence before writing to the government about the findings of its short-term inquiry in the new year.

Omissions

During the session, there was no discussion of retrospective facial recognition (RFR), whereby still images or video are run through the recognition software after the fact in a covert setting, despite the massive expansion of its use since 2021.

Home Office data disclosed to The i newspaper and human rights group Liberty under freedom of information rules shows the number of RFR searches of the Police National Database (PND) carried out by forces last year reached a little over 85,000 – more than three times as many as in 2021. Figures for the first four months of 2023 suggest this year’s total is on course to exceed 130,000 – a further 52% annual increase.

The newspaper also reported that while 13 of the 45 UK territorial police forces denied having used RFR in 2022, the Home Office figures show they had carried out thousands of searches between them.

Separate data obtained by Liberty and The Ferret shows Police Scotland carried out more than 2,000 retrospective facial recognition searches in the first four months of 2023, and has tripled its use of this facial recognition technique over the past five years, from just under 1,300 searches in 2018 to nearly 4,000 in 2022.

There was also no discussion of the ongoing unlawful retention of millions of custody images that are used to populate the police facial recognition watchlists.

In 2012, a High Court ruling found the retention of custody images in the PND to be unlawful on the basis that information about unconvicted people was being treated in the same way as information about people who were ultimately convicted, and that the six-year retention period was disproportionate. Despite the 2012 ruling, millions of custody images are still being unlawfully retained.

Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the National Police Chiefs’ Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy and wider support and challenge in our use of these images for technologies such as facial recognition”.

In November 2023, the NPCC confirmed to Computer Weekly that it has launched a programme that (while not yet publicised) will seek to establish a management regime for custody images, alongside a review of all data currently held by police forces in the UK. This will be implemented over a two-year period.

“Custody images are one of the most valuable sources of intelligence for front-line officers and investigators, but policing needs to ensure transparency and legitimacy in the way we control the use of this important biometric information,” said an NPCC spokesperson.

“A national programme between policing and the Home Office has recently been launched [in October 2023] to ensure consistency and coordination across UK policing in how it retains, processes and then uses custody images, particularly in the use of facial recognition.

“Through the programme, we will agree and implement a robust management regime for custody images to ensure compliance to agreed policies and legislation. It is vital to public confidence that this programme is adopted nationally to make sure that we are holding data lawfully and ethically, both now and in the future.”
