The increasing use of live facial recognition (LFR) cameras to automatically collect biometric data points to an Orwellian future, one in which individual privacy is steadily usurped.
While not necessarily unlawful - the Information Commissioner’s Office (ICO) recently said it was satisfied with the LFR system used by security company Facewatch - there are genuine concerns, and the independent regulator is far from giving a green light to the blanket use of LFR technology.
In April 2023, a cross-party collection of almost 50 parliamentarians wrote to Mike Ashley’s Frasers Group to condemn the use of LFR cameras in the group’s stores, and called for the group to end the deployment of LFR cameras.
What are the issues?
The deployment of LFR cameras raises a myriad of deeply troubling issues, many of which the parliamentarians address in their letter to Frasers Group. What is particularly alarming is the risk of people being subjected to wrongful, automated decisions. Technology is not without its flaws, and the automatic collection of biometric data at speed and scale is highly likely to produce a significant number of errors.
One flaw that is especially egregious is algorithmic bias. In 2019, National Institute of Standards and Technology researchers studied 189 facial recognition algorithms (representing the majority of the industry at the time), and found that most facial recognition algorithms exhibit a form of bias.
According to the researchers, facial recognition technologies falsely identified black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men, thus making black women particularly vulnerable to algorithmic bias.
Similarly, in 2021, the ICO published its opinion on the use of LFR technology in public places. The opinion identified a number of key data protection issues, including, but not limited to, the potential for bias and discrimination.
On that matter, the ICO commented, “several technical studies have indicated that [LFR] works with less precision for some demographic groups, including women, minority ethnic groups and potentially disabled people. Error rates in [LFR] can vary depending on demographic characteristics such as age, sex, race and ethnicity. These issues often arise from design flaws or deficiencies in training data and could lead to bias or discriminatory outcomes”.
More recently, the ICO said that it would “monitor the evolution of live facial recognition technology to ensure its use remains lawful, transparent and proportionate”.
Those who seek to deploy LFR cameras in public places will claim that they are ensuring safety and preventing crime. In a retail store for example, LFR cameras will scan the faces of every shopper and check them against a database of suspected thieves.
Should there be a match, the system will, at least in theory, alert staff and/or security, who will then either closely monitor the person or remove them from the store.
However, there is a growing body of evidence that LFR technologies are likely to produce a large number of errors when processing such a large volume of data. Those errors could go on to have a damaging impact on people’s ordinary lives.
Legal framework and regulatory direction
LFR technology involves the processing of personal and biometric data, and sometimes special category data. Its use is regulated principally by the UK General Data Protection Regulation (UKGDPR) and the Data Protection Act 2018 (DPA), legislation with which controllers must comply before LFR technology can be deployed.
However, these laws in themselves are not sufficient as guardrails to govern this nascent technology’s use.
The UKGDPR and DPA are silent on the technical effectiveness of LFR technologies that controllers use. For example, the ICO’s opinion in 2021 recommended, amongst other things, that “precision” should be used by a controller as a measure of successful deployment.
“Precision” here refers to the percentage of positive matches returned by the system that are in fact correct (for example, if the LFR technology correctly matches 1 out of 10 faces it flags against a watchlist, its precision rate is 10%). High precision is imperative to ensuring that people are not subject to detriment through incorrect identification; however, the current legal framework does not stipulate a specific threshold.
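To make the metric concrete, the calculation above can be sketched in a few lines of Python. The function name and the figures are illustrative only, mirroring the 1-in-10 example, and are not drawn from any real LFR deployment.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Share of watchlist alerts that were correct matches."""
    total_alerts = true_positives + false_positives
    if total_alerts == 0:
        # No alerts raised: precision is undefined; treat as zero here.
        return 0.0
    return true_positives / total_alerts

# The example above: 1 correct match out of 10 alerts raised.
rate = precision(true_positives=1, false_positives=9)
print(f"Precision: {rate:.0%}")  # Precision: 10%
```

Note that precision says nothing about the people the system fails to flag at all; it measures only how often an alert, once raised, is correct, which is why it is the relevant figure when assessing wrongful interventions against shoppers.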
This lacuna is problematic, to say the least. In practice, controllers are not obliged to establish precision thresholds in their data protection impact assessments (DPIAs), meaning nothing stops a controller from using LFR technology with abysmal precision rates so long as the relevant provisions of the UKGDPR and DPA are complied with.
Clearly, regulators need to move fast in putting in place rules concerning the accuracy of these camera systems – before more firms follow the lead of Frasers Group. On 8 March 2023, the UK government introduced the Data Protection and Digital Information (No 2) Bill (the Bill), which despite being hailed by Data Minister Julia Lopez as “modern laws for a data-driven era”, omits any specific reference to LFR technology.
Failure to address the pitfalls, inaccuracies and uncertainties surrounding LFR technology as soon as practicably possible – and certainly before it becomes ubiquitous in public places – may leave the UK in a legal quagmire unless specific legislation is enacted to regulate its use.
If LFR technology is to become the norm in public places, which by all indications it may, steps need to be taken to ensure accountable and proportionate deployment. The regulatory steps required will have to be decided by parliament, and these measures will need to carefully balance the efficacy of this technology against people’s fundamental rights to privacy and fair treatment.
The use of LFR technology in public places currently puts the onus on the data controller to conduct a DPIA. However, under the Bill, the notion of DPIAs will be replaced with an assessment of high risk processing (AHRP), which is much narrower in scope.
AHRPs have three basic objectives: they must summarise the purpose of processing, assess the necessity of the processing and the risks that apply to individuals, and detail how the controller aims to mitigate any such risks.
The ICO has already noted that, in its work reviewing DPIAs, it has seen a lack of due diligence from controllers, so work must be done to ensure the same cannot be said of AHRPs. It would be worthwhile for the ICO to clarify for controllers what needs to be made available for further assessment when they deploy LFR technologies, and how they can ensure their AI models will not unfairly discriminate against those most at risk.
It would also be prudent for the ICO to issue guidance setting minimum accuracy thresholds. Where controllers cannot meet the set minimum, there should be self-reporting requirements so that the ICO can investigate further. A regulation along these lines would help ensure the technical effectiveness of LFR technologies.
The ICO will need to continue to investigate and advise on the myriad of issues that permeate LFR technologies. Failure to take proactive steps in, amongst other things, conducting audits of LFR systems that are already in operation and assessing DPIAs (soon to be AHRPs) which identify high-risk processing, could have dire far-reaching consequences.
Unless this issue is seriously considered by parliament and steps are taken to enact specific legislation, or the regulator seeks to impose higher standards, we could well bear witness to a stream of particularly egregious stories of how LFR technology has negatively impacted individuals and their communities.
Read more about facial recognition
- Home Office pushes for more police facial-recognition deployments: An independent report commissioned by the biometrics commissioner of England and Wales reveals that the UK policing minister is pushing for wider adoption of facial-recognition technology by police, and further criticises the government’s proposed changes to surveillance oversight.
- UK police double down on ‘improved’ facial recognition: The Met and South Wales Police have doubled down on their use of facial recognition technology after research found improved accuracy in their algorithms when using certain settings, but civil society groups maintain that the tech will still be used in a discriminatory fashion.
- Met Police director of intelligence defends facial recognition: The Met Police’s director of intelligence has appeared before MPs to make the case for its continuing use of facial-recognition technology, following announcements from the force and the Home Office that they intend to press on with its adoption.