
ICO warns against using biometrics for ‘emotional analysis’

ICO warning highlights risk of ‘systemic bias’ and discrimination associated with organisations using biometric data and technologies for emotion analysis

The Information Commissioner’s Office (ICO) is warning organisations against using biometric technologies to conduct emotion analysis, which it has described as “immature” and “pseudo-scientific”.

Emotion analysis technologies are often powered by artificial intelligence (AI) algorithms and rely on a wide range of personal and biometric data to perform their purported functions, including information about gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture.

Examples of biometric emotional analysis include financial organisations using voice and gait data for identification and security purposes; schools using students’ body, eye and head movements to register them for exams; and employers using emotion analysis to assess prospective employees during job interviews.

The ICO, however, has warned that the algorithms are currently unable to effectively detect emotional cues, which risks opening the door to systemic bias, inaccuracy and discrimination. This marks the first time the data protection regulator has issued a blanket warning about the ineffectiveness of a technology.

“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever,” said deputy commissioner Stephen Bonner.

“While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.

“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”

The ICO said it aims to publish formal guidance on the wider use of biometric technologies in spring 2023, which will look at a range of use cases, including facial recognition and emotional analysis.

It added that the use of biometric information is a particularly sensitive area, because the data “is unique to an individual and is difficult or impossible to change should it ever be lost, stolen or inappropriately used”.

Ongoing concerns

In June 2021, then-information commissioner Elizabeth Denham published an official Commissioner’s Opinion about the inappropriate and reckless use of live facial-recognition (LFR) technologies in public spaces, noting that none of the organisations investigated by her office up to that point were able to fully justify its use.

“When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” Denham wrote in an accompanying blog post, adding that although “it is not my role to endorse or ban a technology”, there is an opportunity to ensure its use does not expand without due regard for the law.

There has also been a massive expansion of workplace surveillance and monitoring software since the onset of the Covid-19 pandemic, which allows employers to see a range of information about their employees’ activities. This includes anything from keystrokes and mouse clicks to employees’ physical locations and use of the internet.

Using this and a variety of other information – including biometrics – the software can help enterprises conduct predictive and behavioural analytics, enabling managers to understand and track how productive employees are over time. It can also be used to feed algorithms that support human resources functions, including hiring and firing.

According to a recent survey by the union Prospect, which represents specialist technology workers, a major part of the problem is that workers themselves are being kept in the dark about how the surveillance software monitoring them works.

For example, only 11% of survey respondents said they were “very sure” what data their employer was collecting about them and why. Just over two in five were “somewhat” or “very” unsure what data their employer was gathering on them or how it was being used.

In March 2022, the Trades Union Congress (TUC) said the intrusive and increasing use of surveillance technology in the workplace was “spiralling out of control”, and pushed for workers to be consulted on the implementation of new technologies at work.

In June 2022, the Ryder Review – an independent legal review conducted by Matthew Ryder QC of Matrix Chambers – found that the current legal framework governing biometric technologies is not fit for purpose, has not kept pace with technological advances and does not make clear when and how biometrics can be used, or the processes that should be followed.

It also found that the current oversight arrangements are fragmented and confusing, and that the current legal position does not adequately protect individual rights or confront the very substantial invasions of personal privacy that the use of biometrics can cause.

While the review focuses primarily on the use of biometrics by public authorities, particularly by police forces, it also takes into account private sector uses of biometric data and technologies, such as in public-private partnerships and for workplace monitoring.

