
Is the EU better equipped than the US to supervise the use of facial recognition?

Clearview AI could be a powerful tool for reinforcing national security, but the use of facial recognition technology carries many risks, and the EU might be better equipped than the US to deal with them

Is Big Brother watching us? For the past few weeks, Clearview AI has been attracting a lot of attention. The controversial startup is currently the target of a class action lawsuit alleging violations of privacy laws similar to those Facebook was accused of a few weeks ago.

This slightly dystopian facial recognition app lets users upload a photo and compare it against the app’s vast database, returning all the images in which the same person appears, along with a list of the websites on which those matching images were found. An investigation by the New York Times revealed that Clearview AI had been sold to nearly 600 US law enforcement agencies, as well as to private security firms, for use in investigatory work.
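At a technical level, a face-search system of this kind typically reduces every face image to a numerical “embedding” and compares the embedding of a query photo against a database of stored embeddings, returning the closest matches and the pages they came from. The short Python sketch below illustrates that general approach using a cosine-similarity search; it is a hypothetical illustration only, and the function names, embedding size and matching threshold are assumptions rather than anything Clearview AI has disclosed.

# Hypothetical sketch of a face-search pipeline: each face is represented as a
# fixed-length embedding vector, and a query photo is matched against stored
# embeddings by cosine similarity. Not Clearview AI's actual implementation;
# names, the 128-dimensional embeddings and the 0.75 threshold are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_face(query_embedding: np.ndarray,
                database: list[tuple[str, np.ndarray]],
                threshold: float = 0.75) -> list[tuple[str, float]]:
    """Return (source_url, score) pairs whose embeddings resemble the query."""
    matches = []
    for source_url, stored_embedding in database:
        score = cosine_similarity(query_embedding, stored_embedding)
        if score >= threshold:
            matches.append((source_url, score))
    # Strongest matches first
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Example with random vectors standing in for real face embeddings
rng = np.random.default_rng(0)
db = [(f"https://example.com/photo{i}", rng.normal(size=128)) for i in range(1000)]
query = db[42][1] + rng.normal(scale=0.05, size=128)  # a noisy copy of one entry
print(search_face(query, db)[:3])

In a real deployment, the embeddings would come from a trained face recognition model and the linear scan above would be replaced by an approximate nearest-neighbour index, but the matching principle is the same.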

Unsurprisingly, the app has started to attract significant criticism. Among its critics, Twitter has contested Clearview AI’s use of photos that users uploaded to the social media platform to feed its database.

Elizabeth Warren, the Democratic senator for Massachusetts, also wrote to the founder of Clearview AI with a letter setting out 14 questions about the app and demanding disclosure of the list of law enforcement agencies and companies that use its technology. The attorney general of New Jersey has banned the use of Clearview AI by police services in the state.

In terms of figures, the database contains more than three billion images, all collected from the internet, the majority of them from social media sites. The app is said to match images successfully in 75% of cases. Importantly, most of these images are obtained in violation of the terms and conditions of the social media sites concerned.

Moreover, most individuals whose photos end up in the Clearview AI database have not consented to such use of their image. There is currently no evidence to confirm that the results obtained through the app are reliable, and Clearview AI has not been formally authorised by any US state authority, despite being used by law enforcement agencies. In the light of this, the potential risks are high.

Clearview AI relies on facial recognition technology, which works by processing biometric data. The value and usefulness of such technology are undeniable: the authentication and identification of individuals by facial recognition can be applied in a wide variety of domains.

Facial recognition is already used to access administrative and commercial services such as banking, to unlock smartphones and to speed up passport control. Nonetheless, the risks associated with the use of facial recognition for illegal or generalised surveillance, and the potential hacking of biometric data in the event of a security failure, are significant. These risks lie at the root of the concerns that have been raised.

Hopes and worries

Clearview AI’s defining feature is its ability to identify an individual within a crowd. This capability has given rise to both hopes and worries.

Some see it as an unrivalled tool for reinforcing national security, while others see the generalisation of surveillance and the end of anonymity in public spaces. At the heart of the debate lie concerns about the right to a private life and the protection of personal data, as well as our relationship with technology.

Neither a strict prohibition nor the unregulated use of facial recognition for the identification of individuals is an appropriate solution. On this topic, France’s data protection authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), has neatly summarised what is essential to bear in mind: a new technology, however useful and powerful it may be, should not receive unconditional assent from public authorities.

In other words, political choices should not be dictated purely and simply by technical possibilities; public authorities are responsible for determining what is acceptable, legitimate and proportionate, and what is not, particularly where the protection of fundamental rights and freedoms is concerned.

In the EU, the development of apps such as Clearview AI is possible only if the provisions of the General Data Protection Regulation (GDPR) and Directive (EU) 2016/680 of 27 April 2016, also known as the “police and justice” directive, are respected. Because facial recognition technology uses biometric data, it triggers several of the rules on personal data set out in the GDPR; and because the data would be used by public authorities in the context of police activity, the police and justice directive applies as well.

Crucially, Article 9 of the GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, subject to certain limited exceptions. This prohibition in principle reflects the highly sensitive nature of this type of data.

The EU therefore seems better equipped than the US to supervise the use of facial recognition for the purpose of identifying individuals, because the technology is constrained by core principles protecting civil liberties, private life and personal data.

Unfortunately, this does not mean we are out of Clearview AI’s reach, especially as the likelihood of finding our photos in its database is high.
