Clearview AI, the highly controversial startup at the centre of a row over the ethics of facial recognition technology, has prompted renewed and heated debate over its activities after admitting that its entire client list of more than 600 law enforcement agencies has been stolen.
Clearview holds in excess of three billion photos of people in its database. It has scraped these images from the public internet (including social media) without ever seeking explicit permission from any of the people pictured. Its modus operandi is to sell access to this database to law enforcement agencies, with the goal of making it easier for police to identify suspects using its machine learning and artificial intelligence (AI) algorithms to compare photos.
It claims: “Clearview’s technology has helped law enforcement track down hundreds of at-large criminals, including paedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.”
However well-intentioned, its behaviour has already prompted outrage. In January, The New York Times published an in-depth exposé of Clearview – which was founded by Hoan Ton-That, a Vietnamese-Australian college drop-out and former fashion model, and backed by, among others, Peter Thiel of Palantir.
Besides the scraping of photos without consent, the newspaper uncovered a worrying culture at Clearview. Among other things, The New York Times alleged that Ton-That had created fake identities to throw its reporter off the scent, and encouraged police officers to intimidate and harass them. He also sought funding from white supremacist businessman and failed US politician Paul Nehlen.
In the wake of this negative publicity, Clearview is already facing lawsuits over its collection and storage of biometric identifiers without consent, and digital platforms including Google and Twitter have sent it cease-and-desist orders over its scraping activities.
According to the Daily Beast, which was one of the first news outlets to report on the hack after receiving leaked communications informing customers of the breach, an intruder gained unauthorised access to Clearview data, including its customer list, the number of user accounts they had set up, and the number of searches they had run through its systems.
Clearview claimed there was no breach of its servers or compromise of its systems or network, and that the vulnerability has since been fixed.
In a statement sent to the news outlet, company attorney Tor Ekeland said: “Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”
Tim Mackey, principal security strategist in the cyber security research centre (CyRC) at Synopsys, said that in general there were two types of attacks – opportunistic and targeted – and it was clear which type the Clearview hack was.
“With the type of data and client base that Clearview AI possesses, criminal organisations will view compromise of Clearview AI’s systems as a priority. While their attorney rightly states that data breaches are a fact of life in modern society, the nature of Clearview AI’s business makes this type of attack particularly problematic,” said Mackey.
“Facial recognition systems have evolved to the point where they can rapidly identify an individual, but combining facial recognition data with data from other sources like social media enables a face to be placed in a context which, in turn, can enable detailed user profiling – all without explicit consent from the person whose face is being tracked,” he added. “There are obvious benefits for law enforcement seeking to identify missing persons to use such technologies for good, but with the good comes the bad.”
Forrester senior analyst Kjell Carlsson said there was a high likelihood that whoever was behind the hack would leak the client list, likely seeking to feed the public backlash against Clearview.
“It will likely bring the public awareness, and mistrust, of facial recognition to a new level. We can expect many knee-jerk reactions that try to bar law enforcement from using facial recognition. Much of this legislation will prove ineffective because it is unable to distinguish new facial recognition technologies from the earlier solutions that police have been using for decades, but it will be a deterrent for local governments to investigate and invest in these solutions,” he said.
Carlsson said it was unlikely that the incident would lead to a slowdown in the use of facial recognition and related technologies. He said the technology was too useful and convenient to deter widespread adoption, citing more mundane uses such as replacing swipe cards to enter office buildings, or even paying for things, which is becoming popular in China. “If there is one thing that Facebook has shown it is that customers are extremely willing to forgo privacy for convenience,” he said.
Nevertheless, Carlsson said it was still important to evaluate facial recognition technology on the basis of ethics and efficacy, and on both counts Clearview scored low, having very clearly chosen an unethical means of building its database.
“There will now be even more pressure on western tech giants not to invest in facial recognition, and it is wrong. It is far better that companies like Google, Microsoft, AWS [Amazon Web Services] and IBM offer facial recognition, because they have the capabilities to do it well and the reputational risk to ensure that it is done as ethically as possible versus companies like Clearview which can operate in the dark until a scandal brings it to public attention,” he said.
GDPR an issue in Europe
Toni Vitale, head of data protection at law practice JMW Solicitors, said that in Europe it was likely that Clearview would run afoul of the General Data Protection Regulation (GDPR).
“Clearview must have a lawful basis to conduct data scraping. There are six lawful bases available under GDPR: consent; contract with the data subject; compliance with a legal obligation; vital interest; public interest; legitimate interest,” Vitale told Computer Weekly in emailed comments.
“Of these, the only potentially fitting lawful ground is legitimate interest. Consent can quickly be discounted on the basis that most individuals will not have consented to having their data scraped, although some social media sites allow scraping but only with their written prior permission. Legitimate interest allows processing to be undertaken if it is necessary for the purposes of business, except where such interests are overridden by the interests or fundamental rights and freedoms of individuals, and there is some interplay here between the human right to privacy and data protection laws.
“It is a steep hill to climb to overcome the human right to privacy. Even if a company can establish a lawful basis, not all personal data can be scraped. Explicit consent is needed to scrape data such as race, religion, health data, political opinions, and so on.”
Clearview ‘code of conduct’
Following The New York Times’ January 2020 exposé, Clearview publicised a so-called “code of conduct”. It states that Clearview’s technology is only made available to law enforcement and select security professionals to use for investigative purposes, and contains only public information.
The company says it recognises that “powerful tools always have the potential to be abused” and claims to take this threat seriously, stating that its app has built-in safeguards to ensure that those with access only use it for its intended purpose.
The code of conduct also mandates that investigators use its technology safely and ethically, and must obtain permission from a supervisor within their organisation before creating an account and using it.
It claims that this code is strictly enforced, and that it suspends and terminates the accounts of users who violate it.