Tech giants Amazon, Microsoft and IBM all agreed to halt sales of their respective facial-recognition technologies to US law enforcement agencies in the second week of June 2020, despite privacy campaigners having raised concerns about the technology's use for several years. But why now?
In an open letter to the US Congress dated 8 June 2020, IBM CEO Arvind Krishna cited concerns about its take on the technology being used “for mass surveillance, racial profiling and violations of basic human rights and freedoms” as reasons for curbing sales of its “general purpose” facial-recognition software.
The company also confirmed to The Verge that IBM would cease any further research or development of the technology.
In a short, two-paragraph blog post announcing the decision on 10 June 2020, Amazon claimed it had been an advocate for stronger government regulation on the ethical use of facial recognition and that “in recent days, [US] Congress appears ready to take on this challenge”.
It added: “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help, if requested.”
Speaking at a Washington Post Live event on 11 June 2020, Microsoft president Brad Smith set out its reasons for doing so, and said: “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
Unlike its contemporaries, Google has been advocating for “cautious” use of the technology since the end of 2018.
“Facial recognition is a very sensitive technology, and that’s why we have taken a cautious approach and don’t offer it in a general purpose API [application programming interface] while we work through the policy and technical issues at stake,” said a Google spokesperson.
“So while we don’t support a ban on this technology, we do encourage strong guardrails – particularly for public facial recognition – through regulations and other means.”
The announcements came after several weeks of mass protests against the police murder of George Floyd, a 46-year-old African-American who was killed in Minneapolis during an arrest for allegedly using a counterfeit note on 25 May 2020.
The protests, which have now transcended national boundaries, have led to increased scrutiny of technology companies and their contracts with law enforcement, raising questions about their complicity in police brutality and wider institutional racism.
However, since the start of the pandemic, a slew of biometric companies from across the globe have updated their facial-recognition algorithms to identify people with hidden faces, while a number of notable suppliers said they will continue to serve US law enforcement despite increased market scrutiny.
This suggests the market is nowhere near slowing down, despite the enormous, if not new, ethical questions raised in the past few weeks about selling the technology to police forces.
‘Ethics-washing’ or genuine acts of solidarity?
Speaking on a facial-recognition panel at CogX, an annual global leadership summit focused on artificial intelligence (AI) and other emerging technologies, independent researcher and broadcaster Stephanie Hare said technology companies are always dropping unprofitable technologies from their product development plans, “but they don’t write to US Congress and make a political point about it – so that’s the substantive change that IBM has done”.
Speaking on the same panel at CogX, Pete Fussey, a sociology professor at the University of Essex who conducted the first independent study of the Metropolitan Police’s facial-recognition trials, added that while IBM’s motivation is unclear, the announcement was at least interesting because of how it changes the debate going forward.
“Who knows what the motivation is, it could be ethics-washing, whatever it is, but something that’s quite interesting about that [IBM] announcement today, is it’s harder to therefore argue that regulating facial recognition is somehow anti-innovation,” he said.
Decrying regulation as “anti-innovation” is a refrain campaigners and others commonly hear when seeking to regulate the tech industry. According to the Financial Times, a draft of Brussels’ AI plans from December warned that a ban on using facial recognition could stifle innovation in the sector.
“By its nature, such a ban would be a far-reaching measure that might hamper the development and uptake of this technology,” it said.
However, it turns out that IBM had already “removed its facial-recognition capabilities from its publicly distributed API” in September 2019, according to a paper prepared for the Artificial Intelligence, Ethics and Society (AIES) conference in February 2020, raising questions about the sincerity of IBM choosing this moment to publicly pivot.
Meanwhile, privacy campaigners have raised questions about whether the actions of Amazon and Microsoft go far enough.
On 10 June, the Electronic Frontier Foundation (EFF) published a blog post saying: “While we welcome Amazon’s half-step, we urge the company to finish the job. Like IBM, Amazon must permanently end its sale of this dangerous technology to police departments.
“In 2019, Microsoft stated that it had denied one California law enforcement agency use of its face recognition technology on body-worn cameras and car cameras, due to human rights concerns. The logical next step is clear – Microsoft should end the programme once and for all.”
Many campaigners, however, are unsatisfied the technology is being used at all, and want a permanent ban on its use.
“There should be a nation-wide ban on government use of face surveillance. Even if the technology were highly regulated, its use by the government would continue to exacerbate a policing crisis in this nation that disproportionately harms black Americans, immigrants, the unhoused, and other vulnerable populations,” said EFF.
“We agree that the government should act, and are glad that Amazon is giving them a year to do so, but the outcome must be an end to government use of this technology.”
Others, such as digital rights group Fight for the Future, were more critical, calling the move by Amazon “nothing more than a public relations stunt”.
“Amazon knows that facial-recognition software is dangerous. They know it’s the perfect tool for tyranny. They know it’s racist – and that, in the hands of police, it will simply exacerbate systemic discrimination in our criminal justice system,” said deputy director of Fight for the Future, Evan Greer.
“The last sentence of Amazon’s statement is telling. They ‘stand ready to help if requested’. They’ve been calling for the Federal government to ‘regulate’ facial recognition, because they want their corporate lawyers to help write the legislation, to ensure that it’s friendly to their surveillance capitalist business model.
“But it’s also a sign that facial recognition is increasingly politically toxic, which is a result of the incredible organising happening on the ground right now,” she added.
What about other policing technologies?
Many of the criticisms and concerns shared regarding the tech sector’s ties to law enforcement in recent weeks have been focused exclusively on facial recognition, obfuscating the negative outcomes other technologies can have when deployed by police.
IBM’s Krishna, in his open letter, even went as far as to say that national policy should be made to “encourage and advance the use of technology that brings greater transparency and accountability to policing, such as body cameras and modern data analytics techniques”.
While Amazon and Microsoft do not go this far, there was no mention of how other technologies can have similarly discriminatory effects.
In their book, Police: A Field Guide, which analyses the history and methods of modern policing, authors David Correia and Tyler Wall argue that such technologies are inherently biased towards the police perspective.
“Remember that body-worn cameras are tools organised, controlled and deployed by the police. How should they be used? When should they be used? Where should they be used? These are all questions answered exclusively by police,” they said.
“Any police reform demand that includes a call for police to wear body cameras is a call to invest total oversight authority of police with police.”
In November 2015, a trial of body-worn cameras conducted by the Metropolitan Police, alongside the Mayor’s Office for Police and Crime, the College of Policing and the Home Office, found the technology had little-to-no impact on several areas of policing.
The trial revealed the cameras had “no overall impact” on the “number or type of stop and searches”, “no effect” on the proportion of arrests for violent crime, and “no evidence” that the cameras changed the way officers dealt with either victims or suspects.
It added that while body-worn videos can also reduce the number of allegations against officers, this “did not reach statistical significance”.
It is also unclear how “modern data analytics techniques” could increase police transparency and accountability, as typically when police use data analytics it is for “predictive policing”, a technique used to identify potential criminal activity and patterns, either in individuals or geographical areas, depending on the model.
It should be noted that IBM has been developing crime prediction programmes and tools since the 1990s.
“At the heart of legal challenges to the police practice of stop and frisk, for example, is scepticism of police claims to prediction. In other words, it is the belief that it is racial profiling, and not knowledge of future crimes, that determines who police choose to stop and frisk,” said Correia and Wall.
“Predictive policing, however, provides seemingly objective data for police to engage in those same practices, but in a manner that appears free of racial profiling… so it shouldn’t be a surprise that predictive policing locates the violence of the future in the poor of the present.”
They add that crime rates and other criminal activity data reflect the already racialised patterns of policing, which creates a vicious cycle of suspicion and enforcement against black and brown minorities.
“Police focus their activities in predominantly black and brown neighbourhoods, which result in higher arrest rates compared to predominately white neighbourhoods. [This] reinforces the idea that black and brown neighbourhoods harbour criminal elements, which conflates blackness and criminality, [and] under CompStat [a data-driven police management technique] leads to even more intensified policing that results in arrest and incarceration,” they said.
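The self-reinforcing loop Correia and Wall describe can be illustrated with a deliberately simplified simulation (not drawn from the article; the neighbourhood names, numbers and "hotspot" allocation rule are all assumptions for the sake of the sketch). Two neighbourhoods have identical underlying offence rates, but a small historical skew in the arrest data sends all patrols to one of them, and only patrolled offences get recorded:

```python
# Toy feedback-loop sketch: patrols go to whichever neighbourhood the arrest
# data ranks highest ("hotspot" allocation), even though the true offence
# rates are identical. All figures here are invented for illustration.

true_rate = {"A": 0.05, "B": 0.05}   # identical underlying offending
arrests = {"A": 12, "B": 10}         # small historical skew in the data
PATROLS_PER_YEAR = 100

for year in range(5):
    # The "prediction" is just a ranking of past arrest counts
    hotspot = max(arrests, key=arrests.get)
    # Only offences in the patrolled neighbourhood are observed and recorded
    observed = round(PATROLS_PER_YEAR * true_rate[hotspot])
    arrests[hotspot] += observed
    print(year, arrests)

# The initial 12-vs-10 gap widens every year: neighbourhood A looks ever
# more "criminal" purely because that is where police were sent to look.
```

After five years the recorded figures are 37 arrests in A against 10 in B, despite identical behaviour in both, which is the "seemingly objective data" problem in miniature: the output of the model reflects where enforcement happened, not where offending happened.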
According to evidence submitted to the United Nations (UN) by the Equalities and Human Rights Commission (EHRC), the use of predictive policing can replicate and magnify “patterns of discrimination in policing, while lending legitimacy to biased processes”.
It added: “A reliance on ‘big data’ encompassing large amounts of personal information may also infringe on privacy rights and result in self-censorship, with a consequent chilling effect on freedom of expression and association.”
The function of policing
Alex Vitale, author of The End of Policing and a professor of sociology and coordinator of the Policing and Social Justice Project at Brooklyn College, has warned against simply enacting “procedural reform” of police institutions, arguing in the Guardian that it has not worked in Minneapolis, where George Floyd was killed.
“None of it worked. That’s because ‘procedural justice’ has nothing to say about the mission or function of policing. It assumes that the police are neutrally enforcing a set of laws that are automatically beneficial to everyone,” he said.
“Instead of questioning the validity of using police to wage an inherently racist war on drugs, advocates of ‘procedural justice’ politely suggest that police get anti-bias training, which they will happily deliver for no small fee.”
He argues the answer to solving the problems of modern policing is not spending more money on things such as training programmes, technology or oversight, but to “dramatically shrink” the functions of policing itself.
“We must demand that local politicians develop non-police solutions to the problems poor people face. We must invest in housing, employment and healthcare in ways that directly target the problems of public safety,” he said.
“Instead of criminalising homelessness, we need publicly financed supportive housing; instead of gang units, we need community-based anti-violence programmes, trauma services and jobs for young people; instead of school police, we need more counsellors, after-school programmes, and restorative justice programmes.”
This idea that public spending should be diverted from police to “community-led health and safety strategies” instead has been gaining traction online with calls to #DefundThePolice, a position which Vitale has long been a proponent of.
Writing in The End of Policing, Vitale said: “What we really need is to rethink the role of police in society. The origin and functions of the police are intimately tied to the management of inequalities of race and class. The suppression of workers and the tight surveillance and micromanagement of black and brown lives have always been at the centre of policing. Any police reform strategy that does not address this reality is doomed to fail.”
Read more about facial-recognition technology
- A research project being conducted by UK universities in collaboration with the Home Office and Metropolitan Police could produce facial-recognition systems that allow users of the technology to identify people with their faces covered.
- Collaboration between police forces and private entities on facial-recognition technology comes under scrutiny by the Home Office’s Biometrics and Forensics Ethics Group.
- Despite the continuing controversy around its use, the Metropolitan Police will be deploying live facial recognition across the capital.