Border technologies largely fail to respect human rights
AI-powered border technologies used to ‘manage’ migration frequently make the process more arbitrary, discriminatory and unjust, says human rights group
Already-vulnerable migrants are being used as “testing grounds” for a host of migration “management” and surveillance technologies, but rather than promoting fairness and dignity, these tools are often used to trample on human rights, claims a research paper.
Published on 9 November by European Digital Rights (EDRi), the paper, Technological testing grounds: Migration management experiments and reflections from the ground up, looked at the intersection between migration and technology, identifying a number of trends in how tech is developed and deployed for border enforcement.
For example, it found that many migrants will often encounter artificial intelligence (AI)-powered technologies before even coming into contact with a border. This includes iris scanning and other biometric checkpoints being used in refugee camps, as well as social media scraping and cellphone tracking to screen immigration applications, raising questions about consent and how that extremely sensitive data is safeguarded.
Many migrants worldwide are also increasingly subject to automated decision-making systems and predictive data analytics, both before and after border crossings. But these technologies, the report noted, are often rolled out without the system being validated and with little or no governance or oversight, resulting in decisions that are still “opaque, discretionary, and hard to understand”.
“These technological experiments to augment or replace human immigration officers can have drastic results – in the UK, 7,000 students were wrongfully deported because a faulty algorithm accused them of cheating in a language acquisition test,” said the paper. “In the US, the Immigration and Customs Enforcement Agency has worked with Palantir Technologies and other private companies to track and separate families and enforce deportations and detentions of people escaping violence in Central and Latin America.”
In early August, the UK Home Office was forced to scrap its “visa streaming” algorithm in response to the threat of legal action from the Joint Council for the Welfare of Immigrants (JCWI) and digital rights group Foxglove, which claimed the tool helped to create a hostile environment for migrants.
“This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software,” said Chai Patel, legal policy director at JCWI, at the time. “The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”
Autonomous surveillance drones, such as those used by the European border and coast guard agency Frontex in the Mediterranean and Aegean seas, are also increasingly used to facilitate interceptions and pushbacks of boats in defiance of international maritime law, forcing people to take ever more dangerous routes.
“These technologies can have drastic results,” said the report. “For example, border control policies that use new surveillance technologies along the US-Mexico border have actually doubled migrant deaths.”
EU knowledge of migrant boats
In his book Violent borders: Refugees and the right to move, Reece Jones, professor of geography at the University of Hawaii, said these kinds of operations and data-gathering practices “suggest that the European Union monitors the sea very carefully for vessels and is aware of most migrant boats travelling from the coast of Africa”.
He added: “However, because officials do not want to encourage additional migration by rescuing people outside of the territorial waters of EU states, they often do not intervene until the boats reach shore or are very clearly in distress.”
Jones further noted that globally, more than half of deaths at borders in the past decade have occurred at the edges of the EU, making it “by far the most dangerous border crossing in the world”.
According to the report, the wider societal effect of these technology use cases is that people on the move are presupposed to be criminals unless proven otherwise, leading to a massive militarisation of border management and enforcement.
“The opacity of border zones and transnational surveillance transform migration into a site of potential criminality that must be surveilled and managed to root out the ever-present spectre of terrorism and irregular migration,” it said.
Increasing use of public-private partnerships
The paper further found that although these innovations are often justified on the grounds that novel ways of managing migration are needed, those in charge of their development often fail to consider the profoundly detrimental impact the technologies can have on human rights and lives.
“The primary purpose of the technologies used in migration management is to track, identify and control those crossing borders,” it said. “The issues around emerging technologies in the management of migration are not just about the inherent use of technology, but rather about how it is used and by whom.”
The paper added that nation states and private companies are calling the shots and deciding which priorities matter, while the migrants most affected are routinely excluded from discussions about how technology should be used.
“The development and deployment of migration management is ultimately about decision-making by powerful actors on communities with few resources and mechanisms of redress,” it said.
“The unequal distribution of benefits from technological development privileges the private sector as the primary actor in charge of development, with states and governments wishing to control the flows of migrant populations benefiting from these technological experiments. Governments and large organisations are the primary agents that benefit from data collection and affected groups remain the subject, relegated to the margins.”
In particular, it noted that these partnerships have created a dual lack of accountability, pointing to how governments – often lacking the necessary technical skills in-house – are happy to relinquish their own liability and accountability for people on the move to the private sector, “where the legally enforceable rights that allow individuals to challenge governments may not exist” and where “powerful actors can easily hide behind intellectual property legislation or various other corporate shields to ‘launder’ their responsibility and create a vacuum of accountability”.
The paper added that technology and technological development “occurs in specific spaces that are not open to everyone and its benefits do not accrue equally”, which means it has clear potential to replicate existing power structures and, in the case of migration technology, “render certain communities as testing grounds for innovation” without their consent.
“The development of technology also reinforces power asymmetries between countries and influences our thinking around which countries can push for innovation, while other spaces like conflict zones and refugee camps become sites of experimentation,” it said. “The development of technology is not inherently democratic and issues of informed consent and right of refusal are particularly important to think about in humanitarian and forced migration contexts.”
To rein in the harmful effects of such border technologies, the report recommends that governments commit to abolishing automated migration management technologies unless and until independent and impartial human rights impact assessments are carried out, placing the burden of proof squarely on nation states and developers.
It further recommended freezing all efforts to procure, develop or adopt any new automated border technologies until existing systems fully comply with internationally protected fundamental human rights frameworks.
Governments should also commit to transparency and report publicly on what technology is being developed and used, for example in the form of a public registry, as well as create an independent body to oversee and review all use of existing and proposed automated technologies in migration management, it said.
“Civil society organisations, NGOs and international organisations working with people on the move must also examine their use of and participation in the development and deployment of migration management technology and must ensure that human rights, dignity and freedom from harms as a result of technological experimentation remain at the centre of discussion,” it said.
The research paper was authored by Petra Molnar, Mozilla fellow and associate director of the Refugee Law Lab, who conducted the research over the course of a year and spoke to more than 40 refugees, asylum seekers, migrants without status, and people on the move.
The publication coincides with the launch of Migration and Technology Monitor, a collective of journalists, filmmakers, academics, and communities working to interrogate technological experiments conducted on people crossing borders.