
Amnesty: AI surveillance risks ‘supercharging’ US deportations
Amnesty International says AI-driven platforms from Palantir and Babel Street are being used by US authorities to track migrants and revoke visas, raising fears of unlawful detentions and mass deportations
Artificial intelligence (AI)-powered surveillance tools are being deployed to automatically track migrants, refugees and asylum seekers in the US, raising serious human rights concerns, according to a report by Amnesty International.
Amnesty’s analysis of documents obtained from the Department of Homeland Security (DHS) highlights how two systems in particular – Babel X, provided by Babel Street, and Palantir’s Immigration OS – have automated monitoring and mass surveillance capabilities that are being used to underpin the government’s aggressive immigration enforcement operations.
The organisation claims the tools feed into the State Department’s AI-driven “Catch and Revoke” initiative, which combines social media monitoring, visa status tracking and automated threat assessments of foreign individuals on visas. The practice has already been criticised for violating the First Amendment rights of people living in the US.
Amnesty warns that the speed and scale at which these technologies can identify people and infer their behaviour could result in mass visa revocations and deportations.
“It is deeply concerning that the US government is deploying invasive AI-powered technologies within a context of a mass deportation agenda,” said Erika Guevara-Rosas, senior director for research, advocacy, policy and campaigns at Amnesty International.
“The coercive Catch and Revoke initiative, facilitated by AI technologies, risks supercharging arbitrary and unlawful visa revocations, detentions, deportations and violations of a slew of human rights.”
The tools
Babel X, a data-mining platform developed by Babel Street, has been used by US Customs and Border Protection (CBP) since at least 2019. It collects vast amounts of personal information, including names, email addresses, phone numbers, IP addresses, employment records and mobile advertising IDs that reveal device locations. The tool can also monitor social media posts.
Amnesty says this information is fed into AI systems that scan social media for “terrorism”-related content, which can then be used to decide whether an individual’s visa should be revoked. Once a visa is revoked, Immigration and Customs Enforcement (ICE) agents can be dispatched to deport the person in question.
Palantir’s Immigration Lifecycle Operating System (Immigration OS) was introduced following a $30m contract with ICE in April 2025. The system integrates datasets across agencies, enabling ICE to build electronic case files, link investigations and track personal information on immigrants. Its updated features include streamlining arrests based on ICE priorities, monitoring “self-deportations” in real time, and identifying priority deportation cases, particularly those involving visa overstayers.
According to Amnesty, the use of such tools has been critical in enabling US authorities to scale up deportations. However, the organisation warns they also increase the risk of unlawful actions by drawing on multiple public and private sources without adequate oversight.
The non-governmental organisation contacted both companies: Babel Street did not provide any comment, while Palantir stated that its product was not used to power the administration’s Catch and Revoke effort.
The report further notes that probabilistic systems such as these often rely on behavioural inferences, which can be discriminatory. For example, pro-Palestine content could be falsely categorised as antisemitic, amplifying existing biases.
“Algorithms are socially constructed, and our world is built on systemic racism and historical discrimination,” said Petra Molnar, a lawyer specialising in migration and human rights and director of the Refugee Law Lab at York University. “These tools are going to replicate biases already inherent to the immigration system, not to mention create new ones based on very problematic assumptions about human behaviour.”
Molnar stressed that there is an underlying layer of “systemic discrimination that undercuts all of this” based on the assumption that “people on the move are somehow a threat”.
“This is ultimately about dehumanisation. That is the central narrative that the Trump administration is pushing,” she said. “There has been an exponential increase in the type of technologies and surveillance mechanisms that are being increasingly weaponised towards people on the move and mobile communities.”
Amnesty also criticises Palantir and Babel Street for failing to carry out adequate human rights due diligence, arguing that companies are responsible for ensuring their technologies are not deployed in ways that violate human rights.
Molnar pointed to the Ruggie Principles, a UN framework that sets out corporate responsibilities in this area: “This is an independent standard for private companies. They have to adhere to international legal principles when it comes to the development and deployment of technology.”
For Molnar, the ideal solution would involve a “robust human-rights respecting framework”, including human rights and data impact assessments conducted throughout the lifecycle of a project. But she stressed the need for “public awareness of what these companies are doing” and “a divestment from certain companies”.
“There needs to be an open dialogue between people who actually develop the technology and the affected community, because there is this wall right now between people who develop the tech and the people who the tech is hurting,” she said.
“These are trends I’ve been seeing across the world. It’s not just in the United States, but I think the United States is the most recent manifestation.”
Computer Weekly contacted Palantir and Babel Street about the concerns raised by Amnesty’s report, and asked a number of further questions, including how the firms are working to reduce algorithmic bias, the measures they are taking to avoid negative human rights impacts with their deployments, and whether either company has consulted with affected migrant communities.
Neither had responded by the time of publication.
UK parallels
Similar patterns are emerging in the UK. Human rights campaigners at the Migrants’ Rights Network (MRN) have investigated the use of AI at the border and highlighted its growing role in surveillance technologies such as facial recognition.
“AI technologies are used under the guise of efficiency. It allows border immigration systems to become automated. It reduces the need for human intervention, for borders to be reliant on patrols or physical walls,” said an MRN representative.
The organisation argues that government reliance on private contractors risks creating and aggravating a “digital hostile environment”. But, they add, it is often difficult to obtain information about how these technologies are being used.
For example, when investigating the Home Office’s deployment of Anduril Maritime Sentry Towers on the south-east coast of England, researcher Samuel Storey had to file 27 separate Freedom of Information (FOI) requests. While the Home Office claimed the towers were intended to support “environmental protection”, Storey argues they are being used for surveillance of migrant crossings between the UK and France.
“The FOI system is an extension of state secrecy. It’s not really a tool for the freedom of information, but an extension of the state’s capacity to not divulge or disclose,” he said.
MRN has also raised questions about data access and privacy.
“It’s been incredibly difficult to find out, but that data will be stored somewhere, and we have a suspicion that it will be stored in one of these Amazon Web Services hubs, because they have huge contracts with the government. Personal data is one of the most valuable things that companies can have,” said the representative.
The concern, they added, is that if private companies store the data, those companies would have access to it, and additional entities, such as foreign governments, may technically be able to access it too.
Previous reporting by Computer Weekly warned that the UK’s new eVisa system could be used to track migrants and support immigration enforcement.
Read more about technology and migration
- UK’s error-prone eVisa system is ‘anxiety-inducing’: People experiencing technical errors with the Home Office’s electronic visa system explain the psychological toll of not being able to reliably prove their immigration status in the face of a hostile and unresponsive bureaucracy
- ICO investigates lawfulness of algorithms used in immigration enforcement: The UK information commissioner is investigating claims that the use of algorithms by the Home Office to make recommendations on migrants breaches privacy laws
- Data sharing for immigration raids ferments hostility to migrants: Data sharing between public and private bodies for the purposes of carrying out immigration raids helps to prop up the UK’s hostile environment by instilling an atmosphere of fear and deterring migrants from accessing public services