UN human rights chief calls for moratorium on AI technologies

High commissioner’s call for a moratorium on the use of AI systems that pose a serious risk to human rights is accompanied by a UN report on the negative human rights impacts associated with the technology

The United Nations’ (UN) high commissioner for human rights has called, as a matter of urgency, for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights.

Michelle Bachelet – a former president of Chile who has served as the UN’s high commissioner for human rights since September 2018 – said a moratorium should be put in place at least until adequate safeguards are implemented, and also called for an outright ban on AI applications that cannot be used in compliance with international human rights law.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times,” said Bachelet in a statement. “But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face.”

Bachelet’s comments coincide with the release of a report (designated A/HRC/48/31) by the UN Human Rights Office, which analyses how AI affects people’s rights to privacy, health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The report found that both states and businesses have often rushed to deploy AI systems and are largely failing to conduct proper due diligence on how these systems impact human rights.

“The aim of human rights due diligence processes is to identify, assess, prevent and mitigate adverse impacts on human rights that an entity may cause or to which it may contribute or be directly linked,” said the report, adding that due diligence should be conducted throughout the entire lifecycle of an AI system.

“Where due diligence processes reveal that a use of AI is incompatible with human rights, due to a lack of meaningful avenues to mitigate harms, this form of use should not be pursued further,” it said.

The report further noted that the data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant – presenting particularly acute risks for already marginalised groups – and is often shared, merged and analysed in opaque ways by both states and corporations.

As such, it said, dedicated attention is required to situations where there is “a close nexus” between a state and a technology company, both of which need to be more transparent about how they are developing and deploying AI.

“The state is an important economic actor that can shape how AI is developed and used, beyond the state’s role in legal and policy measures,” the UN report said. “Where states work with AI developers and service providers from the private sector, states should take additional steps to ensure that AI is not used towards ends that are incompatible with human rights.

“Where states operate as economic actors, they remain the primary duty bearer under international human rights law and must proactively meet their obligations. At the same time, businesses remain responsible for respecting human rights when collaborating with states and should seek ways to honour human rights when faced with state requirements that conflict with human rights law.”

It added that when states rely on businesses to deliver public goods or services, they must ensure oversight of the development and deployment process, which can be done by demanding and assessing information about the accuracy and risks of an AI application.

In the UK, for example, both the Metropolitan Police Service (MPS) and South Wales Police (SWP) use a facial-recognition system called NeoFace Live, which was developed by Japan’s NEC Corporation.

However, in August 2020, the Court of Appeal found SWP’s use of the technology unlawful – a decision that was partly based on the fact that the force did not comply with its public sector equality duty to consider how its policies and practices could be discriminatory.

The court ruling said: “For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable but, in our view, it does not enable a public authority to discharge its own, non-delegable, duty.”

The UN report added that the “intentional secrecy of government and private actors” is undermining public efforts to understand the effects of AI systems on human rights.

Commenting on the report’s findings, Bachelet said: “We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

The European Commission has already started grappling with AI regulation, publishing its proposed Artificial Intelligence Act (AIA) in April 2021.

However, digital civil rights experts and organisations told Computer Weekly that although the regulation is a step in the right direction, it fails to address the fundamental power imbalances between those who develop and deploy the technology, and those who are subject to it.

They claimed that, ultimately, the proposal will do little to mitigate the worst abuses of AI technology and will essentially act as a green light for a number of high-risk use cases, because of its emphasis on technical standards and risk mitigation over human rights.

In August 2021 – following Forbidden Stories and Amnesty International’s exposure of how the NSO Group’s Pegasus spyware was being used to conduct widespread surveillance of hundreds of mobile devices – a number of UN special rapporteurs called on all states to impose a global moratorium on the sale and transfer of “life-threatening” surveillance technologies.

They warned that it was “highly dangerous and irresponsible” to allow the surveillance technology sector to become a “human rights-free zone”, adding: “Such practices violate the rights to freedom of expression, privacy and liberty, possibly endanger the lives of hundreds of individuals, imperil media freedom, and undermine democracy, peace, security and international cooperation.”
