All 193 member states of the United Nations Educational, Scientific and Cultural Organisation (Unesco) have unanimously adopted a series of recommendations on ethical artificial intelligence (AI), which aim to realise the advantages of the technology while reducing the human rights risks associated with its use.
The recommendations – adopted by all member states including the UK and China on 25 November 2021 – address issues around transparency, accountability, surveillance, data protection, the environment, social scoring and more, and note the need for governments and tech firms to build AI technologies that protect and promote human rights and fundamental freedoms.
The first draft of the recommendation was published in May 2020, and was produced by a multidisciplinary unit of 24 AI specialists known as the Ad Hoc Expert Group (AHEG), which was formed two months earlier with the specific task of developing a framework that takes into account the wide-ranging impacts of AI.
The draft was then opened for public consultation for three months, with the final recommendation submitted to member states during the 41st session of Unesco’s general conference, where it was formally adopted.
To guide the development of ethical AI, the recommendation outlines 10 principles – including safety and security, fairness and non-discrimination, proportionality and do no harm, sustainability, transparency and explainability, and awareness and literacy – backed up by more concrete policy actions on how they can be achieved.
For example, the recommendation strongly promotes the use of ethical impact assessments as a way of making sure that developers and deployers of the technology take account of the wider socio-economic impact of their systems, including in terms of data protection and human rights, and calls on member states to implement “strong enforcement mechanisms and remedial actions” so that any harm caused by an AI system can be effectively dealt with.
The recommendation also outlines how AI systems should and should not be deployed: “Member states should introduce incentives, when needed and appropriate, to ensure the development and adoption of rights-based and ethical AI-powered solutions for disaster risk resilience; the monitoring, protection and regeneration of the environment and ecosystems; and the preservation of the planet.
“These AI systems should involve the participation of local and indigenous communities throughout the lifecycle of AI systems and should support circular economy type approaches and sustainable consumption and production patterns.”
However, it also added that while no system should be used in a way that infringes on or abuses people’s rights, “in particular, AI systems should not be used for social scoring or mass surveillance purposes”.
The recommendation further stressed that, when developing regulatory frameworks to inhibit the potential for social scoring or mass surveillance, member states must ensure that ultimate responsibility always lies with a human being, and that AI technologies should never be given legal personality themselves.
“The world needs rules for artificial intelligence to benefit humanity. The recommendation on the ethics of AI is a major answer,” said Unesco chief Audrey Azoulay. “It sets the first global normative framework while giving member states the responsibility to apply it at their level. Unesco will support its 193 member states in its implementation and ask them to report regularly on their progress and practices.”
In September 2021, the UN Human Rights Office published a report (designated A/HRC/48/31), which found that both states and businesses have often rushed to deploy AI systems and are largely failing to conduct proper due diligence on how these systems impact human rights.
“The aim of human rights due diligence processes is to identify, assess, prevent and mitigate adverse impacts on human rights that an entity may cause or to which it may contribute or be directly linked,” said the report, adding that due diligence should be conducted throughout the entire lifecycle of an AI system.
“Where due diligence processes reveal that a use of AI is incompatible with human rights, due to a lack of meaningful avenues to mitigate harms, this form of use should not be pursued further.”
The release of the report coincided with comments from the UN’s high commissioner on human rights, Michelle Bachelet, who called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights as a matter of urgency.
Unesco members’ track records on mass surveillance
It should be noted that while the Unesco recommendation is the first time China has agreed to end pervasive, AI-powered mass surveillance in an international forum, the US is not a Unesco member and is therefore not a signatory of the recommendation, which is itself only voluntary.
In October 2021, the Institute of Development Studies (IDS) and the African Digital Rights Network (ADRN) published a comparative analysis of the surveillance laws and practices of six African countries – Egypt, Kenya, Nigeria, Senegal, South Africa and Sudan – all of which are Unesco members.
It found that the governments in each country were using and investing in new digital technologies to carry out illegal surveillance on citizens, including AI-based internet and mobile surveillance which can be deployed to scan electronic communications en masse.
Mass surveillance has also been carried out by other Unesco members, including the UK, whose signals intelligence agency GCHQ was found to be unlawfully analysing phone, internet and email records of UK citizens under a secret deal with its American counterpart, the National Security Agency (NSA).
While GCHQ has not formally confirmed exactly how it uses AI, it said in a policy paper published in February 2021 that AI could be used responsibly to protect the UK’s national security, although it did not mention surveillance.
Although the European Union (EU) is currently attempting to regulate the use of AI in a legally binding way, more than 100 civil, human and digital rights organisations have said the bloc’s proposed Artificial Intelligence Act (AIA) will not truly protect fundamental rights until it addresses structural imbalances of power.
In November 2021, a report published by Human Rights Watch further claimed the AIA threatens to undermine the bloc’s social safety net, and is ill-equipped to protect people from surveillance and discrimination.
Senior researcher on AI and human rights at Human Rights Watch, Amos Toh, said the proposal will ultimately fail to end the “abusive surveillance and profiling” of those in poverty. “The EU’s proposal does not do enough to protect people from algorithms that unfairly strip them of the benefits they need to support themselves or find a job,” he said.
Read more about artificial intelligence and ethics
- Latest draft of the ‘Emerging Technology Charter for London’ encourages local authorities, public services and technology companies to improve how they implement technology in the capital.
- The chief digital and technology officers of London and Barcelona speak to Computer Weekly about their joint initiative launched with other cities to promote the ethical deployment of artificial intelligence in urban spaces.
- The UK government has announced a new standard for artificial intelligence to be adopted by government departments and public sector bodies.