Over 100 civil society groups call for changes to EU AI Act

Civil, human and digital rights organisations sign open letter calling on European policymakers to put fundamental rights at the heart of the proposed Artificial Intelligence Act

A total of 114 civil society organisations have signed an open letter calling on European institutions to amend the forthcoming Artificial Intelligence Act (AIA) so that it properly protects fundamental human rights and addresses the structural impacts of artificial intelligence (AI).

The European Commission’s proposed AIA was published in April 2021 and sought to create a risk-based, market-led approach to regulating AI by establishing self-assessments, transparency procedures and various technical standards.

Digital civil rights experts and organisations have previously told Computer Weekly that although the regulation is a step in the right direction, it will ultimately fail to protect people’s fundamental rights and mitigate the technology’s worst abuses, because it does not address the fundamental power imbalances between tech firms and those subject to their systems.

According to the open letter from the civil society groups, which was coordinated by European Digital Rights (EDRi) and published on 30 November, “it is vital that the AIA addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises democratic values and the protection of fundamental rights”.

It adds: “We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 114 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights.”

Sarah Chander, a senior policy adviser at EDRi, told Computer Weekly: “The EU’s AI Act needs a serious update to protect people from the myriad harms presented by the use of AI systems.

“To truly protect people’s fundamental rights, the Act needs as a minimum to update its prohibitions – including a full ban on biometric identification in public, predictive policing – mandate that deployers do impact assessments before use, and provide redress for those harmed by AI systems.”

The signatories – which include Access Now, Fair Trials, Algorithm Watch, Homo Digitalis, Privacy International, Statewatch, the App Drivers and Couriers Union and Big Brother Watch – make a number of recommendations on how the AIA can be modified to prevent the worst abuses of AI technologies and protect people from them.

These include placing more obligations on users of high-risk AI systems to facilitate greater accountability, creating mandatory accessibility requirements so that those with disabilities are able to easily obtain information about AI systems, and prohibiting the use of any system that poses an unacceptable risk to fundamental rights.

The letter notes, for example, that the AIA predominantly imposes obligations on suppliers of AI systems rather than on the end-users actually deploying them, adding that while some of the risks posed by high-risk AI systems stem from how they are designed, significant risks also stem from how they are deployed.

“To remedy this, we recommend that the AIA is amended to include the obligation on users of high-risk AI systems to conduct a fundamental rights impact assessment before deploying any high-risk AI system,” it says.

“For each proposed deployment, users must designate the categories of individuals and groups likely to be impacted by the system, assess the system’s impact on fundamental rights, its accessibility for persons with disabilities, and its impact on the environment and broader public interest.”

Linked to the obligations on users of high-risk systems, the letter also calls for consistent and meaningful transparency.

As it stands, the AIA would create an EU-wide, publicly viewable database of high-risk systems, based on “conformity assessments” of each system’s compliance with the legal criteria – but it would contain only information registered by providers, with nothing on the context of use.

“This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used,” says the open letter. “Further, the AIA only mandates notification to individuals impacted by AI systems listed in Article 52. This approach is incoherent because the AIA does not require a parallel obligation to notify people impacted by the use of higher risk AI systems.”

It adds that an obligation should be placed on users to register their systems on the database and provide information for each specific deployment.

Although the signatories believe these measures will help mitigate some of the worst abuses, they also say some AI practices are simply incompatible with fundamental rights, and should be banned outright. These include the use of any AI systems for emotion recognition, biometric categorisation, predictive policing and immigration enforcement.

The civil society groups also recommend covering a comprehensive set of vulnerabilities that AI systems could be made to exploit, rather than limiting the list, as Article 5 of the AIA does, to “age, physical or mental disability”.

The letter adds: “If an AI system exploits the vulnerabilities of a person or group based on any sensitive or protected characteristic, including, but not limited to age, gender and gender identity, racial or ethnic origin, health status, sexual orientation, sex characteristics, social or economic status, worker status, migration status or disability, then it is fundamentally harmful and therefore must be prohibited.”

In September 2021, the United Nations high commissioner for human rights, Michelle Bachelet, called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights, at least until adequate safeguards are implemented, as well as for an outright ban on AI applications that cannot be used in compliance with international human rights law.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” she said. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face.”

An accompanying report by the UN Human Rights Office, which analysed how AI affects a range of rights, found that both states and businesses have often rushed to deploy AI systems and are largely failing to conduct proper due diligence on how these systems affect human rights.

On the subject of providing meaningful rights of redress for those harmed by AI systems, the civil society letter adds that the AIA currently does not contain any provisions or mechanisms for either individual or collective redress and, as such, “does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed”.

To facilitate meaningful redress, it recommends adding to the AIA two new rights for individuals – to not be subject to AI systems that pose an unacceptable risk or do not comply with the Act, and to be provided with a clear and intelligible explanation for decisions taken with the assistance of AI systems.

It further recommends including an additional right to effective remedy, which would apply to both individuals and collectives, and creating a mechanism for public interest organisations to lodge complaints with national supervisory authorities.

At the start of November 2021, drawing on case studies from Ireland, France, the Netherlands, Austria, Poland and the UK, Human Rights Watch published a report that found Europe’s trend towards automation is discriminating against people in need of social security support, compromising their privacy, and making it harder for them to obtain government assistance.

It said that while the AIA proposal does broadly acknowledge the risks associated with AI, “it does not meaningfully protect people’s rights to social security and an adequate standard of living”.

The report added: “In particular, its narrow safeguards neglect how existing inequities and failures to adequately protect rights – such as the digital divide, social security cuts, and discrimination in the labour market – shape the design of automated systems, and become embedded by them.”

Amos Toh, senior researcher on AI and human rights at Human Rights Watch, said the proposal will ultimately fail to end the “abusive surveillance and profiling” of those in poverty. “The EU’s proposal does not do enough to protect people from algorithms that unfairly strip them of the benefits they need to support themselves or find a job,” he said.
