Europe’s proposed AI regulation falls short on protecting rights

The European Commission’s proposal for artificial intelligence regulation focuses on creating a risk-based, market-led approach replete with self-assessments, transparency procedures and technical standards, but critics warn it falls short of protecting people’s fundamental rights and mitigating the technology’s worst abuses

The European Commission’s (EC) proposal to regulate artificial intelligence (AI) is a step in the right direction but fails to address the fundamental power imbalances between those who develop and deploy the technology, and those who are subject to it, experts have warned.

In the Artificial Intelligence Act (AIA) proposal, published on 21 April 2021, the EC adopts a decidedly risk-based approach to regulating the technology, focusing on establishing rules around the use of “high-risk” and “prohibited” AI practices.

On its release, European commissioner Margrethe Vestager emphasised the importance of being able to trust in AI systems and their outcomes, and further highlighted the risk-based approach being adopted.

“On AI, trust is a must, not a nice-to-have. With these landmark rules, the EU [European Union] is spearheading the development of new global norms to make sure AI can be trusted,” she said.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed – when the safety and fundamental rights of EU citizens are at stake.”

Speaking to Computer Weekly, however, digital civil rights experts and organisations claim the EC’s regulatory proposal is stacked in favour of organisations – both public and private – that develop and deploy AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress.

This is despite ordinary people being subject to AI systems in a number of contexts from which they are not necessarily able to opt out, such as when the technology is used by law enforcement or immigration authorities.

Ultimately, they claim the proposal will do little to mitigate the worst abuses of AI technology and will essentially act as a green light for a number of high-risk use cases due to its emphasis on technical standards and mitigating risk over human rights.

Technical standards over human rights

Within the EC’s proposal, an AI system is categorised as “high risk” if it threatens the health and safety or fundamental rights of a person.

This includes use cases such as remote biometric identification, the management or operation of critical infrastructure, systems used for educational purposes, and systems used in the context of employment, immigration or law enforcement decisions.

“In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment,” says the proposal.

“The classification of an AI system as high risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.”

Alexandra Geese, a German Member of the European Parliament (MEP), says while the mere existence of the regulation has helped open up public debate about the role of AI technologies in society, it does not fulfil the rhetorical promises made by Vestager and others at the highest levels of the bloc.

Referring to a leaked version of the proposal from January 2021, which diverges significantly from the document that has been officially published, Geese says it “really acknowledged AI as a danger for democracy, AI as a danger for the environment, while this proposal sort of pretends that it’s just about technical standards, and I don’t think that’s good enough”.

Geese adds that while the language was vague at points in the leaked draft – a problem present in the final proposal too – the sentiments behind it were “perfect” because it more fully acknowledged the harmful potential of AI technologies.

Daniel Leufer, a Europe policy analyst at digital and human rights group Access Now, adds that the AI whitepaper published in February 2020 – which significantly shaped the direction of the proposal – “raised alarm bells for us” because of its dual focus on proliferating AI while mitigating its risks, “which doesn’t take account [of whether] there are applications of AI (which we believe there are) where you can’t mitigate the risks and that you don’t want to promote”.

He distinguishes, for example, between competing globally on machine learning for medical image scanning and competing on AI for mass surveillance, adding that “there needs to be acknowledgement that not all applications will be possible in a democratic society that’s committed to human rights”.

Databases, high-quality datasets and conformity assessments

While the proposal’s risk-based approach means it contains a number of measures focused on how high-risk AI systems can still be used, critics argue that the thresholds placed on their use are currently too low to prevent the worst abuses.

Although it includes provisions for the creation of an EU-wide database of high-risk systems – which will be publicly viewable and based on “conformity assessments” that seek to assess the system’s compliance with the legal criteria – multiple experts argue this is the “bare minimum” that should be done to increase transparency around, and therefore trust in, artificial intelligence technology.  

Sarah Chander, a senior policy advisor at European Digital Rights (EDRi), says while the database can assist journalists, activists and civil society figures in obtaining more information about AI systems than is currently available, it will not necessarily increase accountability.

“That database is the high-risk applications of AI on the market, not necessarily those that are in use,” she says. “For example, if a police service is using a predictive policing system that technically is categorised as high risk under the regulatory proposal as it exists now, we wouldn’t know if Amsterdam police were using it, we would just know that it’s on the market for them to potentially buy.”

Giving the example of Article 10 in the proposal, which dictates that AI systems need to be trained on high-quality datasets, Chander says the requirement is too focused on how AI operates at a technical level to be useful in fixing what is, fundamentally, a social problem.

“Who defines what high quality is? The police force, for example, using police operational data, that will be high-quality datasets to them because they have trust in the system, the political construction of those datasets [and] in the institutional processes that led to those datasets – the whole proposal overlooks the highly political nature of what it means to develop AI,” she says.

“A few technical tweaks won’t make police use of data less discriminatory, because the issue is much broader than the AI system or the dataset – it’s about institutional policing [in that case].”

On this point, Geese agrees that the need for high-quality datasets is not an adequate safeguard, as it again leaves the door open to too much interpretation by those developing and deploying the AI systems. This is exacerbated, she says, by the lack of measures included to combat bias in the datasets.

“It says the data has to be representative, but representative of what? Police will say, ‘This is representative of crime’, and there’s also no provision that says, ‘You not only need to identify the bias, but you also have to propose corrective measures’,” she says.

“There is no obligation to remove the original bias [from the system]. I talked to Vestager’s cabinet about it and they said, ‘We stopped the feedback loops from worsening it, but the bias in the data is there and it needs to be representative,’ but nobody can answer the question ‘representative of what?’,” says Geese.

Chander also points out that in most of the high-risk use cases, the proposal allows the developers of the systems to conduct the conformity assessments themselves, meaning they are in charge of determining the extent to which their systems align with the regulation’s rules.

“They don’t really categorise these uses as high risk, otherwise you would have some sort of external verification or checks on these processes – that’s a huge red flag, and as a system check it won’t overcome many of the potential harms,” she says.

Leufer adds that while the proposal does establish “notified bodies” to check the validity of conformity assessments if a complaint about an AI system arises, the measure risks creating a “privatised compliance industry” if commercial firms are relied on over data protection authorities and other similar entities.

“Ideally, [notified bodies] should be focused on protecting human rights, whereas if it’s Deloitte, that’s a paid service, they’re focused on compliance, they’re focused on getting through the process,” he says. “The incentives, I think, are quite off, and the notified body doesn’t seem to be involved in the majority of cases. Even if they were, it doesn’t seem like a sufficient measure to catch the worst harms.”

He says while the processes around databases and conformity assessments are “an improvement on a really bad current state of affairs… it’s just a basic level of transparency [that] doesn’t actually solve any issues”.

Referring to a “slow process of privatisation”, Chander adds that the proposal also sets in motion a governance model whereby the user of an AI system must follow the “instructions of use” provided by the developer.

“This could be viewed in a very neutral way to say, ‘Well, the developer of the system knows how it works so that makes sense’,” she says. “But in a more political way, this means that actually what we’re doing is tying in a relationship between a service provider – a private company – and a public institution… [and] embedding the reliance of the public sector on the private sector.”

Technology and ethics researcher Stephanie Hare says the EU’s current thinking around conformity assessments and databases only provides a “veneer of transparency” as there is an inherent tension within the proposal between creating transparency and protecting the “proprietary information” and interests of private companies.

“The increased transparency obligations will also not disproportionately affect the right to protection of intellectual property since they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy,” says the proposal.

“Any disclosure of information will be carried out in compliance with relevant legislation in the field, including Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure.”

Prohibited in name only

While the bulk of the proposal focuses on managing high-risk use cases of AI, it also lists four practices that are considered “an unacceptable risk”, and which are therefore prohibited.

This includes systems that deploy subliminal techniques to distort human behaviour; systems that exploit the vulnerabilities of specific social groups; systems that provide ‘social scoring’ of individuals; and the remote, real-time biometric identification of people in public places.

However, critics say the proposal contains a number of loopholes that significantly weaken any claims that these practices have actually been banned.

Chander says although the proposal provides a “broad horizontal prohibition” on these AI practices, such uses are still allowed within a law enforcement context and are “only prohibited insofar as they create physical or psychological harm”.

“That’s a narrowing down of the prohibition already, because only such uses that create these tangible – and quite high threshold – types of harm are prohibited,” she says.

“Considering that one of the prohibitions are uses that could take advantage on the basis of people’s mental ability, physical disability or age, you would imagine a legitimate prohibition of that would be any such uses regardless of the question of whether harm was produced.”

Leufer says this measure is “totally ridiculous”, giving the example that, if the text were read literally, an AI system deploying “subliminal techniques” beyond a person’s consciousness to distort their behaviour for that person’s supposed benefit would technically be allowed.

“You can just drop the harm bit from each one of them, because those practices…can’t be done for someone’s benefit – that is in itself completely at odds with human rights standards,” he says.

Biometric identification

The prohibition on biometric identification in particular has a number of “huge loopholes”, according to Geese. The first, she says, is that only real-time biometric identification is banned, meaning police authorities using facial recognition, for example, could simply wait a short period of time and run the identification retroactively.

She adds that the second major loophole is the proposal’s “threat exemption”, which means real-time biometric identification can be used in a law enforcement context to conduct “targeted searches” for victims of crime, including missing children, as well as in response to threats to life or physical safety.

“You have to have the infrastructure in place all the time. You can’t just set them up overnight when a child’s missing – you have to have them in place and, usually, when you have that security infrastructure in place, there’s a strong incentive to use it,” says Geese.

“It’s counter-intuitive to say, ‘We have all these cameras and we have all this processing capacity with law enforcement agencies, and we just turn it off all the time’.”

Hare shares a similar sentiment, saying while she is not necessarily opposed to facial recognition technology being used for specific and limited tasks, this is something that needs to be weighed against the evidence.

“What they’re saying is, ‘We’ll build that entire network, we’ll just turn it off most of the time’... You have to weigh it and ask, ‘Do we have any examples anywhere, have we piloted it even in just one city that used it in that specific, limited way described?’,” she says.

Hare further adds that while she is encouraged that European police would be banned from conducting generalised facial recognition surveillance, as they would need sign-off from a judge or some kind of national authority, in practice this could still run into “rubber-stamping” issues.

“Home secretaries love to be on the side of ‘law and order’, that’s their job, that’s how they get headlines… and they’re never going to want to piss off the cops,” she says. “So, if the cops wanted it, of course they’re going to rubber-stamp it. I’ve yet to see a home secretary who’s pro-civil liberties and privacy – it’s always about security and [stopping] the terrorists.”

Even if judges were solely in charge of the process, she adds, the post-9/11 experience in the US saw applications to tap tech companies’ data rubber-stamped by the secret courts set up under the Foreign Intelligence Surveillance Act (FISA), contributing to the country’s intrusive surveillance practices since 2001.

Both Leufer and Hare point out that the European Data Protection Supervisor (EDPS) has been very critical of biometric identification technology, previously calling for a moratorium on its use and now advocating that it be banned from public spaces.

“The commission keeps saying that this is, essentially, banning remote biometric identification, but we don’t see that at all,” says Leufer.

Everyone who spoke to Computer Weekly also highlighted the lack of a ‘ban’ on biometric AI tools that can detect race, gender and disability. “The biometric categorisation of people into certain races, gender identities, disability categories – all of these things need to be banned in order for people’s rights to be protected because they cannot be used in a rights-compliant way,” says Chander. “Insofar that the legislation doesn't do that, it will fall short.”

Asymmetries of power

In tandem with the relaxed nature of the prohibitions and the low thresholds placed on the use of high-risk systems, critics say the proposal fundamentally does little to address the power imbalances inherent in how AI is developed and deployed today, as it also contains very little about people’s rights to redress when negatively affected by the technology.

Describing the proposal’s provisions around redress (or lack thereof) as “about as useful as an umbrella in a hurricane”, Hare adds that when AI technology has been used on a person in some way – in anything from a hiring decision to a retailer using facial recognition – most people do not have the resources to mount a challenge.

“What are you going to do? Do you really think the average person has the time and the money and the knowledge to go and file a complaint?” she says.

Leufer adds that while EU citizens can use the General Data Protection Regulation (GDPR) as an avenue to challenge abuses, this puts a “heavy burden” on individuals to know their rights as a data subject, meaning the proposal should contain further mechanisms for redress.

“There definitely needs to be some form of redress or complaint mechanism for individuals or groups, or civil society organisations on behalf of people, to point out when a system is in violation of this regulation… because of how market-focused this is,” he says.

“It’s the problem of putting a huge burden on individuals to know when their rights have been violated. We’ve said from the beginning that the commission has a responsibility to proactively guarantee the protection and enjoyment of fundamental rights – it should not be ‘deploy, let someone be harmed, and then the complaint starts’, it should be taking an active role.”

For Chander, the whole point of prohibiting certain use cases and limiting others considered high risk should be to reverse the burden of proof, placing it on the parties seeking to develop and deploy the technology.

“Particularly considering the vast power imbalance when we’re talking about AI, the reason we argue for prohibitions in themselves is that most of such legislation creates such a burden of proof on people to prove that a certain type of harm has occurred,” she says.

“So this takes the language of prohibition, but then also doesn’t remove those sort of institutional barriers to seeking redress.”

Ultimately, Chander believes that while the “creation of a bureaucracy” around AI compliance might, “optimistically speaking”, engender more consideration about the outcomes and impacts the systems may have, if the proposal stays as it is “it won’t change much”.

“The lack of any procedures for human rights impact assessments as part of this legislative process, the fact that most of the high-risk systems are self-assessed for conformity, and the fact that many of the requirements themselves don’t structurally challenge the harms, [shows] they’d rather look for more technical tweaks here or there,” she says.

“There is some sort of value judgement being made that these [AI use cases] will be useful to the European Commission’s broader political goal, and that I think speaks to why the limitations are so soft.”

Geese further contends that while AI systems may nominally be designated as high risk, the effect of the regulation as it stands will be to broadly legalise a range of harmful AI practices.

The proposal must now go to the European Parliament and Council for further consideration and debate before being voted on.
