EC publishes approach to human and ethical implications of AI, but what will UK do?
The European Commission has published a guide to the EU’s approach to the human and ethical effects that artificial intelligence might bring
The new president of the European Commission (EC), Ursula von der Leyen, promised to put forward legislation “for a coordinated European approach on the human and ethical implications of artificial intelligence” within 100 days of taking office on 1 December 2019. On 19 February 2020, the EC published for consultation its whitepaper On artificial intelligence – a European approach to excellence and trust, giving us a clear view of the substantial changes the EC has in mind.
Law and regulation in the UK and EU are currently, for the most part, technology agnostic. There are laws and regulations that apply to technology, but few of them are specific to any particular technology. The same is true for artificial intelligence (AI) solutions, used in this article as an umbrella term for a wide range of algorithm-based technologies that solve complex tasks, often tasks which, until recently, required human intelligence.
As the UK exits the EU and the EC brings forward a new framework for AI regulation, the UK finds itself at a crossroads. Will it choose to follow the new European approach and bring in laws targeted specifically at AI, or go its own way?
Where is the EU going?
The EC has been paying close attention to AI for a number of years, but the election of a new EC president does seem to have triggered a desire to take more direct action. As well as suggesting targeted changes to the European liability framework, the EC whitepaper published on 19 February consults on a possible new AI-specific regulatory framework, which would impose significant additional legal requirements in relation to the development and use of “high-risk” AI.
The EC is proposing that a set of binding requirements would apply to developers and users of high-risk AI. To distinguish between high-risk and low-risk AI, a list of high-risk sectors would be identified (such as healthcare and transport), along with a more abstract definition of high-risk use. This definition would focus on AI that produces legal effects for individuals or companies, poses a risk of injury, death or significant damage, or has other effects that cannot reasonably be avoided.
The AI would have to satisfy both the sector and use criteria in order to be considered high-risk. For example, an AI system that is used in the healthcare sector but relates to booking appointments would not be caught, because it would not be sufficiently high-risk to justify intervention. There would also be exceptional purposes that would be considered high-risk irrespective of sector, such as use of AI in recruitment processes or remote biometric identification.
The EC’s suggestions for the types of mandatory legal requirements that would apply to high-risk AI are extensive, and include obligations relating to training data, data and record-keeping, information provision, robustness and accuracy, and human oversight. Additional obligations would also apply to certain applications of AI, such as to remote biometric identification.
Enforcement, governance and geographical reach
Perhaps the most surprising recommendation in the whitepaper is the EC’s proposed enforcement regime. To ensure that high-risk AI meets the mandatory requirements, a “prior conformity assessment” would be carried out. This could include procedures for testing, inspection or certification, as well as checks of the algorithms and datasets used during development. Additional ongoing monitoring may also be mandated.
In terms of governance, the EC suggests that member states should be required to appoint an authority responsible for monitoring the overall application and enforcement of the regulatory framework for AI.
The EC is proposing that the new regulatory regime would apply to everyone providing AI-enabled products or services in the EU, irrespective of their country of origin. This would extend European influence beyond the EU’s borders, because non-EU companies would be required to comply if they want to serve EU customers.
Where now for the UK?
It is clear from the whitepaper that the EC is still at an early stage in its regulatory journey. By the time the EC brings forward its full proposals for AI regulation, it seems almost certain that the UK Brexit transition period will have ended, and the UK will not be bound to follow the EU approach.
This puts the UK in an interesting position. Rather than starting out aligned with Europe and choosing later whether to diverge from the EU position, the UK will have to decide at the outset whether it wishes to follow the EU or maintain its own approach.
Many of the points raised by the EC’s whitepaper had already been considered to some extent by the UK’s House of Lords Select Committee on AI, which summarised its views in its report AI in the UK: ready, willing and able? in April 2018. The committee concluded that, at that stage, blanket AI regulation would be inappropriate and that existing regulators, such as the Information Commissioner’s Office (ICO), were best placed to consider the impact of AI on their sectors of expertise.
More recent insight into the possible thinking of the UK government can be found in a February 2019 blog post by Dominic Cummings, written before he took up his current role as chief special adviser to the prime minister. He wrote: “In many areas, the EU regulates to help the worst sort of giant corporate looters defending their position against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics. Prediction: just as insiders now talk of how we ‘dodged a bullet’ in staying out of the euro, within ~10 years, insiders will talk about being outside the charter/ECJ [European Court of Justice] and the EU’s regulation of data/AI in similar terms.”
The findings of the House of Lords Select Committee, and the thoughts of those influencing the top of the UK government, indicate a reluctance to put in place EU-style rules and a general regulator for AI. Confirmation from the government, setting out its intentions for AI regulation over this five-year parliament, would be immensely helpful for developers of AI and for businesses wishing to take advantage of the opportunities AI offers.
At the moment, it seems that while the EU proceeds on its newly set course, the UK is likely to take its own route.