Top 10 AI regulation stories of 2023
From the UK government’s publication of its long-awaited AI whitepaper to its convening of the world’s first AI Safety Summit, here are Computer Weekly’s top 10 AI regulation stories of 2023
While the conversation about the best way of regulating artificial intelligence (AI) has been bubbling for years, the release of generative AI (GenAI) models at the end of 2022 prompted more urgent discussion on the need to regulate throughout 2023.
In the UK, Parliament launched a number of inquiries into various aspects of the technology, including autonomous weapon systems, large language models (LLMs) and general AI governance in the UK.
Computer Weekly’s coverage of AI regulation mainly focused on these and other developments in the UK, including the government’s publication of its long-awaited AI whitepaper, its insistence that specific AI legislation is not yet needed, and its convening of the world’s first AI Safety Summit at Bletchley Park in November, which Computer Weekly attended along with press from across the globe.
Coverage also touched on developments with the European Union’s (EU) AI Act, which has taken a market-oriented, risk-based approach to regulation, and the efforts of civil society, unions and backbench MPs in the UK to gear regulation around the needs of workers and communities most affected by AI’s operation.
1. MPs warned of AI arms race to the bottom
In the year’s first Parliamentary session on AI regulation, MPs heard how the flurry of LLMs deployed by GenAI firms at the end of 2022 prompted big tech into an “arms race” to the bottom in terms of safety and standards.
Noting that Google founders Larry Page and Sergey Brin were called back into the company (after stepping back from their day-to-day roles in 2019) to consult on its AI future, Michael Osborne – a professor of machine learning at Oxford University and co-founder of responsible AI platform Mind Foundry – said the release of ChatGPT by OpenAI in particular placed a “competitive pressure” on big tech firms developing similar technology that could prove dangerous.
“Google has said publicly that it’s willing to ‘recalibrate’ the level of risk ... in any release of AI tools due to the competitive pressure from OpenAI,” he said.
“The big tech firms are seeing AI as something as very, very valuable, and they’re willing to throw away some of the safeguards … and take a much more ‘move fast and break things’ perspective, which brings with it enormous risks.”
2. Lords Committee investigates use of AI-powered weapons systems
Established 31 January 2023, the Lords Artificial Intelligence in Weapon Systems Committee spent the year exploring the ethics of developing and deploying lethal autonomous weapons systems (LAWS), including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international laws.
In its first evidence session, Lords heard about the dangers of conflating the use of AI in the military with better international humanitarian law (IHL) compliance because of, for example, the extent to which AI speeds up warfare beyond human cognition; dubious claims that AI would reduce loss of life (with witnesses pointing out “for whom?”); and the “brittleness” of algorithms when parsing complex contextual factors.
Lords later heard from legal and software experts that AI will never be sufficiently autonomous to take on responsibility for military decisions, and that even limited autonomy would introduce new problems in terms of increased unpredictability and opportunities for “automation bias” to occur.
After concluding its investigation in December, the committee published a report urging the UK government to “proceed with caution” when developing and deploying military AI.
While much of the report focused on improving oversight of military AI, it also called for a specific prohibition on the use of the technology in nuclear command, control and communications due to the risks of hacking, “poisoned” training data and escalation – whether intentional or accidental – during moments of crisis.
3. UK government publishes AI whitepaper
In March, the UK government published its long-awaited AI whitepaper, setting out its agile, “pro-innovation” framework for regulating the technology.
It detailed how the government would empower existing regulators – including the Information Commissioner’s Office, the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.
It added that any legislation would include “a statutory duty on our regulators requiring them to have due regard to the [five AI governance] principles” of safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.
While industry generally welcomed the whitepaper (with caveats) for providing extra certainty for business, civil society groups and trade unions have repeatedly criticised its vagueness and unaddressed regulatory gaps.
4. EU AI Act: The wording of the act is finalised
In December, the European Union (EU) finalised the wording of its AI Act following secretive trilogue negotiations between the European Parliament, Council and Commission.
Among the significant areas covered by the act are so-called high-risk systems that can have a negative impact on EU citizens. The act mandates fundamental rights impact assessments, classifies AI systems used to influence the outcome of elections and voter behaviour as high risk, and gives EU citizens the right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
The AI Act also includes guardrails for general-purpose AI, meaning that developers of such systems need to draw up technical documentation, ensure the AI complies with EU copyright law, and share detailed summaries about the content used for training.
The act also attempts to limit the use of biometric identification systems by law enforcement, prohibiting biometric categorisation systems that use sensitive characteristics (for example, political, religious or philosophical beliefs, sexual orientation, and race).
The untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases is also banned, as is emotion recognition in the workplace and educational institutions, and social scoring based on social behaviour or personal characteristics.
Fines for non-compliance range from €7.5m or 1.5% of turnover to €35m or 7% of global turnover, depending on the infringement and the size of the company.
5. Worker-focused AI Bill introduced by backbench MP Mick Whitley
In May, backbench Labour MP Mick Whitley introduced a worker-focused AI bill to Parliament, outlining an approach similar to that advocated by unions in the UK, and an alternative vision to the AI regulation presented in the government’s whitepaper.
The bill’s provisions are rooted in three assumptions: that everyone should be free from discrimination at work; that workers should have a say in decisions affecting them; and that people have a right to know how their workplace is using the data it collects about them.
Building on this foundation, Whitley said key provisions of his bill include the introduction of a statutory duty for employers to meaningfully consult with employees and their trade unions before introducing AI into the workplace, and the strengthening of existing equalities law to prevent algorithmically induced discrimination.
This would include amending the Employment Rights Act 1996 to create a statutory right, enforceable in employment tribunals, so that workers are not subject to automated decisions based on inaccurate data, and reversing the burden of proof in discrimination claims so that employers are the ones that have to establish their AI did not discriminate.
Although 10-minute rule motions rarely become law, they are often used as a mechanism to generate debate on an issue and test opinion in Parliament. As Whitley’s bill received no objections, it was listed for a second reading on 24 November 2023, but this never took place.
6. UK AI plans offer ‘inadequate’ human rights protection, says EHRC
The Equality and Human Rights Commission (EHRC) said that, while it is broadly supportive of the UK’s approach to AI regulation, more must be done to deal with the negative human rights and equality implications of AI systems.
“Since the publication of the whitepaper, there have been clear warnings from senior industry figures and academics about the risks posed by AI, including to human rights, to society and even to humans as a species,” said the EHRC in its official response.
“Human rights and equality frameworks are central to how we regulate AI, to support safe and responsible innovation. We urge the government to better integrate these considerations into its proposals.”
The EHRC said there is generally too little emphasis on human rights throughout the whitepaper: they are explicitly mentioned only in relation to the principle of “fairness” – and even then as a subset of other considerations and specifically in relation to discrimination – and again implicitly in the note that regulators are subject to the Human Rights Act 1998.
It added that it is vital the government creates adequate routes of redress so people are empowered to effectively challenge AI-related harms, as the current framework consists of a patchwork of sector-specific mechanisms, and that regulators need to be appropriately funded to carry out their AI-related functions.
7. AI Summit: 28 governments and EU agree to safe AI development
At the start of November, the UK government convened its global AI Safety Summit, which was attended by representatives of 28 governments and the EU, as well as civil society groups and leading AI industry figures.
While the event was heralded by some as a diplomatic success due to the attendance of China and the signing of the Bletchley Declaration by all participating governments (which committed them to deepening international cooperation on AI safety and affirmed the need for “human-centric” systems), others branded it a “missed opportunity” due to the dominance of big tech firms, a focus on speculative risks over real-world harms, and the exclusion of affected workers.
Although the event was mostly a closed shop, Computer Weekly spoke to those able to attend the sessions, who offered a range of perspectives about the success (or not) of the event.
Dutch digital minister Alexandra van Huffelen, for example, described the consensus that emerged around stopping companies from “marking their own homework”, adding there was a tension between companies wanting more time to test, evaluate and research their AI models before regulation is enacted, and wanting to have their products and services out on the market on the basis that they can only be properly tested in the hands of ordinary users.
There was also consensus on the need for proper testing and evaluation of AI models going forward to ensure their safety and reliability.
Despite the participating governments making commitments in the Bletchley Declaration to create AI systems that respect human rights, French finance minister Bruno Le Maire said the summit was “not the right place” to discuss the human rights records of the countries involved when asked by Computer Weekly about the poor human rights records of some signatories.
Two further summits will be held over the next year, in South Korea and France.
8. ‘Significant gaps’ in UK AI regulation, says Ada Lovelace Institute
In July, the Ada Lovelace Institute published a report analysing the UK government’s approach to AI regulation, which argued its “deregulatory” data reform proposals will undermine the safe development and deployment of AI by making “an already poor landscape of redress and accountability” even worse.
It specifically highlighted the weakness of empowering existing regulators within their remits, noting that because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.
This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.
“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” it said.
Independent legal analysis conducted for the Institute by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK GDPR and the Equality Act often fail to protect people from harm or give them an effective route to redress. “This enforcement gap frequently leaves individuals dependent on court action to enforce their rights, which is costly and time consuming, and often not an option for the most vulnerable.”
9. Lords begin inquiry into large language models
In September, the House of Lords Communications and Digital Committee launched an inquiry into the risks and opportunities presented by LLMs, and how the UK government should respond to the technology’s proliferation.
During the first evidence session on 12 September, Ian Hogarth, an angel investor and tech entrepreneur who is now chair of the government’s Frontier AI Taskforce, noted the ongoing development and proliferation of LLMs would largely be driven by access to resources, in both financial and computing power terms.
Neil Lawrence, a professor of machine learning at the University of Cambridge and former advisory board member at the government’s Centre for Data Ethics and Innovation, noted the £100m earmarked for the taskforce pales in comparison to other sources of government funding.
Commenting on developments in the US, Lawrence added it was increasingly becoming accepted that the only way to deal with AI there is to let big tech take the lead: “My concern is that, if large tech is in control, we effectively have autocracy by the back door. It feels like, even if that were true, if you want to maintain your democracy, you have to look for innovative solutions.”
Lawrence and others also warned that LLMs have the potential to reduce trust and accountability if given too great a role in decision-making.
10. No UK AI legislation until timing is right, says Donelan
In the wake of the AI Safety Summit, Whitehall officials told a Parliamentary committee why the UK government does not currently see the need for new artificial intelligence legislation, noting that regulators are already taking action on AI and that effective governance is a matter of capacity and capability rather than new powers.
Digital secretary Michelle Donelan, in a separate appearance before the same committee, later said the UK government would not legislate on AI until the timing is right. In the meantime, she said it would instead focus on improving the technology’s safety and building regulatory capacity in support of its proposed “pro-innovation” framework.
She added there was a risk of stifling innovation by acting too quickly without a proper understanding of the technology.
“To properly legislate, we need to better be able to understand the full capabilities of this technology,” she said, adding that while “every nation will eventually have to legislate” on AI, the government decided it was more important to be able to act quickly and get “tangible action now”.
“We don’t want to rush to legislate and get this wrong. We don’t want to stifle innovation… We want to ensure that our tools can enable us to actually deal with the problem in hand, which is fundamentally what we’ll be able to do with evaluating the models.”
Read more about AI regulation
- MPs say UK at real risk of falling behind on AI regulation: MPs in charge of scrutinising the UK government’s proposals for regulating artificial intelligence have warned that time is running out for the introduction of new AI legislation before 2025.
- UK regulators confident they are ready for AI safety governance: MPs at a recent artificial intelligence governance meeting were keen to hear how Ofcom, the FCA and the ICO are preparing for UK AI legislation.
- Government and industry figures meet to discuss AI regulation: Tech industry figures are broadly supportive of the need for artificial intelligence to be regulated, but despite growing consensus, there is still disagreement over what effective AI regulation looks like.