
The race to regulate AI: 2024 unpacked

The EU’s AI Act is set to become the ‘gold standard’ of AI regulation. Will other countries adopt the same high standards or, like the UK, opt for a light touch in favour of growth and investment?

2023 was a breakout year for AI in the commercial and consumer worlds. This has created huge momentum for further growth in investment, development and implementation of AI going into 2024. 

However, there are issues that could affect that momentum in different countries, including regulation, intellectual property disputes, workforce skills and financial incentives. 2024 is going to be particularly important for the continued development of the regulatory landscape and for ongoing intellectual property disputes.

Where are we with AI regulation in 2024?

There isn’t a single agreed approach to regulating AI, and not all countries and regions are following the same path. Those in favour of comprehensive regulation point to the need to combat the risks of AI, and also argue that clear rules can encourage investment. Others consider that a less regulated approach allows AI developers to innovate with freedom, and that updating existing laws and practices sufficiently counters the risks.

The clear leader in the race to regulate AI is the EU. The stand-out development in 2023 was the EU’s provisional agreement on introducing a new AI Act – reached after a marathon 20-plus hour negotiation session just before Christmas. The final text of the AI Act still needs to be settled and then formally adopted by the EU. This is, however, widely predicted to happen in the first half of 2024.

If the AI Act is adopted as expected, it will become the first comprehensive AI regulatory regime in the world. In practical terms, it will establish a risk-based approach for categorising AI systems, set regulatory requirements for each category and include significant financial penalties for non-compliance. Delving into a little more detail:

  • ‘Unacceptable risk’ systems considered a threat to people will (subject to certain exceptions) be banned (e.g. real-time and remote biometric identification systems, such as facial recognition).
  • ‘High risk’ systems that negatively affect safety or fundamental rights will be assessed before being put on the market in the EU and also throughout their lifecycle.
  • Other ‘limited risk’ systems will also need to comply with certain requirements (e.g. relating to transparency – so use of chatbots or deepfakes must be flagged to users).

 The Act may also set the clock running on compliance requirements, the shortest of which is a six-month window to comply with the prohibition on ‘unacceptable risk’ systems. This could potentially fall towards the end of 2024.

 The EU is also working on an ‘AI Liability Directive’ to deal with perceived difficulties relating to harm caused by AI. The impact of this legislation is likely to become clearer after the EU elections in June 2024. 

China too is making substantial progress on AI regulation. China’s approach has previously focussed on specific areas of AI, rather than on building a comprehensive regulatory regime for AI systems. For example, China has introduced individual legislation for recommendation algorithms, generative AI and deepfakes. This approach has seemingly been reactive to developments in AI, rather than proactively seeking to shape that development.

 In June 2023, however, China confirmed that it was working on an ‘Artificial Intelligence Law’. It is not currently clear what will be covered by the new law (as a draft is yet to be published). But the expectation is that the new law will seek to provide a comprehensive regulatory framework and therefore potentially rival the EU’s AI Act in terms of breadth and ambition. So China is definitely a jurisdiction to watch – particularly given its stated goal of becoming the global AI leader by 2030.

The USA shows signs of ramping up its own regulation of AI, whilst taking a different approach from the EU. President Biden issued an executive order in October 2023. The order, to be put into practice this year, directs government agencies to develop sector-specific standards for AI systems, and specifies transparency requirements for any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety”. An important difference from the EU’s AI Act is that the executive order does not contain enforcement provisions.

Beyond that, however, further developments are likely to be tied to the US elections in November 2024 as the outcome will dictate the policy agenda for the coming years.

The UK is an outlier in this landscape, opting for light touch regulation to “turbocharge growth”. In a White Paper in March 2023, the UK government set out that it would “avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI” and would not introduce a new AI regulator. Instead, a system of non-statutory guidance is preferred, led by existing sector-specific regulators, to be developed further this year via public consultation.

As in the USA, the upcoming general election may also alter the UK’s regulatory course.

It isn’t just regulation – what’s happening with intellectual property?

A number of intellectual property disputes relating to AI have arisen, particularly in the USA, including claims against AI developer Anthropic by Universal Music Group regarding song lyrics; against OpenAI by writers (including Sarah Silverman, John Grisham and George R. R. Martin) regarding their written works; and against Microsoft, GitHub and OpenAI by programmers regarding programming code.

Disputes are not limited to the USA, however. A significant claim by Getty Images against Stability AI regarding stock images is currently being fought out in the UK’s High Court (as well as in the USA).

These disputes primarily relate to copyright infringement issues fundamental to both the training and subsequent use of generative AI systems (although some also raise other potential intellectual property infringement issues).

Developments in these legal claims in 2024 will shed light on the legality of existing AI training and development practices in various jurisdictions, which may in turn affect the previously rapid growth of some or all of these AI systems. As with the developing regulatory regimes, it will be interesting, particularly for key AI developers, to understand which legal systems are seen as pro- or anti-AI. This could have an impact on where developers base themselves – at least in the short term.

2024 – good or bad for AI?

The EU’s AI Act seems set to become the ‘gold standard’ of AI regulation in the future. There are a number of reasons for this. It seems very likely that it will be the world’s first comprehensive regulatory regime for AI; the comprehensive nature of the AI Act may mean other countries or regions adopt similar approaches; and the size of the EU market is a significant incentive for AI developers to comply.

There is debate, however, over whether the weight of such comprehensive regulation may stifle AI innovation, as France’s President Macron warned in late 2023, possibly even driving AI innovators towards less comprehensively regulated jurisdictions (such as the USA or UK). 2024 will therefore be a crucial year, as the EU seeks to finalise the precise requirements of the AI Act and starts to gauge how AI innovators react.

2024 will also be important for establishing how AI systems are impacted by existing intellectual property laws. The outcomes of various legal cases, particularly in the USA and UK, may fundamentally alter how AI systems are both developed and used.

Jamie Rowlands is a partner and Angus Milne an associate at Haseltine Lake Kempner
