UK AI strategy focused on economic growth, resilience and ethics

Chris Eastham of law firm Fieldfisher looks at the government’s National Artificial Intelligence Strategy, and the merits of the 10-year plan and its approach

On 22 September 2021, the UK government published its National AI Strategy, delivering on its ambitions to unleash the transformational power of artificial intelligence. Taking a three-pronged approach, the strategy focuses on:

  • Making sure the country invests in the long-term growth of AI.
  • AI benefiting all sectors and regions of the economy.
  • Governing AI effectively, through rules that encourage innovation and investment while protecting the public and the country’s fundamental values.

The UK AI Strategy

Strategic objectives

The new strategic goals appear to be a broader restatement of the UK’s previously published objectives, announced on 12 March 2021 by Oliver Dowden, the then secretary of state for Digital, Culture, Media and Sport. They still encapsulate the aims to: grow the UK economy through widespread use of AI technologies; remain resilient in the face of change through an emphasis on skills, talent and R&D; and ensure the ethical, safe and trustworthy development of responsible AI.

The broad aims and objectives described are not dissimilar to those expounded by European regulators, who are seeking harmonisation across the bloc through a directly applicable regulation that requires no local implementing measures at the national level. But while the European Commission’s proposal imposes a set of requirements on producers and users of AI as a technology, the UK government has taken a different path, putting forward a more nuanced proposal that may better support innovation – the core narrative of the new strategy.

Some readers of the announcement may find the focus on supporting innovation and business unsurprising: over £13.5bn was invested in 1,400 UK private technology companies in the first half of 2021 – more than in Germany and France, two of the largest tech markets in the European Union (EU), combined. Based on the strength of its technology sector, those I have spoken to expect the UK to be a net exporter of AI in the long run.

UK regulatory reform

While we can anticipate some specific proposals for UK regulation in early 2022, when the government publishes its white paper on the regulation and governance of AI, this latest announcement indicates that we are not there yet. Based on my reading, I believe it is unlikely that we will see a European-style “AI Act”; instead, the UK will look to revisit existing legislation and pursue incremental reform of regulation at the sector level.

This is hardly surprising because it echoes the recommendations of the House of Lords Select Committee on Artificial Intelligence back in 2018, and it feels to me like we’ve had a consistent message from the government on this since then.

These proposals represent the UK government’s response to the question of how to balance regulatory autonomy against harmonisation of compliance requirements and global interoperability. By electing to seek rapid progress through incremental reforms, and by delegating to existing regulators with the subject matter expertise to provide assurance of AI systems based on their end use, the UK may steal a march on EU legislators, who – history shows us – are at risk of getting bogged down in protracted negotiations for a pan-European regulation.

If we do see new law introduced on the topic of AI, it is likely to focus on transparency obligations and the like. I would suggest that a new regulator is unlikely, with existing regulators well placed to enforce rules within their areas of competence. This does, however, remain an open debate.

There are two areas of general application where we can expect change, however – the data protection and intellectual property regimes. A consultation on copyright and patents for AI is expected to be launched shortly, and we already have an open consultation on data privacy.

Knowing that the interplay between AI and data protection is going to be central to reform, as well as one of the earliest aspects to be implemented in law, we can look to certain aspects of the proposed changes to the UK’s data protection framework to give some more clues as to the likely direction of travel.

Data protection framework reform

Seeking an innovation-friendly regime

On 10 September 2021, the UK government announced a 10-week consultation on reforming the UK’s data protection framework, contemplating deviation from a GDPR (General Data Protection Regulation)-matched approach post-Brexit. This followed the UK’s 10 Tech Priorities announced on 12 March 2021, which featured “unlocking the power of data” to enable the UK to become the number one data destination globally.

The proposed reforms reflect the government’s drive to operate a pro-growth and innovation-friendly regime while maintaining high data protection standards and, crucially, adequacy status. The aim is to boost innovation and economic growth by reducing what the consultation describes as the “unnecessary barriers” that currently exist under the Data Protection Act 2018 and the UK GDPR.

The UK government is keen to emphasise that while it intends to retain the technology-neutral approach of the UK GDPR and to guard against technology-driven harms, it also seeks to ensure that regulation does not impede data-driven innovation. In contrast to the EU’s focus on individual control, irrespective of whether the processing is “good” or “bad”, the general direction for the UK now appears to be a tilting of the balance away from individual rights and towards reducing the administrative burden of compliance on businesses, in order to encourage “good” uses of data, particularly when it comes to AI.

Fairness

The consultation highlighted the UK government’s concerns about uncertainty over what “fairness” really means for AI when that term is used in the data protection context, as well as a lack of clarity regarding the regulatory reach of the Information Commissioner’s Office (ICO).

Fair data use falls firmly within the scope of data protection regulation. We have been living with this regime for many years now and the concepts are reasonably well understood. There is, however, a surfeit of opinion from different stakeholders, and we might therefore expect new consolidated guidance to clarify what constitutes fair data use when it comes to AI.

One example is reflected in findings from the Centre for Data Ethics and Innovation, which suggested there is a lack of understanding of how to use personal data (and sensitive personal data) to mitigate bias in AI, and that this is “paralysing” for organisations.

The necessity of using personal data for bias detection and mitigation in AI systems has also been recognised by the ICO. To address this, the government has proposed to permit the processing of personal data for these purposes as a legitimate interest for which the balancing test is not required. Notably, this reflects the approach proposed by the EU in its draft regulation.
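
To make this concrete, consider the sketch below – my own illustration, not drawn from the strategy or the consultation – of a simple demographic parity check in Python. The dataset and function names are hypothetical; the point is that the protected attribute must be present in the data for any disparity to be measurable at all, which is why bias detection can require processing personal, and sometimes sensitive, data.

    # Illustrative sketch: a demographic parity check for a binary
    # classifier. All data here is fabricated for demonstration.
    from collections import defaultdict

    def selection_rates(decisions, protected_attr):
        """Rate of positive decisions per protected group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, protected_attr):
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_gap(decisions, protected_attr):
        """Largest difference in selection rate between any two groups."""
        rates = selection_rates(decisions, protected_attr)
        return max(rates.values()) - min(rates.values())

    # Example: loan decisions (1 = approved) and a protected attribute.
    decisions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
    groups = ["a"] * 5 + ["b"] * 5

    print(selection_rates(decisions, groups))         # {'a': 0.8, 'b': 0.2}
    print(demographic_parity_gap(decisions, groups))  # approximately 0.6

Stripping the protected attribute from a dataset would not remove any bias; it would only make the gap impossible to compute – hence the regulatory interest in providing a clear lawful basis for this kind of processing.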

Procedural fairness is a more complex question. The current legal framework already contains provisions on automated decision-making, including profiling. Specifically, under Article 22 of the UK GDPR, data subjects have the right not to be subject to a solely automated decision with legal or similarly significant effects.

The consultation contemplates the removal of Article 22, as recommended by the Taskforce on Innovation, Growth and Regulatory Reform, with automated decision-making potentially becoming the subject of separate regulation. I think outright removal is relatively unlikely. It is more likely that the UK government will look for evidence of a problem in practice, and then for ways of being more definitive (either by amending Article 22 or through guidance) to ensure that innovation is supported.

As to fairness of outcomes, the consultation suggests that horizontal or sector-specific laws (and their associated regulators) might be the best way to address it in the context of AI systems. This reflects the approach set out in the strategy more broadly.

Definitions

As acknowledged by the recent consultation, the multiple, and sometimes conflicting, definitions of AI can cause confusion. From a European perspective, it is proposed that AI will be defined, very broadly, as software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with, and that is developed using a defined set of approaches (including machine learning, inductive programming, knowledge bases, inference/deductive engines, symbolic reasoning, expert systems, statistical approaches, Bayesian estimation, and search and optimisation methods).

In the UK, we don’t yet have that degree of clarity. For the purposes of the consultation, AI was defined as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence”, which doesn’t really take us forward (from a legal perspective) without resolving grand philosophical questions as to the nature of intelligence. However, there was recognition that the state of AI is constantly evolving.

If we look to other UK legislative instruments, the draft regulations proposed to support the National Security and Investment Act 2021 refer to AI as “technology enabling the programming or training of a device or software to: perceive environments through the use of data; interpret data using automated processing designed to approximate cognitive abilities; and make recommendations, predictions or decisions, with a view to achieving a specific objective”.

While this may be suitable for the purposes of that Act, I suspect it is unlikely that this definition would be adopted for English law in general. The concept of AI is sufficiently nebulous that we are more likely to see nuanced definitions focusing on the attributes of AI relevant for the purposes of the specific legislation, rather than a definition of general applicability. I am conscious that this remains a very open question, however.

AI assurance and standards

Global interoperability

The UK government has recognised the importance of securing interoperability with all key markets for the purposes of supporting international trade and economic growth. By gathering inputs from UK stakeholders and communicating these on the global stage, the AI Standards Hub described in the strategy is likely to be crucial. By contributing to and influencing the development of global AI technical and regulatory standards, the government proposes to achieve interoperability and minimise the costs of regulatory compliance without the need for parity in regulatory approach.

This approach is not without risk, however: its success will depend on, among other things, the degree of influence the UK can wield, the timescales for establishing the standards, and whether those standards can satisfy regulatory regimes worldwide.

Consistency of sectoral regulation

The strategy also describes the publication of an AI Assurance Roadmap, which is likely to aim to bring together risk and compliance techniques from various contexts – such as impact assessments, audits and independent verification against standards – to deliver a toolkit from which sector regulators (for example, the FCA, Ofcom and the MHRA) can select when determining what degree of verification of conformance is appropriate to a specific context.

Providing a single toolkit to work from, together with greater collaboration between regulators, could lead to a more consistent approach, potentially reducing the regulatory burden on AI providers.

Conclusions

While participating in multilateral discussions on the world stage to shape approaches to AI governance, the UK government is openly intent on taking a different path to Europe – seeking to gain a competitive advantage on the global stage by taking what it considers to be a more nuanced and business-friendly approach to AI regulation.

On the national scale, this will indeed be helpful to startups and smaller businesses focused initially on the UK market. However, national boundaries are less relevant when it comes to digital products and services: for international businesses and those looking to scale, the greater the divergence between regulatory environments, the greater the cost of ensuring compliance across borders.

My view remains that all businesses will benefit from interoperability between the regimes applicable to AI across the major markets. Based on what I have read, I am comforted that the government appears to share my desire for a global AI ecosystem that promotes innovation and responsible development.

Chris Eastham is a technology, outsourcing and privacy partner at law firm Fieldfisher.
