How can YOU influence the future of UK AI Regulation?

The draft programmes for the Conservative and Labour party conferences indicate the scale of effort that corporate lobbyists, digital trade associations and special interest groups are putting into lobbying officials and politicians to influence whether post-Brexit UK regulation, including that over AI, will align with that of the EU or US, or follow a third way akin to the interests of their clients and/or members. I covered that choice in "What AI Global Leadership should the UK offer?", when I reviewed the excellent book by O'Hara and Hall on the Four Internets.

What the lobbyists may not have noticed is that at the last Conservative Party conference the sessions most heavily attended by party members were the brainstorming sessions organised by the Conservative Policy Forum. Each was standing room only. There is now a serious risk of an outbreak of democracy in the Conservative Party. The current leadership of the Labour Party seems similarly to be looking for policies that will appeal to voters, not just to the lobbyists and consultants it is recruiting to help its policy studies in a re-run of the New Labour project.

I do not yet know the CPF programme for this year's party conference but:

the first of the CPF consultations looking beyond the immediate political future is on The Future of Deregulation and Artificial Intelligence.

The deadline for responses is 3rd September.

The Chair of the Conservative Science and Technology Forum has issued a draft response (below) for review on 31st August.

I have agreed to help look after the governance of that review – to ensure balance. I have not worked for a technology supplier since before I created the Technology Assessment Operations of the National Computing Centre in 1982, having spent five years looking at the impact of technology, including AI, big data and robotics, on healthcare. At various times since then I have had to organise reviews, and inputs to reviews, but always from the perspective of the "victims" – alias the customers, investors and politicians who will be blamed when the dreams to which they have lent their reputations turn into nightmares.

Given my long preference for policies which command cross-party consensus, I have also been given permission to blog the draft and call for feedback to [email protected]. When sending feedback, please begin by giving your name and any relevant professional background/expertise; also indicate whether you are a member of the party and, if so, which constituency. CSTF is currently in the process of restructuring its membership routines, so please indicate whether you would like an invitation to join. It does have a process for those who are not party members but wish to help inform policy.

I also remind readers that the origins of CSTF go back to the Conservative Computer Forum, founded in 1978 to discuss the impact of AI on skills and jobs and to look towards policies for the 1979 General Election. We tend to forget that the term AI was coined in the mid-1950s to secure funding for a US research programme into algorithmic computing. In 1979 the Conservative Political Centre, forerunner to the CPF, published Cashing in on the Chips, with a recommendation for one of only two UK skills programmes to be delivered to time and budget and to achieve its objectives – and far more. That one was for a micro-computer in every school by 1982. The other was Gordon Brown's Millennium Bugbusters programme, which transformed the UK supply of those competent to check for two-digit dates in microcomputers and control systems. It too had an impact well beyond its immediate objective, showing how the supply of skills in short supply could be transformed by short, modular, hands-on courses under industry-led quality control.

= = =

Draft CSTF Response to CPF Consultation on

The Future of Deregulation and Artificial Intelligence 

  1. What regulations hold back businesses and should the government consider for reform?

Summary – key issues are:  

  • ensuring regulators provide clarity over the application of existing law,
  • building expertise and monitoring capacity in Government (as per the AI white paper),
  • ensuring Government reacts effectively as markets innovate and existing law requires amendment, and that there is clear leadership for this role.

There is a regulatory principle that, for a general purpose technology, one looks at the context of use. It is inherently a risk to innovation to regulate the technology directly. In that sense the UK already has a world-leading policy position in its AI white paper.

AI, as the term is normally meant, is clearly a general purpose technology; the priority is therefore to ensure that existing regulations and regulators have the capability and support to monitor and adapt to AI as the market develops.

The UK has already made significant advances in creating dedicated supporting institutions – the Office for AI and the Centre for Data Ethics and Innovation are new – but key specific issues will also fall in particular to the ICO, IPO and OFCOM.

Although, here as elsewhere, the issues are not new, the emergence of generative AI and Large Language Models in particular raises specific and challenging issues right now. In the first instance the priority is simply clarity on how existing law will apply, in order to avoid uncertainty and loss of investment and to minimise the expensive and high-risk dependence on setting precedent through the courts. Although legal challenges have a justified place, the courts are not explicitly required to look at issues such as impact on economic growth, innovation and international convergence, and as such create risks. Again, this has been recognised in the AI white paper, but these points are worth calling out.

If AI is trained on publicly available data and used to create new material at scale, it increases the need for clarity on when that output is derived content, needing a licence from the owner of the IP it is based on, and when it is transformational and new content. Similarly we may need more clarity on when commercial exploitation occurs – at the 'reading' (training) stage or at the downstream publication stage. It is important to emphasise that this issue is not new: we all read such material and it informs our work. If a writer or artist creates something that is perceived both to create commercial benefit and to infringe copyright, it is ultimately up to the legal process to resolve the question. Should such issues arise at an unprecedented rate, however, the legal system could be overwhelmed and costs could become prohibitive.

Similarly, if misinformation (in all its many forms, including political misinformation, fraud and scams) is produced at unprecedented scale and exploits AI for bespoke niche targeting, then our existing processes, already struggling, would be overwhelmed. We might respectfully suggest that the current regulatory debate on online harms be prioritised on such matters, given the expectation of a forthcoming general election. Key issues, such as the ability to identify users of social media platforms, are nuanced and finely balanced; AI does not change the principles involved, but may mean that resolving them is a far greater priority than is yet recognised.

It is unlikely that there will be either the political will or the public appetite for significant deregulation in the 'context of use' areas. It could nonetheless be envisaged: copyright, for example, has been extended and extended, and its scope is now far beyond what was originally intended. Alternatively, the onus could be placed on content creators to update and attach explicit licences, but that takes us into international agreements such as the Berne Convention and WIPO.

However, there is a clear need to empower our institutions to take up and address issues as they arise, across Whitehall and with the other regulators. It is vital that this responsibility is clearly assigned and politically empowered and does not fall between DBT, DSIT and the Office for AI. The need to be able to test and innovate currently leads to appeals to 'regulatory sandboxes'. These are entirely valid, but should be regarded as a safety valve: they do not replace an innovation-friendly regulatory environment; indeed, extensive use would indicate a failing one.

As an example, the UK has made clear rules allowing the testing of self-driving cars on public roads. Should such a car ever be developed to the point where a supplier wanted to deploy it, the UK would in effect have to reassess the application and meaning of the equivalent of a driving test and driving licence for such a car or service. The UK led in allowing such testing; it needs to maintain that leadership and fulfil that promise, but that is a systemic issue in our overall governance. In some areas this is known and expected – medical devices, for example – but existing regulators may have expertise and capacity issues, and no one can predict where something as general-purpose as AI might make it economically important to update our regulation rapidly but effectively. It should be noted, however, that this is again a generic issue for all innovation.

(for IPO-related issues, see the answers to question 3)

  2. How can we ensure that AI systems are developed and deployed in a manner that aligns with ethical principles and respects societal values?

Summary – apply extant sector ethics where they exist; otherwise rely on the law covering the context of use.

Whilst some sectors – legal and medical, for example – have clear statements of the ethics that apply in their context of use, in general this is not the case.

Ultimately, a combination of transparency and market forces is what largely enforces societal values. That is not to say that issues already enshrined in law, such as privacy, should be relaxed. Again, the issue is clarity on the application of existing law and possible re-prioritisation of existing issues.

Care should also be taken in considering what ethics means for a general purpose technology – yet another way of pointing out that context of use is the dominant factor. It is instructive to consider the Open Source Definition, which includes:

No Discrimination Against Fields of Endeavor

The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

  3. How can we safeguard individual privacy rights and protect sensitive data in the age of AI?

Summary – key points:

  • ensure that use of research exemptions can lead to deployable solutions,
  • take a market-based, not company-based, approach to ensuring that alternatives to automated processing exist.

It should be noted that the ICO has already set out extensive guidance and that the draft Data Act contains a limited extension of the purposes allowed under existing consent for scientific research. As such, it is worth pointing out that AI does not change the current law. In brief, a developer needs consent or a legitimate interest to process personal data, and you have the right not to be subject to solely automated 'individual' decision-making.

AI does bring into focus some key priorities and questions:

 

It is worth noting that in most cases, if data is used to train an AI for a specific purpose, the data is neither retained nor reproducible. For example, if your medical data is used to train a diagnostic tool, the result will simply be better because more data was used, but your data will not be stored in, or reproducible from, the neural network. (Note this is not true of testing data, nor of general language-processing models such as LLMs.) As such, there is a question over the balance of public interest.
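To make the distinction concrete, here is a minimal sketch (assuming Python with scikit-learn and synthetic data – purely illustrative, not part of the consultation text) contrasting a parametric model, which keeps only learned weights, with an instance-based model, which stores the training rows verbatim:

```python
# Illustrative only: synthetic data standing in for patient records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1,000 synthetic "patients", 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic diagnosis label

# A parametric model: the fitted artefact holds only 5 coefficients
# and an intercept, not the 1,000 training records.
model = LogisticRegression().fit(X, y)
print(model.coef_.shape)       # (1, 5)

# An instance-based model, by contrast, memorises the training set
# (sklearn keeps it in a private attribute; shown here for illustration).
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn._fit_X.shape)        # (1000, 5) – every training row retained
```

Whether personal data survives inside the deployed artefact thus depends on the model class, which is why the caveat above about testing data and LLMs matters.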

If a research exemption is allowed, how does one then move the outcome of that research into a viable deployment under the current proposals without going back to duplicate the seeking of consent and retraining, possibly with significantly less data and at vast expense?

So whilst it seems disproportionate to change the existing approach to consent significantly, there should be a move to extend the carve-out for research and to allow the move to actual deployment, as long as privacy is protected.

It should be noted that this highlights something of a fundamental issue: data protection legislation is very much process-driven rather than outcome-driven.

The right 'not to be subject to solely automated individual decision-making' is currently interpreted very narrowly. However, it should be considered whether this should be a right 'as is', applying to every single service provider, or whether we take a market-level view and allow cheaper automated services and premium personal services to emerge, analogous to the impact direct telephone and internet sales had on the insurance market.

  4. What strategies can be implemented to reskill or upskill workers whose jobs may be at risk of automation and AI-driven job displacement?

Summary – initially use a small 'nudge' super tax credit, covering training time as well as external costs.

Skills requirements will only become known across the wider workforce as the technology is applied sector by sector. In the meantime, skills gaps are widely known, driven not only by digital adoption, including of AI, but also by issues such as net zero, globalisation and many other factors.

The economic argument for a super tax credit is the same as for R&D: companies under-invest because of spill-over benefits.

We suggest a small credit because data on actual training is minimal and needs to be created; the credit should then give way to an evidence-based approach, allowing competition, and transparency on levels of training, to emerge across sectors.
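As a rough illustration of the mechanism (all rates below are assumptions for the sake of the example, not proposed figures), a super tax credit works by letting a company deduct more than 100% of eligible spend, in the style of R&D relief:

```python
# Hypothetical worked example of a small "nudge" super tax credit for
# training spend. Both rates are illustrative assumptions only.
CORPORATION_TAX = 0.25   # assumed headline corporation tax rate
SUPER_DEDUCTION = 1.30   # assumed: deduct 130% of eligible training spend

def extra_relief(eligible_spend: float) -> float:
    """Tax saved beyond an ordinary 100% deduction of the spend."""
    extra_deduction = eligible_spend * (SUPER_DEDUCTION - 1.0)
    return extra_deduction * CORPORATION_TAX

# An SME spending £10,000 on courses plus costed staff training time:
print(f"£{extra_relief(10_000):,.2f}")   # £750.00 of additional relief
```

A deduction rate only slightly above 100% keeps the 'nudge' small while still creating the reporting trail needed for the evidence-based approach suggested above.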

  5. How can we bridge the existing digital divide and ensure fairness and equity in the development and deployment of AI technologies, particularly in relation to access, benefits, and opportunities?

Summary – again, a small 'nudge' super tax credit, this time on the adoption of key business productivity technology.

The key risk is that the gap between productive, high-digital-intensity companies and the rest will widen as AI is rolled out into business products. The UK picture is one of very high consumer and B2C adoption, but rather poor adoption in business generally. In many senses AI is already in the hands of the consumer every day, both directly via products like voice assistants and embedded much more widely in services across the board.

According to the ONS, use of any combination of enterprise resource planning, customer relationship management and supply chain management technologies is associated with a productivity premium of around 25%. Again, data on exact levels of adoption is poor, especially since the UK fell out of DESI (the Eurostat Digital Economy and Society Index), but the UK was in the bottom quadrant for business adoption and well below the EU average.

Today virtually all the easy paths to adoption are services rather than capex, so a super tax credit is required. The idea is to stimulate adoption and ensure it becomes an issue for the ecosystem around business – accountants, advisers and so on.

Key design issues learnt from the now defunct Help to Grow Digital scheme:

  • Start small and build data for evidence-based policy.
  • The choice of supplier should lie with the SME (not the vendor or, worse, officials, as happened with Help to Grow Digital).
  • The value of any benefit should go to the SME (not the vendor, as happened with Help to Grow Digital – it just offsets competitive discounting).
  • Upgrades, cross-selling etc. should *not* be supported – the issue is getting companies to engage in first adoption; it is up to vendors to prove their worth after that.
  • Value-added resellers are critical – they have the niche domain expertise to support SMEs.
  • Simple is good!

  6. How can we establish mechanisms to hold AI systems and their developers accountable, ensuring transparency in their decision-making processes and data usage?

Ultimately, the best enforcer is an informed and demanding customer base. It is also the case that we should be careful not to create an additional hurdle just for AI. Data that is under copyright, personal, or state-confidential is already carefully controlled by law and contract.

We already have careful rules on safety-critical systems in place. As above, regimes such as product safety standards need to be able to take AI into account, both in terms of not creating anti-innovation barriers and of preventing unintended consequences.

  7. How can we address the national security implications of AI, including potential vulnerabilities and threats?

This is mostly addressed by the discussion above on the potential for targeted misinformation at scale. In many senses AI adds a significant element to the armoury of data analytics and data processing, but it is not per se new.

  8. How can we foster international cooperation and establish global norms, standards, and frameworks for AI development, deployment, and regulation?

In the main, the UK should show leadership by contributing to and deploying (both as developer and as procurer) international standards, using de jure and wider standardisation bodies that respect the WTO TBT Annex 3 requirements. This means avoiding direct legislation of technology, recognising and validating the use of such standards, being very clear on their value and on any limitations that need to be addressed, and contributing that experience back into the standardisation efforts.

Avoiding the international fragmentation we have seen in areas such as cybersecurity, with multiple unique national approaches, is paramount, as is building on the UK's traditional leadership in this area.

  9. Is there any other observation you would like to make?

In looking at any new regulatory proposals it is important to ask:

  • Is your proposed new regulation really necessary?
  • What abuse would it prevent, and at what cost?