Where is our National AI strategy heading?

The recent House of Lords Communications Committee expert witness session revealed a somewhat bizarre situation in government relating to our National AI Strategy. 

By her own admission, Dame Wendy Hall, who was originally co-chair of the government’s AI review, no longer appears to be in the loop as to where the national strategy for AI is heading.

She told the committee that the government appears to have shifted focus from the original recommendations of that review. Everything now centres on the highly anticipated AI Safety Summit at Bletchley Park on November 1-2.

When the summit was announced, Prime Minister Rishi Sunak said: “To fully embrace the extraordinary opportunities of artificial intelligence, we must grasp and tackle the risks to ensure it develops safely in the years ahead.”

Technology secretary Michelle Donelan said international collaboration was a cornerstone of the government’s approach to AI regulation.

These are worthy areas to consider, but for Hall, much of the debate on safety revolves around hypothetical risk. There are, of course, a number of very real risks, but Hall believes the more imminent technological threats to society are the deployment of facial recognition technology and deepfakes.

She feels that among the key areas the National AI Strategy needs to focus on are skills and training, and how to reskill people who lose their jobs to automation for high-tech roles. Pointing to the Tortoise Global AI Index, Hall said the UK is falling behind. We were previously considered number three, behind China and the US. Now the UK is number four, overtaken by Singapore. Will the focus on safety help?

In terms of the UK leading the way, the expert witnesses are not wholly convinced. Part of the reason is that the US is an AI powerhouse: any AI business will most likely look first to US regulators and lawmakers. For any products sold in the EU, or that target EU businesses or consumers, they will also need to contend with the rules set by the EU AI Act. The role for the UK will be to collaborate with international regulators to ensure our voice is also heard. However, Microsoft’s recent purchase of Activision Blizzard demonstrates how regulators in different regions can reach differing outcomes.

Another important factor to consider is the use of public data for training large language models. Hall believes we need to learn how to use that data, but why, she asks, “are we giving it away to the US?”

It is commendable to make AI safety a priority, but safety is only the foundation on which a national strategy must be built.
