
Sunak sets scene for upcoming AI Safety Summit

Prime minister Rishi Sunak has outlined how the UK will approach making AI safe, but experts say there is still too big a focus on catastrophic but speculative risks over real harms the technology is already causing

Artificial intelligence (AI) risks are serious but nothing to lose sleep over, prime minister Rishi Sunak said in a speech on the UK’s global responsibility to understand and address potential problems with the technology.

Speaking at the Royal Society on 26 October, Sunak said while he is “unashamedly optimistic” about the transformative potential of technology, the real risks associated with AI need to be dealt with to benefit from the full range of opportunities presented.

“Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale. Criminals could exploit AI for cyber attacks, disinformation, fraud, or even child sexual abuse. And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’,” he said.

“Now, I want to be completely clear – this is not a risk that people need to be losing sleep over right now. I don’t want to be alarmist. And there is a real debate about this – some experts think it will never happen at all. But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious,” he continued.

Sunak further noted that, in the interests of transparency, the UK government has published its analysis of the risks and capabilities associated with “frontier AI”, which will serve to set the discussion for its upcoming AI Safety Summit at Bletchley Park on 1 and 2 November.

Published a day before Sunak’s speech, the analysis paper outlines the current state of frontier AI – defined by government as any highly capable general-purpose AI model that can perform a wide variety of tasks, and which can either match or exceed the current capabilities of the most advanced models – noting that while the technology presents clear opportunities to boost productivity across the economy, it also comes with risks that it could “threaten global stability and undermine our values”.

While outlining how further research is needed into the manifold risks of such AI tools due to the lack of expert consensus around their potential negative impacts, the paper noted that “the overarching risk is a loss of trust in and trustworthiness of this technology, which would permanently deny us and future generations its transformative positive benefits”.

Other AI risks outlined in the paper include a lack of incentives for developers to invest in risk mitigation; its potential to create significant concentrations in market power; disruption of labour markets; disinformation and a general degradation of the information environment; the enabling of increasingly sophisticated cyber attacks; and ultimately the loss of human control over the technology.

Sunak himself added that while the state must play a role in ensuring AI is safe – noting that the only people currently testing the safety of the technology are the very organisations developing it – the UK would not rush to regulate the technology.

“This is a point of principle – we believe in innovation, it’s a hallmark of the British economy, so we will always have a presumption to encourage it, not stifle it. And in any case, how can we write laws that make sense for something we don’t yet fully understand?” he said. “Instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government. To do that, we’ve already invested £100m in a new taskforce, more funding for AI safety than any other country in the world.”

To this end, Sunak also announced that the government would launch the world’s first AI Safety Institute, which “will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of [while] exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.”

On the upcoming AI Safety Summit, Sunak said it would be a priority to “agree the first ever international statement about the nature of these risks” so that a shared understanding could be used as a basis for future action.

Taking inspiration from the Intergovernmental Panel on Climate Change, which was set up to help reach an international scientific consensus, Sunak said he would propose to summit attendees that a similar global expert panel be established for AI, which would be nominated by the countries and organisations attending.

“By making the UK a global leader in safe AI, we will attract even more of the new jobs and investment that will come from this new wave of technology,” he added.

Getting the balance right?

During a pre-summit event at the Royal Society on 25 October, the day before Sunak’s speech, a panel of experts spoke about how the recent emphasis on AI-related existential risks – from government, certain developers and much of the media alike – has drawn focus away from real-world harms the technology is already causing.

For example, while the £100m Frontier AI Taskforce Sunak lauded in his speech is designed to take forward cutting-edge AI safety research and advise government on the risks and opportunities associated with the technology, it has a particular focus on assessing systems that pose significant risks to public safety and global security.

Ben Brooks, head of policy at generative AI firm Stability AI, noted that the emphasis on “frontier risks” implies we are facing a completely new set of issues, rather than questions of transparency, reliability, predictability and accountability that long pre-date the current AI hype cycle.

He added that such an emphasis on frontier risks and models also reduces focus to “a small basket of speculative risks, and ignores the huge number of more immediate, more everyday risks that confront us”.

Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, also spoke about how AI could be used to build the new infrastructural monopolies of the future, which brings up questions about power and who ultimately gets to reap the benefits.

“Those are questions that aren’t on the table at the summit. Those are questions that are not the kinds of things that are motivating bringing people together, countries together, in Bletchley Park next week,” she said, warning against a situation where a small number of firms with advanced AI capabilities get to dictate the terms of the economy going forward.

“There are questions of how we are remaking our economies, how we are remaking our societies. Those questions are not the questions we’re talking about, but it’s the reason why the public is so interested.”

On a similar note, Brooks said the risk that does not get enough airtime is the potential collapse of competition, and what this would mean for wider society.

“AI as a technology has the potential to centralise economic value creation in a way that almost no other technology has … we’ve lived for 20 to 25 years through a digital economy that has one search engine, two or three social media platforms, three or four cloud compute providers,” he said. “There’s a serious risk that we’re going to repeat some of these mistakes in AI unless we think of competition as a policy priority.”

Brooks and others – including Anna Bacciarelli, a technology and human rights programme manager at Human Rights Watch, and Sabhanaz Rashid Diya, founder of the Tech Global Institute – also warned about the “complex supply chain” that sits behind AI as a major safety risk of the technology, and the need to ensure that data labellers and others responsible for training and maintaining artificial intelligence have their employment and human rights respected.

“[AI] models don’t exist in a vacuum; they exist because people design them and make decisions behind them. There are people who are deciding what data goes in, what data goes out, how you’re going to train it, or design it, how you’re going to deploy it, how you’re going to govern it,” said Diya.

Responding to Sunak’s speech, Andrew Pakes, deputy general secretary at the Prospect union, noted the exclusion of workers and trade unions from the upcoming AI Safety Summit, despite the technology’s increasingly widespread deployment in workplaces.

“AI brings with it opportunities for better work, but it also threatens jobs and has the potential to exacerbate prejudices in recruitment and disciplinary processes,” he said.

“No review of AI legislation can be effective without proper consultation with unions and workers who will be affected by any changes. The UK has an opportunity to influence the world of work globally, but only if it leads the way in involving workers in the decisions that will affect them in the coming years.”
