UK government announces £8.5m in grants for AI safety research

The funding programme will be directed by the UK’s AI Safety Institute, with grants being used to understand and mitigate the impacts of artificial intelligence, including any systemic risks it presents at the societal level

Digital secretary Michelle Donelan has announced that the UK government will provide up to £8.5m in grants for artificial intelligence (AI) safety research, during the second day of the AI Seoul Summit.

The overall research programme will be headed up by the UK’s AI Safety Institute (AISI), which was established in the run-up to the inaugural AI Safety Summit at Bletchley Park in November 2023, with grants awarded to researchers studying how best to protect society from the risks associated with AI, such as deepfakes and cyber attacks.

The government said research grants will also be given to those studying ways to harness the benefits of AI to, for example, increase productivity, adding that the most promising proposals will be developed into longer-term projects and could receive further funding down the line.

Delivered in partnership with UK Research and Innovation (UKRI) and the Alan Turing Institute, the AISI programme will also aim to collaborate with other AI safety institutes globally, as per the Seoul Statement of Intent toward International Cooperation on AI Safety Science signed by 10 countries and the European Union (EU) on the first day of the South Korean summit.

The government added that the grant programme is also designed to broaden the AISI’s remit to include the field of “systemic AI safety”, which aims to understand and mitigate the impacts of AI at a societal level, as well as to figure out how various institutions, systems and infrastructure can adapt to the technology.

While grant applicants will need to be based in the UK, the government said they will be actively encouraged to collaborate with other researchers from around the world.

“When the UK launched the world’s first AI Safety Institute last year, we committed to achieving an ambitious yet urgent mission to reap the positive benefits of AI by advancing the cause of AI safety,” said Donelan.

“With evaluation systems for AI models now in place, Phase 2 of my plan to safely harness the opportunities of AI needs to be about making AI safe across the whole of society.

“This is exactly what we are making possible with this funding, which will allow our institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good.”

Christopher Summerfield, the AISI’s research director, described the new programme as “a major step” to ensuring AI is deployed safely throughout society. “We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do,” he said. “This programme is designed to generate a huge body of ideas for how to tackle this problem, and to help make sure great ideas can be put into practice.”

On the first day of the Seoul Summit, the government announced the 10 finalists of its inaugural Manchester Prize, which was set up in March 2023 to fund AI breakthroughs that contribute to the public good. The finalists will each receive a share of the £1m prize money, and will look to address energy, infrastructure and environmental challenges using AI.

The finalists will also benefit from comprehensive support packages, including funding for computing resources, investor readiness support, and access to a network of experts.

A week before the funding announcement, the AISI publicly released a set of AI safety test results for the first time, which found that none of the five unnamed models tested were able to do more complex, time-consuming tasks without humans overseeing them, and that all of them remain highly vulnerable to basic “jailbreaks” of their safeguards. It also found that some of the models will produce harmful outputs even without dedicated attempts to circumvent these safeguards.

The AISI also announced plans to open a new branch in San Francisco over the summer to access leading AI companies and Bay Area tech talent.

Speaking in Seoul, Donelan added: “I am acutely aware that we can only achieve this momentous challenge by tapping into a broad and diverse pool of talent and disciplines, and forging ahead with new approaches that push the limit of existing knowledge and methodologies.”

On the first day of the AI Seoul Summit, 16 AI companies from around the globe signed the Frontier AI Safety Commitments, a set of voluntary commitments to ensure they develop the technology safely and responsibly.

One of the key voluntary commitments made is that the companies will not develop or deploy AI systems if the risks cannot be sufficiently mitigated, although red lines are yet to be set around what constitutes an unmitigable risk.

The EU and the same 10 countries that signed the Seoul Statement of Intent around research collaboration on AI safety also signed the Seoul Declaration, which explicitly affirmed “the importance of active multi-stakeholder collaboration” in this area and committed the governments involved to “actively” include a wide range of stakeholders in AI-related discussions.

Signatories included Australia, Canada, the EU, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the UK and the US.
