The UK government will host an international artificial intelligence (AI) safety summit at Bletchley Park on 1 and 2 November 2023. The summit will examine the risks of AI and how they can be mitigated through internationally coordinated action.
Matt Clifford, co-founder and CEO of investment firm Entrepreneur First, and former senior diplomat Jonathan Black, recently appointed as the Prime Minister’s representatives, have been tasked with rallying leading AI nations and experts over the next three months. Their goal is to ensure the summit provides a platform for countries to work together on a shared approach to the safety measures needed to mitigate the risks of AI.
Commenting on the summit, Prime Minister Rishi Sunak said: “To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead. With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”
Technology secretary Michelle Donelan said international collaboration was a cornerstone of the government’s approach to AI regulation. “We want the summit to result in leading nations and experts agreeing on a shared approach to its safe use,” she said.
The organisers said the summit would also build on ongoing work at international forums, including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards development organisations, as well as the recently agreed G7 Hiroshima AI Process.
Foreign secretary James Cleverly said: “No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.”
The origins of modern AI and computing can be traced back to Bletchley Park. The site is known for the cracking of the German Enigma cipher and for Colossus, the world’s first programmable electronic computer, developed by Post Office engineer Tommy Flowers. Colossus was used to break German encryption and contributed to the successful outcome of World War Two.
While AI promises to improve productivity and automate manually intensive processes across the public and private sectors, there are many who believe there is a risk that AI systems will amplify biases that already exist in society.
In June, following the publication of a government whitepaper on AI regulation, the Equalities and Human Rights Commission (EHRC) said it was broadly supportive of the UK’s approach, but that more must be done to deal with the negative human rights and equality implications of AI systems.
It said the proposed regulatory regime would fail if regulators – including itself, the Information Commissioner’s Office (ICO) and others involved in the Digital Regulators Cooperation Forum (DRCF) – were not appropriately funded to carry out their functions.
Last week, the government announced £13m of funding for AI-driven healthcare research. The funding supports a raft of new projects, including transformations to brain tumour surgery, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.
Read more about AI safety
- Leaving generative AI unchecked risks flooding platforms with disinformation, fraud and toxic content. But proactive steps by companies and policymakers could stem the tide.
- Researchers find they can trick AI chatbots including OpenAI ChatGPT, Anthropic Claude and Google Bard into providing disinformation, hate speech, and other harmful content.