
Alan Turing Institute hits ‘key milestone’ in AI strategy

The Alan Turing Institute reaffirms its commitment to addressing “grand challenges” in health, environment and security with the appointment of four new directors

The Alan Turing Institute has appointed four new directors of science and innovation to its senior scientific leadership team, which it said is a key step towards implementing its new strategy.

The institute was founded in 2015, with the goal of making “great leaps in the development and use of data science and artificial intelligence [AI] to change the world for the better”. It launched its new strategy, Turing 2.0, based around three “grand challenges”, in March 2023.

The grand challenges cover environment and sustainability, health, and defence and national security, and sit alongside the institute’s work on AI and data science research.

The strategy aims to address the risks posed by AI technologies used without transparent processes and good human oversight – ensuring these technologies are used ethically and for societal good is one of the biggest challenges, the institute said when the strategy was launched.

The Turing Institute has now confirmed Marc Deisenroth will lead its Environment and Sustainability grand challenge. Deisenroth joins the Turing Institute from UCL, where he is the Google DeepMind Chair of AI and machine learning.

In its explanation of the environment challenge, the Turing Institute said that because of its expertise in applying data science and AI across a broad range of science and engineering, it’s well-placed to play a central role “in tackling and averting the climate and biodiversity crisis to help society and life prosper in a net-zero world”.

Andrew Duncan will lead the Turing’s Fundamental Research. He has been a senior lecturer in the maths department at Imperial College London, and was lead scientist in the defence division at UK technology company Improbable. He has also previously been a group leader for the data-centric engineering programme at the Turing Institute.

Health and wellbeing

Aldo Faisal will head up the grand challenge in Health, which aims to improve the nation’s health and wellbeing. He is a professor of AI and neuroscience at Imperial College London, and the founding director of the UK Research and Innovation (UKRI) Centre in AI for Healthcare and the UKRI Centre in AI for Digital Healthcare.

The Turing Institute’s page about the healthcare challenge said the opportunities to use data science and AI to enable a more proactive focus on prevention, the earlier identification of disease, and earlier, better-targeted interventions to improve health for all “are yet to be fully realised”.

Tim Watson will lead the Defence and National Security grand challenge. Professor Watson joined the Turing Institute in January 2022 to take up the role of programme director of defence and security, and he is also director of the Cyber Security Centre at Loughborough University.

The Turing Institute’s explanation of this challenge notes that the increased use of networked systems and the sharing of private data gives rise to new insecurities and risks, both technical and social. “Further, the speed of technological change poses a unique challenge to the development of new laws, policy, and governance,” it said. “The UK must continue to stay ahead in researching, developing and integrating technologies to meet these challenges, or risk falling behind on the global stage.”

The government’s chief scientific adviser, Angela McLean, said: “I am excited by the potential of AI to change our world for the better. These are all areas where AI is starting to have an impact, and I am looking forward to seeing what these grand challenge areas can deliver under the leadership of the new directors.”

The institute has also appointed Jon Crowcroft and Mike Wooldridge as special advisers to further strengthen its scientific leadership.


Demis Hassabis, co-founder and CEO of Google DeepMind, said of the new appointments: “We are living through a time of tremendous progress in the field of artificial intelligence, and will need their energy and insight to make sure its promises are shared by all. Turing himself once put it best: ‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’”

The next step is for the directors to start mapping out their priorities, or “missions”, for each of these areas to ensure society benefits from the positive impact of AI and data science – while minimising the risks that emerging technologies can pose.

The aim of the Turing 2.0 strategy, according to the institute, is that 10 years from now, the institute will be internationally recognised as a centre of research and innovation for harnessing data science and AI to make a lasting impact on the world’s most pressing societal issues. 

The strategy also came as a response to the dramatic rise of large language models such as ChatGPT, which has created much excitement – and also some concerns. For example, in December 2023, the institute’s Centre for Emerging Technology and Security warned that generative AI (GenAI) use could cause significant harm to the UK’s national security in unintended ways.

Much of the debate about AI threats has focused on the risks from groups that set out to inflict harm using GenAI, such as through disinformation campaigns and cyber attacks, it said. But while GenAI will amplify the speed and scale of these activities, it said governments should also beware of the unintentional dangers posed by experimentation with GenAI tools, and of “excessive risk-taking” resulting from over-trusting AI outputs and a fear of missing out on the latest technological advances.

The government is keen to promote the UK as a centre for AI innovation, both as a source of new high-tech job opportunities and a way of boosting productivity.

Earlier this month, the government said it would not bring in any specific regulation of AI, at least for now. Instead, it will provide extra funding to existing regulators so that they can build the tools and capabilities needed to understand AI risks. However, the government did say the challenges posed by AI technologies “will ultimately require legislative action in every country once understanding of risk has matured”.
