CDEI research pinpoints barriers to trustworthy AI exploitation
A survey of UK businesses by the Centre for Data Ethics and Innovation has revealed the patchiness of artificial intelligence and data-driven technology adoption across different economic sectors
A survey of UK businesses by the government-backed Centre for Data Ethics and Innovation (CDEI) has shed light on the patchiness of artificial intelligence (AI) and data-driven technology adoption.
The survey of almost 1,000 organisations reveals lower adoption among healthcare businesses (12%) than in digital and communications, where one in five use AI, albeit only 4% extensively.
The research shows a range of barriers to AI exploitation, chiefly relating to data sharing. Some 70% of respondents said they wanted more information to help them navigate the legal requirements concerning data collection, use and sharing. Nearly a quarter (23%) cited difficulty accessing quality data as a barrier, and 43% highlighted limited technological capabilities.
The survey was carried out for the CDEI by Ipsos Mori among 965 businesses across eight sectors, between March and May 2021.
The centre has also published the second edition of its “AI barometer”, a report informed by more than 80 AI specialists. The CDEI said the report identifies “untapped opportunities for innovation in three key sectors which have been particularly affected by the Covid-19 pandemic”.
These sectors are: transport and logistics, with “opportunities to improve energy efficiency, drive down emissions, and yield better environmental outcomes, as well as smooth trade flows at borders”; recruitment and employment, where “data-driven innovation has the potential to improve talent pipelines, enable greater access to job opportunities and reduce bias and discrimination”; and education, where the centre pointed to “the potential to reduce the administrative burden on teachers and increase social mobility”.
The CDEI said it is working closely with DCMS [the Department for Digital, Culture, Media and Sport] to deliver the National Data Strategy and “enable trustworthy access to quality data”, by exploring new approaches to data governance, such as data intermediaries, and emerging technical solutions, including through a new UK-US R&D effort to mature privacy-enhancing technologies.
Chris Philp, minister for technology and the digital economy, said: “Data and AI can be harnessed to support both our economic and social recovery. Understanding how we can best use technologies to address major shifts in labour markets and the ways that we work, deliver education or decarbonise our transport infrastructure, will be crucial to this mission.
“I look forward to working with organisations across the UK to address the barriers to innovation highlighted in the CDEI’s analysis, so that the UK can unlock the full potential of data and AI.”
Edwina Dunn, interim chair at the CDEI, said: “Data and AI can help tackle some of the greatest challenges of our time. To achieve this, we need to overcome barriers to innovation, such as poor-quality data, and address risks, such as algorithmic bias. The CDEI is working in partnership with a range of organisations to help them overcome these barriers, mitigate risk and put high-level ethical principles – such as accountability and transparency – into practice. It is practical work like this that will enable us to build greater public trust in how data and AI are used.”
The CDEI published a “roadmap” for AI assurance in the UK earlier this month. The document describes what should be done to verify that AI systems are effective, trustworthy and compliant. It constitutes one of the commitments set out in the government’s National AI Strategy, which was published in September 2021.