
AI bias and privacy issues require more than clever tech

The new AI Barometer report from the Centre for Data Ethics and Innovation has assessed the threats and opportunities of artificial intelligence in the UK

Algorithmic bias, lack of artificial intelligence (AI) explainability and failure to seek meaningful consent for personal data collection and sharing are among the biggest barriers facing AI, according to analysis from the Centre for Data Ethics and Innovation (CDEI).

The CDEI’s AI Barometer analysis was based on workshops and scoring exercises involving 120 experts. The study assessed opportunities, risks and governance challenges associated with AI and data use across five key UK sectors.

Speaking at the launch of the report, Michael Birtwistle, AI Barometer lead at the CDEI, said: “AI and data use have some very promising opportunities, but not all are equal. Some will be harder to achieve but have high benefits, such as realising decarbonisation and understanding public health risk, or automatic decision support to reduce bias.”

Birtwistle said the CDEI analysis showed that what these application areas have in common is complex data flows about people that affect them directly. “We are unlikely to achieve the biggest benefits without overcoming the barriers,” he added.

Roger Taylor, chair of the CDEI, said: “AI and data-driven technology has the potential to address the biggest societal challenges of our time, from climate change to caring for an ageing society. However, the responsible adoption of technology is stymied by several barriers, among them low data quality and governance challenges, which undermine public trust in the institutions that they depend on.

“As we have seen in the response to Covid-19, confidence that government, public bodies and private companies can be trusted to use data for our benefit is essential if we are to maximise the benefits of these technologies. Now is the time for these barriers to be addressed, with a coordinated national response, so that we can pave the way for responsible innovation.”

The report found that the use of biased algorithmic tools – arising, for example, from biased training data – entrenches systematic discrimination against certain groups, as with reoffending risk scoring in the criminal justice system.
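To make that mechanism concrete, the short Python sketch below (not drawn from the CDEI report; all data, variable names and the size of the injected bias are synthetic and hypothetical) shows how a bias baked into historical labels resurfaces in a model’s risk scores.

```python
# Minimal sketch of how historical bias in training labels can propagate
# into a risk-scoring model. Everything here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0/1) and one legitimate risk feature.
group = rng.integers(0, 2, n)
risk_factor = rng.normal(0, 1, n)

# Simulate historically biased labels: group 1 was flagged more often than
# its underlying risk justifies (the +0.8 term is the injected bias).
logits = risk_factor + 0.8 * group
labels = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit a simple scoring model on the biased labels. The protected attribute
# is included directly here to make the effect easy to see; in practice,
# correlated proxy features can leak the same bias even when it is excluded.
X = np.column_stack([risk_factor, group])
model = LogisticRegression().fit(X, labels)
scores = model.predict_proba(X)[:, 1]

# Demographic parity gap: difference in mean predicted risk between groups.
gap = scores[group == 1].mean() - scores[group == 0].mean()
print(f"Mean predicted risk gap between groups: {gap:.3f}")
```

In this toy setup the score gap between groups reflects nothing about underlying risk, only the bias planted in the historical labels – the pattern of entrenchment the report describes.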

Bias is systemic

During a virtual panel discussion at the launch of the AI Barometer, Areeq Chowdhury, founder of WebRoots Democracy, discussed how technology can inadvertently amplify systemic discrimination. For instance, while there is a vigorous public debate about the accuracy of facial recognition systems in identifying people from black and Asian minority groups, the ongoing racial tension in the US has shown that the problem is wider than the technology itself.

According to Chowdhury, such systemic discrimination builds up from a collection of policies over a period of time.

The experts who took part in the CDEI analysis raised concerns about the lack of clarity over where oversight responsibility lies. “Despite AI and data being commonly used within and across sectors, it is often unclear who has formal ‘ownership’ of regulating its effects,” said the CDEI in the report.

AI needs cross-industry data regulations

Cathryn Ross, head of the Regulatory Horizons Council, who also took part in the panel discussion, said: “A biting constraint on the take-up of technology is public trust and legitimacy. Regulations can help to build public trust to enable tech innovation.”

Mirroring her remarks, fellow panellist Annemarie Naylor, director of policy and strategy at Future Care Capital, said: “Transparency has never been so important.”

The AI Barometer also reported that the experts the CDEI spoke to were concerned about low data quality, availability and infrastructure. It said: “The use of poor quality or unrepresentative data in the training of algorithms can lead to faulty or biased systems (eg diagnostic algorithms that are ineffective in identifying diseases among minority groups).

“Equally, the concentration of market power over data, the unwillingness or inability to share data (eg due to non-interoperable systems), and the difficulty of transitioning data from legacy and non-digital systems to modern applications can all stymie innovation.”
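The diagnostic failure mode the report cites can be illustrated with another hedged sketch. Assuming a purely synthetic cohort in which a minority group is under-represented in training and its disease threshold on a single hypothetical biomarker differs, a model fitted mostly to the majority misses more minority cases.

```python
# Minimal sketch (synthetic data, hypothetical setup): a training set that
# under-represents a minority group yields a diagnostic model that is less
# sensitive for that group, as the report warns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

def make_cohort(n, cutoff):
    """Synthetic patients: disease is present past a group-specific
    cutoff on a single biomarker."""
    x = rng.normal(0, 1, n)
    y = (x > cutoff).astype(int)
    return x.reshape(-1, 1), y

# Training mix: 95% majority group (cutoff 0.0), 5% minority group,
# whose disease begins at a lower biomarker level (cutoff -0.7).
X_maj, y_maj = make_cohort(9_500, 0.0)
X_min, y_min = make_cohort(500, -0.7)
X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Evaluate sensitivity (recall) separately on fresh data from each group.
for name, cutoff in [("majority", 0.0), ("minority", -0.7)]:
    X_test, y_test = make_cohort(2_000, cutoff)
    recall = recall_score(y_test, model.predict(X_test))
    print(f"{name} group sensitivity: {recall:.2f}")
```

The model’s decision boundary settles near the majority cutoff, so minority cases between the two cutoffs are systematically missed – a gap that comes entirely from the skewed training mix, which is precisely the unrepresentative-data failure the report describes.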

The CDEI noted that there is often disagreement among the public about how and where AI and data-driven technology should be deployed. Innovations can pose trade-offs such as between security and privacy, and between safety and free speech, which take time to work through.

However, the lockdown has shown that people are prepared to make radical changes very quickly if there are societal benefits. This has implications for data privacy policies. 

The challenge for regulators is that existing data regulations are often sector-specific. In Ross’s experience, technological innovation with AI cuts across different industry sectors. She said a fundamentally different approach was needed, one that coordinates regulation across those sectors.

Discussing what the coronavirus has taught policy-makers and regulators about people’s attitudes to data, Ross said: “Society is prepared to take more risk for a bigger benefit, such as saving lives or reducing lockdown measures.”

