
CDEI: UK public wants clear algorithmic transparency policies

A research exercise by the Centre for Data Ethics and Innovation has found the UK public wants clear, easy-to-understand information about algorithms in use by the public sector

Despite low levels of awareness or understanding of how algorithms are used in the public sector, people feel strongly about the need for transparency once informed, the UK government's advisory body on the responsible use of artificial intelligence (AI) has said.

In its 151-page review into bias in algorithmic decision-making from November 2020, the Centre for Data Ethics and Innovation (CDEI) recommended that the government place a mandatory transparency obligation on all public sector organisations that use algorithms when making significant decisions affecting people’s lives.

“Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals,” it said at the time.

To scope exactly how this transparency obligation could work in practice, and to find which measures would be most effective at promoting better public understanding of algorithms, the CDEI worked with the Central Digital and Data Office (CDDO) and BritainThinks to consult with 36 members of the public over a three-week period.

“This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector and discussing their expectations for transparency, and co-designing solutions together,” wrote the CDEI in a blog post published on 21 June.

“We focused on three particular use-cases to test a range of emotive responses – policing, parking and recruitment,” it said.

The CDEI found that, despite generally low levels of awareness or understanding around how algorithms are used, participants felt strongly about the need for transparency information to be published after being introduced to specific examples of public sector algorithms.


“This included desires for: a description of the algorithm, why an algorithm was being used, contact details for more information, data used, human oversight, potential risks and technicalities of the algorithm,” said the CDEI, adding that it was a priority for participants that this information be both easily accessible and understandable.

To resolve any tension between transparency and simplicity, participants also broke down the information they wanted into different tiers, based on how important it was to the operation of the algorithm and who was likely to access it.

“Participants expected the information in ‘tier one’ to be immediately available at the point of, or in advance of, interacting with the algorithm, while they expected to have easy access to the information in ‘tier two’ if they choose to seek it out,” said the CDEI.

“They expected more that experts, journalists and civil society may access this ‘tier two’ information on their behalf, raising any concerns which may be relevant to citizens.”

Tier two

While tier one information was limited to a description of the algorithm, its purpose, and who to contact for access to more information, tier two information covered the description, purpose and contact point as well as data privacy, human oversight, risks and commercial information, among other details.

“It was also interesting to note how different use-cases impacted how proactively participants felt transparency information should be communicated,” said the CDEI, adding that for lower risk and impact use cases, passively available information – or information that individuals can seek out if they want – was enough on its own.

“We found that the degree of perceived potential impact and perceived potential risk influences how far participants trust an algorithm to make decisions, what transparency information they want to be provided with, and how they want this to be delivered,” it said.

“For higher potential risk and higher potential impact use cases there is a desire not just for information to be passively available and accessible if individuals are interested to know more about the algorithm, but also for the active communication of basic information upfront to notify people that the algorithm is being used and to what end.”

The CDEI added that it will continue its public engagement work which, alongside separate engagement with internal stakeholders and external experts by the CDDO, is expected to inform the development of a standard for algorithmic transparency.
