
Algorithmic accountability needs meaningful public participation

Global analysis by Ada Lovelace Institute and other research groups finds algorithmic accountability mechanisms in the public sector are hindered by a lack of engagement with the public

Algorithmic accountability policies should prioritise meaningful public participation as a core policy goal, so that any deployment actually meets the needs of affected people and communities, according to a global study of algorithms in the public sector.

The study – conducted by the Ada Lovelace Institute in collaboration with the AI Now Institute and the Open Government Partnership – analysed more than 40 examples of algorithmic accountability policies at various stages of implementation, taken from more than 20 national and local governments in Europe and North America.

“This new joint report presents the first comprehensive synthesis of an emergent area of law and policy,” said Carly Kind, director of the Ada Lovelace Institute. “What is clear from this mapping of the various algorithmic accountability mechanisms being deployed internationally is that there is clear growing recognition of the need to consider the social consequences of algorithmic systems.

“Drawing on the evidence of a wide range of stakeholders closely involved with the implementation of algorithms in the public sector, the report contains important learnings for policymakers and industry aiming to take forward policies in order to ensure that algorithms are used in the best interests of people and society.”

The research highlighted that, although algorithmic accountability is a relatively new area of technology governance, governments and public sector bodies are already using a wide variety of policy mechanisms to strengthen it.

These include: non-binding guidelines for public agencies to follow; bans or prohibitions on certain algorithmic use cases, particularly live facial recognition; external oversight bodies; algorithmic impact assessments; and independent audits.

However, the analysis found that very few policy interventions have meaningfully attempted to ensure public participation, either from the general public or from people directly affected by an algorithmic system.

It said that only a minority of the accountability mechanisms reviewed had adopted clear and formal public engagement strategies or included public participation as a policy goal – most notably New Zealand’s Algorithm Charter and the Oakland Surveillance and Community Safety Ordinance, both of which required extensive public consultation.

“Proponents of public participation, especially of affected communities, argue that it is not only useful for improving processes and principles, but is crucial to designing policies in ways that meet the identified needs of affected communities, and in incorporating contextual perspectives that expertise-driven policy objectives may not meet,” the analysis said.

“Meaningful participation and engagement – with the public, with affected communities and with experts within public agencies and externally – is crucial to ‘upstreaming’ expertise to those responsible for the deployment and use of algorithmic systems.

“Considerations for public engagement and consultation should also keep in mind the forums in which participation is being sought, and what kind of actors or stakeholders are engaging with the process.”

Read more about algorithmic accountability

  • The UK’s Taskforce on Innovation, Growth and Regulatory Reform has recommended scrapping safeguards against automated decision-making contained in the General Data Protection Regulation.
  • Despite the abundance of decision-making algorithms with social impacts, many companies are not conducting specific audits for bias and discrimination that can help mitigate their potentially negative consequences (a minimal sketch of one such check appears after this list).
  • Many of the regulatory bodies overseeing algorithmic systems and the use of data in the UK economy will need to build up their digital skills, capacity and expertise as the influence of artificial intelligence and data increases, MPs have been told. 
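
To make the bias-audit point above concrete, here is a minimal sketch of one check such an audit might run: comparing approval rates across demographic groups, sometimes called a demographic parity check. It is written in Python; the sample data, group labels and the 0.2 tolerance are illustrative assumptions, not figures from any of the reports discussed here.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def parity_gap(rates):
        """Largest difference in approval rate between any two groups."""
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit sample: (demographic group, was the application approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]

    rates = selection_rates(sample)
    gap = parity_gap(rates)
    print(rates)                     # {'A': 0.666..., 'B': 0.333...}
    print(f"parity gap: {gap:.2f}")  # 0.33
    if gap > 0.2:                    # the tolerance is a policy choice, not a technical one
        print("flag system for human review")

A real audit would go well beyond a single metric, but even this much shows how cheap such a check is to run once decisions are logged with group labels.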

It added that, for forms of participatory governance to be meaningful, policymakers must also consider how actors with varying levels of resources can contribute to the process, and suggested providing educational material and ample time to respond as a means of making new voices heard.

Closely linked to public engagement is transparency, which the report said needs to be balanced against other factors and policy goals.

“Transparency mechanisms should be designed keeping in mind the potential challenges posed by countervailing policy objectives requiring confidentiality, and trade-offs between transparency and other objectives should be negotiated when deciding to use an algorithmic system,” it said. “This includes agreeing acceptable thresholds for risk of systems being gamed or security being compromised, and resolving questions about transparency and the ownership of underlying intellectual property.”

However, it noted that there is currently “a lack of standard practice about the kinds of information that should be documented in the creation of algorithmic systems”, and for which audiences this information is intended – something that future accountability policies should seek to clarify.

“As one respondent noted, in the case where the creation of an algorithmic system was meticulously documented, the intended audience (the public agency using the system) found the information unusable due to its volume and its highly technical language,” the analysis said.

“This speaks not only to the need to develop internal capacity to better understand the functioning of algorithmic systems, but also to the need to design policies for transparency, keeping in mind particular audiences and how information can be made usable by them.”

A 151-page review published in November 2020 by the Centre for Data Ethics and Innovation (CDEI) – the UK government’s advisory body on the responsible use of artificial intelligence (AI) and other data-driven technologies – also noted that the public sector’s use of algorithms with social impacts needs to be more transparent to foster trust and hold organisations responsible for the negative outcomes their systems may produce.

A separate research exercise conducted by the CDEI in June 2021 found that, despite low levels of awareness or understanding around the use of algorithms in the public sector, people in the UK feel strongly about the need for transparency when informed of specific uses.

“This included desires for a description of the algorithm, why an algorithm was being used, contact details for more information, data used, human oversight, potential risks and technicalities of the algorithm,” said the CDEI, adding that it was a priority for participants that this information should be both easily accessible and understandable.
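
As an illustration of what such audience-aware documentation could look like in machine-readable form, here is a minimal Python sketch of a transparency record built around the fields the CDEI's participants asked for. The class and field names are assumptions for illustration, not an official schema.

    from dataclasses import dataclass

    @dataclass
    class TransparencyRecord:
        description: str            # plain-language account of what the algorithm does
        purpose: str                # why an algorithm is being used at all
        contact: str                # where the public can ask for more information
        data_used: list[str]        # categories of data the system draws on
        human_oversight: str        # how and when a person reviews the outputs
        potential_risks: list[str]  # known or suspected harms
        technical_details: str      # fuller technicalities, for expert audiences

    # Hypothetical example for an imaginary council service
    record = TransparencyRecord(
        description="Prioritises housing-repair requests by urgency.",
        purpose="To cut average response times for urgent repairs.",
        contact="transparency@example.gov.uk",
        data_used=["repair category", "property age", "prior reports"],
        human_oversight="A case officer confirms every 'urgent' rating.",
        potential_risks=["older properties may be systematically deprioritised"],
        technical_details="Gradient-boosted classifier, retrained quarterly.",
    )
    print(record.description)

Separating the plain-language fields from the technical detail is one way to honour the report's point about audiences: the same record can surface the first few fields to the public while keeping the fuller material available for auditors.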

Other lessons drawn from the Ada Lovelace Institute's global analysis include the need for clear institutional incentives and binding legal frameworks to support the consistent and effective implementation of accountability mechanisms, and the finding that institutional coordination across sectors and levels of governance can help create consistency in how algorithms are used.

Amba Kak, director of global policy and programmes at the AI Now Institute, said: “The report makes the essential leap from theory to practice, by focusing on the actual experiences of those implementing these policy mechanisms and identifying critical gaps and challenges. Lessons from this first wave will ensure a more robust next wave of policies that are effective in holding these systems accountable to the people and contexts they are meant to serve.”
