From deciding visa eligibility to detecting financial fraud, predicting reoffending rates to allocating police resources, algorithmic systems increasingly assist government in making important decisions. And as policymakers have adjusted to pandemic conditions, where old ways of working have been challenged, the growth of algorithm-assisted decision-making has only accelerated.
But so, too, has public disquiet over the use of algorithmic systems to help make decisions with big impacts on our lives. After an algorithm used by Ofqual, England’s exam regulator, downgraded 40% of A-level grades assessed by teachers, students took to the streets, giving their opinion in no uncertain terms. “F*** the algorithm” became a rallying cry and Ofqual began an embarrassing climbdown, ditching the algorithm in the face of mounting public pressure.
In the aftermath of the fiasco, a survey by BCS, the Chartered Institute for IT, found that 53% of British adults have “no faith” in any organisation to use algorithms when making judgements about them. Just 7% trust the use of algorithms to inform decisions in education and the provision of social services.
The use of algorithms to make high-impact decisions might appear more trouble than it’s worth. Yet rather than throw up their hands in the face of resistance, government and public sector organisations can and should do more to engage with the public’s legitimate concerns and rebuild trust in algorithm use.
One clear-cut way of doing so would be to inject a healthy dose of transparency into the algorithmic decision-making process. Ministers and civil servants are regularly hauled over the coals by committees and regulators to ensure transparency in the way government does business.
And quite rightly so – transparency is the lifeblood of our democracy. As citizens, we can only scrutinise and understand the decisions of our government by interrogating the evidence, assumptions, and principles on which they are based.
But how can transparency and accountability be achieved in the brave new world of “government by algorithm”? A recent Reform policy hackathon – bringing together policymakers, academics, and regulators – focused on this question.
All agreed that an informed public debate over algorithm use was a necessary first step. However, public engagement on this issue is still held back by very low public awareness of how widely algorithms are used by public sector organisations such as police forces and NHS trusts.
Public sector organisations must be far more open about the ways in which they already use algorithms to make decisions. Setting up an online, searchable register of current algorithm systems covering central and local government and organisations such as police forces and NHS trusts would be a welcome first step. Helsinki and Amsterdam both launched algorithm and AI registers in 2020, which document why and how algorithms are being used by their respective governments.
Rolling out a register on a national scale would give citizens access to information on the ways in which algorithmic decisions affect them and, in turn, facilitate an informed public debate on where and how new technologies can be used for good in the public sector.
However, while documenting the current state of play is long overdue, our experts argued that transparency must be built into the algorithmic decision-making process at a much earlier stage. Too often, transparency and accountability are afterthoughts – necessary responses to public anger when things go wrong. Rather than wait until algorithms have been deployed, organisations should be open with the public while designing and developing algorithms.
Just as organisations publish environmental impact assessments before starting infrastructure projects, they should publish algorithmic impact assessments before rolling out algorithm systems. Well in advance of deployment, these would inform the public of an algorithm's potential benefits and risks, and of the mechanisms to be put in place to ensure its safe use.
Paying close attention to the impacts of algorithm use on at-risk groups should be a central component of these assessments. Where significant risks are identified for particular groups, public sector organisations should consult them in advance, to ensure that the concerns of those who could be negatively affected are aired and acted upon.
Equipped with information on benefits and costs in advance, external analysts and citizens themselves can scrutinise whether sufficient attention has been given to the impacts of using an algorithm and whether mechanisms for mitigating risk are adequate for protecting the public interest.
After algorithmic systems are deployed, proactive post-market surveillance must become the norm, so that the harms of algorithm use are identified and acted upon early, rather than the current practice of responding to newspaper headlines and public anger after the fact.
Only by building transparency into every stage of the algorithmic decision-making process can we build the trust necessary to realise the potential of digitally enabled public services. If government by algorithm is to be the future, transparency is non-negotiable.
Sebastian Rees is a researcher at the independent think tank, Reform.