Improve trust in algorithmic decision-making

There is a new dirty word in politics. Over the last few weeks, politicians and commentators have spoken publicly about how this year’s GCSE and A-level results won’t be based on algorithms. Instead, they will rely on teacher assessment, and that, we are told, is much better.

And so the public is being told algorithms are bad. Algorithms don’t work, say the people who are making policy decisions.

A bad workman always blames his tools, and this holds true for the way last year’s exams were conducted. The fallout was blamed on poor algorithms. But it had nothing to do with the algorithms. Nor should the programmers be blamed because the algorithms they were asked to code did not produce results that could be referenced in a sound bite to make career politicians look good.

It is far harder to explain to the public that it wasn’t the algorithm at fault, but the rules set by experts that failed last year’s students. It is much easier to blame the machine, or, as in this example, blame the algorithm for blighting the career prospects of students taking GCSEs and A-levels in 2020.

Last year, at the height of the A-level and GCSE fiasco, Computer Weekly looked at how algorithms are behind more and more of the decision-making in today’s society. To build public trust, the decision-making that went into them – from the policy decisions right down to the lines of source code that encapsulate those policies – must be open to scrutiny.

Using algorithms to assess outcomes

Algorithms are among the tools the government relies on when plotting a path out of the pandemic. One hopes that when the Prime Minister says he is following the science, it means following the Covid-19 data and extrapolating it through a host of what-if scenarios, to balance opening up the economy against the strain on the NHS, public health and wellbeing, intensive care beds and death rates.

There are so many variables to consider that it takes complex algorithms to inform policy makers. The outcomes are probabilistic: never black and white, but shades of grey. Across a large cohort of people, one course of action will produce a slightly more positive outcome than another.
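That probabilistic, shades-of-grey comparison can be sketched with a toy Monte Carlo simulation. Everything here is invented for illustration – the success rates, cohort size and scenario names are assumptions, not real policy data:

```python
import random

def simulate_outcomes(success_rate, cohort_size, trials, seed=0):
    """Average number of positive outcomes per trial across a cohort,
    given an assumed per-person probability of a positive outcome."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    totals = []
    for _ in range(trials):
        positives = sum(rng.random() < success_rate for _ in range(cohort_size))
        totals.append(positives)
    return sum(totals) / trials

# Two hypothetical policy scenarios differing only slightly in assumed effect.
baseline = simulate_outcomes(success_rate=0.70, cohort_size=1000, trials=200)
intervention = simulate_outcomes(success_rate=0.72, cohort_size=1000, trials=200)

# Neither scenario is "good" or "bad" outright; one simply shows a small
# edge across a large cohort, which is the shape of advice modellers give.
print(baseline, intervention)
```

No individual outcome is certain in such a model; only the aggregate tendency shifts, which is why the advice derived from it is always a matter of probability, not black and white.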

As Computer Weekly warned last year, the fairness and validity of algorithms must constantly be kept in check. But blaming the algorithm is no excuse for policy failure or poor planning. Algorithms will increasingly be behind the decisions that affect everyone’s lives. Decision-makers cannot blame them; ultimately, the responsibility is wholly theirs.
