One year on from the UK exams algo fiascos

Algorithms got themselves a bad name in the summer of 2020. This year’s Scottish Highers and A-level results in the rest of the UK have been algorithm-free. But they were not free of the usual class bias: private school students did even better than usual.

At Computer Weekly, our main interest is education in computing. Here there is some qualified good news: in 2021, a total of 13,829 students in the UK took computing at A-level, up from 12,428 entrants the previous year. And while BCS analysis of A-level computing over the past five years has found a 350% increase in the number of girls taking the subject, there is still a gap between the number of boys and girls taking it.

The same organisation recently published a report arguing that the UK can take an international lead in AI ethics, but only if it cultivates a more diverse workforce for AI-related jobs, including people from non-STEM backgrounds. And so, if, as a society, we want to use algorithms free of bias, we need the people doing the jobs that use artificial intelligence to be much more representative of society in terms of gender, ethnicity and class.

In the meantime, it is worth reflecting that the algorithms at the centre of the 2020 exams controversy were not “black box algorithms” of the kind that breed despair about bias. The Scottish Qualifications Authority and Ofqual published the statistical models they chose and made clear what data they used – previous years’ performance. They were transparent.

However, those models did generate unjust results at the level of individual candidates. There were anomalies in the standardisation applied to teacher assessments, and the process amplified already-existing social inequalities. The experience suggested that some areas of life are, at best, only problematically ready for adjudication by machine. Employee recruitment could well be another such area, even if organisations complement algorithmic decision-making with human control and oversight.
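The failure mode is easy to see in miniature. The sketch below is a hypothetical toy, not the actual Ofqual or SQA model: it standardises a school’s teacher assessments against that school’s historical grade distribution, and shows how an exceptional individual at a historically weak school is capped regardless of merit.

```python
# Toy illustration (not the real 2020 model): cohort-level standardisation
# that forces each school's grades to match its historical distribution.

def standardise(teacher_grades, historical_distribution):
    """Replace teacher-assessed grades with the school's historical grades.

    teacher_grades: dict of student -> teacher-assessed grade (higher = better)
    historical_distribution: grades the school achieved in past years,
    one slot per student in this year's cohort.
    """
    # Rank this year's students by teacher assessment, best first.
    ranked = sorted(teacher_grades, key=teacher_grades.get, reverse=True)
    slots = sorted(historical_distribution, reverse=True)
    # Each ranked student inherits the corresponding historical grade slot.
    return {student: slots[i] for i, student in enumerate(ranked)}

# A historically weak school with one exceptional student this year:
teacher = {"Asha": 9, "Ben": 5, "Cal": 4}   # Asha assessed at the top grade
history = [6, 5, 4]                         # the school never achieved above 6
print(standardise(teacher, history))        # Asha is capped at 6
```

The model is perfectly transparent, and statistically defensible at cohort level, yet plainly unjust to Asha as an individual – which is precisely the anomaly candidates experienced in 2020.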

Another recent report – this time from the Centre for Data Ethics and Innovation – has looked at the use of algorithms to help detect and address harmful content on social media platforms, specifically misinformation about Covid-19. Human beings could not possibly find and take action against such content, given its immense scale. Nevertheless, the use of such algorithms has led to more content being wrongly identified as misinformation. Their use is therefore a necessarily blunt instrument.
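Why the instrument is necessarily blunt can be shown with a deliberately crude sketch – a hypothetical keyword scorer, not any platform’s real system. Tightening the classifier to catch more misinformation inevitably sweeps up legitimate content along with it.

```python
# Toy sketch of the blunt-instrument trade-off (hypothetical scorer).

SUSPECT_TERMS = {"cure", "hoax", "5g", "microchip"}

def risk_score(post):
    """Fraction of words in the post that appear on a naive watch list."""
    words = post.lower().split()
    return sum(w.strip(".,!") in SUSPECT_TERMS for w in words) / len(words)

def flag(posts, threshold):
    """Return the posts whose score meets the moderation threshold."""
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    "Drinking bleach is a miracle cure, the virus is a hoax!",    # misinformation
    "Fact check: 5g does not spread the virus, that is a hoax.",  # debunking post
    "Vaccination centres open next week in the town hall.",       # benign
]

# A strict threshold misses the misinformation; a looser one also flags
# the fact-check. No setting gets both posts right with this scorer.
print(len(flag(posts, 0.30)))  # 0 flagged
print(len(flag(posts, 0.15)))  # 2 flagged, including the fact-check
```

Real moderation systems are far more sophisticated, but the underlying tension is the same: any threshold trades missed misinformation against wrongly flagged content.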

Somewhere in the mix of getting a more diverse range of people into AI-related jobs, and building on whatever technical success – however limited – has been achieved by social media platforms and by corporate users of AI, lies the path forward for the use of AI for good. But it is a narrow path, beset by dangers on all sides.