A-Level calculations shine a spotlight on unfair algorithms

The fiasco over the calculation of A-Level results has thrust the unfairness of algorithms into the spotlight.

In modern society, such algorithms act as black boxes that make fundamental decisions with a direct impact on individuals. They determine whether you can get a loan, whether you receive any benefits from the state and how much; they are used to calculate your car insurance premium, assess how healthy you are and predict what products you are likely to buy.

In the case of the A-Level grades, a whole generation of students has been affected by a decision made in a computer that applied a rule along the lines of: “If I see this data, then you fail, otherwise you pass.” Given that university entrance requirements are largely based on A-Level grades, this automated decision will very likely have a direct impact on the career plans of those students marked down by the algorithm.
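The kind of rule described above can be sketched as a toy decision function. To be clear, the inputs, threshold and logic below are invented purely for illustration — this is not Ofqual's actual standardisation model:

```python
# Hypothetical sketch of an opaque, rule-based grading decision.
# The inputs and the 0.5 threshold are illustrative assumptions,
# not the real Ofqual algorithm.

def automated_grade(teacher_estimate: str, school_historical_pass_rate: float) -> str:
    """Return a final grade, downgrading pupils from schools whose
    historical results fall below a fixed cut-off."""
    # "If I see this data, then you fail, otherwise you pass."
    if school_historical_pass_rate < 0.5:
        return "fail"          # marked down regardless of the pupil's own work
    return teacher_estimate    # otherwise the predicted grade stands

# Two pupils with identical predicted grades, from different schools:
print(automated_grade("A", 0.8))  # prints "A"
print(automated_grade("A", 0.4))  # prints "fail"
```

Note that in this sketch the pupil's own attainment never enters the decision — only data about their school does — which is precisely the kind of logic that makes such automated decisions feel unfair.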

The computer is wrong

Computer Weekly has a long history of investigating unfairness in algorithms. In all cases, officials deny the algorithm is at fault. In December 2019, a High Court judge confirmed that allegations made by subpostmasters about the reliability of the Post Office’s Horizon IT system were justified. For years, the Post Office hounded subpostmasters when their branch accounts showed deficits. It would not accept that the system could be at fault.

In 1999, Computer Weekly published RAF Justice, a 140-page report about the Chinook ZD576 crash in 1994, which looked into the reliability of the Fadec engine control system. The RAF blamed the crash on pilot error, but a House of Lords committee exonerated the pilots.

The news is littered with reports of people left penniless because of the unfairness of Universal Credit. But, as is the case with many decisions made by computer algorithms, these individuals have very little recourse. “The computer says ‘No’ and it is always right.” When trying to improve a credit rating or even appealing against a parking fine, the decision-making is automated; rarely does the algorithm ever reverse its original decision.

The simple truth is that all of these algorithms are sold on the promise that they automate the decision-making process, and that because the decision-making is encoded in an algorithm, it is transparent. But is it? Front-line officials who use such systems, and the people who are directly affected by them, are never in a position to question the accuracy of the algorithm.

Algorithms are behind more and more of the decision-making in today’s society. The fairness and validity of these algorithms must constantly be kept in check. Someone needs to take responsibility.
