How racist bias is embedded in software systems

The issue of racist bias encoded in software made mainstream news last week, with a report on Channel Four News highlighting how software used to profile criminal suspects tends to have racial biases.

Such software relies on datasets that are weighted against non-white individuals. Arrest data collected by US law enforcement tends to show a strong statistical correlation between skin colour and criminal activity. Law enforcement agencies use this data to decide whom to stop, search and arrest, which leads to more arrests of non-white suspects; the dataset of non-white arrests grows, and so the data bias becomes self-fulfilling.
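A rough way to see how that feedback loop compounds is to simulate it. The sketch below is a hypothetical illustration, not any vendor's actual system: it assumes two groups with the same underlying offence rate, but allocates police attention in proportion to past recorded arrests, so the group that starts with more arrests on file keeps accumulating them.

```python
import random

random.seed(42)

# Hypothetical illustration of a self-reinforcing arrest dataset.
# Both groups offend at the same underlying rate; only the starting
# arrest counts (and therefore police attention) differ.
OFFENCE_RATE = 0.05          # same true rate for both groups (assumption)
PATROLS_PER_ROUND = 1000     # stops carried out each round (assumption)
arrests = {"group_a": 120, "group_b": 80}   # initial recorded arrests

for round_number in range(20):
    total = sum(arrests.values())
    for group, count in list(arrests.items()):
        # Police attention is allocated in proportion to past arrests,
        # so the over-represented group is stopped more often...
        stops = int(PATROLS_PER_ROUND * count / total)
        # ...and, at an identical offence rate, generates more new
        # arrests, which feed back into the next round's allocation.
        new_arrests = sum(random.random() < OFFENCE_RATE for _ in range(stops))
        arrests[group] += new_arrests

print(arrests)   # the initial gap widens even though behaviour is identical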

Channel Four covered the issue of racist software in its flagship news programme on Thursday 16 May. During the broadcast, Peter Eckersley, director of research at the Partnership on AI, was interviewed about the challenge of biased data. He said: “Many countries and states in the US have started using machine learning systems to make predictions that determine if people will reoffend, largely based on arrest data.” Eckersley pointed out that arrest data and conviction data are racially biased: the chance of being stopped, charged or convicted varies depending on your race and where you are located.

The datasets used are discriminatory. Joy Buolamwini, a computer scientist at MIT Media Lab, who is exhibiting at the What makes us Human exhibition at London’s Barbican Centre, told Channel Four News presenter Jon Snow that some of the larger datasets are based mainly on samples of white men. “They are failing the rest of the world – the under-sampled majority are not included.”
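Buolamwini's point about under-sampling can be shown with a toy classifier. The sketch below uses scikit-learn on synthetic data purely for illustration; the 95/5 split and the feature set are assumptions, not taken from any of the systems in the report. A model trained on a sample dominated by one group performs well for that group and poorly for the group it barely saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic data: the true decision boundary differs by group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# The majority group dominates the training sample (95% vs 5%),
# mirroring the "under-sampled majority" problem described above.
X_maj, y_maj = make_group(9500, shift=1.0)
X_min, y_min = make_group(500, shift=-1.0)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * len(y_maj) + [1] * len(y_min))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)

# Overall accuracy looks fine, but per-group accuracy tells another story.
for g, name in [(0, "majority group"), (1, "under-sampled group")]:
    mask = g_te == g
    print(f"{name}: accuracy {model.score(X_te[mask], y_te[mask]):.2f}")
```

On this synthetic setup the model scores close to perfectly for the majority group and little better than chance for the under-sampled one, even though the training pipeline itself is "neutral".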

Discriminatory design

Computer Weekly recently spoke to Ruha Benjamin, an associate professor of African American studies at Princeton University, about discrimination in algorithms. Her new book, Race After Technology, which is out in June, explores the biases in algorithms. Algorithms with discriminatory designs are being used to automate decisions in the IT systems used in healthcare, law enforcement, financial services and education. Officials use them to make decisions that affect people’s lives, health and freedom, as well as their ability to get a loan, insurance or even a job. Such algorithmic bias can therefore have a detrimental effect on racial minorities. She said: “I want people to think about how automation allows the propagation of traditional biases – even if the machine seems neutral.”

Diversity in the workforce

The answer is not about hiring more people from diverse racial backgrounds. Benjamin’s research found that people’s backgrounds tend to take a back seat in the race for tech innovation. The values in the tech sector appear incompatible with diversity. Software tends to be released as fast as possible, with little thought given to its broader social impact.

While people generally recognise their own human bias, for Benjamin, outsourcing decisions to seemingly objective systems built on biased algorithms simply shifts that bias to the machine.

Roger Taylor, from the Centre for Data Ethics and Innovation, told Channel Four News: “The problem is that AI is like holding a mirror up to the biases in human beings. It is hard to teach [AI algorithms] that the flaws they see are not the future we want to create.”
