Artificial intelligence (AI) algorithms use data models to help people make decisions, but there is a risk that any human bias coded into these models will affect recommendations made by the algorithm.
At a recent Gartner data management event in London, Hannah Fry, TV presenter and lecturer in the mathematics of cities at the Centre for Advanced Spatial Analysis at UCL, warned that humans should not put all their trust in algorithms.
In her Hello World presentation, Fry described how an algorithm used by a criminal justice service, when presented with a case of underage sex, deduced that a 50-year-old man dating a 15-year-old girl was at lower risk of reoffending than an 18-year-old man in the same situation.
A number of UK police forces have already started using algorithms to feed into their decision-making. One such system is the Harm Assessment Risk Tool in Durham, which is being used to assist officers in deciding whether an individual is eligible for deferred prosecution, based on their predicted risk of reoffending.
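The counterintuitive result Fry described can arise from a perfectly ordinary statistical model. The sketch below is purely illustrative and has nothing to do with the actual tools mentioned above: the weights, scores and the logistic form are all invented. It shows how a risk model that has learned from historical data that reoffending rates fall with age will mechanically rate an older offender as lower risk, regardless of context.

```python
import math

def risk_score(age, weight_age=-0.05, bias=2.0):
    """Toy logistic risk model: returns a probability-like score
    of reoffending. The negative age weight mimics a pattern a
    model might learn from historical recidivism data; all
    numbers here are invented for demonstration."""
    z = bias + weight_age * age
    return 1 / (1 + math.exp(-z))

young = risk_score(18)  # score for the 18-year-old
older = risk_score(50)  # score for the 50-year-old

# The learned age pattern makes the older man look lower-risk,
# even in a context where that conclusion is clearly perverse.
print(f"18-year-old risk: {young:.2f}")
print(f"50-year-old risk: {older:.2f}")
```

The point is not that the arithmetic is wrong, but that a statistically "correct" correlation, applied blindly to an individual case, can produce a recommendation no human reviewer would endorse.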
The Centre for Data Ethics and Innovation (CDEI) wants to ensure that the people who use AI technology understand the potential for bias and have measures in place to address it. It also aims to help guarantee fairer decisions and, where possible, improve processes.
Discussing the opening of the centre, digital secretary Jeremy Wright said: “Technology is a force for good and continues to improve people’s lives, but we must make sure it is developed in a safe and secure way.”
Roger Taylor, chair of the CDEI, said the centre was focused on addressing the greatest challenges and opportunities posed by data-driven technology.
“These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation,” he said.
“We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair. As a first step, we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.”
Read more about AI bias
- The answer to the ultimate question of life, the universe and everything is 42, according to Deep Thought in The Hitchhiker’s Guide to the Galaxy – but experts need to explain AI decisions.
- We explore how enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines.
The CDEI plans to explore how data-driven technology could address the potential for bias in existing systems and support fairer decision-making. This may include widening opportunities for job applicants in recruitment systems and for borrowers in financial services. It also said it would look at ways to boost innovation in the digital economy.
The CDEI is also planning to investigate how data is used to shape online experiences through personalisation and micro-targeting – for example, where users search for a product and then adverts for similar products appear later in their browser. This is something regulators are beginning to recognise.
As Computer Weekly has previously reported, the Irish Data Protection Commission (DPC) has a number of ongoing investigations looking at how web giants use personal data and algorithms.
In its annual report, the DPC stated it was “examining whether LinkedIn has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform”.