Data reveals lack of ethics in decision-making systems

A recent survey from price comparison site comparethemarket.com has highlighted a host of unfair assumptions made in the depths of computer systems. These assumptions are biases that leave people worse off.

The comparison site looked at the car insurance policies available through its website, running searches on the cost of cover in which only employment status and profession were changed.

The data from comparethemarket.com showed that people classified as “unemployed” can expect to pay the highest insurance premiums: over 40% more than someone classified as a housewife or househusband.

Algorithmic bias

One can only assume that the algorithms doing the number crunching behind these car insurance premium calculations have gone down some kind of risk assessment cul-de-sac. If someone is unemployed, are they more likely to crash into someone deliberately? Are they more likely to make a fraudulent insurance claim? Are they the sort of people who ask their mates to bury or dump their cars so they can claim they have been stolen? This looks stark when presented in black and white, but it is the sort of reasoning that goes on inside the computer systems making the decisions that affect the insurance premiums of millions of people.
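To make that reasoning concrete, the sketch below shows how an employment-status rating factor could feed into a quote. It is purely illustrative: the base premium, the status categories and the multipliers are assumptions invented to mirror the roughly 40% gap reported by comparethemarket.com, not any insurer's actual pricing model.

```python
# Illustrative sketch only: a hypothetical employment-status rating factor,
# not any insurer's actual pricing model. The base premium and multipliers
# are invented to mirror the roughly 40% gap reported by comparethemarket.com.

BASE_PREMIUM = 500.00  # assumed annual base premium in GBP

# Assumed multipliers keyed by the employment status entered on the quote form.
EMPLOYMENT_FACTOR = {
    "employed": 1.00,
    "housewife/househusband": 0.95,
    "unemployed": 1.35,  # just over 40% above the housewife/househusband rate
}

def quote(status: str) -> float:
    """Return a premium adjusted only by the employment-status factor."""
    return round(BASE_PREMIUM * EMPLOYMENT_FACTOR[status], 2)

if __name__ == "__main__":
    for status in EMPLOYMENT_FACTOR:
        print(f"{status:>24}: £{quote(status):.2f}")
```

Run against these assumed figures, the sketch prints £475 for the housewife/househusband category and £675 for unemployed, a gap of roughly 42%, simply to show how a single categorical field can swing a quote.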

Commenting on the data, Dan Hutson, head of motor insurance at comparethemarket.com, said: “With millions expected to be made unemployed around the UK as a result of the pandemic, this research indicates that these individuals may face the increased financial burden of higher car insurance costs at a time when they can least afford it.”

With the amount of data collected and shared, it is entirely plausible that these computer systems are able to assess people at an individual level by trawling historical data and social media.

This raises questions over ethics and the right to privacy.

Automated decisions require human oversight

Computers cannot be allowed to make decisions that impact humans without some level of oversight. In a recent House of Lords special inquiry committee meeting on AI, witnesses warned that people naively assume that tech can do something better than a human. Daniel Susskind, fellow in economics at Balliol College, told the committee that today’s decision-making systems are “more opaque” than what came before them. He said: “If we are honest, the finest computer scientists are not necessarily hired for the sensitivity of their moral reasoning. There is a burden on engineers to make these technologies as transparent as they can be to ensure users can scrutinise them.”
