Ethics of AI algorithms under examination

Sebastian Klovig Skelton’s article in this week’s CW ezine issue, ‘Auditing for algorithmic discrimination’, could hardly be more timely.

The fall-out from the algorithmic exams fiascos of recent weeks – with the Highers in Scotland and the A-levels in the rest of the UK – is set to continue. These were not “black box algorithms” based on straightforwardly biased data, of the kind we have often heard about. The Scottish Qualifications Authority and Ofqual published the statistical models they chose (after what seems like a lot of head-scratching and brain-cudgelling), and it is clear what data they chose to use.

However, it is also clear that those models generated unjust results at the level of individual candidates. In retrospect, the SQA and Ofqual should have been clearer and stronger in their advice to the governments as soon as the fateful decision was taken not to hold exams because of the pandemic: the standardisation applied to teacher assessments was bound to produce outrageous anomalies, and to shine a spotlight on existing social inequalities that are usually, and disgracefully, ignored.

But the business world can ill afford to be insouciant about the algorithms that companies are increasingly applying to their commercial activities. For one thing, these recent fiascos have given algorithms a bad name. One of the sources in ‘Auditing for algorithmic discrimination’ says: “too many in the technology sector still wrongly see technology as socially and politically neutral, creating major problems in how algorithms are developed and deployed”.

A new class of algorithm auditors has arisen to help companies clean up their algorithmic act. What they are finding is not a pretty picture. One of them compares the situation to making a medicine whose ingredients are utterly opaque.

And while commercial algorithms may well require intellectual property protection, it should be possible to have them audited to certify that they are free of bias.

Often bias comes in at the level of the very choice of the data fed into the algorithm. And engineers require social and ethical training so they can endeavour to do no harm when they build their statistical models.

We are, however, in the very early days of such auditor oversight and of ethical education for data engineers and data scientists. We need to move more quickly from an academic appreciation that algorithms should not discriminate to the point where, in practice, they do not.

One of our sources says she discerns a “slow movement in the corporate world that means they realise they need to stop seeing users as this cheap resource of data, and see them as customers who want and deserve respect”.

Time to speed that movement up: go faster!
