The Post Office scandal shows how much AI can and must help humans

There has been much speculation about the potential for a future AI disaster after the role of IT in the Post Office scandal - but the way to avoid this is to make sure AI and humans work hand in hand

The rise of ChatGPT and its many imitators has changed the world. While some see the generative artificial intelligence (AI) narrative as largely negative, others, especially entrepreneurs, believe it to be a positive force for the world.

Jobs, of course, are going to be affected and the speed of generative AI is genuinely astonishing, something I know as the CEO and founder of a translation software company. It has shredded the industry as I knew it, but my belief is that it is also a force for good in my sector.

That is for another time and place. Instead, I have been shocked by the news from the UK about the Post Office Horizon scandal and how some subpostmasters were ruined by the failure of the Fujitsu system that was supposed to help them.

As I’m led to believe by a London friend, Computer Weekly was the first UK media outlet to reveal this scandal 15 years ago, and it continued to campaign for justice even while others looked away. Good work.

Finally, after an overdue dramatisation on UK television, justice seems to be coming for these people, some of whom went to prison and even committed suicide. Most were ruined financially.

Rampant technology

Of course, those at the Post Office and Fujitsu will face justice at a later time, but such rampant and domineering technology has to be tempered with humans who can prevent this juggernaut from running amok.

It could be the perfect test case for how to regulate generative AI in the very near future. There have to be “humans in the loop” to prevent machines stomping over people, and there are likely to be many instances of generative AI “hallucinations” being taken as true representations of reality. This cannot be the state of things.

I wrote about this for Computer Weekly in August last year, arguing for humans in the loop for mass adoption of generative AI, but even in the ensuing months there have been huge changes.

What used to take months now takes less than 48 hours, whether it’s modelling technology, creating project teams, producing professionally narrated videos or even building complicated spreadsheets. The optimists have careered into generative AI and even they are astounded by how much can be done.

The need for humans is retreating fast, but there have been pushbacks. The proliferation of apps and automated systems that seem to exist for the systems themselves, not for the user, has become so widespread and discretion-free that some companies are now advertising their use of humans, not just in the loop, but at the head of it.

For those who can afford it, human service is something that comes naturally. Investment banks that handle people’s money would not dare operate purely automated systems; they need a human concierge or facilitator to keep their customers. This is likely to trickle down to everybody eventually.

Engage humans

The subpostmasters of 20 years ago certainly could not afford to engage humans to help with their misfiring technology. Instead, they were forced to deal with humans representing the technology - humans so in thrall to it that they refused to acknowledge the machine might be wrong.

Imagine if that were to happen in 2024 and AI was the culprit. No human in the loop is going to argue with the machine when their job is at stake, and while hallucinations remain a constant problem with generative AI, surely there must be a regulatory body that can decide, quickly, how things stand.

And that may be the saviour of the upcoming battle between humans and AI. The very nature of AI means it should be able to rule on a problem such as Horizon’s software with celerity. Such a problem would not last 25 years; it would be identified and fixed within days.

That is the promise of generative AI. It could deliver justice better than any slow body of humans and some are already working on it.

In the US, the California Innocence Project (CIP) uses humans to prevent miscarriages of justice, but it is such labour-intensive work that it demands hundreds of hours from legal professionals.

Now, generative AI is being used to dramatically speed up this work and free legal professionals to focus on higher-value tasks. CIP lawyers are using a generative AI model developed by CaseText in partnership with OpenAI.

This large language model-based “AI legal assistant” can review and summarise legal documents and is specifically trained on legal documents, case law and court proceedings.

Human partnership

Imagine if an organisation such as CIP had been charged with intervening in the Horizon case. There would have been no two-decade delay, and those subpostmasters would not have faced the indignity and horror they have had to endure.

Of course, generative AI is not perfect and other mistakes could be made, but with the judicious (no pun intended) use of AI and human partnership, such miscarriages of justice would be almost impossible in the future.

All of this is too late for the subpostmasters, but the widespread adoption of AI, be that with humans in the loop or human regulators at the head of processes, can, indeed, be a good force for the world.

It’s up to everybody involved to start work now before it’s too late. It’s either that or there could be more Horizons - and I don’t mean what can be seen in the distance.

Frederik Pedersen is CEO of EasyTranslate.
