
AI: Black boxes and the boardroom

Computers can and do make mistakes, and AI is only as good as its training, so relying purely on machine intelligence to make critical decisions is risky

From Elon Musk claiming that the unchecked growth of artificial intelligence (AI) could spawn “an immortal dictator”, to the news that a major bank has started using the technology to detect money laundering, AI is seldom out of the headlines.

Whether or not AI – the science and engineering of machines that act intelligently – will herald a golden age of leisure, or spell the end of humanity, remains to be seen. What is clear, however, is that businesses globally are already embracing the technology to improve how they work.

The possibilities AI presents are vast and its current applications are similarly impressive: it can steer driverless cars; it can read and extract key information from thousands of legal contracts in minutes; and it can review MRI and PET scans and identify malignant tumours with greater accuracy than human doctors.

But AI is not without its risks, be it poor performance, misuse or reliance on bad data. Ultimately, computers can make mistakes, and decisions based on flawed, non-human advice can be costly for a business.

Even F1 driver Lewis Hamilton discovered this when, during the Australian Grand Prix, a software error and incorrect computer data prompted him not to open up a greater lead over rival Sebastian Vettel, who consequently won the race.

Hamilton’s experience is instructive: mistrust of AI and concerns over reliance on bad data are two key worries for businesses. But when planning to use AI, how can businesses avoid a pile-up in the boardroom?

Trust

AI is beset by the “black box” problem: many of its processes lack transparency and cannot easily be understood by humans. For example, the developers of Google DeepMind’s AlphaGo could not explain why it made certain complicated moves in beating the human world champion of the board game Go.

If we can’t easily comprehend AI’s conclusions, how can we be sure that automated processes are playing fair with their decision-making?

Of course, this makes it difficult to assess risk accurately, but businesses must still consider the environment in which the technology is being used: systems running critical infrastructure such as nuclear power stations must set the highest bar for what is considered safe.

Before incorporating AI, businesses may need to convince a regulator, perhaps by using software to monitor the technology – algorithmic auditors that hunt for undue bias and discrimination. This will probably impact performance, since the system will divert processing power to self-analysis, but it could mean the difference between the system being rejected or approved for commissioning.
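To make the idea concrete, here is a minimal sketch of one check such an algorithmic auditor might run: comparing a model’s approval rates across groups and flagging disparate impact. The function names, the example data and the use of the “four-fifths” rule of thumb are all illustrative assumptions for this sketch, not a description of any particular auditing product.

```python
# Minimal sketch of a disparate-impact check over a hypothetical
# binary decision model's observed outputs. Real audits are far broader.
from collections import defaultdict

def disparate_impact(decisions, groups, threshold=0.8):
    """decisions: parallel list of 0/1 outcomes; groups: group labels."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        approved[g] += d
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    # Flag any group whose approval rate falls below 80% of the best
    # group's rate (the common "four-fifths" rule of thumb).
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical audit data: model decisions and applicant group labels.
decisions = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, flagged = disparate_impact(decisions, groups)
print(rates)    # {'A': 0.8, 'B': 0.4}
print(flagged)  # {'B': 0.4} -- below 80% of group A's rate
```

A check like this consumes compute alongside the production system, which is the performance trade-off noted above.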

Bad data

Outcomes achieved by an AI system will only ever be as good as the quality of data on which they are based. Many variables determine the quality of input data: are the datasets “big” enough? Is “real-world” data being used? Is the data corrupt, biased or discriminatory?

With so much potential uncertainty, businesses should make every effort to minimise the risks. Where data is sourced from a third party, contracts should require transparency around lineage, acquisition methods and model assumptions, both initially and on an ongoing basis where the dataset is dynamic. There should also be mandated security procedures around the data, to prevent loss, tampering and the introduction of malware – all reinforced by comprehensive rights to audit, seek injunctive relief and terminate.
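As a minimal sketch of the kind of automated gates a receiving business might apply to a third-party feed, the code below checks integrity against a supplier-published hash and reports basic quality signals. The dataset shape, field names and thresholds are hypothetical assumptions for illustration only.

```python
# Minimal sketch of data-quality gates for a hypothetical third-party
# feed delivered as a list of dicts; names and thresholds are illustrative.
import hashlib
import json

def verify_integrity(raw_bytes, expected_sha256):
    # Tamper/loss check: compare the supplier-published hash
    # with a hash of what actually arrived.
    return hashlib.sha256(raw_bytes).hexdigest() == expected_sha256

def quality_report(records, required_fields, min_rows=1000):
    # Basic signals: is the dataset big enough, are required fields
    # populated, and how many exact duplicate records are present?
    unique = {json.dumps(r, sort_keys=True) for r in records}
    return {
        "rows": len(records),
        "big_enough": len(records) >= min_rows,
        "missing_fields": sum(
            1 for r in records
            if any(r.get(f) in (None, "") for f in required_fields)
        ),
        "duplicates": len(records) - len(unique),
    }

records = [{"name": "Alice", "age": 34}, {"name": "", "age": 51},
           {"name": "Alice", "age": 34}]
print(quality_report(records, required_fields=["name", "age"], min_rows=2))
# {'rows': 3, 'big_enough': True, 'missing_fields': 1, 'duplicates': 1}
```

Checks like these do not prove data is unbiased, but they give the contractual audit rights described above something concrete to bite on.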


Finally, common sense should apply – businesses should not rely too heavily on a limited number of data points, and should support big data analysis with other decision-making tools.

A corollary of these concerns is that tremendous power accrues to those who own large repositories of accurate personal data, and we therefore expect the issue to become a significant focus for regulatory and contractual protection in the coming years.

Isaac Asimov, the famous science fiction writer, once laid down a series of rules to protect humanity from AI. Perhaps it is time businesses did the same. After all, we can’t know the future, but we can prepare for it. And with AI, the future is now.

Tim Wright is a partner at Pillsbury Winthrop Shaw Pittman. Antony Bott, a global sourcing consultant at Pillsbury Winthrop Shaw Pittman, contributed to this article.
