Transparency in AI: Rainbird CEO on what developers need to know
This is a guest post written for the Computer Weekly Developer Network by Ben Taylor in his capacity as CEO of Rainbird.
Rainbird is an Artificial Intelligence (AI) platform that can model human-like cognitive behaviour.
Taylor writes as follows…
For the tech industry, Artificial Intelligence (AI) is an undoubted focal point.
The research and innovation currently taking place in this field is influencing industry verticals such as legal, financial and healthcare, among others. Indeed, AI appears to be the foundation upon which a fundamental shift in operations is occurring.
But with new developments and breakthroughs come new problems and challenges.
As algorithms advance and automate ever more complicated decisions, it becomes more difficult to discern how they work.
In part, this is down to the reticence of the companies developing them to allow third-party scrutiny of proprietary algorithms. Put simply, as the challenges they face become more complex, their inner workings become more opaque and less accountable.
AI’s big challenge
All this AI innovation is starting to pose a problem.
As more and more of our economic and social interactions – from mortgage applications to financial transactions, insurance policies to recruitment and legal processes – are carried out by AI systems, users naturally ask for an explanation of how a particular decision has been reached.
The techniques used by AI systems to reach decisions are difficult for a layman to fully understand. They are vast, intricate and complex – operating on the basis of probability and correlation – and unless you possess specialist knowledge of how they work at an algorithmic level, they can appear alien.
Time for transparency
As a result, transparency has become a technical issue that companies are grappling with during development. So, what do developers need to consider when it comes to building transparency into AI systems?
The biggest issues regarding transparency tend to arise through the implementation of statistical methods of data analysis or machine learning. Machine learning is a powerful technique in the developer's toolkit, allowing algorithms to be designed that learn the solutions to problems from data, rather than being explicitly programmed.
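To make that distinction concrete, here is a minimal sketch (assuming only NumPy; the dataset and rule are invented for illustration) in which a program learns the relationship y = 2x + 1 from example data, rather than having that rule hard-coded:

```python
import numpy as np

# Example data generated by a rule the program is never told (y = 2x + 1)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

# Fit a straight line: the slope and intercept are learned from the
# data by least squares, not explicitly programmed
slope, intercept = np.polyfit(x, y, deg=1)

print(round(slope, 2), round(intercept, 2))  # → 2.0 1.0
```

In this trivial case the learned parameters are perfectly interpretable; the transparency problem arises as models grow to millions of parameters with no such obvious meaning.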
Unfortunately, most Machine Learning techniques are black box by nature.
Take Convolutional Neural Networks (CNN), for example: this popular deep learning technique – often used for image classification tasks – relies on a large network of weighted nodes. But when a CNN classifies an image it's very hard to know what features have been extracted in making that classification, which means we have little idea as to what the nodes of a CNN actually represent.
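A toy sketch in NumPy illustrates why. A convolutional filter is just an array of learned weights; here we use a hand-picked Sobel-style kernel so we happen to know it detects vertical edges, but a filter learned by training arrives with no such label attached:

```python
import numpy as np

# A 3x3 convolutional filter such as a CNN might learn. Nothing about
# these nine numbers says what visual feature the filter responds to;
# we only know this one detects vertical edges because we chose it.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

# A tiny 5x5 greyscale "image" with a vertical edge down the middle
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

# Valid convolution: slide the filter over the image and sum
h, w = image.shape
k = kernel.shape[0]
feature_map = np.array([
    [np.sum(image[i:i+k, j:j+k] * kernel) for j in range(w - k + 1)]
    for i in range(h - k + 1)
])

print(feature_map)  # strong responses align with the edge
```

Stack dozens of such layers, each with thousands of filters learned from data, and asking "which feature triggered this classification?" quickly becomes intractable.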
There are however ways to overcome this.
A growing set of 'explainer algorithms' are being researched that can be applied alongside other statistical techniques. Alternatives to neural-network-based machine learning also exist: techniques such as Random Decision Forests (RDF) open the door to better understanding of feature extraction.
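As a sketch of that second point – assuming scikit-learn, which the article does not name but which provides a standard random forest implementation – a trained forest exposes a per-feature importance score directly, something a neural network's raw weights do not offer:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a random forest on the classic iris dataset
iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(iris.data, iris.target)

# The forest reports how much each input feature contributed to its
# decisions, giving a direct (if coarse) view into feature extraction
for name, score in zip(iris.feature_names, clf.feature_importances_):
    print(f"{name}: {score:.3f}")
```

The scores sum to one, so they can be read as a rough breakdown of which inputs drove the model – a useful first answer when a user asks why a decision went the way it did.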
One overriding consideration in this space is the choice of programming language or framework.
Python remains a popular choice for AI development, with languages such as R and even MATLAB remaining important for data science. There has also been a growth in well supported libraries, such as TensorFlow from Google, which abstract much of the heavy lifting.
Alongside a good Integrated Development Environment (IDE) developers have access to a comprehensive set of tools out of the box. But consideration still needs to be given to program architecture and explainability.
With Machine Learning comes the promise of systems that learn to solve problems, freeing the developer from complex, or impossible, algorithmic design.
To achieve transparency, developers need to consider the issue at the beginning of the design and development cycle. It should be the central tenet around which the system is built.
But the fact is that to achieve transparency, there are a number of compromises and trade-offs that developers need to consider. If, however, we can overcome some of the restrictions imposed by transparent operating, we can create AI systems that inspire public trust and confidence.