Self-regulation of AI is not an option

Earlier this week the House of Lords Communications and Digital Committee took evidence from two experts, who were asked to share their thoughts on regulating artificial intelligence (AI).

Among the areas of concern is that, unlike traditional research, which is rooted in academia, half of the research papers on AI now come out of commercial research outfits. This is a double-edged sword. On the one hand, commercial research is driving the adoption of advanced AI in business. On the other, unlike academic research, there is a risk that the source code and datasets behind these AI algorithms and models, being commercially sensitive, cannot easily be reviewed independently.

How can anyone prove that an algorithm has incorrectly assessed their situation?

The computer says ‘No’

As anybody who has tried to clear their credit history will attest, the computer is always right. Even if the final decision ultimately involves human intervention, those involved in decision-making tend to defer to the computer’s response.

The UK’s education system is built around reading, writing and arithmetic to equip children with the basic skills they need as adults. Is a basic understanding of AI also needed? Tabitha Goldstaub, co-founder of CogX and chair of the AI Council, believes so, especially if the general public is to benefit from any forthcoming regulatory framework. Answering the committee’s questions, she said: “One of the missing pieces as consumers of AI technology is that every child leaves school with the basics of data and AI literacy.”

This is rather like regulation in car manufacturing: the general public does not need to understand what is going on under the bonnet to have a grasp of car safety. For Goldstaub, something similar needs to be in place for AI, so that non-experts can grasp basic concepts and make informed decisions.

Self-regulation, or leaving market forces to determine the level of regulation required, is not an option. In her remarks to the committee, Mira Murati, senior vice-president of research, product and partnerships at OpenAI, gave a detailed account of how the APIs (application programming interfaces) for the GPT-3 model were held back to give OpenAI time to put systems and procedures in place to prevent misuse. By explaining the APIs, Murati demonstrated to the committee that AI is not just something the likes of Facebook, Google and Microsoft do. Any software developer can access powerful AI functionality.
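To give a sense of how low that barrier is, the sketch below shows roughly what a GPT-3 completion request looks like through OpenAI’s openai Python client as it worked around the GPT-3 launch; the model name, prompt and parameters are illustrative assumptions rather than details from the evidence session.

import os
import openai  # OpenAI's Python client library

# Illustrative only: the model, prompt and settings below are assumptions,
# not details given in the committee evidence.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",  # one of the GPT-3 models exposed through the API
    prompt="Summarise the case for regulating AI in one sentence.",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text.strip())

A handful of lines like these is all a developer needs to build on GPT-3, which underlines Murati’s point that powerful AI is no longer confined to the biggest technology companies.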

Holding back a product to ensure it is safe and robust means a company can lose its competitive advantage. How many businesses would be happy to do this unless they are legally bound to do so?
