
Australia’s chief scientist calls for AI regulations

The proposal would require companies to apply for a trustmark so they know from the outset how their AI is expected to behave

As the enthusiasm for artificial intelligence (AI) gathers pace in Australia, the country’s chief scientist has sounded a note of caution and called for more regulation of AI.

“What we need is an agreed standard and a clear signal, so that individual consumers don’t need expert knowledge to make ethical choices, and so that companies know from the outset how their AI is expected to behave,” said Alan Finkel in a recent address to the Committee for Economic Development of Australia.

Under Finkel’s proposal, companies would apply for a trustmark for their AI, called the Turing Certificate.

Independent auditors would certify AI developers’ products, their business processes, and their ongoing compliance with clear and defined expectations.

While Finkel is proposing a voluntary scheme at present, he has also suggested that the government take the lead by mandating Turing Certificate compliance as part of its procurement rules.

“Government agencies are already building AI into staff recruitment. Others are exploring its potential in decision-making and service delivery. Those contracts are likely to be extremely valuable for the companies that supply these capabilities. So imagine if the government demanded a Turing stamp,” he said.

He did not go into details of what a Turing Certificate might test, or how it might be administered. There is, however, a growing acceptance that some form of control is required.

Australia’s Ethics Centre is currently working on an ethical framework that can be applied to any technology development, including AI. It hopes to release the framework for discussion later this year.

Kriti Sharma, vice-president of bots and AI for Sage, and a speaker at Sydney’s Vivid festival, agreed that ethics need to be built into AI solutions from the ground up. She said it was critical that biases be stamped out by ensuring machine learning is fuelled by rich and inclusive datasets.

“Often the ethics of AI are an afterthought. Once you have built the AI system, to fix it later on is harder because the AI learns on its own – you need to have the right design and principles in the algorithms,” she said.

As a non-white woman, Sharma said she had experienced AI bias first hand, particularly in facial recognition systems.

Evidence of that bias was provided in a submission to the government’s proposed identity matching bill, which would see the Department of Home Affairs make far more extensive use of facial recognition.

In its submission, Australia’s Human Rights Law Centre cited a 2018 study which found that the misidentification rate for “darker-skinned” women was 34.7% compared to 0.8% for “lighter-skinned” men.  

According to an SAP-commissioned survey of 2,500 C-suite executives worldwide, including Australian business leaders, nine out of 10 said AI will be critical to their organisation’s survival over the next five years, and six in 10 have already implemented AI or plan to do so within the next year.
