Australian organisations lack maturity in responsible AI

Most Australian organisations are still in the early stages of their responsible artificial intelligence efforts despite growing use of AI by businesses and consumers, study finds

Fewer than one in 10 organisations in Australia have a mature approach to deploying responsible artificial intelligence (AI), underscoring the need for greater focus on the ethical considerations that come with growing use of the technology.

That is according to the inaugural Australian responsible AI index by Ethical AI Advisory and Gradient Institute, which found that only 8% of 416 organisations in Australia are in the maturing stage of responsible AI. The index is sponsored by IAG and Telstra.

Some 38% are in the developing stage, 34% in the initiating stage and 20% in the planning stage. The mean score was 62 out of 100, placing the overall result in the initiating stage.

Responsible AI is AI designed and developed with a focus on ethical, safe, transparent and accountable use of the technology, in line with fair human, societal and environmental values. It is critical to ensuring the ethical and appropriate application of AI, which is the fastest-growing technology sector in the world, currently valued at $327.5bn, according to IDC.

To help organisations accelerate responsible AI adoption, a self-assessment tool has been created to measure an organisation’s maturity when developing and deploying AI.

The tool will help companies in the earlier planning and initiating phases of responsible AI adoption to develop the right guardrails to support them amid the rapid growth of AI in Australia and fast-paced consumer adoption of AI-driven digital technology.

Catriona Wallace, CEO of Ethical AI Advisory, said: “The implications of organisations not developing AI responsibly are that unintended harms are likely to occur – to people, society and the environment – potentially at scale. As only three in 10 organisations stated they had a high level of capability to deploy AI responsibly, there is significant work for Australian business leaders to do.”

Bill Simpson-Young, CEO of Gradient Institute, noted that with just over half of the organisations in the index having an AI strategy in place, there is an opportunity for business leaders to act on critical AI initiatives such as reviewing algorithms and underlying databases, monitoring outcomes for customers, sourcing legal advice around potential areas of liability and reviewing global best practices.

“Putting training in place to upskill data scientists and engineers, as well as board and executive teams, can also help close the gap by enabling a far greater level of understanding and education in responsible AI,” he added.

IAG, Australia’s largest general insurer, has been adopting AI – and applying the technology in an ethical and responsible manner.

For example, when it developed an AI tool that predicts whether a motor vehicle is a total loss after an accident – reducing claims processing times for customers – it used its AI ethics framework and the Australian government’s voluntary AI ethics principles to identify potential issues or risks.

Those efforts included verifying that customers had a positive experience, setting conservative thresholds for its models to reduce the likelihood of wrongly predicted total losses, and carefully weighing the potential benefits and harms of the system.
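To make the thresholding point concrete, here is a minimal sketch of how raising a classifier’s decision threshold reduces false total-loss predictions. All the numbers, including the 0.9 cut-off, are illustrative assumptions – the article does not disclose how IAG’s model is configured.

```python
import numpy as np

# Hypothetical model outputs: the predicted probability that each
# damaged vehicle is a total loss. Invented values for illustration,
# not IAG data.
total_loss_probability = np.array([0.35, 0.62, 0.81, 0.97, 0.55])

# A default classifier would write off any vehicle scoring above 0.5.
default_threshold = 0.5

# A conservative threshold demands far higher model confidence before
# predicting a total loss, so borderline cases fall through to a human
# assessor rather than being wrongly written off. The 0.9 figure is an
# assumption, not IAG's actual setting.
conservative_threshold = 0.9

write_off_default = total_loss_probability >= default_threshold
write_off_conservative = total_loss_probability >= conservative_threshold

print(write_off_default)       # [False  True  True  True  True]
print(write_off_conservative)  # [False False False  True False]
```

The trade-off is automation coverage for safety: fewer claims are settled automatically as write-offs, but those that are carry much higher model confidence.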

IAG is now looking at how responsible and ethical AI can be used to help detect motor claim fraud using advanced analytical techniques. These techniques have been used to help claims consultants settle genuine customer claims sooner.
