How a responsible approach to AI enables its success

This is a guest post by Accenture’s Arnab Chakraborty, global lead of responsible AI, and Senthil Ramani, head of data and AI.

The far-reaching capabilities of generative AI have marked AI’s first true inflection point and offered a glimpse of the technology’s disruptive potential. Recent research from Accenture found that 73% of companies are prioritising AI over all other digital investments, and 97% of executives believe generative AI will be transformative to their company and industry.

Although it is early days, generative AI is beginning to permeate the day-to-day. Chatbots are suddenly more helpful, office applications advise on writing, and inboxes flag what was missed during a real-world lunch break. As these connections deepen, so will conversations about the implications of AI for society. Some of this is already happening: the recent G7 summit convened world leaders to discuss AI governance, and one month later the Cyberspace Administration of China issued guidelines on the use of generative AI.

Ultimately, a company’s long-term success and its ability to extract value from generative AI will depend as much on strategy as on having a responsible AI framework. In practice, a responsible AI framework sets clear principles to guide the design, build and use of AI. Done effectively, this positions “responsibility” as an enabler of innovation, one that empowers businesses, respects people and benefits society. It also allows companies to engender trust in AI and scale with confidence.

Building responsible AI-centric organisations

Not all organisations create a responsible AI framework before diving in, which leaves the technology open to misuse. Take, for example, an office administrator who, in a bid to speed up the processing of customer data, feeds highly sensitive information into a large language model (LLM) tool such as ChatGPT, which then retains that data. This creates the potential for the data to be misused, including in malicious applications such as deepfakes, biosecurity manipulation and data poisoning. This is a growing concern: several AI experts and academics have found that while AI-generated content is becoming a permanent fixture in the digital world, deepfake detection is still lagging behind.
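To make that scenario concrete, one common guardrail is to redact obviously sensitive fields before any text leaves the organisation. The following sketch is illustrative only: the `send_to_llm` boundary is hypothetical, and the regular expressions stand in for the vetted PII classifiers a production system would use.

```python
import re

# Illustrative patterns only: a production system would rely on vetted PII
# classifiers, not a handful of regexes. Order matters here, since the card
# pattern is tighter than the phone pattern and must be applied first.
PII_PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("CARD", re.compile(r"\b(?:\d[ -]?){13,16}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS:
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_llm(prompt: str) -> None:
    # Hypothetical boundary: in practice this would call an external LLM API.
    # Redaction happens before anything crosses it.
    print("Outbound prompt:", redact(prompt))

send_to_llm("Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111")
```

A filter like this is only one layer; a responsible AI framework would pair it with access policy, audit logging and human review.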

Unsurprisingly, this has furthered conversations about deepfakes in the AI space, with several AI companies and big tech firms convening on responsible AI adoption. One outcome of this work is the development of ways for consumers to identify AI-generated content, such as through the use of watermarks.

Then there is the question of data bias in machine learning, to which AI innovations are equally subject. In preliminary studies conducted at Stanford University, researchers found that AI-writing detectors were especially unreliable when the real author (a human) was not a native English speaker. The detectors showed signs of bias against non-native English speakers and were more likely to flag their content as machine-generated than that of native speakers, even though the content submitted by both groups was original. Because AI models work with the algorithms and data they are fed, biases, prejudices and negative societal patterns hidden in the original data sets can surface as toxic or discriminatory language, with harmful consequences.
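In practice, this kind of disparity is measurable: if detector decisions are logged alongside author attributes, a basic audit compares false-positive rates across groups. The sketch below uses invented data purely to show the calculation; it is not the Stanford methodology.

```python
from collections import defaultdict

# Toy audit log: (author group, detector flagged?) for texts known to be
# human-written, so every flag is a false positive. The records are invented
# to illustrate the calculation, not real study data.
records = [
    ("native", False), ("native", False), ("native", True), ("native", False),
    ("non_native", True), ("non_native", True), ("non_native", False), ("non_native", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flags, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

for group, (flags, total) in counts.items():
    print(f"{group}: false-positive rate {flags / total:.0%}")
```

A large gap between the two rates, as in this toy output (25% versus 75%), is the kind of signal a responsible AI review would investigate before deploying such a detector.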

Responsible AI framework

This is where a responsible AI framework comes in. A well-defined framework helps minimise bias, protect data and increase trust, while empowering employees to use generative AI to open up new growth opportunities.

Many business leaders remain upbeat about the potential of generative AI to revolutionise their businesses and create exponential growth, and are keen to kickstart the process. For instance, when it comes to serving as a co-pilot for creativity, more than 55% of APAC chief marketing officers (CMOs) say generative AI will accelerate innovation, speed up decision-making and improve customer experiences. For the technology to be enterprise-ready, however, trust is needed to ensure it can operate and function within the ecosystem of the business. An Accenture report found that while only 6% of global companies surveyed have already implemented responsible AI practices, 42% aspire to do so by the end of 2024. This suggests that many organisations view AI regulation as a boon rather than a barrier to success.

The ability to solve problems and make decisions quickly should never come at the expense of algorithmic transparency, privacy and data security. As business leaders, we must ensure the AI systems we implement within our operations are “raised” on a diverse and inclusive set of inputs, so that they reflect the broader business and societal norms of responsibility, fairness and transparency. These considerations will increase a company’s efficacy in using AI to drive growth, including elevating employee capabilities, building trust with customers and introducing new business models.

The most successful and responsible adoptions of generative AI will draw on the combined efforts of government, regulators, businesses and society. Working together across the entire ecosystem to establish an end-to-end approach to responsible generative AI is the best way to put these principles into practice.
