AI governance provides guardrails for faster innovation
Dataiku’s field chief data officer for Asia-Pacific and Japan discusses how implementing AI governance can accelerate innovation while mitigating the risks of shadow AI
With the growing adoption of generative AI (GenAI) across the Asia-Pacific region, some organisations, especially those in unregulated industries, may be hesitant to implement governance frameworks, fearing that red tape will stifle innovation.
However, Grant Case, field chief data officer for Asia-Pacific and Japan at Dataiku, argues the opposite is true: clear boundaries are essential for speed.
Speaking to Computer Weekly in a recent interview, Case noted that while the region leads in GenAI usage, eroding user trust in AI outputs can derail AI initiatives. Establishing governance guardrails, he argued, can help to plug the trust gap.
However, Case often encounters the misconception that governance slows down AI initiatives. “We see the exact opposite,” he said. “The organisations in this region that are moving the fastest are the ones that have already established a strong AI governance stance.”
He likened AI governance to highway safety barriers. Just as barriers allow vehicles to travel safely at higher speeds, governance provides the confidence necessary for rapid development.
“To move faster, you need to understand where the boundaries are,” Case explained. “Organisations that set those parameters early on eliminate the hesitation around innovation, because teams know exactly what is permissible.”
Fending off shadow AI
One of the drivers of AI governance is the rise of shadow AI, where employees use AI tools that have not been approved for work.
Case pointed to recent findings indicating that 77% of security professionals have observed employees exposing corporate data to large language models (LLMs). This behaviour is rarely malicious; rather, it typically reflects a lack of suitable internal tooling.
According to Case, employees often turn to external AI tools because they lack frictionless access to internal alternatives. The solution is not merely to ban external tools, but to provide internal options integrated with the necessary governance protocols, he noted.
“We want the governed path to be the fast path and the right path,” Case said, referencing a philosophy shared by a banking client. “If we set up the right infrastructure for the end user, they will never feel the need to go outside the organisation.”
While AI governance discussions often originate with chief data officers (CDOs), the conversation has been elevated to corporate boardrooms, driven not only by security risks but by the spiralling costs associated with unregulated AI experiments.
Case shared an example of a client whose business unit had racked up significant unexpected costs. The CDO had launched a $3m AI project two years earlier, but the board recently questioned its monthly costs, which had reached $47,000 without clear evidence of return on investment. As a result, the company’s finance and internal audit teams became more involved in AI governance, addressing both technical and financial risks.
Some companies have started building their own LLMs for reasons ranging from governance to localisation and greater control over the AI model lifecycle. Case advised against this approach, noting that the rapid pace of technological change often renders internal projects obsolete before they are completed.
He cited the example of a senior analytics executive who spent about six months building an internal LLM. Just a few months after the model was completed, a commercial update from OpenAI rendered the proprietary work less effective and more expensive.
Instead, Case advocated for a platform approach where governance requirements, such as those mandated by the EU AI Act, are embedded in the infrastructure. This allows companies to plug in the latest models while remaining compliant.
“The value of a platform like Dataiku is that we integrate the latest technology for you,” Case said. “This allows teams to use the best that they need right now, rather than trying to build something that will likely be too old in six months.”
According to Gartner, global spending on AI is expected to reach $2.52tn in 2026, a 44% year-on-year increase. Spending on AI platforms for data science and machine learning, such as Dataiku, is also poised to grow from $21.9bn in 2025 to $44.5bn in 2027.
“AI adoption is fundamentally shaped by the readiness of both human capital and organisational processes, not merely by financial investment,” said John-David Lovelock, distinguished vice-president analyst at Gartner. “Organisations with greater experiential maturity and self-awareness are increasingly prioritising proven outcomes over speculative potential.”
Read more about AI in APAC
- Australia’s Woolworths Group will enhance its digital shopping assistant, Olive, using Google Cloud’s newly released agentic AI platform, Gemini Enterprise for Customer Experience.
- Research from Thoughtworks reveals that while 77% of global businesses are focused on generating revenue from AI initiatives, Asian markets are leading the charge in agentic AI adoption, job creation and executive confidence.
- Microsoft has expanded its AI footprint in India, teaming up with four of the country’s largest IT services companies to deploy agentic AI capabilities across enterprises.
- AI is set to handle nearly half of all customer service interactions in Singapore within the next two years, but businesses risk alienating customers if they fail to explain how the technology works.
