Moving agentic AI from innovation theatre to enterprise production
As enterprises move from prompting chatbots to orchestrating AI agents, IT leaders must rethink governance, data architecture and cost management to avoid chaotic deployments and runaway cloud bills
At Malaysia’s Ryt Bank, customers can simply tell their banking app what they want to do and an artificial intelligence (AI) agent will queue up the transaction, pausing only for a human confirmation before transferring any funds.
Meanwhile in Australia, hardware chain Bunnings has recently developed an AI assistant that provides customers with expert advice and helps them find what they need, transforming the e-commerce experience beyond traditional search.
Across the Asia-Pacific region, the enterprise AI conversation has been moving away from generative AI (GenAI) chatbots towards agentic AI, systems that can plan, execute and self-correct to achieve complex business goals.
“Agentic AI is a shift from AI as an assistant to AI as an active digital worker,” says Frank Bignone, senior vice-president and head of corporate strategy and growth at FPT Software. “The distinction lies in autonomy vs. reactivity. A standard GenAI chatbot follows a prompt to generate content; an agent uses a reasoning engine to decompose a goal and self-correct if a step fails.”
Charlie Dai, vice-president and principal analyst at Forrester, agrees, noting that the dividing line is “execution authority”. While chatbots merely reason and respond, “true agents can trigger workflows, call APIs [application programming interfaces], modify state, and adapt actions based on outcomes and feedback loops”, he says.
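The distinction Dai draws can be made concrete with a toy sketch. Everything below is illustrative, with stubbed data and no real model or banking API: the point is that a chatbot only returns text, while an agent holds execution authority and can modify state, pausing for human confirmation much as the Ryt Bank example does.

```python
"""Illustrative contrast between 'reason and respond' and 'execution
authority'. All names, balances and return values are stubs."""

def chatbot(prompt: str) -> str:
    """A chatbot only generates content; nothing in any system changes."""
    return "Here is how you could transfer funds: ..."  # advice only

# Hypothetical account state the agent is allowed to modify
ACCOUNTS = {"alice": 100, "bob": 0}

def agent_transfer(sender: str, recipient: str, amount: int, confirmed: bool) -> str:
    """An agent can act -- so it queues the transaction and waits for a
    human confirmation before actually moving any funds."""
    if not confirmed:
        return "queued: awaiting human confirmation"
    ACCOUNTS[sender] -= amount
    ACCOUNTS[recipient] += amount
    return "executed"
```

The confirmation flag is the simplest possible human-in-the-loop gate; production systems would wrap it in authentication and audit logging.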
However, giving software the authority to take independent action introduces a new set of challenges for the CIO. Moving these digital workers out of the sandbox and into live production requires far more than just a clever prompt.
Escaping innovation theatre
Despite the hype, broad-scale deployment of agentic AI is still in its infancy. Avanade’s Trendlines research indicates that 44% of organisations are still stuck at the proof-of-concept (PoC) stage.
“Many PoCs never make it to production, with many initiatives being discontinued due to lack of business alignment,” says Sagar Porayil Vadakkinakathu, chief technology officer (CTO) at Mindsprint. “They remained stuck in ‘innovation theatre’, with fragmented pilots that lack orchestration, data readiness and governance.”
Agentic workloads introduce high variability from retries, multi-model orchestration, data access, and tool calls, making simple calculators unreliable
Charlie Dai, Forrester
Vadakkinakathu notes that the biggest mistake enterprises make is treating AI agents as isolated pilots rather than embedding them into the enterprise fabric: “Organisations build impressive agents in sandbox environments but fail to integrate them with core systems, data flows and business workflows. In simple terms, enterprises don’t fail because the AI doesn’t work; they fail because the supporting ecosystem isn’t ready for it.”
When operationalised well, however, the results can be rewarding. Mindsprint recently implemented an AI-native deal-to-delivery digital backbone for a global food and agri-conglomerate, embedding agentic workflows into procurement, logistics and trade operations.
FPT Software similarly deployed AI-powered virtual assistants for a leading Japanese trade company to streamline document handling and translation, achieving a 90% reduction in processing time and up to an 80% drop in error rates.
Preventing agent sprawl
As enterprises move beyond single-agent deployments to handle more complex workflows, they face a new operational challenge: managing a workforce of digital workers.
Without a proper orchestration layer, organisations risk agent sprawl, where disconnected AI agents operate in silos without shared context, warns Bindu Sunil, chief AI officer at Mindsprint.
“Equally important is deep integration with enterprise systems and data,” she adds. “Without access to reliable, real-time data from systems like ERP [enterprise resource planning], [an agent’s] ability to execute workflows breaks down.”
Integration also means embedding agents directly into the systems where work actually happens. To manage this effectively, FPT Software’s Bignone advises enterprises to look for solutions that enable agent-to-agent communication. This hierarchical setup allows a manager agent to coordinate the work of multiple specialist agents.
At a technical level, orchestration requires strict parameters to prevent autonomous agents from running amok. According to Forrester’s Dai, agentic AI platforms must manage task decomposition, state tracking, retries and – crucially – termination conditions to prevent agents from entering infinite loops.
“Tool invocation needs validation, schema constraints and fallback logic,” Dai says. “Integration via APIs and events is critical to decouple agents from core systems.”
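The controls Dai lists can be sketched in a few lines. This is a minimal, assumed design rather than any vendor’s implementation: the tool names and schemas are invented, a hard step cap acts as the termination condition, schema validation gates each tool invocation, and invalid calls fall back to escalation instead of executing.

```python
"""Minimal sketch of an orchestration loop with termination conditions,
schema constraints on tool calls, and a fallback path. Illustrative only."""

MAX_STEPS = 10  # termination condition: the agent can never loop forever

TOOL_SCHEMAS = {  # schema constraints on tool invocation (hypothetical tools)
    "lookup_order": {"order_id": str},
    "refund": {"order_id": str, "amount": float},
}

def validate(tool: str, args: dict) -> bool:
    """Reject any tool call that does not match its declared schema."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None or set(args) != set(schema):
        return False
    return all(isinstance(args[k], t) for k, t in schema.items())

def run_agent(steps: list) -> list:
    """Execute planned (tool, args) steps; log outcomes for auditing."""
    log = []
    for i, (tool, args) in enumerate(steps):
        if i >= MAX_STEPS:
            log.append(("halt", "step budget exhausted"))  # kill the loop
            break
        if not validate(tool, args):
            log.append(("escalate", f"invalid call to {tool}"))  # fallback
            continue
        log.append(("ok", tool))  # a real system would invoke the API here
    return log
```

Decoupling the agent from core systems via this kind of validated tool boundary is what lets an invalid call degrade into an escalation rather than a corrupted transaction.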
Bhavya Kapoor, president of Avanade Asia-Pacific, adds that orchestration platforms must be policy-aware and provide real-time control. This ensures that “oversight happens before agents act, not only after outcomes are audited”, turning agent sprawl into a governed, enterprise-ready capability.
The governance gap
As agents take on execution capabilities, traditional data security measures are no longer enough.
“Sovereignty now requires three distinct forms of control: behavioural, operational and cognitive,” says Kapoor. He warns that if sovereignty stops at the data layer, enterprises leave the decision-making layer exposed. “As AI agents move from assisting work to autonomously executing tasks within core business processes, this gap is no longer theoretical. It is where operational, regulatory and reputational risks accumulate fastest.”
This requires a shift from retroactive auditing to proactive, real-time control. Mindsprint’s Sunil stresses that “trustworthy AI is not a philosophy; it is an engineering discipline.”
Managing agentic AI is less about crafting clever prompts and more about ensuring AI systems are reliable, governed, integrated into core workflows, and aligned to business outcomes
Bhavya Kapoor, Avanade
She advocates for robust adversarial review systems to stress-test multi-agent failure cascades, where one agent’s bad output becomes another agent’s corrupt input, along with a constitutional and policy layer.
“Regulatory requirements, ethical guardrails and enterprise compliance rules are mapped directly into the AI pipeline as version-controlled, auditable constraints,” she says. “A data privacy rule becomes an interceptor. A fairness requirement becomes a scoring constraint.”
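Sunil’s idea of rules compiled into the pipeline can be sketched as ordinary code. The rules, field patterns and threshold below are illustrative assumptions, not Mindsprint’s actual policy layer: a privacy rule becomes an interceptor that redacts data in flight, and a fairness requirement becomes a scoring constraint that can block an output.

```python
"""Sketch of a version-controlled policy layer: a privacy rule as an
interceptor, a fairness requirement as a scoring constraint. Illustrative."""

import re

POLICY_VERSION = "2025-06-r3"  # hypothetical auditable policy version

def privacy_interceptor(payload: str) -> str:
    """A data privacy rule becomes an interceptor: redact email addresses
    before the payload reaches a model or downstream tool."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", payload)

def fairness_constraint(approval_rates: dict) -> bool:
    """A fairness requirement becomes a scoring constraint: block outputs
    whose approval rates diverge beyond an (assumed) threshold across groups."""
    rates = list(approval_rates.values())
    return max(rates) - min(rates) <= 0.1
```

Because both rules are plain functions under version control, a regulator’s question of “which constraint applied on that date” has a checkable answer.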
Just as important is the audit and feedback loop. Models drift, regulatory environments shift and what was within acceptable thresholds at deployment may not be six months later.
“Every agent action is logged, every decision is traceable and production signals feed continuously back into the governance layer. The shift this creates is the one that matters most when enterprises face scrutiny: moving from trust us, to here’s the complete trail,” Sunil says.
“That is where enterprise AI credibility is genuinely won or lost, and it is the difference between a programme that survives its first production incident and one that gets quietly switched off,” she adds.
Unbounded behaviour and bill shock
With traditional software-as-a-service (SaaS) offerings, software pricing is generally predictable. But with agentic AI systems, an agent might take five steps to resolve an IT ticket, or it might encounter an error and loop 50 times to solve it, racking up token consumption along the way.
“The cost challenge isn’t just about tokens; it’s about unbounded behaviour,” says Kapoor. To prevent bill shock, he advises CIOs to treat agentic AI like cloud economics by defining clear guardrails upfront, such as iteration limits and token budgets per workflow, and shifting the financial model from open-ended consumption to cost per outcome.
Forrester’s Dai echoes this, advising CIOs to model the full AI system cost. “Agentic workloads introduce high variability from retries, multi-model orchestration, data access and tool calls, making simple calculators unreliable. Effective TCO [total cost of ownership] combines token usage with data costs, infrastructure, governance and operating overhead ... with continuous optimisation and hard usage guardrails to prevent bill shock.”
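The guardrails Kapoor and Dai describe amount to metering each workflow before every step. The sketch below is an assumed design with illustrative prices and limits, not any provider’s billing model: a per-workflow token budget and iteration cap turn a potential retry storm into a controlled stop.

```python
"""Sketch of per-workflow cost guardrails: a token budget and a step
limit checked before each agent iteration. Prices and caps are assumptions."""

PRICE_PER_1K_TOKENS = 0.01  # assumed blended model price, USD
TOKEN_BUDGET = 50_000       # per-workflow token cap
STEP_LIMIT = 20             # per-workflow iteration cap

class WorkflowBudget:
    def __init__(self):
        self.tokens_used = 0
        self.steps = 0

    def charge(self, tokens: int) -> bool:
        """Record one agent step; return False when the workflow must stop
        (and, in a real system, escalate to a human instead of looping)."""
        self.steps += 1
        self.tokens_used += tokens
        return self.steps <= STEP_LIMIT and self.tokens_used <= TOKEN_BUDGET

    @property
    def cost(self) -> float:
        """Token spend so far -- one input into the fuller TCO Dai describes."""
        return self.tokens_used / 1000 * PRICE_PER_1K_TOKENS
```

Shifting to cost per outcome then becomes a reporting exercise: divide the accumulated workflow cost by the number of tickets or transactions it actually resolved.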
Does this mean agentic AI must wait for pristine data foundations? Not exactly, says Kapoor: “Trust isn’t built on pristine data; it’s built on controlled environments. What matters is having clear ownership, consistent standards and strong controls over sensitive and business-critical data.”
To manage this complex environment, the shape of the IT department is also changing. The role of the standalone prompt engineer is already being viewed as too narrow.
“Managing agentic AI is less about crafting clever prompts and more about ensuring AI systems are reliable, governed, integrated into core workflows and aligned to business outcomes,” says Kapoor.
Dai notes that the ideal IT setup is highly cross-functional. “Prompt engineering, together with context engineering and harness engineering, are not standalone roles. Long-term success depends on teams that can redesign workflows, encode business policy, govern risk and continuously adapt agents in production,” he says.
Repositioning the human worker
As AI agents begin executing tasks on behalf of human employees, particularly in areas such as software engineering, customer service and IT operations, change management is becoming a critical success factor.
Employees are understandably cautious. “Resistance increases when agents act without transparency or recourse,” notes Dai. “Employees accept execution agents faster when they retain oversight and see reduced toil rather than job displacement.”
Kapoor points out that organisations getting this right treat AI adoption as a people transformation, not just a tech roll-out: “Agentic AI doesn’t remove humans from the equation; it repositions them. Employees shift from performing tasks to overseeing, guiding and improving systems, elevating the nature of work rather than diminishing it.”
Ultimately, the true measure of agentic AI’s success is moving away from simplistic metrics such as headcount reduction towards outcome-based measures that reflect how work improves.
“Most organisations focus on productivity gains, operational efficiency and process quality improvements, such as reduced cost to serve, faster cycle times and better decision accuracy,” Kapoor says.
Dai notes that while hours saved matter, reliability, adoption and trust indicators are equally important. “Durable success is defined by sustained usage in production with stable costs and declining human escalation over time,” he says.