
Singapore debuts world’s first governance framework for agentic AI

The Infocomm Media Development Authority has released a guide to help enterprises deploy AI agents safely and address specific risks such as unauthorised actions and automation bias

Singapore has launched a governance framework for agentic artificial intelligence (AI) systems, which are capable of independent reasoning and action, to address the growing security and operational risks posed by AI agents.

The Model AI Governance Framework (MGF) for Agentic AI was announced on 22 January 2026 by Singapore’s minister for digital development and information, Josephine Teo, at the World Economic Forum in Davos, Switzerland.

Developed by the Infocomm Media Development Authority (IMDA), the framework builds on Singapore’s existing AI governance guidelines from 2019 but focuses on addressing the challenges of using AI agents that do more than just generate content.

Unlike generative AI, agentic AI systems can plan across multiple steps to achieve specific objectives. They can also act on their environment, such as by updating customer databases or processing payments, without direct human intervention. While this improves business productivity, the IMDA warned of risks arising from unauthorised or erroneous actions taken by AI agents.

“The increased capability and autonomy of agents also create challenges for effective human accountability, such as greater automation bias, or the tendency to over-trust an automated system that has performed reliably in the past,” IMDA added.

The MGF serves as a guide for organisations deploying AI agents, covering both in-house development and third-party agentic AI tools, and aims to ensure safe and effective implementation.

Recommended governance actions include defining limits on an agent’s autonomy and access to data and tools, defining checkpoints where human approval is required to guard against automation bias, and implementing baseline testing and continuous monitoring throughout an AI agent’s lifecycle. Users should also know when they are engaging with an agent and be trained to oversee agentic AI systems effectively.
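These governance actions can be made concrete in software. The following is an illustrative sketch only, not part of the IMDA framework: a hypothetical policy gate that limits an agent's tool access and inserts a human-approval checkpoint before high-impact actions. The tool names and categories are invented for the example.

```python
# Illustrative sketch (hypothetical, not from the MGF): a policy gate that
# enforces limits on an agent's autonomy and requires human sign-off at
# defined checkpoints.

# Actions the agent may take on its own, within its defined autonomy.
ALLOWED_TOOLS = {"read_customer_record", "draft_email"}

# Checkpointed actions: a human must approve, guarding against automation bias.
NEEDS_APPROVAL = {"update_customer_record", "process_payment"}

def gate_action(tool: str, approved_by_human: bool = False) -> str:
    """Decide whether an agent may invoke a tool: allow, escalate or deny."""
    if tool in ALLOWED_TOOLS:
        return "allow"          # low-risk action within the agent's limits
    if tool in NEEDS_APPROVAL:
        # Checkpoint: proceed only with explicit human approval.
        return "allow" if approved_by_human else "escalate"
    return "deny"               # anything undeclared is blocked by default

print(gate_action("draft_email"))             # allow
print(gate_action("process_payment"))         # escalate
print(gate_action("process_payment", True))   # allow
print(gate_action("delete_database"))         # deny
```

The deny-by-default branch reflects the framework's emphasis on defining limits up front: any tool not explicitly declared is out of bounds, and every gate decision can be logged to support the continuous monitoring the IMDA recommends.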

Industry reaction

The framework was developed in consultation with major tech players and assurance providers. Elsie Tan, country manager for worldwide public sector at Amazon Web Services, said that because agentic AI systems will make decisions with real-world consequences, using them wisely requires concrete mechanisms for visibility, containment and alignment built into infrastructure, alongside human judgement.

April Chin, co-CEO of AI assurance firm Resaro, noted that the framework addresses a critical gap in policy guidance. “The framework establishes critical foundations for AI agent assurance. For example, it helps organisations define agent boundaries, identify risks and implement mitigations such as agentic guardrails,” she said.

Serene Sia, Google Cloud’s country director for Singapore and Malaysia, added that building trust in agentic AI systems is a shared responsibility, noting that Google has been working on open standards to secure the use of AI agents.

“Having pioneered open standards like the Agent2Agent Protocol (A2A) and Agent Payments Protocol (AP2), Google has been playing a key role in establishing the foundation for interoperable and secure multi-agent systems. We remain committed to responsible innovation and look forward to contributing best practices as this technology advances further,” said Sia.

IMDA has described the framework as a “living document” and is seeking feedback and case studies from the industry to refine the guidelines. The authority is also developing specific guidelines for testing agentic AI applications, building on its starter kit for testing large language model-based applications.

Earlier, in October 2025, the Cyber Security Agency of Singapore released an addendum to its guidelines on securing AI systems, specifically focused on the unique risks of agentic AI. The document, which was open for public consultation, provides practical controls for system owners and outlines how to assess risks by mapping out workflows where threat actors could exploit vulnerabilities.
