Google’s Agentic Data Cloud to power ‘systems of action’
As enterprises move from reactive analytics to AI agents, Google Cloud's data chief details new metadata, cross-cloud, and database tools to help them govern and scale AI agents
The days of the glorified artificial intelligence (AI) chatbot are over, with more enterprises starting to embrace the era of autonomous agents rummaging through corporate data.
At the recent Google Cloud Next 2026 conference in Las Vegas, Google took the wraps off its Agentic Data Cloud, a rebranding and architectural effort to support the transition from passive systems of intelligence to autonomous systems of action.
“Everyone’s talking about systems of intelligence, but frankly, they’re still very much kind of ingesting all the data, very much looking at the past, or maybe trying to predict the future,” said Andi Gutmans, general manager and vice-president for Data Cloud at Google.
“We’re moving from human scale to agent scale, both in the number of agents we’re going to have and also the workloads we have to manage,” he told Computer Weekly, referring to the exponential growth in compute requirements of agentic AI.
To address the workload, cost and governance issues associated with agentic AI, Google unveiled about 80 new product updates, focusing on metadata management, cross-cloud interoperability, and distributed database capabilities.
A primary concern for enterprise data teams deploying AI agents is ensuring models access the right data while respecting access controls. Gutmans noted that simply knowing where data resides no longer suffices – agents need semantic context to avoid hallucinations and errors.
“It’s typical for customers to have 500 customer tables, but which table do you want to look at?” Gutmans said. In response, Google launched the Knowledge Catalog, which builds upon its previous Dataplex governance capabilities and aggregates metadata from within Google Cloud, external cloud applications, and third-party catalogues. It then enriches the metadata to map relationships across structured and unstructured data.
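The problem Gutmans describes — hundreds of similarly named tables, only one of which an agent should query — is, at its core, a semantic matching problem. As a rough illustration of the idea (this is a toy sketch, not Google's Knowledge Catalog API; the table names and descriptions below are invented), enriched descriptions let an agent score candidate tables against its intent rather than guessing from names alone:

```python
# Illustrative only: a toy metadata catalogue showing how semantic
# descriptions (rather than table names alone) can help an agent pick
# the right table. Names and descriptions are invented examples,
# not Google Knowledge Catalog APIs.
CATALOGUE = {
    "crm.customers_raw": "unvalidated customer sign-up records, staging",
    "sales.customers": "deduplicated customer master data with billing addresses",
    "mkt.customers_2019": "archived marketing snapshot, no longer maintained",
}

def pick_table(intent: str) -> str:
    """Score each table by keyword overlap between the agent's intent
    and the table's description, and return the best match."""
    intent_words = set(intent.lower().split())

    def score(item):
        _, desc = item
        return len(intent_words & set(desc.lower().split()))

    return max(CATALOGUE.items(), key=score)[0]

print(pick_table("current billing addresses for customer master data"))
# → sales.customers
```

A production catalogue would use embeddings and lineage rather than word overlap, but the principle is the same: the richer the metadata, the less an agent has to guess.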
Gutmans described Knowledge Catalog as a “flywheel” built directly on top of existing access controls, ensuring agents cannot surface or act upon data they are not authorised to view.
Recognising the realities of heterogeneous IT environments, Google also introduced the Cross-Cloud Lakehouse. The offering allows enterprises to run Google’s BigQuery and AI capabilities against data residing in Amazon Web Services and Microsoft Azure, while also providing zero-copy integrations with enterprise systems like SAP and Workday.
Gutmans was keen to draw a sharp distinction in terminology: “It’s very important to note that the term ‘cross-cloud’ is distinct from ‘multi-cloud’; those who refer to multi-cloud are just talking about multiple single-cloud environments”.
Another significant update was Spanner Omni. Historically, Google Cloud Spanner, Google’s globally distributed relational database, was tethered to the company’s infrastructure – from its storage layer to the GPS receivers and atomic clocks it relies on to guarantee transactional consistency.
Driven by enterprise demand for disconnected edge and on-premises deployments via Google Distributed Cloud, Google has engineered Spanner to run independently.
“Three, four years ago, no one, including us, believed we could disconnect Spanner from Google Cloud,” Gutmans said. Now, Spanner Omni offers vector processing, search, and graph capabilities for disconnected environments, which Gutmans noted is key for highly regulated use cases like on-premises fraud detection.
The IT pro as orchestrator
As part of the Agentic Data Cloud rollout, Google released a Data Agent Kit, featuring support for Claude Code, Gemini CLI, Codex, and VS Code extensions. The goal is to provide developers and data engineers with the model context protocol (MCP) and tools necessary to build their own agents.
According to Gutmans, the move towards agentic AI means that rather than writing manual pipelines or Python scripts, practitioners will engage in “intent-driven development,” which allows them to focus on defining their goals and desired outcomes while the agents handle the technical implementation.
“Every practitioner is now becoming an orchestrator of agents,” Gutmans said. “That will be true whether you’re a business user, data scientist, data analyst, data engineer, or developer. We are really trying to... put this persona in a position where they have a team of agents, and they’re orchestrating the agents.”
Serving in a dual role where he provides data infrastructure for Alphabet properties like Search, YouTube, and Gmail, Gutmans noted that Google’s own site reliability engineering (SRE) teams are already deploying AI agents.
“One example is an agent that looks at support tickets. If it sees that there’s more than one support ticket in a given period of time that looks very similar, it will page us and say, ‘Hey, maybe something bigger is happening here’,” Gutmans said.
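The logic behind that agent is simple enough to sketch. The following is a hedged illustration of the pattern Gutmans describes – not Google’s actual implementation; the ticket fields, similarity measure, window, and threshold are all invented for the example:

```python
# Illustrative sketch: page an engineer when two or more similar-looking
# support tickets arrive within a time window. Not Google's agent;
# fields and thresholds are invented.
from datetime import datetime, timedelta

def jaccard(a: str, b: str) -> float:
    """Word-set similarity between two ticket summaries (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def should_page(tickets, window=timedelta(minutes=30), threshold=0.5):
    """Return True if any two tickets within the window look similar."""
    for i, (time1, summary1) in enumerate(tickets):
        for time2, summary2 in tickets[i + 1:]:
            if abs(time1 - time2) <= window and jaccard(summary1, summary2) >= threshold:
                return True
    return False

tickets = [
    (datetime(2026, 4, 1, 9, 0), "checkout page returns 500 error"),
    (datetime(2026, 4, 1, 9, 10), "checkout returns 500 error on payment"),
    (datetime(2026, 4, 1, 14, 0), "password reset email delayed"),
]
print(should_page(tickets))  # → True: similar checkout tickets 10 minutes apart
```

A real SRE agent would cluster on richer signals (service, error codes, embeddings) rather than word overlap, but the escalation trigger – correlated symptoms inside a window – is the same.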
With the slew of enhancements in Agentic Data Cloud, Gutmans didn’t shy away from throwing a bit of shade at rivals, claiming Google’s ownership of the full stack – from custom tensor processors and BigQuery to DeepMind's Gemini models – leaves rivals in the dust.
“If you think about other hyperscalers like Azure, they don’t have the model, so they end up having to connect to some other environment,” he claimed. As for pure-play data platform suppliers like Databricks? “They neither have the infrastructure nor the model, so they're basically kind of assembling all this stuff.”
Moutusi Sau, managing vice-president at Gartner, said Google’s Agentic Data Cloud reframes the hyperscaler's data platform as a semantic and orchestration layer for agents, addressing challenges such as agent failures, which are often caused by poor data context, inconsistent semantics, and fragile integration.
Capabilities like zero-copy and cross-cloud access reduce data gravity and duplication, but they also increase dependence on semantic accuracy, metadata governance, and performance consistency, she added.
Ultimately, Sau called for enterprises to invest as much in semantic ownership, stewardship and validation as they did in tooling. “Without disciplined governance, enterprises risk scaling ambiguity and mistrust faster than agents scale productivity,” she said.
Read more about AI in APAC
- Agoda, a digital travel platform, has set its sights on becoming an AI-powered travel companion as it changes how it builds software and moves its tech workforce into a new facility in Bangkok.
- Singtel and Nvidia have teamed up on a multimillion-dollar facility to help organisations scale enterprise AI deployments, tackle extreme datacentre power densities, and prepare for the era of embodied AI.
- The Australian government has struck a five-year volume sourcing agreement with Microsoft to speed up adoption of AI and cloud technologies across the public sector.
- Alibaba Group has unveiled Wukong, an AI-native enterprise platform that brings advanced agentic AI capabilities directly into business workflows.
