Ataccama Data Trust Summit: Day 1

There are technology press events aplenty. But aside from the standard fodder of the press conference, the technology symposium, the C-suite exec Zoom-style update call (possibly with earnings updates, hopefully without) and the good old-fashioned roundtable lunch or dinner… a handful of tech firms have staged what we could call a “study tour or summit” at their own offices.

Such events are often designed to run through the morning, with lunch in the organisation’s own canteen if it has one (which, perhaps unsurprisingly, is often where you really get a feel for the personality of a company and its employees’ collective psyche). When technology vendors open up inside their own premises, it’s a different thing.

… and so it was with Ataccama.

Spending a day at Ataccama’s Boston headquarters at the Data Trust Summit 2025 with a select group of press was a good way of understanding how the company, known for its platform that manages, unifies and governs data quality, really works.

Ataccama these days calls itself the agentic data trust company. This year’s Data Trust Summit featured presentations by Corey Keyser, head of AI; Jay Limburn, chief product officer; Jessie Smith, VP of data quality; and of course Mike McKee, Ataccama CEO.

What defines Ataccama today?

By way of overview, the company says that “data trust” is the foundation for AI success (trusted data makes garbage in, garbage out statistically less likely) and so Ataccama approaches the challenge with what it describes as a unified platform with AI automation. Ataccama ONE is a single integrated toolset for data quality, governance, lineage and master data management; automation functions (driven by AI) are embedded throughout the platform to streamline workflows and accelerate data readiness.

The company is all about operationalising data quality at scale for all users (not just data scientists) and Ataccama emphasises its integrations with key platforms like Snowflake, a move designed to bring data quality and governance capabilities directly to where the data resides.

CEO McKee: From pizzas to platforms

Ataccama CEO Mike McKee kicked off his firm’s Data Trust Summit with a candid story of how he rose to his current position. Having initially started a pizza business while studying at Princeton, he quickly learned that he “liked working and liked building a business” in any shape or form. As he moved through various career roles into senior C-suite positions, he explained how he has developed an essentially grounded view of those working in the data engineering industry, which, he insists, works as an essential foundation for understanding how any given customer will approach data quality today.

“When you look at CISOs (or other C-level technical engineering managers), there’s a top tier that is rather high level. Then there’s the middle tier that really gets it and understands the practicalities of working with a data quality platform and toolset designed for the modern age of AI. But then, there’s the third tier, who may have an adept level of technical understanding, but are in fact too deep down in the weeds to understand the business implications of operationalising a technology like Ataccama,” said McKee. “It’s that middle tier of pragmatists and do-ers that we work with because they are the ones capable of achieving great outcomes.”

Most enterprises still manage data reactively, fixing problems only after they disrupt reports or slow down AI projects. Ataccama ONE Agentic redefines that approach by embedding intelligence directly into the data management process, so trust is built in from the start and maintained automatically across every system and workflow.

“Barc research highlights that compliance-driven organisations see data quality, metadata enrichment, and data documentation as top priorities for agentic data management,” said Kevin Petrie, VP of research at Barc US. “Ataccama’s new release directly addresses this demand, empowering data teams to accelerate productivity while cataloguing, governing and preparing trusted data to fuel analytics and AI.”

AI lead: How to empower data stewards

McKee gave way to Corey Keyser, head of AI, a late-20s maverick who clearly harbours a deep affection for great software engineering.

“Remember that Andrew Ng said, treat an LLM like a college intern that will only get things right about nine tenths of the time,” advised Keyser in his opening statement. “We can now measure the degree of Ataccama customer use by actual API calls to the platform, so we know what is working well for users. But I don’t think this discussion is about replacing data teams; we are empowering human ‘data stewards’ so that they can automate workflows… we’re allowing data teams to pursue their original goal, which was always to manage the data inside the organisation.”

Keyser thinks that we’ll actually get to a point where teams start to employ more data stewards, because the inherent value of data is increasing. This means data teams won’t be replaced, but their productivity will be increased fivefold (or perhaps more) and non-technical business users will also experience a new level of data democratisation as they benefit from AI-powered data trust services.

ML sensor/actuator science for dummies

“The evolution of AI-powered data management has seen us go from fully managed, to machine learning based, through copilots, into agents,” explained Keyser. “In traditional machine learning, an agent has a ‘sensor and an actuator’… which means that even a room HVAC control is an agent. Agents have a goal: the room HVAC’s goal would be to control the temperature of the room, so it needs to drive an actuator (which in this case is the heating/cooling systems) to regulate the air flow and temperature accordingly. Our next stage sees us progress to multi-agent systems that might use A2A to interact with each other; then we will move to Artificial General Intelligence (AGI) in the future.”
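Keyser’s HVAC analogy maps neatly onto the textbook agent loop: sense, compare against a goal, act. A minimal, illustrative Python sketch (names are hypothetical, not Ataccama code):

```python
# Toy version of Keyser's sensor/actuator framing: the agent reads the
# world through a sensor value, compares it to its goal, and responds
# with an actuator command. Purely illustrative, not Ataccama's API.

class ThermostatAgent:
    """A room HVAC control as an agent whose goal is a target temperature."""

    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target_temp = target_temp  # the agent's goal
        self.tolerance = tolerance      # dead band to avoid constant switching

    def step(self, sensed_temp: float) -> str:
        """Sensor reading in, actuator command out."""
        if sensed_temp < self.target_temp - self.tolerance:
            return "heat"   # actuator: switch heating on
        if sensed_temp > self.target_temp + self.tolerance:
            return "cool"   # actuator: switch cooling on
        return "idle"       # within tolerance: do nothing

agent = ThermostatAgent(target_temp=21.0)
print(agent.step(18.2))  # heat
print(agent.step(23.9))  # cool
print(agent.step(21.1))  # idle
```

The same loop structure scales up: swap the thermometer for data profiling results, the goal for a data quality target, and the actuator for a remediation workflow, and you have the shape of an agentic data management system.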

Providing attendees with a deployment timeline in relation to how the Ataccama platform is rolling out with regard to the wider AI ecosystem, Keyser noted that the company built generative AI services in 2023. Agentic AI functions were built in 2024 and are now manifesting themselves in the platform itself. Ataccama is currently building multi-agent capabilities in 2025… so we’re expecting to see these move into production releases in 2027.

What are data quality rules?

Explaining how Ataccama now applies structure to its platform toolset, Keyser showed how the company gives an agent a goal to apply data quality rules. The agent assesses the goal it is given and then runs through the tools at its disposal (17 exist so far, with more in the pipeline) to decide which to use for analysis.

“We feed documentation to the agent (which we can call the system prompt) and the agent then has the option to view a hierarchy (based on the type of use case) of the tools on offer – and there is a degree of ‘chain-of-thought reasoning’ here, which can also be thought of as us asking the agent to think through its tasks,” said Keyser. “Let’s remember that GPT-5 is a ‘cluster of models’ that has the ability to send easier tasks to simpler models, where more complex tasks are directed to deep reasoning models that have more expensive tokens. We follow that same efficiency and execution logic inside Ataccama.”
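The routing pattern Keyser describes can be sketched in a few lines: a goal comes in, and the cheapest tool (or model) that can plausibly handle it is chosen, with harder work escalated to more expensive options. Everything below is a hypothetical illustration of that general pattern, not Ataccama’s actual implementation; the tool names and complexity scores are invented.

```python
# Hypothetical sketch of goal-driven tool routing: pick the least
# expensive tool from a hierarchy that still meets the task's needs,
# mirroring the "easy tasks to simple models" logic described above.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    complexity: int              # crude cost/capability proxy
    run: Callable[[str], str]    # the tool's action on a goal

def profile_column(goal: str) -> str:
    return f"profiled: {goal}"

def infer_dq_rule(goal: str) -> str:
    return f"rule drafted: {goal}"

TOOLS = [
    Tool("profile_column", complexity=1, run=profile_column),
    Tool("infer_dq_rule", complexity=3, run=infer_dq_rule),
]

def route(goal: str, hard: bool) -> str:
    """Send easy goals to cheap tools, hard goals to capable ones."""
    needed = 3 if hard else 1
    # cheapest tool that is still capable enough for this goal
    tool = min((t for t in TOOLS if t.complexity >= needed),
               key=lambda t: t.complexity)
    return tool.run(goal)

print(route("check null ratio on customer_id", hard=False))
print(route("derive a validity rule for emails", hard=True))
```

In a real system the `hard` flag would itself be decided by a classifier or the agent’s own reasoning step; the point of the sketch is only the cost-aware dispatch shape.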

Day 1 of this summit wrapped up with minds abuzz over where organisations are going to apply AI-powered data quality functions in their core systems and applications in this time of extreme change. There’s more to come here in day 2.