Ataccama banks on helping financial institutions meet the EU AI Act
Data trust platform company Ataccama has announced that Ataccama ONE will provide capabilities that help financial institutions meet the requirements of the EU AI Act.
In specific terms, Ataccama ONE will enable financial institutions to produce on-demand, audit-ready evidence that the data powering their AI systems met defined quality standards at the time it was used, without reconstructing pipeline history after the fact.
What’s inside the EU AI Act?
EU AI Act enforcement begins August 2, 2026. Article 10 of the Act requires organisations running high-risk AI systems (a category that, under Annex III, explicitly includes credit scoring, anti-money laundering (AML) monitoring and fraud detection) to demonstrate that their training and inference data were relevant, representative and free from error at the time they were used.
That obligation marks a meaningful shift from documenting governance policies to producing verifiable, data-level evidence tied to specific model decisions.
Model-specific evidence
Many financial institutions appear to have model risk and data management frameworks in place, but gaps in implementation and control mean they cannot produce the evidence regulators require. Regulators aren’t asking what controls exist; they’re asking what data a specific model used, whether it was fit for purpose, and how that can be proven under audit.
“At our scale, trusted data is essential for responsible AI and regulatory compliance,” said Jonathan Paul, VP of IT Governance at Fifth Third Bank. “We need continuous visibility into data quality and the ability to demonstrate that our data meets defined standards. Ataccama enables this by embedding data quality controls, monitoring, and audit-ready evidence, giving us confidence to advance reporting and AI on a foundation that is transparent, defensible, and verifiable.”
Ataccama ONE validates data as it moves through the training and inference pipelines that power high-risk AI models, including credit decisions, anti-money laundering screening, and fraud detection systems covered under Annex III of the EU AI Act.
Downstream data delivery
The platform checks data against business-defined rules, such as the completeness of borrower attributes, the validity of transaction records and the consistency of risk signals, before that data is used downstream.
When a violation is detected, configurable quality gates stop or flag the pipeline run, route an alert to the responsible data owner, and automatically generate a remediation ticket in the team’s ticketing system via the API, ensuring issues are resolved within a governed workflow rather than silently propagating into model inputs. Each validation outcome is logged with full pipeline context, creating a traceable record of whether data met defined standards at the moment it was used in a decision.
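The gate-and-remediate flow described above can be sketched roughly as follows. All rule names, fields and behaviour here are hypothetical illustrations of the pattern; Ataccama ONE's actual configuration and APIs are not described in this article.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical business-defined rules; in practice these would be
# configured in the platform, not hard-coded.
RULES = {
    "borrower_income_complete": lambda rec: rec.get("income") is not None,
    "transaction_amount_valid": lambda rec: rec.get("amount", -1) >= 0,
}

@dataclass
class ValidationOutcome:
    rule: str
    passed: bool
    record_id: str
    checked_at: str   # timestamp ties the evidence to the moment of use
    pipeline_run: str # pipeline context for the audit trail

def run_quality_gate(records, pipeline_run):
    """Validate records before they reach a model; halt the run on failure."""
    outcomes = []
    for rec in records:
        for name, rule in RULES.items():
            outcomes.append(ValidationOutcome(
                rule=name,
                passed=bool(rule(rec)),
                record_id=rec["id"],
                checked_at=datetime.now(timezone.utc).isoformat(),
                pipeline_run=pipeline_run,
            ))
    failures = [o for o in outcomes if not o.passed]
    if failures:
        # A real gate would alert the data owner and open a remediation
        # ticket via the ticketing system's API rather than raising.
        raise RuntimeError(f"{len(failures)} rule violations; run halted")
    return outcomes
```

The point of the sketch is the logged `ValidationOutcome`: every check is recorded with a timestamp and pipeline context, so the record of whether data met standards exists at the moment of use rather than being reconstructed later.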
“Most organisations assume their data is governed, but far fewer can show that it’s actually trusted or fit for purpose. With Article 10, that gap becomes a legal obligation for any institution using high-risk AI,” said Jay Limburn, chief product officer at Ataccama. “Pipeline observability is one part of how we address this. Within Ataccama ONE, pipeline observability works alongside data quality, catalogue, lineage, and remediation to build a clearer picture of how data is created and used. That allows teams to generate audit-ready evidence directly from their pipelines and demonstrate data fitness at the point of use, without having to piece it together after the fact.”
New capabilities in Ataccama ONE include pipeline monitoring for model-driven decisions, which continuously tracks data in motion across dbt, Airflow, Dagster, Azure Data Factory, AWS Glue, and other orchestration platforms, capturing execution health and monitoring data at rest for schema changes and anomalies.
For financial institutions, this means real-time visibility into the pipelines feeding high-risk AI models covered under Annex III, including credit decisions, AML monitoring, and fraud detection. Issues such as missing borrower attributes, delayed transaction feeds, or incomplete risk signals surface as they occur, along with their downstream impact on models and regulatory outputs.
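Two of the checks mentioned above, schema drift and delayed feeds, are simple to illustrate. This is a minimal sketch under assumed column names and an assumed freshness SLA, not the platform's implementation:

```python
from datetime import datetime, timedelta, timezone

EXPECTED_SCHEMA = {"borrower_id", "income", "risk_score"}  # hypothetical columns
MAX_FEED_LAG = timedelta(hours=1)                          # hypothetical SLA

def check_dataset(columns, last_arrival):
    """Flag schema drift and delayed feeds, as a monitoring layer might."""
    issues = []
    missing = EXPECTED_SCHEMA - set(columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    lag = datetime.now(timezone.utc) - last_arrival
    if lag > MAX_FEED_LAG:
        issues.append(f"feed delayed by {lag}")
    return issues
```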
Full lineage & ownership context
Regulatory impact alerting consolidates pipeline and data quality signals into a single alert layer that prioritises issues based on regulatory and business impact. When a dataset used in credit risk or regulatory reporting falls below defined thresholds, alerts are routed to the responsible data owner with full lineage and ownership context, enabling timely remediation before issues propagate.
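Prioritising by regulatory and business impact and routing to the responsible owner, as described above, reduces to something like the following. The impact weights and the lineage-derived ownership map are invented for illustration:

```python
# Hypothetical impact weights; regulatory domains outrank analytics.
IMPACT_WEIGHT = {"regulatory_reporting": 3, "credit_risk": 3, "analytics": 1}

# Hypothetical ownership map, as might be derived from lineage metadata.
DATA_OWNERS = {"loan_book": "credit-data-team", "txn_feed": "payments-team"}

def prioritise(alerts):
    """Order consolidated alerts highest-impact first."""
    return sorted(alerts,
                  key=lambda a: IMPACT_WEIGHT.get(a["domain"], 0),
                  reverse=True)

def route(alert):
    """Attach the responsible data owner so the alert lands with a person."""
    return {**alert, "owner": DATA_OWNERS.get(alert["dataset"], "unassigned")}
```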
Audit-ready incident traceability converts pipeline incidents into tracked records that are sent via API to the team’s ticketing system, such as Jira or ServiceNow, linking each issue to affected datasets, pipeline steps, and downstream AI processes, such as credit approvals or fraud detection workflows.
Every incident is time-stamped and resolved within a governed workflow, creating a complete audit trail that documents how data issues were identified and addressed, directly supporting EU AI Act evidentiary requirements.
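The shape of such an audit-ready incident record can be sketched as below. The field names are illustrative; a real integration would follow the ticketing system's own API schema (e.g. Jira or ServiceNow):

```python
from datetime import datetime, timezone

def incident_to_ticket(incident):
    """Shape a pipeline incident into an audit-ready ticket payload."""
    return {
        "summary": incident["description"],
        "detected_at": incident["detected_at"],        # time-stamped evidence
        "datasets": incident["affected_datasets"],     # lineage: what was hit
        "pipeline_step": incident["pipeline_step"],    # where in the pipeline
        "downstream": incident["downstream"],          # e.g. credit approvals
        "status": "open",                              # tracked to resolution
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```

Because each ticket links datasets, pipeline steps, and downstream AI processes and carries its own timestamps, the resolved tickets themselves become the audit trail of how issues were identified and addressed.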
