AI slop pushes data governance towards zero-trust models

Organisations are beginning to implement zero-trust models for data governance in response to the proliferation of poor-quality AI-generated data, often known as AI slop.

Unverified, low-quality data generated by artificial intelligence (AI) models – often known as AI slop – is forcing more security leaders to look to zero-trust models for data governance, with 50% of organisations expected to start adopting such policies by 2028, according to Gartner.

Currently, large language models (LLMs) are typically trained on data scraped – with or without permission – from the world wide web and other sources including books, research papers, and code repositories. Many of these sources already contain AI-generated data and at the current rate of proliferation, almost all will eventually be populated with it.

A Gartner study of CIOs and technology executives published in October 2025 found that 84% of respondents expected to increase their generative AI (GenAI) funding in 2026. As this trend accelerates, so will the volume of AI-generated data, meaning future LLMs will increasingly be trained on the outputs of current ones.

This, said the analyst house, will heighten the risk of models collapsing entirely under the accumulated weight of their own hallucinations and inaccuracies.

Gartner warned that this increasing volume of AI-generated data was a clear and present threat to the reliability of LLMs, and managing vice president Wan Fui Chan said organisations could no longer implicitly trust data, or assume it was even generated by a human.

“As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture, establishing authentication and verification measures, is essential to safeguard business and financial outcomes,” said Chan.
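Zero trust in this context means treating data provenance as something to be proved rather than assumed. As a minimal sketch of what such a verification measure could look like – assuming a shared signing key managed outside the pipeline, with field names and the HMAC scheme chosen purely for illustration, not drawn from any specific product or standard – a consumer might only accept records whose provenance signature verifies:

```python
# Minimal zero-trust data check: every record must carry a verifiable
# provenance signature before it is accepted for downstream use.
# The signing scheme and key handling are illustrative assumptions.
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-managed-secret"  # hypothetical key from a KMS


def sign_record(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """Producer side: attach an HMAC so origin can later be verified."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_record(payload: bytes, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Consumer side: reject anything whose signature does not check out."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


record = b'{"source": "survey-2026", "value": 84}'
sig = sign_record(record)

# Under zero trust, unverified data is dropped rather than assumed human-made.
assert verify_record(record, sig)
assert not verify_record(b'{"source": "unknown"}', sig)
```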

Verifying ‘AI-free’ data

Chan said that as AI-generated data becomes more prevalent, regulatory requirements for verifying what he termed “AI-free” data would likely intensify in many regions – although these regulatory regimes would inevitably vary in their rigour.

“In this evolving regulatory environment, all organisations will need the ability to identify and tag AI-generated data,” he said. “Success will depend on having the right tools and a workforce skilled in information and knowledge management, as well as metadata management solutions that are essential for data cataloguing.” 
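As an illustration of what such tagging might look like in a data catalogue, the sketch below uses a simple three-value provenance label – an assumption made for the example, not an established metadata standard – with entries defaulting to “unverified” in keeping with a zero-trust posture:

```python
# Illustrative sketch: tagging catalogue entries with a provenance label
# so AI-generated data can be identified and filtered. The enum values
# and fields are assumptions for the example, not a standard schema.
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    HUMAN = "human"
    AI_GENERATED = "ai_generated"
    UNVERIFIED = "unverified"  # default under a zero-trust posture


@dataclass
class CatalogueEntry:
    dataset_id: str
    provenance: Provenance = Provenance.UNVERIFIED


catalogue = [
    CatalogueEntry("customer-master", Provenance.HUMAN),
    CatalogueEntry("scraped-web-corpus", Provenance.AI_GENERATED),
    CatalogueEntry("partner-feed"),  # provenance unknown, so untrusted
]

# A regulator asking for "AI-free" data would be served from this view.
ai_free = [e.dataset_id for e in catalogue if e.provenance is Provenance.HUMAN]
print(ai_free)  # ['customer-master']
```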

Chan forecast that active metadata management practices will become a key differentiator in this future, enabling organisations to analyse, alert on, and automate decision-making across their various data assets.

Such practices could enable real-time alerting when data becomes stale or needs to be recertified, helping organisations identify when business-critical systems may be about to be exposed to an influx of unreliable, AI-generated data.
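A hedged sketch of that alerting logic – assuming a fixed 90-day recertification window and invented dataset names, since Gartner does not prescribe a specific mechanism – might look like this:

```python
# Active-metadata alerting sketch: flag datasets whose last certification
# has lapsed so they can be recertified before business-critical systems
# consume them. Thresholds and dataset names are illustrative assumptions.
from datetime import datetime, timedelta, timezone

FRESHNESS_WINDOW = timedelta(days=90)  # assumed recertification policy

last_certified = {
    "sales-forecast-features": datetime(2026, 1, 10, tzinfo=timezone.utc),
    "scraped-web-corpus": datetime(2025, 6, 1, tzinfo=timezone.utc),
}


def stale_datasets(certifications: dict, now: datetime | None = None) -> list:
    """Return the names of datasets whose certification has expired."""
    now = now or datetime.now(timezone.utc)
    return [name for name, ts in certifications.items()
            if now - ts > FRESHNESS_WINDOW]


for name in stale_datasets(last_certified):
    # In a real pipeline this would raise an alert or block ingestion.
    print(f"ALERT: {name} needs recertification before further use")
```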

Managing the risks

According to Gartner, there are several other ways organisations can manage and mitigate the risks of untrustworthy AI data.

Business leaders may wish to consider establishing a dedicated AI governance leadership role covering risk management, compliance and zero-trust. Ideally, this chief AI governance officer (CAIGO) should be empowered to work closely with data and analytics (D&A) teams.

Further to this, organisations should create cross-functional teams bringing together D&A and cyber security staff to run data risk assessments that identify AI-generated data risks, and to determine which can be addressed under current policies and which need new strategies. These teams should build on existing D&A governance frameworks, focusing on updating security, metadata management and ethics-related policies to address these new risks.
