
Generative and agentic AI in security: What CISOs need to know

AI is introducing new risks that existing evaluation and governance approaches were never designed to manage, creating a widening gap between what AI-backed security tools promise and what can be realistically controlled.

Artificial intelligence (AI) is now embedded across almost every layer of the modern cyber security stack. From threat detection and identity analytics to incident response and automated remediation, AI-backed capabilities are no longer emerging features but baseline expectations. For many organisations, AI has become inseparable from how security tools operate.

Yet as adoption accelerates, many chief information security officers (CISOs) are discovering an uncomfortable reality. While AI is transforming cyber security, it is also introducing new risks that existing evaluation and governance approaches were never designed to manage. This has created a widening gap between what AI-backed security tools promise and what organisations can realistically control.

When “AI-powered” becomes a liability

Security leaders are under pressure to move quickly. Vendors are racing to embed generative and agentic AI into their platforms, often promoting automation as a solution to skills shortages, alert fatigue, and response latency. In principle, these benefits are real, but many AI-backed tools are being deployed faster than the controls needed to govern them safely.

Once AI is embedded in security platforms, oversight becomes harder to enforce. Decision logic can be opaque, model behaviour may shift over time, and automated actions can occur without sufficient human validation. When failures occur, accountability is often unclear, and tools designed to reduce cyber risk can, if poorly governed, amplify it.

Gartner’s 2025 Generative and Agentic AI survey highlights this risk, with many organisations that have deployed AI tools reporting gaps in oversight and accountability. The challenge grows with agentic AI: systems capable of making multi-step decisions and acting autonomously. In security contexts, this can include dynamically blocking users, changing configurations, or triggering remediation workflows at machine speed. Without enforceable guardrails, small errors can cascade quickly, increasing operational and business risk.
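To make the idea of an enforceable guardrail concrete, the sketch below shows one simple pattern: automated actions below an agreed risk threshold run at machine speed, while higher-risk actions are held for human approval. All names and thresholds here are illustrative assumptions, not a reference to any vendor product.

```python
# Minimal sketch of an action guardrail for an agentic security workflow.
# RISK_THRESHOLD, ProposedAction and the example actions are hypothetical.
from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # actions scored above this require human approval


@dataclass
class ProposedAction:
    description: str   # e.g. "block user account jdoe"
    risk_score: float  # 0.0 (benign) to 1.0 (highly disruptive)


def enforce_guardrail(action: ProposedAction, approved_by_human: bool) -> bool:
    """Return True only if the action may be executed automatically."""
    if action.risk_score >= RISK_THRESHOLD and not approved_by_human:
        print(f"HELD for review: {action.description} (risk {action.risk_score:.2f})")
        return False
    print(f"Executing: {action.description} (risk {action.risk_score:.2f})")
    return True


# A low-risk action runs automatically; a high-risk one is held for a person.
enforce_guardrail(ProposedAction("quarantine suspicious attachment", 0.3), approved_by_human=False)
enforce_guardrail(ProposedAction("disable domain admin account", 0.9), approved_by_human=False)
```

The value of this pattern is that the constraint is enforced in code at the point of action, rather than relying on the agent to follow a written policy.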

Why traditional buying criteria fall short

Despite this shift, most security procurement processes still rely on familiar criteria such as detection accuracy, feature breadth and cost. These remain important, but they are no longer sufficient. What is often missing is a rigorous assessment of trust, risk and accountability in AI-driven systems. Buyers frequently lack clear answers about how AI decisions are made, how training and operational data are protected, how AI model, application and agent behaviour is monitored over time, and how automated actions can be constrained or overridden when risk thresholds are exceeded. In the absence of these controls, organisations are effectively accepting black-box risk.

This is why an AI Trust, Risk and Security Management (AI TRiSM) framework becomes increasingly relevant for CISOs. AI TRiSM shifts governance away from static policies and towards enforceable technical controls that operate continuously across AI systems. It recognises that governance cannot rely on intent alone when AI systems are dynamic, adaptive and increasingly autonomous.

From policy to enforceable control

One of the most persistent misconceptions about AI governance is that policies, training and ethics committees are sufficient. While these elements remain important, they do not scale in environments where AI systems make decisions in real time. Effective governance requires controls that are embedded directly into workflows. These controls must validate data before it is used, monitor AI model, application and agent behaviour as it evolves, enforce policies contextually rather than retrospectively, and provide transparent reporting for audit, compliance and incident response.
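A minimal sketch of what such an embedded control might look like follows, assuming a simple workflow in which input data is validated, a contextual policy is checked, and every decision is written to an audit log. The field names, policy rules and sources are assumptions made purely for illustration.

```python
# Sketch of an in-workflow control: validate input, apply a contextual policy
# check, and emit an audit record for every decision. Illustrative only.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_control_audit")


def validate_input(event: dict) -> bool:
    """Reject malformed or untrusted data before the AI system consumes it."""
    return isinstance(event.get("user"), str) and event.get("source") in {"edr", "siem"}


def policy_allows(event: dict, proposed_action: str) -> bool:
    """Contextual policy: only containment actions on workstations are auto-approved."""
    return proposed_action == "isolate_endpoint" and event.get("asset_type") == "workstation"


def enforce(event: dict, proposed_action: str) -> bool:
    allowed = validate_input(event) and policy_allows(event, proposed_action)
    # Transparent reporting: every decision is recorded for audit, compliance
    # and incident response, whether it was allowed or blocked.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": proposed_action,
        "allowed": allowed,
        "event": event,
    }))
    return allowed


enforce({"user": "jdoe", "source": "edr", "asset_type": "workstation"}, "isolate_endpoint")
enforce({"user": "jdoe", "source": "edr", "asset_type": "domain_controller"}, "isolate_endpoint")
```

The point is not the specific rules, which will differ by organisation, but that validation, policy enforcement and reporting happen inside the workflow rather than in a document reviewed after the fact.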


The rise of “guardian” capabilities

Independent guardian capabilities are a notable step forward in AI governance. Operating separately from AI systems, they continuously monitor, enforce, and constrain AI behaviour, helping organisations maintain control as AI systems become more autonomous and complex.
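One way to picture a guardian capability is as a small, separate monitor that observes what an agent does and intervenes when its behaviour drifts beyond an agreed baseline. The sketch below, with assumed thresholds and action names, flags an unusual spike in blocking actions.

```python
# Sketch of an independent "guardian" monitor that sits outside the AI agent,
# observes its actions, and alerts when behaviour exceeds an agreed baseline.
# Thresholds, window size and action names are illustrative assumptions.
from collections import deque
from typing import Deque


class GuardianMonitor:
    def __init__(self, max_blocks_per_window: int = 5, window_size: int = 20):
        self.max_blocks = max_blocks_per_window
        self.recent: Deque[str] = deque(maxlen=window_size)

    def observe(self, action: str) -> None:
        """Record an agent action and alert if blocking activity spikes."""
        self.recent.append(action)
        block_count = sum(1 for a in self.recent if a == "block_user")
        if block_count > self.max_blocks:
            print(f"ALERT: {block_count} block actions in last {len(self.recent)} "
                  "observations; pausing agent pending review")


monitor = GuardianMonitor()
for action in ["triage_alert"] * 3 + ["block_user"] * 7:
    monitor.observe(action)
```

Because the monitor runs independently of the agent it watches, a failure or compromise of the agent does not silently disable the oversight.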

AI is already delivering value, improving pattern recognition, behavioural analytics and the prioritisation of security signals. But speed without oversight introduces risk. Even the most advanced AI cannot fully replace human judgement, particularly in automated response.

The true competitive advantage will go to organisations that govern AI effectively, not just adopt it quickly. CISOs should prioritise enforceable controls, operational transparency, and independent oversight. In environments where AI is both a defensive asset and a new attack surface, disciplined governance is essential for sustainable cyber security.

Gartner analysts will further explore how AI-backed security tools and governance strategies are reshaping cyber risk management at the Gartner Security & Risk Management Summit in London, from 22–24 September 2026.

Avivah Litan is distinguished vice president analyst at Gartner
