
Agentic AI requires rethink of cloud security strategy

Security leaders discuss the rise of agentic AI, warning that autonomous agents operating at machine speed will require organisations to move away from static protection towards behavioural monitoring and automated reasoning

The rise of agentic artificial intelligence (AI) is forcing a fundamental rethink of cloud security strategies, moving the focus from perimeter defence to behavioural analysis and automated reasoning.

Speaking at AWS re:Invent 2025 in Las Vegas, a panel of Amazon Web Services (AWS) security leaders discussed how AI agents differ from generative AI (GenAI) and why security practitioners must transition from being consumers of AI to builders of security tooling.

While GenAI creates content, agents execute tasks autonomously, creating a risk profile that Gee Rittenhouse, vice-president of security services at AWS, likened to that of a human insider.

“From a detection perspective, we move from the classical way of looking at it – protecting a workload – to much more of a behavioural perspective,” he said.

“An independent agent acting in a non-deterministic way really does look like a potential insider threat.”

Rittenhouse noted that because the boundary of an agent resides deep inside the application, security teams must merge traditional security with observability.

“It moves us into behavioural and anomaly detections,” he said. “It’s hard to do agentic security if you’re not observing it.”
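To make the point concrete, behavioural detection for agents can start from nothing more than a baseline of observed actions. The sketch below is illustrative only – it assumes agent activity is already logged as (agent_id, action) events, and the action names are placeholders rather than any AWS API – but it shows the shift from protecting a workload to flagging out-of-character behaviour.

```python
# Minimal sketch: flag agent actions that fall outside an observed baseline.
# Assumes agent activity is already logged as (agent_id, action) events;
# the event source and action names are illustrative, not an AWS API.
from collections import defaultdict


class AgentBaseline:
    def __init__(self):
        # Actions observed per agent during a learning period
        self.seen = defaultdict(set)

    def learn(self, agent_id: str, action: str) -> None:
        self.seen[agent_id].add(action)

    def is_anomalous(self, agent_id: str, action: str) -> bool:
        # An action never observed for this agent is treated as an anomaly
        # and would be routed to a human analyst for review.
        return action not in self.seen[agent_id]


baseline = AgentBaseline()
for event in [("support-agent", "lookup_order"), ("support-agent", "issue_refund")]:
    baseline.learn(*event)

print(baseline.is_anomalous("support-agent", "delete_customer_record"))  # True -> alert
```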

Security at machine speed

Amy Herzog, vice-president and chief information security officer (CISO) at AWS, warned that while the basics of security – identity management and least privilege – remain important and unchanged, the stakes are higher when humans are removed from the loop.

“If you think about those basics and the risks of not doing them correctly in a slightly different way with agents, they’re almost more important, because if a human is not involved in the actions of the systems, if something goes wrong, it might go wrong much more quickly,” she said.

Herzog advised organisations to think about security at machine speed rather than human speed, urging builders to ensure credentials are short-lived and tightly scoped to prevent exploitation by autonomous agents, especially in scenarios where human intervention is minimal.
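In practice, short-lived and tightly scoped credentials of the kind Herzog described can be issued through AWS Security Token Service. The sketch below is one possible approach, not a prescribed pattern: the role ARN, session name, bucket and allowed actions are placeholders, and the session policy and 15-minute duration simply illustrate narrowing an agent's permissions to a single task.

```python
# Minimal sketch of issuing short-lived, tightly scoped credentials to an agent
# via AWS STS. The role ARN, session name and allowed actions are placeholders.
import json

import boto3

sts = boto3.client("sts")

# A session policy further restricts the assumed role's permissions to exactly
# what this agent needs for one task; DurationSeconds keeps the credentials short-lived.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/reports/*"],
    }],
}

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-task-role",  # placeholder role
    RoleSessionName="support-agent-task",
    DurationSeconds=900,  # 15 minutes, the minimum AssumeRole permits
    Policy=json.dumps(session_policy),
)["Credentials"]
```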

Neha Rungta, director of applied science at AWS, emphasised that securing agents is about defining the “bounds of autonomy” and establishing trust.

“If you have a support agent that is allowed to do refunds, how much refund would you be okay with that agent doing autonomously? Would it be $100? $200?” she asked. “The difference is the level of trust, the boundaries of what they are allowed to do, and how do you ensure that they don’t go rogue?”
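Rungta's refund example maps naturally onto a guardrail in code. The snippet below is a minimal sketch of such a "bounds of autonomy" check, with the $100 limit and the escalation hook chosen purely for illustration.

```python
# Minimal sketch of a "bounds of autonomy" guardrail for a support agent.
# The refund limit and the escalation hook are illustrative assumptions.
AUTONOMOUS_REFUND_LIMIT = 100.00  # e.g. the $100 figure from the example above


def handle_refund(amount: float, approve_autonomously, escalate_to_human):
    """Let the agent act alone only within its agreed bounds."""
    if 0 <= amount <= AUTONOMOUS_REFUND_LIMIT:
        return approve_autonomously(amount)
    # Anything beyond the trust boundary is handed to a person.
    return escalate_to_human(amount)


# Example wiring with stand-in callbacks
print(handle_refund(45.0, lambda a: f"refunded {a}", lambda a: f"escalated {a}"))
print(handle_refund(250.0, lambda a: f"refunded {a}", lambda a: f"escalated {a}"))
```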


Rungta championed the use of automated reasoning – the use of mathematical proofs to verify system correctness – as a critical tool for keeping agents within guardrails. She also pointed to the newly launched AWS Security Agent, which aims to bake security requirements directly into the design process during application development.
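To illustrate what automated reasoning means here, the sketch below uses the open source Z3 solver (installed with pip install z3-solver) rather than any AWS tool, and checks the hypothetical refund guardrail above: the solver searches for a refund the policy would approve autonomously that exceeds the limit, and an "unsat" result is a proof that no such case exists.

```python
# Minimal sketch of automated reasoning over the refund guardrail sketched above,
# using the Z3 SMT solver. We ask for a counterexample: a request the policy
# approves autonomously that exceeds the $100 limit. "unsat" means none exists.
from z3 import And, Real, Solver, unsat

amount = Real("amount")
approved_autonomously = And(amount >= 0, amount <= 100)  # the policy under test

solver = Solver()
solver.add(approved_autonomously, amount > 100)  # look for a violating refund

if solver.check() == unsat:
    print("Proved: no autonomously approved refund can exceed $100")
else:
    print("Counterexample:", solver.model())
```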

Hart Rossman, vice-president in the office of the chief information security officer at AWS, argued that the security industry is currently too focused on consuming AI, such as asking a chatbot to summarise a log, rather than building with it.

“We’re seeing more security practitioners defaulting to the consumer, like getting a large language model (LLM) to answer some questions,” he said. “And with the rise of agentic tech, now’s a good time to also be a builder.”

Rossman also described a future where traditional security consoles, such as those for managing web application firewalls, give way to personalised, agent-driven experiences. He cited the AWS security incident response agent as an example, noting that it can reduce investigation times from hours or days to minutes by proactively identifying evidence and suggesting courses of action.

Despite fears that bad actors will use AI to scale attacks, Rittenhouse was optimistic that the technology favours the defender, noting that AI can enhance defensive capabilities by improving data analysis and threat detection.

For example, he suggested that LLMs allow defenders to move from being overwhelmed by security data to actively managing it, as the models excel at maintaining history and context across vast datasets.

“It’s pretty easy for an LLM or an agent to sift through massive quantities of data looking for things and then act on it,” he added. “So, it really does help the defender in a way that was hard to do before. Before, customers had to break the problem down into a lot of little areas, which led to potential gaps.”
