
Looker_Studio - stock.adobe.com
Are AI agents a blessing or a curse for cyber security?
Agentic AI is touted as a helpful tool for managing tasks, and cyber criminals are already taking advantage. Should information security teams look to AI agents to keep up?
Artificial intelligence (AI) and AI agents are seemingly everywhere. Be it with conference show floors or television adverts featuring celebrities, suppliers are keen to showcase the technology, which they tell us will help make our day-to-day lives much easier. But what exactly is an AI agent?
Fundamentally, AI agents – also known as agentic AI models – are generative AI (GenAI) and large language models (LLMs) used to automate tasks and workflows.
For example, need to book a room for a meeting at a particular office at a specific time for a certain number of people? Simply ask the agent to do so and it will act, plan and execute on your behalf, identifying a suitable room and time, then sending the calendar invite out to your colleagues on your behalf.
Or perhaps you’re booking a holiday. You can detail where you want to go, how you want to get there, add in any special requirements and ask the AI agent for suggestions that it will duly examine, parse and detail in seconds – saving you both time and effort.
“We’re going to be very dependent on AI agents in the very near future – everybody’s going to have an agent for different things,” says Etay Maor, chief security strategist at network security company Cato Networks. “It’s super convenient and we’re going to see this all over the place.
“The flip side of that is the attackers are going to be looking heavily into it, too,” he adds.
Unforeseen consequences
When new technology appears, even if it’s developed with the best of intentions, it’s almost inevitable that criminals will seek to exploit it.
We saw it with the rise of the internet and cyber fraud, we saw it with the shift to cloud-based hybrid working, and we’ve seen it with the rise of AI and LLMs, which cyber criminals quickly jumped on to write more convincing phishing emails. Now, cyber criminals are exploring how to weaponise AI agents and autonomous systems, too.
“They want to generate exploits,” says Yuval Zacharia, who until recently was R&D director at cyber security firm Hunters, and is now a co-founder at a startup in stealth mode. “That’s a complex mission involving code analysis and reverse engineering that you need to do to understand the codebase then exploit it. And that’s exactly the task that agentic AI is good at – you can divide a complex problem into different components, each with specific tools to execute it.”
Cyber security consultancy Reversec has published a wide range of research on how GenAI and AI agents can be exploited by malicious hackers, often by taking advantage of how new the technology is, meaning security measures may not fully be in place – especially if those developing AI tools want to ensure their product is released ahead of the competition.
For example, attackers can exploit prompt injection vulnerabilities to hijack browser agents with the aim of stealing data or other unauthorised actions. Or, alternatively, Reversec has demonstrated how an AI agent can be manipulated through prompt injection attacks to encourage outputs to include phishing links, social engineering and other ways of stealing information.
“Attackers can use jailbreaking or prompt injection attacks,” says Donato Capitella, principal security consultant at Reversec. “Now, you give an LLM agency – all of a sudden this is not just generic attacks, but it can act on your behalf: it can read and send emails, it can do video calls.
“An attacker sends you an email, and if an LLM is reading parts of that mailbox, all of a sudden, the email contains instructions that confuse the LLM, and now the LLM will steal information and send information to the attacker.”
Read more about agentic AI
- A Salesforce study says chief financial officers have shifted from agentic artificial intelligence caution to putting the technology front and centre of their business strategies.
- When enterprises multiply AI, to avoid errors or even chaos, strict rules and guardrails need to be put in place from the start.
- Agentic AI's autonomous nature and its ability to access multiple data layers bring heightened risk. Learn how to ensure its deployment meets compliance standards.
Agentic AI is designed to help users, but as AI agents become more common and more sophisticated, that’s also going to open the door to attackers looking to exploit them to aid with their own goals – especially if legitimate tools aren’t secured correctly.
“If I’m a criminal and I know you’re using an AI agent which helps you with managing files on your network, for me, that’s a way into the network to deploy ransomware,” says Maor. “Maybe you’ll have an AI agent which can leave voice messages for you: Your voice? Now it’s identity fraud. Emails are business email compromise (BEC) attacks.
“The fact is a lot of these agents are going to have a lot of capabilities with the things they can do, and not too many guardrails, so criminals will be focusing on it,” he warns, adding that “there’s a continuous lowering of the bar of what it takes to do bad things”.
Fighting agentic AI with agentic AI
Ultimately, this means agentic AI-based attacks is something else chief information security officers (CISOs) and cyber security teams need to consider on top of every other challenge they currently face. Perhaps one answer to this is for defenders to take advantage of the automation provided by AI agents, too.
Zacharia believes so – she even built an agentic AI-powered threat-hunting tool in her spare time.
“It was about a side-project I did in my spare time at the weekends – I’m really geeky,” she says. “It was about exploring the world of AI agents because I thought it was cool.”
Cyber attacks are constantly evolving, and rapid response to emerging threats can be incredibly difficult, especially in an area where AI agents could be maliciously deployed to uncover new exploits en masse. That means identifying security threats, let alone assessing the impact and applying the mitigations can take a lot of time – especially if cyber security staff are doing it manually.
“What I was trying to do was automate this with AI agents,” says Zacharia. “The architecture built on top of multiple AI agents aim to identify emerging threats and prioritise according to business context, data enrichment and things that you care about, then they create hunting and viability queries that will help you turn those into actionable insights.”
That data enrichment comes from multiple sources. They include social media trends, CVEs, Patch Tuesday notifications, CISA alerts and other malware advisories.
The AI prioritises this information according to severity, with the AI agents acting upon that information to help perform tasks – for example, by downloading critical security updates – while also helping to relieve some of the burden on overworked cyber security staff.
“Cyber security teams have a lot on their hands, a lot of things to do,” says Zacharia. “They’re overwhelmed by the alerts they keep getting from all the security tools that they have. That means threat hunting in general, specifically for emergent threats, is always second priority.”
She points to incidents like Log4j, a critical zero-day vulnerability in widely used software that was almost immediately exploited by sophisticated threat actors upon disclosure.
“Think how much damage this could cause in your organisation if you’re not finding these on time,” says Zacharia. “And that’s exactly the point,” she adds, referring to how agentic AI can help to swiftly identify and remedy cyber security vulnerabilities and issues.
Streamlining the SOC with agentic AI
Zacharia’s far from alone in believing agentic AI could be of great benefit to cyber security teams.
“Think of a SOC [security operations centre] analyst sitting in front of an incident and he or she needs to start investigating it,” says Maor. “They start with looking at the technical data, to see if they’ve seen something like it in the past.”
What he’s describing is the important – but time-consuming – work SOC analysts do everyday. Maor believes adding agentic AI tools to the process can streamline their work, ultimately making them more effective at detecting cyber threats.
“An AI model can examine the incident and then detail similar incidents, immediately suggesting an investigation is needed,” he says. “There’s also the predictive model that tells the analyst what they don’t need to investigate. This cuts down the grunt work that needs to be done – sometimes hours, sometimes days of work – in order to reach something of value, which is nice.”
But while it can provide support, it’s important to note that agentic AI isn’t a silver bullet that is going to eliminate cyber security threats. Yes, it’s designed to make the task of monitoring threat intelligence or applying security updates easier and more efficient, but people remain key to information security, too. People are needed to work in SOCs, and information security staff are still required to help employees across the rest of the organisation remain alert and secure to cyber threats.
Especially as AI continues to evolve and improve, and attackers will continue to look to exploit it – and it’s up to the defenders to counter them.
“It’s a cat and mouse situation,” says Zacharia. “Both sides are adopting AI. But as an attacker, you only need one way to sneak in. As a defender, you have to protect the entire castle. Attackers will always have the advantage, that’s the game we’re playing. But I do think that both sides are getting better and better.”