
Agentic AI a target-rich zone for cyber attackers in 2025

At Black Hat USA 2025, CrowdStrike warns that cyber criminals and nation-states are weaponising GenAI to scale attacks and target AI agents, turning autonomous systems against their makers

Cyber criminals and nation-states hostile to Western countries are enthusiastically weaponising artificial intelligence (AI) to carry out attacks, and are targeting AI agents as a novel attack vector, according to cyber security company CrowdStrike.

The supplier’s 2025 threat hunting report, being published at the Black Hat USA conference in Las Vegas this week, says cyber attackers are “operationalising GenAI [generative artificial intelligence] to scale operations and accelerate attacks – and increasingly targeting the autonomous AI agents reshaping enterprise operations”.

Adam Meyers, head of counter adversary operations at CrowdStrike, said: “The AI era has redefined how businesses operate, and how adversaries attack. We’re seeing threat actors use GenAI to scale social engineering, accelerate operations and lower the barrier to entry for hands-on-keyboard intrusions.

“At the same time, adversaries are targeting the very AI systems organisations are deploying. Every AI agent is a superhuman identity: autonomous, fast and deeply integrated, making them high-value targets. Adversaries are treating these agents like infrastructure, attacking them the same way they target SaaS [software-as-a-service] platforms, cloud consoles and privileged accounts. Securing the AI that powers business is where the cyber battleground is evolving.”

The report states that attackers are targeting the tools used to build AI agents: “Autonomous systems and machine identities have become a core part of the enterprise attack surface.”

CrowdStrike’s analysts, who track 265 attackers and attack groups, found that the North Korean group Famous Chollima used GenAI to automate every phase of its insider attack programme, from building fake resumes and conducting deepfake interviews to completing technical tasks under false identities. The analysts also found that the Russian group Ember Bear has used GenAI to help boost its pro-Russia propaganda.

Chinese hackers have gone big on the cloud, according to the supplier. Genesis Panda and Murky Panda managed to evade detection through cloud misconfigurations and trusted access. Cloud intrusions were up by 136%, with Chinese attackers responsible for 40% of those, according to CrowdStrike.

Not to be left out, the Iranian group Charming Kitten has used large language models (LLMs) to write phishing email lures targeting US and European organisations.

Agentic AI under attack

The newest development, however, is the emergence of agentic AI as an attack surface in its own right. The supplier says it has seen attackers exploiting vulnerabilities in tools used to build AI agents, gaining unauthenticated access, gathering credentials, and deploying malware and ransomware.

“These attacks demonstrate how the agentic AI revolution is reshaping the enterprise attack surface – turning autonomous workflows and non-human identities into the next frontier of adversary exploitation,” says the CrowdStrike report.

Below the level of nation-state or affiliated attacks, the report says more mundane cyber attackers, such as criminals, are using AI to “generate scripts, solve technical problems and build malware – automating tasks that once required advanced expertise. Funklocker and SparkCat are early proof points that GenAI-built malware is no longer theoretical.”

Scattered Spider, notorious in the UK for attacking Marks & Spencer, has used techniques such as helpdesk impersonation to reset credentials, bypass multifactor authentication (MFA), and move laterally across SaaS and cloud environments. In one incident, the group moved from initial access to deploying ransomware in under 24 hours, according to CrowdStrike.

Read more about AI and cyber security

  • The advantages and disadvantages of AI in cyber security: To meet growing cyber security needs, many organisations are turning to AI. However, without the right strategies in place, AI can introduce risks alongside its benefits.
  • Can AI be secure? International cyber security experts call for global cooperation and proactive strategies to address the security challenges posed by artificial intelligence.
  • Security leaders grapple with AI-driven threats: Experts warn of artificial intelligence’s dual role in both empowering and challenging cyber defences, and call for intelligence sharing and the need to strike a balance between AI-driven innovation and existing security practices.
