
ESET: Don’t fear the ‘AI Terminator’, but prepare for agent risks

While fully autonomous hacking bots remain a distant reality, an ESET expert warns that AI is quietly supercharging phishing schemes and creating new vulnerabilities inside organisations

The prevailing anxiety in boardrooms is that artificial intelligence (AI), fuelled by recent advances in frontier models, will soon unleash a wave of autonomous cyber attacks against corporate defences.

Tony Anscombe, chief security evangelist at cyber security firm ESET, would like everyone to take a collective breath. “Some people I talk to think, ‘AI is attacking us’,” Anscombe said in a recent interview with Computer Weekly. “No, AI is not attacking you. We’re not quite a Terminator yet.”

Instead of deploying omnipotent digital adversaries, modern cyber criminals are acting like efficiency experts, integrating AI tools to automate mundane but highly effective tasks: drafting flawless phishing emails, mimicking executives in messages and automating the hunt for stolen credentials.

The reason hackers haven’t unleashed fully autonomous AI models to attack networks, Anscombe argued, boils down to simple economics: they don’t need to. 

“There’s a lot of low-hanging fruit already,” he said, noting that basic internet scans continually reveal poorly secured remote-access systems and virtual private networks. “There are still a lot of organisations out there that publicly expose weaknesses. That means the cyber criminal has too much opportunity already.”

But the seeds of next-generation, AI-driven malware are already being sown, sometimes by accident. Anscombe pointed to a recent incident in which researchers at a New York university developed proof-of-concept malware that drives its behaviour through prompts to an AI model. Once inside a system, the malware could dynamically analyse its digital environment, rewrite its own scripts on the fly and independently decide whether to steal data. The researchers, however, inadvertently published the source code to a public malware-testing database.

“Once you put something out in the public domain, then somebody else can take it, reverse engineer it, modify it and reuse it for their own purposes,” Anscombe said. “Suddenly, you’ve got the work already done for cyber criminals.”

While mainstream hackers are not yet using such tools, ESET researchers have begun tracing similar sophisticated tactics back to state-aligned hacking groups.

The view from the inside

For chief information security officers (CISOs), external hackers are only half the battle. Faced with intense pressure from chief executives and corporate boards, many senior leaders are doubling down on AI to stay ahead of competitors.

The immediate risk, Anscombe noted, comes from well-meaning employees pasting sensitive corporate data or customer information into generative AI (GenAI) tools, potentially falling foul of privacy laws.

As such, Anscombe called for CISOs to establish clear governance policies on the use of tools such as OpenAI’s ChatGPT or Microsoft Copilot to prevent staff from sharing sensitive personal or corporate data with public models.

When it comes to AI agents, Anscombe noted that because agents act independently, they could expand a company’s attack surface and inadvertently grant access to sensitive data or facilitate lateral movement by bad actors.

To mitigate these risks, security teams must treat AI software less as traditional code and more as digital workers. “You need to make sure the agents have permissions in the same way that employees have limitations on their access rights,” Anscombe said. “You need to treat them, in some ways, like humans.”
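To make that idea concrete, the sketch below, which is illustrative only and not from ESET or any particular agent framework, shows one way to give an agent a named identity with a deny-by-default permission set, mirroring how an employee's access rights are scoped. Every class, function and permission name here is hypothetical.

# A minimal sketch of least-privilege scoping for an AI agent. All names
# (AgentIdentity, "crm:read", etc.) are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    name: str
    permissions: frozenset  # explicit grants, e.g. {"crm:read"}

def require(agent: AgentIdentity, permission: str) -> None:
    # Deny by default: an action runs only if it was explicitly granted.
    if permission not in agent.permissions:
        raise PermissionError(f"{agent.name} lacks '{permission}'")

def read_customer_record(agent: AgentIdentity, customer_id: str) -> dict:
    require(agent, "crm:read")          # scoped, read-only access
    return {"id": customer_id}          # stub lookup for the example

def issue_refund(agent: AgentIdentity, order_id: str) -> None:
    require(agent, "payments:refund")   # high-risk action needs its own grant

# A support agent can read CRM data but cannot move money.
support_bot = AgentIdentity("support-bot", frozenset({"crm:read"}))
read_customer_record(support_bot, "C-42")   # allowed
# issue_refund(support_bot, "O-99")         # would raise PermissionError

The design choice mirrors Anscombe's point: the agent is treated like a new hire whose access is enumerated up front, rather than like code that inherits whatever the host process can touch.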

As AI risks multiply, the CISO’s job description is evolving too. “I think the CISO is fast becoming a business operations person, and they need to start understanding the business flow and the operational flow of the business to be able to help protect it,” Anscombe observed.

On the operational level, security operations centres (SOCs) are starting to rely heavily on machine learning and AI to filter massive amounts of telemetry and prevent human analysts from being overwhelmed. AI systems act like investigators, gathering evidence, highlighting anomalies and assigning probability scores, so the human analyst can make the final determination on sophisticated threats.
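As a rough illustration of that triage pattern, the hypothetical sketch below scores telemetry events with an off-the-shelf anomaly detector and escalates only the highest-scoring ones to a human queue. The features, threshold and model choice are assumptions made for the example, not a description of any vendor's SOC.

# A toy version of ML-assisted SOC triage: score events, escalate the top 1%.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)
# Simulated telemetry features: [logins_per_hour, bytes_out_mb]
normal = rng.normal(loc=[5, 20], scale=[2, 5], size=(500, 2))
suspect = np.array([[40, 900]])   # e.g. rapid logins plus bulk data exfiltration
events = np.vstack([normal, suspect])

# Fit on baseline traffic; higher score below means more anomalous.
model = IsolationForest(random_state=0).fit(normal)
scores = -model.score_samples(events)

ESCALATE = np.quantile(scores, 0.99)   # analysts only see the top 1% of events
for i in np.where(scores >= ESCALATE)[0]:
    print(f"event {i}: anomaly score {scores[i]:.2f} -> queue for human review")

The point of the pattern is the division of labour Anscombe describes: the model filters and ranks, while the final determination on anything escalated stays with the human analyst.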

However, organisations that lack a dedicated team of specialised security analysts, particularly small and medium-sized enterprises (SMEs), can consider outsourcing to managed detection and response (MDR) providers rather than treating security tooling as a compliance checklist, Anscombe said.

“You can’t deploy something like EDR [endpoint detection and response] and then forget it. It’s not a tick box. It needs to be managed and operated, otherwise it’s ineffective,” he added.

Ultimately, Anscombe hopes to separate the existential dread surrounding AI from reality. He pointed to the Indian government’s use of facial recognition at a New Delhi train station, a project that successfully identified and reunited thousands of missing children with their parents in a matter of weeks. “We shouldn’t fear technology. We should make sure we use it responsibly,” he said.

Part of that responsibility, he said, is dialling back the marketing hype that fuels public anxiety. “I saw recently an oven that claims to cook your dinner using AI,” he said, explaining that the appliance merely uses a basic moisture sensor to know when a cake is baked. “That’s a lookup table, not AI. The overuse of the word ‘AI’ doesn’t help the fear issue.”

