
Using AI to manage insider risk amid Middle East conflict

As geopolitical tensions reshape the cyber threat landscape across the region, organisations are turning to artificial intelligence-driven behaviour analytics, investigative automation and monitoring of AI agents to detect insider risk faster and strengthen operational resilience

The escalation in tensions involving Israel, the US and Iran has reinforced a broader reality for security leaders across the Middle East: geopolitical instability not only raises the risk of external attacks but also changes internal risk dynamics in ways many organisations are not prepared to manage.

As enterprises contend with remote work shifts, dispersed access patterns, supply chain dependencies and the growing use of business tools powered by artificial intelligence (AI), insider risk has become more complex, less predictable and harder to detect through conventional means. In this environment, AI is emerging not simply as a cyber security enhancement, but as a practical tool for managing uncertainty at scale.

In an interview with Computer Weekly, Mazen Adnan Dohaji, senior vice-president and general manager of IMETA at Exabeam, notes that conflict doesn’t necessarily increase the number of malicious insiders, but that it creates more operational noise at precisely the moment defenders need clarity.

“The real challenge for defenders is not simply that conflict creates more cyber risk,” says Dohaji. “It is that conflict creates more noise, more edge cases, and more ambiguity at exactly the moment security teams need to make faster decisions.”

That distinction matters, particularly in the Middle East, where organisations are balancing digital transformation ambitions with rising concerns around sovereignty, resilience and cyber preparedness. During periods of geopolitical tension, routine behaviours can suddenly look anomalous: users logging in from unfamiliar locations, contractors requiring temporary privileged access, or employees interacting with sanctioned and unsanctioned generative AI (GenAI) tools in ways security teams have limited visibility into.

Traditional insider threat programmes, built on static rules and manual investigations, often falter in these conditions. Behaviour, not alerts, is the new signal. “Security teams should focus less on expanding watchlists and more on understanding how normal behaviour changes under stress,” says Dohaji.


This is where AI-driven user and entity behaviour analytics (UEBA) gains importance. Machine learning can establish baselines for normal activity across employees, contractors, service accounts and privileged users. It helps identify subtle anomalies that may signal misuse, coercion, credential compromise or data exfiltration.

“Machine learning can establish baselines for both human and non-human activity, identify subtle anomalies, and raise risk as small signals begin to accumulate across identities and entities. That matters because insider risk is rarely a single dramatic event. More often, it emerges through a sequence of explainable but unusual actions that only become meaningful when viewed together. AI helps security teams connect those signals earlier, before misuse, compromise, or exfiltration becomes harder to contain,” says Dohaji.
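The accumulation of small signals that Dohaji describes can be illustrated with a minimal sketch. This is not Exabeam's implementation; it simply shows the general UEBA idea of building a per-identity baseline for one behavioural feature (here, a hypothetical daily data-transfer volume) and letting modest deviations accumulate into a risk score rather than alerting on any single event.

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviourBaseline:
    """Toy per-identity baseline with cumulative risk scoring.

    Works for human and non-human identities alike: the 'identity'
    key could be an employee, a service account or an AI agent.
    """

    def __init__(self):
        self.history = defaultdict(list)   # identity -> past daily values
        self.risk = defaultdict(float)     # identity -> accumulated risk

    def observe(self, identity, value):
        past = self.history[identity]
        if len(past) >= 5:                 # need some history first
            mu, sigma = mean(past), stdev(past)
            if sigma > 0:
                z = (value - mu) / sigma
                # Deviations beyond two standard deviations add risk;
                # the score only becomes meaningful as small anomalies
                # accumulate across events, mirroring how insider risk
                # emerges from a sequence of individually explainable
                # actions rather than one dramatic event.
                if z > 2:
                    self.risk[identity] += z - 2
        past.append(value)

    def flagged(self, threshold=3.0):
        """Identities whose accumulated risk has crossed the threshold."""
        return [i for i, r in self.risk.items() if r >= threshold]
```

A real deployment would track many features per identity and weight them, but even this sketch shows why a sudden spike in data movement by an otherwise quiet account stands out against its own baseline rather than against a global rule.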

Insider risk now includes machines

The rise of non-human identities is also transforming the discussion. As enterprises adopt AI agents, copilots and automated workflows to retrieve data and trigger actions, insider risk expands. It is no longer limited to employees.

“One of the biggest shifts in security operations is that insider risk is no longer limited to human actors,” explains Dohaji. “AI agents and automated workflows increasingly authenticate to systems, retrieve documents, call APIs [application programming interfaces] and trigger actions on behalf of users.”

For Middle East organisations accelerating AI adoption, particularly in sectors such as government, financial services and energy, this significantly expands the attack surface.

Compromised or over-privileged AI agents can create risks similar to those posed by human insiders, but at machine speed. That means organisations need visibility into agent behaviour, identity changes and privilege escalation, while linking human actions and machine actions into a unified investigative path.
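Linking human and machine actions into one investigative path can be sketched as a simple merge over event records. The field names below (`identity`, `on_behalf_of`, `timestamp`) are assumptions for illustration, not a real product schema: the point is that an agent's actions are attributed back to the user it acts for, so both appear in a single ordered timeline.

```python
from datetime import datetime

# Hypothetical event records: agent events carry an 'on_behalf_of'
# field naming the user whose delegated authority they use.
events = [
    {"timestamp": "2025-06-01T09:00", "identity": "alice",
     "action": "login"},
    {"timestamp": "2025-06-01T09:07", "identity": "copilot-3",
     "on_behalf_of": "alice", "action": "retrieve:finance_docs"},
    {"timestamp": "2025-06-01T09:05", "identity": "copilot-3",
     "on_behalf_of": "alice", "action": "api_call:export_customers"},
]

def unified_timeline(events, user):
    """Merge a user's own actions with those of agents acting for them,
    ordered by time, so an investigator sees one coherent sequence."""
    related = [e for e in events
               if e["identity"] == user or e.get("on_behalf_of") == user]
    return sorted(related,
                  key=lambda e: datetime.fromisoformat(e["timestamp"]))
```

Without this linkage, a compromised copilot's API calls and its owner's login would be investigated as unrelated incidents; with it, the delegation chain is visible at a glance.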

Dohaji insists that separating AI and insider risk domains is a mistake. “Security teams do not need a separate strategy for AI risk and another for insider risk,” he says. “Increasingly, they are the same problem.”

AI for investigation, not just detection

Beyond detection, AI is also reshaping the investigation layer. “The right tooling can automatically collect evidence, correlate related activity, build timelines, summarise cases and surface the entities most likely to require action,” he says. “In a stretched SOC [security operations centre], that is not a convenience feature. It is how teams protect analyst time.”


This is especially valuable for regional defenders juggling day-to-day threats with the uncertainty created by geopolitical events. The bigger lesson, Dohaji suggests, is that resilience in today’s threat environment is increasingly about context.

“The lesson from the Israel-US-Iran conflict is not that every employee becomes a threat during geopolitical turmoil,” he says. “It is that unstable operating conditions make intent harder to read, risky behaviour easier to hide and traditional detection models less effective.”

For organisations across the Middle East, that means turning AI from an innovation narrative into an operational discipline: instrumenting environments where work is actually happening, monitoring sanctioned AI use, building behavioural baselines and using automation to reduce analyst workload without removing human oversight.

It also means preparing for realistic scenarios, such as excessive data movement before an employee’s exit, abnormal off-hours access, or an AI agent suddenly expanding its access pattern.
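The three scenarios above can each be expressed as a simple detection rule. These are illustrative sketches under assumed field names (`timestamp`, `departure_date`, `mb_moved`, `resources`), not production logic; in practice such rules would feed into a behavioural risk score rather than fire as standalone alerts.

```python
from datetime import datetime

def off_hours(event, start=7, end=20):
    """Flag access outside assumed business hours (07:00-20:00)."""
    hour = datetime.fromisoformat(event["timestamp"]).hour
    return not (start <= hour < end)

def pre_exit_data_movement(event, mb_threshold=500):
    """Flag large data transfers by someone with a pending departure
    date on record - the classic pre-exit exfiltration pattern."""
    return (event.get("departure_date") is not None
            and event.get("mb_moved", 0) > mb_threshold)

def agent_scope_expansion(known_resources, event):
    """Flag an AI agent touching resources outside its historical set,
    i.e. a sudden expansion of its access pattern."""
    return set(event["resources"]) - set(known_resources)
```

Each check is trivial in isolation; the operational value comes from correlating them per identity over time, as discussed above.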

As Dohaji puts it: “Real resilience means giving defenders the ability to see changes in behaviour early, connect human and machine activity, investigate faster, and act before an anomaly becomes a breach.”
