willyam - stock.adobe.com

Like it or not, AI will transform cyber strategy in 2026

Bubble or no bubble, from cyber skills to defensive strategies to governance, risk and compliance, artificial intelligence will remake the cyber world in 2026

“Back in 2020,” says Anthony Young, CEO at managed security services provider (MSSP) Bridewell, “predictions that AI [artificial intelligence] would reshape defensive strategies seemed optimistic; today, they look understated.”

Although it is still difficult to quantify the precise extent to which AI is driving real-world cyber attacks, and how serious these attacks actually are, it is hard to argue with the notion that AI will come to underpin cyber defences.

Looking back at 2025, Aditya Sood, vice-president of security engineering and AI strategy at Aryaka, says: “AI-powered code generation sped up development but also introduced logic flaws when models filled gaps based on incomplete instructions. AI-assisted attacks became more customised and scalable, making phishing and fraud campaigns harder to detect.”

So if 2025 was the year that saw the groundwork laid for these foundations, 2026 will be the year the concrete starts to pour in earnest.

“The lesson [of 2025] wasn’t that AI is inherently unsafe; it was that AI amplifies whatever controls, or lack of controls, surround it. … AI security is more about the entire ecosystem, including LLMs [large language models], GenAI [generative AI] apps and services, AI agents and underlying infrastructure,” says Sood.

Maturing approaches

Rik Ferguson, security intelligence vice-president at Forescout, says the cyber industry’s approach to AI will mature this year.

“I expect to see more serious, less hype-driven adoption of AI on the defensive side: correlating weak signals across IT, OT [operational technology], cloud and identity, mapping and prioritising assets and exposures continuously, and reducing the cognitive load on analysts by automating triage,” says Ferguson.

Artificial intelligence is not just accelerating response; it is set to completely redefine how security professionals upskill, are deployed and ultimately how they are held accountable
Haris Pylarinos, Hack The Box

He adds, however, that this need not necessarily mean unemployed cyber analysts standing on street corners holding signs that say “Will Red Team For Food”.

“Done properly, that is not about replacing people; it is about giving them the headspace to think and to delve into the more rewarding stuff,” says Ferguson.

Haris Pylarinos, co-founder and CEO at Hack The Box, adds: “Artificial intelligence is not just accelerating response; it is set to completely redefine how security professionals upskill, are deployed and ultimately how they are held accountable.

“The industry is entering a phase where skills are shifting from detection, to judgement, to learning how to learn. The organisations that succeed will not be those that automate the most, but those that redesign workforce models and decision-making around intelligent systems.”

For Pylarinos, these new workforce models will centre on proving the hybrid human-AI team. Cyber security professionals of the future won’t be technologists, he suggests, but validators, adversarial thinkers and behavioural auditors.

“The most valued cyber security practitioners will be those who can pressure-test AI behaviour under realistic conditions, ensuring that machine speed does not outpace human judgement,” he says.

For Bugcrowd CEO Dave Gerry, spreading enterprise adoption of AI is a reason to keep more humans in the loop.

“Traffic to generative AI sites jumped by 50% [between February 2024 and January 2025], while 68% of employees used free-tier tools and 57% admitted to pasting sensitive data into them. With this, it’s key to remember that AI-generated exploits and misinformation are already here,” he says.

“The security community needs to zero in on model manipulation techniques like prompt injection and proactively test these AI systems through the eyes of the attackers. Crowd-led testing remains one of our strongest defences, even across new and evolving attack vectors. Diverse human researchers can catch what others miss.”

Defensive transition

Aryaka’s Sood, meanwhile, focuses on the underlying technical transitions driving the changing role of the security professional.

He theorises that as organisations increase their reliance on AI – especially AI presenting in the form of agents – security teams will see their priorities shift away from responding to and fixing flaws and other issues, to controlling decision-making pathways within the organisation.

The future of cyber security isn’t just about securing systems, but also securing the logic, identity and autonomy that drive them
Aditya Sood, Aryaka

This will introduce a number of “new” defensive strategies, he says. Firstly, we will see security teams building out governance layers around AI agent workflows to authenticate, authorise, observe – and potentially reverse – any automated action.

“The focus will expand from guarding data to guarding behaviour,” says Sood.

Cyber teams will also need to address the risk of silent data sprawl, the creation of shadow datasets and unintended access paths, as agents and other AI systems move, transform and replicate sensitive data. Strong data lineage tracking and even stricter access controls will be a must. And just as user behaviour analytics evolved and matured for human accounts, so it will need to do so again to establish expected and allowed behaviours for AI.

Defensive strategies in 2026 will also need to adjust for changing trust landscapes. The AI enterprise requires trust verification across all layers, so Sood says security teams should look to trust-minimised architectures where AI identities, outputs and automated decisions are subject to continuous audit and validation.

On identity, stronger lifecycle management for non-human identities (NHIs) must also be prioritised. And zero trust as a compliance mandate will also become increasingly important.

Finally, says Sood, since cyber attacks will continue to exploit legitimate tools in 2026, enhanced intent-based detection will be needed, with systems called upon to analyse “why” actions took place, rather than just establishing that they did.

“If 2025 taught us that trust can be weaponised, then 2026 will teach us how to rebuild trust in a safer, more deliberate way. The future of cyber security isn’t just about securing systems, but also securing the logic, identity and autonomy that drive them,” he says.

Looking ahead to 2026

How to buy AI safely and securely

In 2026, AI-savvy buyers will also be asking increasingly tough questions of their IT suppliers, so says Ellie Hurst, commercial director at Advent IM.

Hurst says that merely copying and pasting some boilerplate text about “using AI responsibly” into the slide deck might have flown a few years ago, but in 2026, the salesperson will be rightly frog-marched out to the car park if they dare try that one on.

“Enterprise buyers, particularly in government, defence and critical national infrastructure, are now using AI heavily themselves. They understand the risk language. They are making connections between AI, data protection, operational resilience and supply chain exposure,” says Hurst.

In 2026, it will not be enough for procurement teams to ask whether or not their suppliers use AI, but rather how they govern it, she explains.

Throughout 2025, says Hurst, the language in requests for proposals and invitations to tender around AI hardened dramatically, with buyers increasingly asking about issues such as data sovereignty, human oversight, model accountability, and compliance with data protection, security and intellectual properly regulation.

This change is coming about thanks to a recognition that AI has been largely used on an ad hoc basis, and most IT leaders are unable to say with certainty that they know exactly what has been happening on their watch. All this leads to a massive governance, risk and compliance (GRC) headache.

But the good news, says Hurst, is that this can be turned around. AI governance, done right, isn’t about slowing or banning innovation, but folding it into organisational GRC practice so that its use can be explained, scaled and, critically, defended.

Buyers should consider asking questions around where AI is used in their suppliers’ services, what workflows touch sensitive data, what third-party AI models or platforms are used, and what oversight humans have. Hurst also advises buyers to look for suppliers that are aligned to ISO IEC 42001, a new standard for AI lifecycle management, including cyber.

Ultimately, she says, if the prospective supplier is adequately prepared, they should be able to present a clear story about how AI is governed as part of the wider security and GRC framework.

Winners and losers

The new year is barely a week old, and the full story of 2026 is, of course, yet to be written. Undoubtedly, it will be another turbulent one for the cyber security world, but Bridewell’s Young says that even if 2026 is not necessarily the most catastrophic year for security, AI has brought us to a precipice and what unfolds next could make the coming 12 months very telling indeed.

“The choices organisations make now, in restoring investment, rebuilding cyber skills and governing AI responsibly, will determine whether the curve bends towards resilience or further fragility,” concludes Young.

Read more on IT risk management