
Passwords to prompts: Identity and AI redefined cyber in 2025

As we prepare to close out 2025, the Computer Weekly Security Think Tank panel looks back at the past year, and ahead to 2026.

I will remember 2025 as the year when AI agents became the key vulnerability, identity threats pivoted from stolen passwords to convincing synthetic impersonation, and nation-states began targeting the models and data used to train AI. On top of all that, post-quantum cryptography (PQC) moved from academic theory to an urgent, Millennium Bug-like migration deadline.

AI agents became the largest unmonitored attack surface

The biggest and most subtle shift in 2025 wasn’t AI-driven attacks from outside; it was what happened inside enterprises. Autonomous AI agents quietly proliferated across ticketing systems, CRMs, developer tools, and even cloud consoles. They operated with unclear boundaries, inconsistent logging, privileged access, and no unified governance. In effect, organisations created new 'employees' without background checks or monitoring. The risk wasn’t malicious AI; it was the rush to deploy agentic technology into traditional domains to achieve rapid ROI without establishing robust long-term security.

For the first time, enterprises lost visibility of their attack surface not because attackers broke in, but because internal systems began making decisions faster than humans could explain them. This looks set to make 2026 an interesting year as attacks start to materialise on this new, unmonitored surface. The response should treat AI agents as identity principals on a par with human users, not as invisible back-end processes, applying least privilege, continuous authorisation, and auditable guardrails. Organisations should also log agent actions immutably and correlate them with SIEM and IAM data so that decisions can be investigated. Finally, there needs to be a focus on defending data, models, and training pipelines: aligning models with documented policies and running red-team evaluations to test for bias, manipulation, and prompt injection.
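To make "agents as identity principals" concrete, the minimal Python sketch below checks each agent action against an explicit scope list and records it in a hash-chained, tamper-evident log. The agent name, scopes, and log format are illustrative assumptions, not any specific product's API; a real deployment would back this with an IAM platform and ship entries to a SIEM.

```python
import hashlib
import json
import time

# Hypothetical scopes granted to one agent identity; names are illustrative.
AGENT_SCOPES = {
    "ticketing-agent": {"tickets:read", "tickets:update"},
}

audit_log = []  # In practice: an append-only store, forwarded to a SIEM.

def authorise(agent_id: str, action: str) -> bool:
    """Least privilege: the agent may only perform actions in its scopes."""
    return action in AGENT_SCOPES.get(agent_id, set())

def record(agent_id: str, action: str, allowed: bool) -> None:
    """Append a hash-chained entry so tampering with history is detectable."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    audit_log.append(entry)

def perform(agent_id: str, action: str) -> None:
    """Authorise, log, and only then allow the agent action to proceed."""
    allowed = authorise(agent_id, action)
    record(agent_id, action, allowed)
    if not allowed:
        raise PermissionError(f"{agent_id} is not authorised for {action}")

perform("ticketing-agent", "tickets:update")   # allowed, and logged
# perform("ticketing-agent", "cloud:delete")   # denied, but still logged
```

Because each entry's hash covers its predecessor, any retroactive edit to an earlier record changes every subsequent hash, which makes tampering visible during an investigation.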

Identity broke, and it wasn’t about passwords

Identity compromises in 2025 took a new twist with believable, highly customised impersonation: voices cloned from seconds of audio, emails written to match a person’s style, and deepfake content that survived standard security checks. The impersonation of a chief financial officer that enabled the theft of $25m was just one example of this growing threat.

In response, organisations should enhance staff training, run more simulated phishing exercises, invest in solutions such as signed content provenance (cryptographic signatures, watermarks), and implement 'reject unsigned critical instructions' policies for finance, HR, and IT. Critical workflows need multi-factor verification of key instructions, especially where financial loss or regulatory scrutiny could occur. Supplier risk models should also be updated to include controls against synthetic identity compromise. Smaller companies should be especially cautious, as the financial loss caused by these scams could drive them into insolvency.
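As a loose sketch of a 'reject unsigned critical instructions' policy, the Python below signs a payment instruction and refuses anything unsigned or mismatched. It uses a symmetric HMAC with a hard-coded key purely for brevity; the key and payment text are invented, and a production system would use asymmetric signatures (for example Ed25519) with keys held in an HSM.

```python
import hmac
import hashlib

# Illustrative key only; never hard-code secrets in real systems.
SIGNING_KEY = b"example-key-do-not-use-in-production"

def sign_instruction(instruction: str) -> str:
    """Produced by the authorised approver's tooling."""
    return hmac.new(SIGNING_KEY, instruction.encode(), hashlib.sha256).hexdigest()

def verify_or_reject(instruction: str, signature: str | None) -> None:
    """Enforce the policy: unsigned or mismatched instructions are refused."""
    if signature is None:
        raise ValueError("Rejected: critical instruction carries no signature")
    expected = sign_instruction(instruction)
    if not hmac.compare_digest(expected, signature):
        raise ValueError("Rejected: signature does not match instruction")

payment = "Transfer GBP 250,000 to supplier account 12-34-56 00000000"
sig = sign_instruction(payment)
verify_or_reject(payment, sig)        # passes: signed by the approver

try:
    verify_or_reject(payment, None)   # an unsigned 'CFO' email is refused
except ValueError as err:
    print(err)
```

The point of the pattern is that a convincing voice or writing style is no longer sufficient: an instruction with no valid signature never reaches execution, however plausible it sounds.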

National cyber strategy shifted from 'defend networks' to 'shape influence terrain'

Geopolitical risk assessments in 2025 started to recognise that state actors are poisoning AI training data, manipulating model alignment (the rules that keep models’ behaviour acceptable), and running covert influence operations to embed bias and steer outputs at scale. This means models can produce biased or unsafe decisions across borders and industries.

One clear example is China’s DeepSeek open-source reasoning model, launched early in 2025, which disrupted global AI markets and sparked concern over alignment. Analysts noted that models trained on DeepSeek’s data began reflecting 'Chinese characteristics': silence on topics such as Tiananmen Square or Taiwan, illustrating how open-source dominance can be used to export national values and priorities at scale.

In 2026, the sovereign AI debate will intensify as nations seek to protect their values and reduce dependence on foreign-controlled models. The UK government has recently committed billions in funding towards this goal, and US National Security Agency (NSA) guidance advises organisations to treat training pipelines and alignment evaluations as critical infrastructure.

The quantum shadow arrived earlier than expected

Quantum decryption is not here yet, but the operational risk already is. Attackers are stealing encrypted data today to decrypt later (harvest-now, decrypt-later), targeting backups, health records, and IP archives for decryption around 2030, when traditional cryptography is likely to become obsolete.

Migrating to PQC should be seen as a system-wide rebuild, not a routine patch. Identity, key management, authentication, firmware trust anchors, VPNs, certificates, and code signing all need replacement.

Organisations need to take proactive steps, in line with NCSC guidance on PQC: implement a cryptographic inventory, prioritise crown jewels with long data lifespans, plan and test hybrid periods, and involve vendors. A robust multi-year PQC strategy, executed over the latter half of the decade, will spread the cost and prevent a rerun of the 1999 rush to deal with the Millennium Bug.
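As one small starting point for the cryptographic inventory step, the Python sketch below records the TLS protocol and cipher suite negotiated by an endpoint, using only the standard library. The host is a placeholder, and a full inventory would also cover certificates, code signing, VPNs, firmware trust anchors, and data at rest.

```python
import socket
import ssl

def tls_inventory(host: str, port: int = 443) -> dict:
    """Record the protocol and cipher one endpoint negotiates.

    A fragment of a cryptographic inventory: endpoints still relying on
    classical key exchange are candidates for hybrid or PQC migration.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher, protocol, bits = tls.cipher()
            return {
                "endpoint": f"{host}:{port}",
                "protocol": protocol,   # e.g. TLSv1.3
                "cipher": cipher,       # e.g. TLS_AES_256_GCM_SHA384
                "secret_bits": bits,
            }

# Placeholder host; in practice, iterate over your own estate.
print(tls_inventory("example.com"))
```

Results like these can then be ranked by the lifespan of the data each endpoint protects, which is how the 'crown jewels first' prioritisation becomes actionable.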

Meeting the challenge ahead

Adversaries now target the logic and data that shape decisions, not just devices or networks. That requires defences that are as automated, observable, and accountable as the AI we deploy.

CISOs will have to balance the evolving threat landscape and increasing demand for secure digital systems with the transition to PQC-safe systems and standards. The key will be ensuring PQC is added to the five-year roadmap, allowing cost to be spread over multiple years rather than delaying until it is too late.

2025 revealed the risks, and 2026 must deliver the controls to manage those risks.

Daniel Gordon is a cyber security expert at PA Consulting.
