Security pros should prepare for tough questions on AI in 2026
As we prepare to close out 2025, the Computer Weekly Security Think Tank panel looks back at the past year, and ahead to 2026.
For the last couple of years, many organisations have comforted themselves with a single slide or paragraph that reads along the lines of “We use artificial intelligence [AI] responsibly.” That line might have been enough to get through informal supplier due diligence in 2023, but it will not survive the next serious round of tenders.
Enterprise buyers, particularly in government, defence and critical national infrastructure (CNI), are now using AI heavily themselves. They understand the risk language. They are making connections between AI, data protection, operational resilience and supply chain exposure. Their procurement teams will no longer ask whether you use AI. They will ask how you govern it.
The AI question is changing
In practical terms, the questions in requests for proposals (RFPs) and invitations to tender (ITTs) are already shifting.
Instead of the soft “Do you use AI in your services?”, you can expect wording more like:
“Please describe your controls for generative AI, including data sovereignty, human oversight, model accountability and compliance with relevant data protection, security and intellectual property obligations.”
Underneath that line sit a number of very specific concerns:
- Where is client or citizen data going when you use tools such as ChatGPT, Claude or other hosted models?
- Which jurisdictions does that data transit or reside in?
- How is AI-assisted output checked by humans before it influences a critical decision, a piece of advice, or a safety-related activity?
- Who owns and can reuse the prompts and outputs, and how is confidential or classified material protected in that process?
The generic boilerplate no longer answers any of those points. In fact, it advertises that there is no structured governance at all.
The uncomfortable reality is that, if you strip away the marketing language, most professional services organisations are using AI in a very familiar pattern.
Individual staff have adopted tools to speed up drafting, analysis or coding. Teams share tips informally. Some groups have written local guidance on what is acceptable. A few policies have been updated to mention AI.
What is often missing is evidence
Very few organisations can say with certainty which client engagements involved AI assistance, what categories of data were used in prompts, which models or providers were involved, where those providers processed and stored the information, and how review and approval of AI output was recorded.
From a governance, risk and compliance (GRC) perspective, that is a problem. It touches data protection, information security, records management, professional indemnity, and in some sectors safety and mission assurance. It also follows you into every future tender, because buyers are increasingly asking about past AI related incidents, near misses and lessons learned.
Why this matters so much in government, defence and CNI
In central and local government, policing and justice, AI is increasingly influencing decisions that affect citizens directly. That might be in triaging cases, prioritising inspections, supporting investigations or shaping policy analysis.
When AI is involved in those processes, public bodies must be able to show lawful basis, transparency, fairness and accountability. That means understanding where AI is used, how it is supervised, and how outputs are challenged or overridden. Suppliers into that space are expected to demonstrate the same discipline.
In the defence and wider national security supply chain, the stakes are even higher. AI is already appearing in logistics optimisation, predictive maintenance, intelligence fusion, training environments and decision support. Here the questions are not just about privacy or intellectual property. They are about reliability under stress, robustness against manipulation, and assurance that sensitive operational data is not leaking into systems outside sovereign or approved control.
CNI operators have a similar challenge. Many are exploring AI for anomaly detection in OT environments, demand forecasting, and automated response. A failure or misfire here can quickly turn into a service outage, safety incident or environmental impact. Regulators will expect operators and their suppliers to treat AI as an element of operational risk, not a novelty tool.
In all of these sectors, the organisations that cannot explain their AI governance will quietly fall down the scoring matrix.
Turning AI governance into a commercial advantage
The good news is that this picture can be turned around. AI governance, done properly, is not about slowing down or banning innovation. It is about putting enough structure around AI use that you can explain it, defend it and scale it.
A practical starting point is an AI procurement readiness assessment. At Advent IM, we describe this in very simple terms: can you answer the questions your next major client is going to ask?
That involves mapping where AI is used across your services, identifying which workflows touch client or citizen data, understanding which third party models or platforms are involved, and documenting how humans supervise, approve or override AI outputs. It also means looking at how AI fits into your existing incident response, data breach handling and risk registers.
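To make that mapping concrete, the sketch below shows what a single entry in an AI usage register could look like. It is purely illustrative: the field names (engagement_id, data_categories, provider_jurisdictions and so on) are assumptions chosen for discussion, not a schema prescribed by ISO/IEC 42001, by any regulator, or by any particular buyer. The point is that each use of AI on client work leaves a small, reviewable record behind.

```python
# Purely illustrative sketch of an AI usage register entry.
# Field names and categories are assumptions for discussion,
# not a prescribed schema or a requirement of ISO/IEC 42001.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIUsageRecord:
    engagement_id: str                  # which client engagement the AI assisted
    workflow: str                       # e.g. "drafting", "code review", "case triage"
    data_categories: list[str]          # e.g. ["client-confidential", "personal data"]
    model_or_platform: str              # which model or hosted service was used
    provider_jurisdictions: list[str]   # where prompts are processed and stored
    human_reviewer: str                 # who checked the output before it was used
    review_outcome: str                 # e.g. "approved", "amended", "rejected"
    reviewed_on: date = field(default_factory=date.today)


# Example entry a delivery team might log alongside its deliverable
record = AIUsageRecord(
    engagement_id="ENG-0421",
    workflow="drafting",
    data_categories=["client-confidential"],
    model_or_platform="hosted general-purpose LLM",
    provider_jurisdictions=["UK", "EU"],
    human_reviewer="j.smith",
    review_outcome="amended",
)
print(record)
```

However it is captured, whether in a spreadsheet, a ticketing system or something like the structure above, this is the kind of evidence that turns a vague assurance into an answer a procurement team can score.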
From there, you can develop a short, evidence-based narrative that fits neatly into RFP and ITT responses, backed by policies, process descriptions and example logs. Instead of hand-waving about responsible AI, you can present a clear story about how AI is governed as part of your wider security and GRC framework.
ISO 42001 as the backbone for AI governance
ISO/IEC 42001, the new standard for AI management systems, gives this work structure. It provides a framework for managing AI across its lifecycle, from design and acquisition through to operation, monitoring and retirement.
For organisations that already operate an information security management system (ISMS), quality management system or privacy information management system, 42001 should not feel alien. It can be integrated with existing ISO 27001, 9001 and 27701 arrangements. Roles such as senior information risk owner (SIRO), information asset owner (IAO), data protection officer, heads of service and system owners simply gain clearer responsibilities for AI related activities.
Aligning with 42001 also signals to clients, regulators and insurers that AI is not being treated informally. It shows that there are defined roles, documented processes, risk assessments, monitoring and continual improvement around AI. Over time, that alignment can be taken further into formal certification for those organisations where it makes commercial sense.
Bringing people, process and assurance together
Policies and frameworks are only part of the picture. The real test is whether people across the organisation understand what is permitted, what is prohibited, and when they need to ask for help.
AI security and governance training is therefore critical. Staff need to understand how to handle prompts that contain personal or sensitive data, how to recognise when AI outputs might be biased or incomplete, and how to record their own oversight. Managers need to know how to approve use cases, sign off risk assessments and respond to incidents involving AI.
Bringing all of this together gives you something very simple but very powerful. When the next RFP or ITT lands with a page of questions about AI, you will not be scrambling for ad hoc answers. You will be able to describe an AI management system that is aligned to recognised standards, integrated with your existing security and GRC practices, and backed by training and evidence.
In a crowded services market, that may be the difference between being seen as an interesting supplier and being trusted with high value, sensitive work.
The Computer Weekly Security Think Tank looks ahead
- Anthony Young, Bridewell: What lies in store for the security world in 2026?
- Dave Gerry, Bugcrowd: Cyber's defining lessons of 2025, and what comes next.
- Rik Ferguson, Forescout: In 2026, collaboration, honesty and humility in cyber are key.
- Aditya K Sood, Aryaka: From trust to turbulence: Cyber's road ahead in 2026.
