Given the level of tech industry activity in artificial intelligence (AI), if they haven’t already, most IT leaders are going to have to consider the security implications of such systems freely running in their organisations. The real risks of AI are that it offers employees easy access to powerful tools and the implicit trust many place in AI-generated outputs.
Javvad Malik, security awareness advocate at KnowBe4, urges IT and security leaders to address both. While the possibility of an AI system compromise might seem remote, Malik warns that the bigger immediate risk comes from employees making decisions based on AI-generated content without proper verification.
“Think of AI as an exceptionally confident intern. It’s helpful and full of suggestions, but requires oversight and verification,” he says.
“There’s internal data leakage – oversharing – which occurs when you ask the model a question and it gives an internal user information that it shouldn’t share. And then there’s external data leakage,” says Heinen.
“If you think of an AI model as a new employee who has just come into the company, do you give them access to everything? No, you don’t. You trust them gradually over time as they demonstrate the trust and capacity to do tasks,” he says.
Heinen recommends taking this same approach when deploying AI systems across the organisation.
KnowBe4’s Malik notes that the conversation regarding AI risks has also moved on. “It isn’t just about data leakage anymore, although that remains a significant concern,” he says. “We’re now navigating territory where AI systems can be compromised, manipulated, or even ‘gamed’, to influence business decisions.”
While widespread malicious AI manipulation is not widely evident, the potential for such attacks exists and grows as organisations become more reliant on these systems.
At the RSA Conference earlier this year, IT security guru Bruce Schneier questioned the impartiality of responses provided by AI systems, noting that if a chatbot recommends a particular airline or hotel, is it because it is genuinely the best deal, or because the AI company is receiving a kickback for the recommendation?
Security safeguards
There is general industry agreement that IT security chiefs and business leaders should work to develop frameworks that embrace AI’s value while incorporating necessary and appropriate safeguards. Malik says this should include providing secure, authorised AI tools that meet employee needs while implementing verification processes for AI-generated outputs.
Think of AI as an exceptionally confident intern. It’s helpful and full of suggestions, but requires oversight and verification
Javvad Malik, KnowBe4
Safeguards are also needed to avoid the potential of data loss. Aditya K Sood, vice-president of security engineering and AI strategy at Aryaka, recommends that those in charge of information security update existing acceptable use policies to address the use of AI tools, explicitly prohibiting the input of sensitive, confidential or proprietary company data into public or unapproved AI models.
Sood recommends that policies clearly define what constitutes “sensitive” data in the context of AI, and those policies covering data handling also need to detail requirements for anonymisation, pseudonymisation and tokenisation of data used for internal AI model training or fine-tuning. “Sensitive data could include customer personal information, financial records or trade secrets,” he says.
Alongside policy changes, Sood urges IT decision-makers to focus on AI system integrity and security by deploying security practices throughout the AI development pipeline.
“Even when datasets or tuning parameters are available, they’re often too large to audit,” he says.
Malicious behaviours can be trained in, intentionally or not, and the non-deterministic nature of AI makes exhaustive testing impossible. What makes AI powerful also makes it unpredictable and risky, he warns.
Since the output produced by an AI system is directly related to the input data it is trained on, Sood urges IT decision-makers to ensure they implement strong filters and validators for all data entering the AI system. AI models need to be rigorously tested for vulnerabilities such as prompt injection, data poisoning and model inversion, he says, to prevent adversarial attacks.
Tips for a secure AI strategy
When looking at the security changes IT and security leaders should be making to support artificial intelligence (AI), Elliott Wilkes, chief technology officer at Advanced Cyber Defence Systems, recommends establishing internal controls and red teaming exercises, such as traditional penetration testing, to stress test AI systems.
“Techniques like chaos engineering can help simulate edge cases and uncover flaws before they’re exploited,” he says.
Wilkes also believes there needs to be a cultural shift in how AI providers are selected, with security policies favouring those that demonstrate rigorous testing, robust safety mechanisms and clear ethical frameworks. While such AI technology provision may come at a premium, the potential cost of trusting an untested AI tool is far greater.
To reinforce accountability, Wilkes suggests security leaders should advocate for contracts that place responsibility for operational failures or unsafe outputs on the AI provider. “A well-written agreement should address liability, incident response procedures and escalation routes in the event of a malfunction or breach,” he says.
Similarly, to avoid malicious injections, Sood advises IT leaders to make sure AI-generated outputs are sanitised and validated before being presented to users or used in downstream systems. Wherever feasible, he says systems should be deployed with explainable AI capabilities, allowing for transparency into how decisions are made.
Bias is one of the most subtle and dangerous risks of AI systems. As Fox points out, skewed or incomplete training data bakes in systemic flaws. Enterprises are deploying powerful models without fully understanding how they work or how their outputs could impact real people. Fox warns that IT leaders need to consider the implications of deploying opaque models, which make bias hard to detect and nearly impossible to fix.
“If a biased model is used in hiring, lending or healthcare, it can quietly reinforce harmful patterns under the guise of objectivity. This is where the black box nature of AI becomes a liability,” he says.
For high-stakes decisions, Sood urges CIOs to mandate human oversight for handling sensitive data or performing irreversible operations as a way of providing a final safeguard against compromised AI output.
Alongside securing data and AI training, IT leaders should also work on establishing resilient and secure AI development pipelines.
“Securing AI development pipelines is paramount to ensuring the trustworthiness and resilience of AI applications integrated into critical network infrastructure, security products and collaborative solutions. It necessitates embedding security throughout the entire AI lifecycle,” he says.
This includes the code for generative artificial intelligence (GenAI), where models and training datasets are part of the modern software supply chain. He urges IT leaders to provide secure AI for IT operations (AIOps) pipelines with continuous integration/continuous delivery (CI/CD) best practices, code signing and model integrity checks. This needs to include scanning training datasets and model artefacts for malicious code or trojaned weights, and vetting third-party models and libraries for backdoors and licence compliance.
Given the growing openness of AI models, which fosters transparency, collaboration and faster iteration across the AI community, Fox notes that AI models are still software. This software can include extensive codebases, dependencies and data pipelines. “Like any open source project, they can harbour vulnerabilities, outdated components, or even hidden backdoors that scale with adoption,” he warns.
In Fox’s experience, many organisations don’t yet have the tools or processes to detect where AI models are being used in their software. Without visibility into model adoption, whether embedded in applications, pipelines or application programming interfaces (APIs), governance is impossible. “You can’t manage what you can’t see,” he says. As such, Fox suggests that IT leaders should establish visibility into AI usage.
Overall, IT and security leaders are advised to implement a comprehensive AI governance framework (see Tips for a secure AI strategy box).
He says AI risks need to be woven into enterprise-wide risk management and compliance practices.
The governance framework needs to define explicit roles and responsibilities for AI development, deployment and oversight to establish an AI-centric risk management process. He recommends putting in place a centralised inventory of approved AI tools, which should include risk classifications.
“The governance framework helps substantially in managing the risk associated with shadow AI – the use of unsanctioned AI tools or services,” he adds.
And finally, IT teams need to mandate that only approved AI tools are run in the organisation. All other AI tools and services should be blocked.
Gartner’s Heiner recommends that CISOs take a risk-based approach. Tools like malware detection or spell checkers are not high risk, whereas HR or safety systems carry a much greater risk.
“Just like with everything else, not every bit of AI operating in your environment is a critical component or a high risk,” he says. “If you’re using AI to hire people, that’s probably an area you want to pay attention to,” he adds. “If you’re using AI to monitor safety in a factory, then you may want to pay more attention to it.”
Read more about AI and IT security
Let’s chat about AI in networking and security by the fireside: One thing we can’t get away from right now – even if we wanted to – is AI. While there are specialist AI tech vendors, the reality is that it impacts all aspects of IT.
CISO playbook for securing AI in the enterprise: CISOs must partner with executive leadership to adopt a business-aligned AI security strategy that protects the organisation while enabling responsible AI adoption.
Read more on Artificial intelligence, automation and robotics