
Managing security in the AI age
Gartner experts offer guidance on harnessing AI’s power while mitigating its risks, from managing shadow AI to implementing security controls and policies
Senior Gartner analysts kicked off the analyst firm’s Security and Risk Management Summit in Sydney this week amid the ongoing hype over artificial intelligence (AI) and cyber security – though they acknowledged that “hype often contains a kernel of truth”.
In cyber security, for example, the hype has had the fortunate effect of keeping executives’ attention on the topic, said Gartner vice-president analyst Richard Addiscott during the event’s opening address. He cited Gartner research which found that more technology executives plan to increase cyber security funding this year (87%) than plan to increase funding for AI (84%).
Gartner vice-president for research Christine Lee urged technologists to help executives take intelligent risks, pointing out that there is no such thing as perfect protection. Higher levels of protection lower risk but cost more; lower levels cost less but carry more risk. Recognising that trade-off makes it possible to reach agreement on the appropriate level of protection for the organisation, she added.
To operationalise that, Addiscott recommended the combination of protection-level agreements (PLAs), which are akin to service-level agreements, and outcome-driven metrics (ODMs).
Addiscott said PLAs will help security teams manage stakeholders at different levels, because “they focus on what you have agreed to do for the business and discourage ‘tech-first’ or ‘tech-only’ thinking”, while “ODM discussions get everyone on the same page”.
AI is, of course, another well-hyped area. To harness that hype, Addiscott suggested three steps: cultivate AI literacy, drive experimentation and fortify AI initiatives with versatile security.
While viewing AI with a “beginner’s mind” can help identify viable use cases, that doesn’t mean complete ignorance is a good thing. Addiscott pointed to developing internal AI champions as a means of improving AI literacy within the organisation. That literacy can help drive the responsible and productive use of AI.
Experimentation is a good thing, but organisations need to use AI intentionally. Lee gave examples of this, including the development of an internal chatbot for cyber security coaching that answers employees’ questions, leaving security staff free to focus on higher-value tasks.
That said, Addiscott recommended against concentrating on productivity alone, suggesting instead that teams ask: “What good can AI do for my use case?”
Tackling shadow AI
With 98% of surveyed organisations saying they have adopted or plan to adopt generative AI, chief information security officers (CISOs) can’t sit on their hands.
Lee suggested using discovery tools to reveal any examples of shadow AI and then helping to identify and remediate security risks such as data leakage. Blocking such uses should be reserved for situations where the risks outweigh the benefits and there is no alternative product. Where the previously unsanctioned use of a tool is allowed, there is an opportunity to consider why approval wasn’t sought and possibly improve that process.
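As an illustration of the kind of discovery Lee describes, rather than any specific product, the sketch below scans a web proxy log for traffic to known generative AI services. The log format, file path and domain list are assumptions made for the example, and a real deployment would rely on a maintained catalogue of AI services.

```python
"""Minimal sketch: surface possible shadow AI use from proxy logs.

Assumptions: a CSV proxy log with 'user' and 'url' columns, and an
illustrative (not exhaustive) list of generative AI domains.
"""
import csv
from collections import defaultdict
from urllib.parse import urlparse

# Illustrative domain list; a real tool would use a maintained catalogue
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
                 "claude.ai", "copilot.microsoft.com", "perplexity.ai"}

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> generative AI hosts they visited."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).hostname or ""
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row["user"]].add(host)
    return hits

if __name__ == "__main__":
    for user, hosts in find_shadow_ai("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(hosts))}")
```

Output like this would feed the next step Lee describes: working out why the tool is being used and whether it can be sanctioned safely.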
Gartner vice-president analyst Pete Shoard nominated shadow AI as one of the top issues around generative AI for CISOs and their teams, with a wide range of generative AI web products being used and AI increasingly being embedded in everyday products. These tools can help people become more efficient, but they come with a number of risks, including data leakage, oversharing, undetected inaccuracies and even the use of actively malicious apps.
To mitigate the risks of shadow AI, Shoard suggested identifying various roles within the organisation, such as content creation, and determining the main risks, such as brand shaming. Then, the use of an appropriate AI tool can be authorised, along with acceptable use cases, such as the production of non-technical content.
Establishing such a policy isn’t enough. It has to be enforced, which means putting in place ways to monitor anomalous use. That can include measures such as endpoint security products and role-based access controls. Importantly, this can be done at scale only with automation and good exception management, so organisations should evaluate AI usage control tools with an eye to ease of deployment, depth of controls and the long-term viability of the vendor.
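The enforcement step can be pictured as a role-based allowlist check with a time-boxed exception register. The roles, tools and exception records below are hypothetical, and a real control would sit inside an endpoint or policy enforcement product rather than a standalone script.

```python
"""Sketch of a role-based AI usage check with exception management.

The role-to-tool mapping and exception register are hypothetical examples
of the kind of policy Shoard describes, not a real product's schema.
"""
from datetime import date

# Which AI tools each role may use, and for which acceptable purposes
POLICY = {
    "content_creation": {"approved_writing_tool": {"non-technical content"}},
    "engineering": {"approved_assistant": {"code review", "documentation"}},
}

# Time-boxed exceptions granted outside the standing policy
EXCEPTIONS = [
    {"user": "jsmith", "tool": "other_tool", "expires": date(2026, 6, 30)},
]

def is_allowed(user: str, role: str, tool: str, purpose: str) -> bool:
    """Allow if the role's policy covers the tool and purpose, or a live exception exists."""
    allowed_purposes = POLICY.get(role, {}).get(tool)
    if allowed_purposes and purpose in allowed_purposes:
        return True
    return any(e["user"] == user and e["tool"] == tool and e["expires"] >= date.today()
               for e in EXCEPTIONS)

print(is_allowed("jsmith", "content_creation", "approved_writing_tool", "non-technical content"))  # True
print(is_allowed("jsmith", "content_creation", "unapproved_tool", "code generation"))              # False, no matching exception
```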
Where organisations are building their own AI applications, Shoard suggested that the security team should work alongside the AI team from the early stages of the project to ensure privacy, security and compliance are given due attention.
Furthermore, AI applications raise issues that traditional applications do not, such as bias and fairness. Gartner recommends an AI trust, risk and security management (AI TRiSM) programme so that governance issues are addressed from the start.
AI security issues
Near-term AI issues that deserve CISOs’ attention include the use of deepfakes in fraud and spear phishing, as well as attacks on AI systems, whether they are direct or on the toolchain, according to Gartner distinguished vice-president analyst and fellow Leigh McMullen.
Against deepfakes, McMullen said, “the best controls we can put in place are human controls”. That includes setting up secure channels for communicating with the real person if there is the slightest suspicion of a deepfake.
Examples of an AI model being attacked directly include providing it with “mal-information” that affects its output. A group of artists found they were able to embed information at the sub-pixel level in images, and feeding just three such images into a large language model (LLM) was enough to make it generate a picture of a cat when asked for one of a dog.
“That’s not going to break our business, but what if somebody did that to underwriting rules?” asked McMullen.
Another novel attack involved embedding “mal-information” in video subtitles and positioning them offscreen so they were visible to an AI trying to understand what was happening in the video, but not to a person watching it. “If we can change what the AI believes the world is, you change its mental model,” McMullen said.
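The general idea of hiding information in an image below the threshold of human perception can be illustrated with a simple least-significant-bit embedding. This is a simplified stand-in for the sub-pixel technique McMullen described, not the artists’ actual method, and it assumes the Pillow and NumPy libraries and example file names.

```python
"""Illustration only: hide bytes in the least-significant bits of an image.

A simplified stand-in for the sub-pixel embedding McMullen described,
not the artists' actual technique. Requires Pillow and NumPy.
"""
import numpy as np
from PIL import Image

def embed_lsb(image_path: str, payload: bytes, out_path: str) -> None:
    """Write payload bits into the lowest bit of each pixel channel."""
    pixels = np.array(Image.open(image_path).convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    # Changing only the lowest bit is imperceptible to a human viewer
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path)

embed_lsb("dog.jpg", b"hidden-label: cat", "dog_poisoned.png")
```

Note that the output is saved as PNG; a lossy format such as JPEG would destroy the hidden bits, which is part of why such manipulation is hard to spot in ordinary workflows.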
A particularly sneaky approach is to engineer a situation where an AI hallucinates a software tool that doesn’t actually exist, then amplify that hallucination to the point where it shows up in response to other prompts. The attacker then plants malware masquerading as that tool, which unsuspecting victims download.
“That’s like those search engine optimisation attacks we’ve been seeing … at a very, very different level,” he said. It has been seen in R libraries and bioinformatics libraries, for example.
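One simple defensive habit against this kind of attack is to check that every declared dependency actually exists, and has some history, before installing it. The sketch below does this for Python dependencies using PyPI’s public JSON API; the requirements file path is an assumption for the example.

```python
"""Sketch: sanity-check declared dependencies against the PyPI JSON API
before installing, to catch hallucinated or suspiciously new packages.
Requires the 'requests' library; the requirements.txt path is an example.
"""
import re
import requests

def check_requirements(path: str = "requirements.txt") -> None:
    with open(path) as f:
        for line in f:
            # Strip version specifiers, extras and environment markers
            name = re.split(r"[<>=!~\[;]", line.strip())[0].strip()
            if not name or name.startswith(("#", "-")):
                continue
            resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
            if resp.status_code == 404:
                print(f"WARNING: '{name}' does not exist on PyPI - possible hallucinated package")
            elif resp.ok:
                releases = resp.json().get("releases", {})
                print(f"OK: '{name}' has {len(releases)} releases on PyPI")

check_requirements()
```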
During a panel discussion, Christopher Johnson, head of group technology at real estate company Charter Hall, warned that “people don’t think there is a risk” when using AI tools. While Microsoft Copilot enforces some restrictions, for example, to stop someone seeing their colleagues’ salaries, the company needs to protect commercial-in-confidence information.
Allens CIO Bill Tanner said the law firm had already completed an enterprise search project, so access permissions were largely sorted. That allowed it to adopt Copilot more quickly, but anything with open permissions was reviewed and reset so that only the people who really needed access had it. Furthermore, the firm was well placed, having adopted an enterprise version of ChatGPT when it first appeared.
Generative AI is important to the legal sector, he said, as it is all about language. But it is important to educate people about the risks, and to look for opportunities to get value from generic or custom tools. Allens has a steering committee to check that AI projects are aligned with the firm’s strategic direction.
Johnson warned that individuals may convince themselves that what they are doing with generative AI is safe, even though it isn’t, so they should be instructed that actions such as pasting sensitive information into Copilot are not allowed. Allens uses AI-generated activity summaries, but it will not look at the prompts employees give to Copilot, said Tanner.
AI in cyber security
There are some quick wins to be found among the uses for AI within the cyber security function, Lee said. These include security testing of in-house code, runtime controls and data masking. Strategic priorities should include data governance, DevSecOps, and incident response playbooks.
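Data masking, one of the quick wins Lee mentions, can be as simple as redacting obvious identifiers before text is handed to an external model. The patterns below are illustrative and far from exhaustive; production masking would normally come from a data classification or DLP tool.

```python
"""Sketch of basic data masking before text is sent to an external AI service.

The regex patterns are illustrative examples, not a complete masking policy.
"""
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[23478]\d{8}\b"),   # Australian numbers, as an example
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(mask("Contact jane.doe@example.com or call 0412345678 about card 4111 1111 1111 1111."))
```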
Shoard noted that while LLMs are not intelligent, there are roles for generative AI within security operations. They include the summarisation of alerts, interactive threat intelligence, producing a risk overview of the attack surface, and documenting mitigations. However, these assistants should be evaluated in terms of their ability to improve performance against existing security metrics, not against ad hoc productivity metrics.
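As a sketch of the alert-summarisation use case, the snippet below passes a raw alert to a hosted LLM and asks for a short analyst-facing summary. It assumes the OpenAI Python client and an illustrative model name and sample alert; as Shoard advises, any real deployment would be judged against existing detection and response metrics, and sensitive fields would be masked first.

```python
"""Sketch: summarise a raw security alert for an analyst with a hosted LLM.

Assumes the OpenAI Python client (openai>=1.0) with OPENAI_API_KEY set;
the model name and alert text are illustrative.
"""
from openai import OpenAI

client = OpenAI()

def summarise_alert(raw_alert: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarise the alert in three "
                        "sentences: what happened, affected assets, suggested next step."},
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content

# Example alert text, invented for the sketch
print(summarise_alert("2025-06-03T02:14Z host FIN-SRV-12 outbound 443 to rare domain, 2.3GB transferred"))
```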
On the topic of metrics, distinguished vice-president analyst Tom Scholtz suggested two tests that should be applied before adopting a new metric. The first is: ‘What resources will be needed to manage the measurement, put it in context and maintain the metric?’ It is possible that those resources would be better used in a different way.
The second is: ‘What will the executive audience do with the information?’ This includes consideration of the decisions that will be informed by the metric, how frequently it will be used, and whether it will be used on a regular or ad hoc basis.
Scholtz recommended that the focus should be on operational and tactical metrics, with board-level executives being regularly provided with between four and six input-focused metrics.
Non-technology threats and impact of quantum
During the closing keynote, Gartner vice-president analyst John Watts warned that among the non-technology threats, the biggest is user error. “We’re really bad at creating security controls. If you create a security control that somebody can’t use, [they] bypass it.”
In a Gartner survey, 74% of business technologists said they would bypass security guidance to get their jobs done. This, he said, means organisations need better controls that employees see as enablers for themselves and their organisations.
A related issue is the sudden growth in employee activism where employees’ personal values do not match the organisation’s values. This has been seen at Google and Amazon, he said, where employees engaged in disruptive sit-ins because they did not want the companies to be involved with the military. “It’s an interesting thing to think about,” he said.
On the technical side, he pointed to the potential for quantum computers to break commonly used cryptographic algorithms. This would mean supposedly secure communications could be decrypted, electronic signatures could no longer guarantee non-repudiation, and changes would be needed to identity and access management systems.
It is not clear when quantum computing will be a reality, but it is time to take an inventory of cryptographic use within the organisation, and then over the next few years replace any weak algorithms with quantum-safe methods. “2030 seems like a pretty good goal” to complete such projects, Watts suggested.
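A first step in the inventory Watts describes is simply finding where quantum-vulnerable public-key algorithms such as RSA and elliptic-curve cryptography are in use. The sketch below inspects PEM certificates in a directory using the Python cryptography library; the directory path is an example, and a full inventory would also cover code, protocols and third-party services.

```python
"""Sketch: inventory certificate public-key algorithms ahead of a
post-quantum migration. Requires the 'cryptography' library; the
certificate directory is an example path.
"""
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def inventory_certs(cert_dir: str = "./certs") -> None:
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            algo = f"RSA-{key.key_size} (quantum-vulnerable)"
        elif isinstance(key, ec.EllipticCurvePublicKey):
            algo = f"EC {key.curve.name} (quantum-vulnerable)"
        else:
            algo = type(key).__name__
        print(f"{pem.name}: subject={cert.subject.rfc4514_string()}, key={algo}")

inventory_certs()
```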
Read more about cyber security in APAC
- Australian organisations are set to spend A$6.2bn on security and risk management in 2025, a 14.4% jump from the previous year, driven by the rise of AI and a growing threat landscape.
- Singapore non-profit organisation HomeTeamNS suffered a ransomware attack that affected some servers containing employee and member data, prompting an investigation and enhanced security measures.
- Gil Shwed, Check Point’s co-founder, discusses the company’s focus on AI-driven security and his commitment to remaining an independent force in the cyber security market.
- Doug Fisher, Lenovo’s chief security officer, outlines the company’s approach to security and AI governance, and the importance of having a strong security culture to combat cyber threats amplified by the use of AI.