
Gartner’s view on AI security: A Computer Weekly Downtime Upload podcast

We speak to Gartner's Nader Henein about why access control should be built into enterprise AI
Over the last two years, Nader Henein and other analysts at Gartner have been in discussion with CIOs, CISOs, business leaders, and people in sales and marketing who want to use AI. A recurring concern in those conversations is how security affects the use of AI.
Henein says: “Questions about what AI is and how it works are, in many instances, not really AI security issues. They're more problems associated with AI, or arise because the AI function surfaces an existing problem.” However, in half of these conversations, he says: “The problem is not an AI security problem. It's an access control problem: you give an AI system access to data that it shouldn't have access to, and it then leaks information to people who shouldn't have access to this information.”
For Henein, this is a major problem: AI systems can easily leak data if they are not configured correctly. Looking at internal data leakage, he says: “A lot of people call this oversharing. It’s when you ask the model a question and it gives an internal user information that it shouldn't provide.”
Then there’s external data leakage, where a user engages with an AI model and the information shared shows up elsewhere. Henein regards both forms of leakage as problems relating to the data the model is given in the first place, and to the guardrails that need to be in place to prevent data loss.
Given that AI systems are probabilistic and draw on different sets of training data to derive a plausible answer to a user query, Henein sees a need for access control within the AI engine itself, so that it understands which datasets a user is authorised to see.
Unless such access control is built into the AI engine, a model trained on all the data it can reach within the business runs a very real risk of inadvertently revealing information to people who should not have access to it. The alternative is to run different models for different groups of users, which, as Henein notes, is both incredibly expensive and incredibly complex. But he adds: “This may also be the way forward for a lot of cases.”
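
In practice, this kind of in-engine access control is often implemented as permission-aware retrieval in a retrieval-augmented generation (RAG) pipeline: documents the user is not authorised to see are filtered out before they ever reach the model's context window. The Python sketch below is illustrative only; the Document type, the ACL tags and the keyword match standing in for a vector search are all assumptions, not any specific vendor's implementation.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # Assumption: documents carry ACL tags inherited from the source
    # system (e.g. the groups allowed to read them in the file share).
    allowed_groups: set = field(default_factory=set)

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return only documents the requesting user is authorised to see.

    Applying this filter before documents enter the model's context
    window means the model cannot overshare what the user was never
    allowed to read in the first place.
    """
    # Keyword match here is a stand-in for a real vector search.
    candidates = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in candidates if d.allowed_groups & user_groups]

corpus = [
    Document("hr-001", "Salary bands for 2024 engineering roles", {"hr"}),
    Document("kb-042", "How to reset your VPN password", {"all-staff"}),
]

# A sales user asking about salaries gets nothing back: the HR document
# is filtered out before the model sees it, however the prompt is worded.
print(retrieve_for_user("salary", {"sales", "all-staff"}, corpus))  # []
print(retrieve_for_user("salary", {"hr"}, corpus))                  # [hr-001]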
Henein believes that small language models potentially offer a path forward, providing models that can be tuned to the requirements of individual users.
Overall, he recommends that business and IT leaders think of an AI model as a newly hired employee. “Do you give them access to everything? No, you don't,” he says. “You trust them gradually over time as they demonstrate capacity to do tasks. But we're not taking that approach with large language models, because AI providers are telling everyone to give the AI access to all of the data and all of the IT systems. It literally runs in root or ‘God Mode’, and it will give you value for money!”
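
Henein's analogy maps naturally onto a least-privilege design for AI agents: start with no access at all and grant capabilities one at a time as trust builds, rather than running in ‘God Mode’. The following sketch is purely illustrative; the AgentPermissions class and the tool names are hypothetical, not a real framework's API.

class AgentPermissions:
    """Least-privilege gate for an AI agent's tool calls.

    The agent starts with no access at all (the opposite of running in
    root or 'God Mode'), and capabilities are granted one at a time as
    the agent demonstrates it handles them safely.
    """

    def __init__(self):
        self._granted: set[str] = set()

    def grant(self, tool: str) -> None:
        self._granted.add(tool)

    def call(self, tool: str, *args):
        if tool not in self._granted:
            raise PermissionError(f"agent is not authorised to use '{tool}'")
        # Dispatch to the real tool would happen here; omitted in this sketch.
        return f"{tool} called with {args}"

perms = AgentPermissions()
perms.grant("summarise_meeting")  # a low-risk capability, granted first

print(perms.call("summarise_meeting", "weekly-standup.txt"))
perms.call("read_hr_records", "salaries")  # raises PermissionError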
But there is a lot of industry hype around AI, and this is putting CIOs and CISOs under substantial pressure to adopt it. “They have to demonstrate progress. But you don't have to jump in with both legs,” Henein says. “You want to take purposeful steps and invest in the things that have some measure of ROI [return on investment], because at some point your CFO is going to show up and ask why we are paying hundreds of thousands of dollars to be able to summarise meetings. Is that really value for money?”