
Proliferation of on-premise GenAI platforms is widening security risks

Research finds increased adoption of unsanctioned generative artificial intelligence platforms is magnifying risk and causing a headache for security teams

The three months to the end of May this year saw a 50% spike in the use of generative artificial intelligence (GenAI) platforms among enterprise end users, according to a report. While security teams work to facilitate the safe adoption of software-as-a-service (SaaS) AI frameworks such as Azure OpenAI, Amazon Bedrock and Google Vertex AI, unsanctioned on-premise shadow AI now accounts for half of AI platform adoption in the enterprise, compounding security risks.

The study, compiled by data protection and threat prevention platform supplier Netskope, examined the growing shift among users towards on-premise GenAI platforms, which they are mostly using to build their own AI agents and applications.

These platforms, which include tools such as Ollama, LM Studio and Ramalama, are now the fastest-growing category of shadow AI, due to their relative ease of use and flexibility, said Netskope. But, in using them to expedite their projects, employees are granting the platforms access to enterprise data stores and leaving the doors wide open to data leakage or outright theft.

“The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using GenAI platforms and where they are building and deploying them,” said Ray Canzanese, director of Netskope Threat Labs.

“Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP [data loss prevention] policies to incorporate real-time user coaching elements.”

Probably the most popular way to use GenAI locally is to deploy a large language model (LLM) interface, which enables interaction with various models from the same “store front”.

Ollama is the most popular of these frameworks by some margin. However, unlike the most widely used SaaS options, it does not include inbuilt authentication, which means users must go out of their way to deploy it behind a reverse proxy or a private access solution that is appropriately secured with fit-for-purpose authentication. This is not an easy ask for the average user.
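To illustrate the point, a minimal sketch of what "deploying it behind a reverse proxy" might look like in practice – this nginx configuration is an illustrative assumption, not a vendor-endorsed setup; the hostname and certificate paths are hypothetical, though 11434 is Ollama's default local port:

```nginx
# Hypothetical sketch: put TLS and HTTP basic authentication in front of
# an Ollama instance that is otherwise listening unauthenticated on
# localhost. Hostname and file paths are placeholders.
server {
    listen 443 ssl;
    server_name ollama.internal.example.com;    # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/ollama.crt;
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    auth_basic           "Ollama access";
    auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:11434;      # Ollama's default port
    }
}
```

Even a basic setup like this requires provisioning certificates, managing a password file and keeping the proxy patched – which is precisely the kind of overhead the average user is unlikely to take on unprompted.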

Furthermore, while OpenAI, Bedrock, Vertex et al provide guardrails against model abuse, Ollama users must take steps themselves to prevent misuse.

Netskope said on-premise GenAI does have some benefits – for example, it can help organisations leverage pre-existing investment in GPU resources, or build tools that interact more closely with their other on-premise systems and datasets. However, these benefits may well be outweighed by the fact that organisations bear sole responsibility for the security of their GenAI infrastructure, in a way they would not with a SaaS-based option.

Netskope’s analysts are now tracking approximately 1,550 distinct GenAI SaaS applications. Its customers can identify unapproved apps and personal logins by running focused searches within its platform for activity classed as “generative AI”. Another way to track usage is to monitor who is accessing AI marketplaces such as Hugging Face.

Besides identifying the use of such tools, IT and security leaders should consider formulating and enforcing policies that restrict employee access to approved services, blocking unapproved ones, implementing DLP to account for data sharing in GenAI tools, and adopting real-time user coaching to nudge users towards approved tools and sensible practice.
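The monitoring side of such a policy can be sketched in a few lines. The following is a simplified, hypothetical example – the domain list and the space-separated log format are assumptions for illustration, not Netskope's actual detection logic:

```python
# Hypothetical sketch: flag proxy-log entries that reference known GenAI
# service endpoints. The domain list and log format are illustrative
# assumptions; a real deployment would use a curated, regularly updated
# catalogue of GenAI applications.
GENAI_DOMAINS = {
    "api.openai.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
    "huggingface.co",
}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for requests to GenAI services.

    Assumes a simple space-separated log format: <user> <method> <url>.
    """
    hits = []
    for line in log_lines:
        try:
            user, _method, url = line.split()
        except ValueError:
            continue  # skip malformed lines
        # Extract the hostname from the URL
        host = url.split("/")[2] if "://" in url else url.split("/")[0]
        if host in GENAI_DOMAINS:
            hits.append((user, host))
    return hits

sample = [
    "alice GET https://huggingface.co/models/foo",
    "bob GET https://example.com/index.html",
]
print(flag_genai_requests(sample))  # → [('alice', 'huggingface.co')]
```

Output like this could feed the real-time coaching step: rather than silently blocking, the flagged user is redirected towards an approved alternative.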

Adopting continuous monitoring of GenAI use and conducting an inventory of local GenAI infrastructure against frameworks provided by the likes of NIST, OWASP and Mitre is also advisable.

“Agentic shadow AI is like a person coming into your office every day, handling data, taking actions on systems, and all while not being background-checked or having security monitoring in place,” warned the report’s authors.
