
‘Shadow’ AI use becoming a driver of insider cyber risk

Off-the-books use of generative AI tools will inevitably lead to a costly, high-profile data breach for someone, but a little attention to appropriate data management policy can help mitigate the risk

The explosion in the use of generative AI tools based on large language models (LLMs) will almost inevitably lead to multiple major insider data breaches within the next 12 months, threat researchers at application, API and data security specialist Imperva are forecasting.

As LLM-powered chatbots – ChatGPT being the most prevalent and notable – become more powerful, many organisations have quite reasonably cracked down on what data can be shared with them, an issue recently explored by Computer Weekly.

However, according to Terry Ray, Imperva’s senior vice-president of data security go-to-market and field chief technology officer, an “overwhelming majority” of organisations still have no insider risk strategy in place, and so remain blind to the use of generative AI by employees who want a little help with tasks such as writing code or filling out forms.

“People don’t need to have malicious intent to cause a data breach,” said Ray. “Most of the time, they are just trying to be more efficient in doing their jobs. But if companies are blind to LLMs accessing their back-end code or sensitive data stores, it’s just a matter of time before it blows up in their faces.”

Insider threats are thought to be the underlying cause of almost 60% of data breaches, according to Imperva’s own data, yet many organisations still fail to prioritise them properly, partly because a significant proportion are simple cases of human error rather than malice. Indeed, a recent study by the firm found that 33% of organisations do not perceive insiders as a significant threat.

Addressing the issue

Ray said trying to restrict AI usage inside the organisation now was very much a case of shutting the stable door after the horse had bolted.

“Forbidding employees from using generative AI is futile,” said Ray. “We’ve seen this with so many other technologies – people are inevitably able to find their way around such restrictions and so prohibitions just create an endless game of whack-a-mole for security teams, without keeping the enterprise meaningfully safer.”

He suggested that instead of imposing bans or relying on employees to self-police their use of tools such as ChatGPT, security teams should focus on the data rather than the chatbot, and make sure they know the answers to four questions: Who is accessing the data? What are they accessing? How are they accessing it? And where are they located?
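
As a rough illustration of what answering those questions looks like in practice, here is a minimal Python sketch that summarises a database audit log by user, object, client application and source address. The CSV column names are hypothetical stand-ins for the example, not any particular product’s export format.

import csv
from collections import Counter

# Summarise an audit log so "who, what, how and where" have concrete answers.
# The column names (user, table, client_app, source_ip) are illustrative.
def summarise_access(log_path):
    who, what, how, where = Counter(), Counter(), Counter(), Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            who[row["user"]] += 1          # who is accessing the data
            what[row["table"]] += 1        # what they are accessing
            how[row["client_app"]] += 1    # how they are accessing it
            where[row["source_ip"]] += 1   # where they are located
    for label, counter in (("Who", who), ("What", what), ("How", how), ("Where", where)):
        print(label, counter.most_common(5))

summarise_access("db_audit.csv")  # hypothetical export of audit events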

There are three immediate steps that organisations can direct their security budget towards right now that may go some way to alleviating the risk and help ensure the business does not fall victim to a high-profile, AI-induced insider breach.

First, it is important that organisations take steps to discover and retain visibility over every data repository in their environment, which will help make sure important data stored in shadow databases isn’t overlooked or exploited – accidentally or on purpose.
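
One crude but concrete way shadow databases surface is a network sweep for hosts answering on well-known database ports. The Python sketch below assumes an illustrative subnet and port list; a real discovery programme would also have to cover cloud accounts, SaaS stores and file shares.

import socket
from ipaddress import ip_network

# Common database ports to probe; an illustrative, not exhaustive, list.
DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server", 27017: "MongoDB"}

def sweep(subnet, timeout=0.5):
    found = []
    for host in ip_network(subnet).hosts():
        for port, engine in DB_PORTS.items():
            try:
                # A completed TCP handshake suggests a listening database
                with socket.create_connection((str(host), port), timeout=timeout):
                    found.append((str(host), engine))
            except OSError:
                pass  # port closed or filtered, or host unreachable
    return found

for host, engine in sweep("10.0.0.0/28"):  # hypothetical internal subnet
    print(f"{engine} listening at {host}")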

Second, all of these data assets should be inventoried and classified according to type, sensitivity and value to the business. In this way, an organisation can better understand the value of its data, whether or not it is at risk of compromise, and what additional cyber security controls might reasonably be put in place to mitigate the risks to it.
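
As a flavour of what classification by type and sensitivity can look like, the sketch below runs sampled values through simple regular-expression detectors for common personal data and maps any hits to a sensitivity tier. The patterns and tiers are assumptions for the example; production classifiers rely on far richer detection and validation.

import re

# Illustrative detectors for common personal data; real classifiers go deeper.
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def classify(samples):
    # Tag a column's sampled values, then map hits to a sensitivity tier
    hits = {name for s in samples for name, rx in DETECTORS.items() if rx.search(s)}
    if "card_number" in hits:
        return "restricted"
    return "confidential" if hits else "internal"

print(classify(["alice@example.com", "order ref 881"]))  # -> confidential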

Finally, organisations should look to spend on improved data monitoring and analytics tools to detect issues such as anomalous behaviour from employees, data being moved around or exfiltrated, or sudden instances of privilege escalation or the creation of new, suspicious accounts – all harbingers of a serious incident.
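
To give a simple flavour of that kind of behavioural monitoring, the sketch below flags a user whose daily volume of records read sits far above their own recent baseline, using a z-score threshold. It is an illustrative heuristic only, not a stand-in for a full monitoring and analytics platform.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    # Flag today's read volume if it sits more than `threshold` standard
    # deviations above the user's own historical baseline
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

baseline = [120, 95, 130, 110, 105, 98, 140]  # rows read per day, illustrative
print(is_anomalous(baseline, 5000))  # True: a bulk read worth investigating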
