Insider risk management trends and tips for IT leaders
The acceleration of AI has given a fresh dimension to insider threats and increased the security steps customers need to consider
Insider threats have been a hot topic in the IT channel for years, but growing adoption of AI and agentic AI—as well as recent advances in the capabilities of AI-powered tools—have given this conversation new relevance by raising the stakes and complexity of insider risk management.
AI advancements and widespread adoption are now forcing the industry to fundamentally rethink the way we approach insider risk. As Gartner puts it: “The velocity of AI-enabled threats demands a security strategy pivot from reactive defence to preemptive threat neutralization”, a seismic shift that many have yet to reckon with fully.
In this piece, I’ll draw on new research and my own experience working with end-users, partners, and other organisations to identify the top trends in insider risk management today and provide some practical solutions to these new problems.
1) Accidental Exposures are a Highly Pervasive Threat
According to AvePoint’s research, 75% of organisations that use AI experienced a security breach in the last year—an eyebrow-raising figure that’s echoed in research from other sources.
When we hear “security breach,” many of us immediately think of malicious actions from bad actors, but the data increasingly shows us that, today, the most pervasive insider threats are accidental and incidental in nature, rather than malicious.
In AvePoint’s study, for example, breaches overwhelmingly resulted from poorly governed and uncompartmentalised data, which AI tools were able to access and share without authorisation. Other research, both from vendors and from third-party consultancies, supports this finding and points to a new epidemic of accidental (rather than malicious) data breaches related to AI tools.
Without proper guardrails and governance, AI tools—agentic tools in particular—are able to access and share confidential data, and the latest data shows us that these guardrails are largely not in place. This lack of governance is driving a new wave of highly pervasive insider risk, not from human sources, but from AI systems themselves.
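What a basic guardrail looks like in practice can be sketched in a few lines of code. The sketch below is purely illustrative: the labels, clearance levels, and the `can_ai_access` function are hypothetical assumptions, not any vendor’s API. The key idea is deny-by-default, so ungoverned or unlabelled data never enters an AI tool’s scope.

```python
# Hypothetical deny-by-default guardrail for AI data access.
# All names (SENSITIVITY_RANK, Document, can_ai_access) are illustrative.

from dataclasses import dataclass

# Governance labels, ordered from least to most sensitive.
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class Document:
    doc_id: str
    sensitivity: str  # label applied by the data governance process

def can_ai_access(doc: Document, tool_clearance: str) -> bool:
    """Allow an AI tool to read a document only if the document's label
    is at or below the clearance granted to that tool. Unlabelled or
    unknown labels are denied by default."""
    doc_rank = SENSITIVITY_RANK.get(doc.sensitivity)
    tool_rank = SENSITIVITY_RANK.get(tool_clearance)
    if doc_rank is None or tool_rank is None:
        return False  # deny by default: ungoverned data stays out of scope
    return doc_rank <= tool_rank

# Example: an assistant granted only "internal" clearance.
docs = [Document("handbook", "internal"),
        Document("payroll", "restricted"),
        Document("legacy-share", "unlabelled")]
allowed = [d.doc_id for d in docs if can_ai_access(d, "internal")]
print(allowed)  # ['handbook'] — only the governed, in-scope document passes
```

The deny-by-default branch is the part most often missing in real deployments: it is precisely the unlabelled legacy data that drives the accidental exposures described above.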
Bad actors are (obviously) still a threat, particularly from an insider perspective. But in the AI age, the risk of accidental exposure is even more pervasive, and largely undermanaged.
The good news for vendors, partners, and end-customers is that the industry is moving quickly to address these threats, but awareness and action are still in the early stages. This presents a huge opportunity for industry players that can cater to this demand.
2) Insider Risk and Security Posture Management are Now Productivity Issues
Before AI, insider risk and data security posture management were primarily thought of as security concerns, oriented around limiting liability and fallout in the event of a breach. But the advent and proliferation of AI systems require us to take a more proactive approach, which means that these issues are now deeply intertwined with questions about employee productivity.
This goes back to the problem of data governance in the AI age. By implementing strong data governance across their IT environments, organisations limit risk by limiting the data that AI systems can access and share.
This is, of course, a security and data protection measure, but it also has serious ramifications for productivity. By making sure that AI tools only have access to current and relevant data, for example, organisations not only enhance their security posture and limit the risk of accidental breaches; they also improve the output of their AI tools, significantly enhancing employee and agent productivity.
This is an important step to optimise the performance of AI assistants and related interfaces, but it’s particularly important for AI agents, since agents lack a human in the loop who can manually filter irrelevant data out of their actions and outputs.
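The “current and relevant data only” principle above can be expressed as a simple pre-filter on whatever records an agent retrieves. This is a minimal sketch under assumed conventions: the record fields, the `scope_context` function, and the cutoff thresholds are all hypothetical, not a real product interface.

```python
# Illustrative sketch: limit an agent's context to current, relevant records
# before they ever reach the model. Field names and thresholds are assumptions.

from datetime import date, timedelta

def scope_context(records, today, max_age_days=365, min_relevance=0.5):
    """Drop stale or marginally relevant records so the agent cannot act
    on them. Each record is a dict with 'updated', 'relevance', 'text'."""
    cutoff = today - timedelta(days=max_age_days)
    return [r["text"] for r in records
            if r["updated"] >= cutoff and r["relevance"] >= min_relevance]

records = [
    {"text": "2025 expense policy", "updated": date(2025, 3, 1), "relevance": 0.9},
    {"text": "2019 expense policy", "updated": date(2019, 6, 1), "relevance": 0.9},
    {"text": "cafeteria menu",      "updated": date(2025, 4, 2), "relevance": 0.1},
]
print(scope_context(records, today=date(2025, 6, 1)))
# ['2025 expense policy'] — the outdated policy and the irrelevant record are excluded
```

Filtering out the 2019 policy here is both a security measure (less data exposed) and a productivity measure (the agent cannot base its actions on a superseded document), which is exactly why posture management and productivity are now intertwined.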
Now more than ever, insider risk management is a productivity issue, not just a security concern. This is an important part of the broader reconceptualisation of insider risk for the AI era. As we become more proactive, the relevance of our data security and protection actions becomes much broader and more consequential.
3) Why Zero Trust Won’t Solve Everything
A popular response to the growth of insider threats is to implement zero-trust architecture across IT environments.
Zero trust is, of course, a powerful tool to counter insider risk. Its core tenets, such as multi-factor authentication and strong compartmentalisation, do in fact limit the ability of bad actors to access confidential information, as almost everyone recognises.
At the same time, zero trust is extremely difficult to implement at scale. Even highly organised, regimented, and regulated environments (like government IT apparatuses) struggle to implement and maintain zero trust structures seamlessly. Moreover, zero trust doesn’t completely counter the risks of accidental overexposure from AI tools and agentic software, which are often not assimilated into zero trust structures. Even when you have zero trust, you still need strong data governance for AI systems.
Rather than thinking of zero trust as a panacea that will solve every insider threat concern, leaders should think of it as a complement to strong data governance measures. Without strong data governance, zero trust will be less powerful and less effective. In the age of AI, it’s just one part of a broader insider risk management strategy.
The Path Forward
As AI continues to reshape the contours of insider risk, vigilance and adaptability will be essential. The organisations that succeed will be those that treat security and productivity as complementary goals, weaving data governance into every layer of their IT environments. The opportunity for the channel is tremendous, but there’s still much more work to be done.
