
RSAC rewind: Agentic AI, governance gaps and insider threats
AI was naturally a major theme of this year's RSAC conference, but few of us anticipated how completely it would come to dominate every conversation.
This year's RSAC Conference drew a record crowd of nearly 44,000 attendees, 730 speakers, 650 exhibitors and 400 media members. As one of those who attended and spoke with countless organizations, partners and CISO peers, I can safely say that practically every single person there had something to say about the use – or abuse – of artificial intelligence (AI) in cyber security.
We all expected AI to dominate the discussion. But we didn't anticipate how deeply it would embed itself into every company update and overview, strategy session, customer conversation, and even hallway and happy hour chat. As is often the case, the line between reality and hype can quickly blur. In an attempt to provide some clarity at this particular moment in time, here is a breakdown of three key topics from the conference:
Full-blown AI adoption in cyber security, whether we're ready or not
We have unofficially transitioned from a proof-of-concept phase to aggressive implementation. In fact, 90% of organizations are either currently adopting generative AI for security or planning to do so, according to research from the Cloud Security Alliance (CSA). The vast majority of IT and security professionals feel these technologies can improve their skill sets and support their roles, while freeing them up for more rewarding, valuable assignments.
On the flip side, cyber criminals are also making abundant use of this ever-evolving innovation – to the point where AI-enhanced malware ranks as a top risk for enterprise leaders, according to Gartner. This sets up a modern-day Spy vs. Spy scenario in which the good guys and bad guys battle it out in a technology arms race, with the stakes climbing ever higher and the prospect of unleashed, harmful AI growing more likely.
The term "agentic AI", for example, loomed large in the minds of many conference attendees. Simply defined, it refers to AI systems that act autonomously to pursue goals and solve problems without constant human guidance or oversight. It is difficult, however, to determine whether the concept signals genuine innovation or just repackaged marketing speak.
For now, security leaders should focus on the users, asking to what extent they are engaging in shadow AI and how they are deploying AI applications. In our own research, we've found that most generative AI (GenAI) usage in the enterprise (72%) is currently attributable to shadow IT.
We know that AI, left alone, will swiftly expand into any and all forms of usage. It is already starting to resemble the rapidly expanding universe of cloud adoption of years past. Reaching this level of AI ubiquity demands deeper questions – and answers – about integration, accountability and governance. Which brings us to our next conference topic.
Gaps in enterprise AI governance
Too often, AI governance committees fixate narrowly on privacy and security concerns, rather than broader considerations such as legal liability, licensing exposure, cost, rationalisation of overlapping technologies and appropriate use. As a result, organizations are approving AI tools without conducting full risk evaluations, including assessments of intellectual property and third-party risks such as code contributions.
For now, leaders seem to prioritise safe operation using local models, outright blocks, incident response and detection, along with other short-term use cases. But they must shift from this approach to broader, enterprise-focused AI planning that is guided by strategic, organisational goals, not merely functional execution.
Read more about AI's impact on cyber
- A strategic governmental steer on AI, putting guidance above rigid legislation, offers the adaptability needed to innovate responsibly, contends Kiteworks executive John Lynch.
- CIISec’s annual report on the security profession finds evidence of growing concern that artificial intelligence will ultimately prove more useful to threat actors than defenders.
- Experts warn of AI's dual role in both empowering and challenging cyber defences, calling for intelligence sharing and a balance between AI-driven innovation and existing security practices.
Proliferating insider threats
These threats, of course, are older than cyber security itself. Think of the embezzling finance employee in the 1950s, or the factory worker who surreptitiously slipped company property into his pocket. There was plenty of chatter onsite about the widespread scam in which top tech firms in the US have been tricked into hiring remote IT workers who turned out to be North Korean cyber operatives.
This speaks to the need for closer alignment among HR, legal and security teams to detect forged employment documents and eliminate hiring platform vulnerabilities. Unfortunately, there aren't enough ongoing conversations about these emerging threats, with HR, legal and security teams more likely to collaborate on compliance requirements and reactive, after-the-fact incident investigations.
Throughout its existence, the RSAC Conference has reflected the present state of cyber security, with impactful trends and challenges conveyed amid the cacophony of booths, presentations, demonstrations and conversations. This most recent conference has proved no exception, especially when it comes to new patterns in AI and insider threats.
That said, a consistent thread has emerged over the years: the need for proactive accountability, guidance and governance.
With these in place, security leaders won't entirely mitigate the damaging outcomes of AI or ill-willed insiders, but they will take major steps towards containing them. Hopefully, in a few months when we arrive at Black Hat, we'll be talking about how organizations are now doing so more consistently and successfully.
James Robinson is chief information security officer at secure access service edge (SASE) and zero-trust specialist Netskope.