LLM series - Bugcrowd: A vulnerability rating taxonomy for LLMs

This is a conversation-cum-analysis with Casey Ellis, founder and chief strategy officer at Bugcrowd, for the Computer Weekly Developer Network (CWDN).

The Bugcrowd Security Knowledge Platform™ orchestrates data, technology, human intelligence and remediation. Its Vulnerability Rating Taxonomy (VRT) for Large Language Models (LLMs) defines how vulnerabilities in LLMs are classified, reported and prioritised.

Recent updates expand the VRT as an ongoing open source effort to standardise how hackers report suspected vulnerabilities across the industry.
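For concreteness, the open source VRT is published as a machine-readable taxonomy on GitHub (github.com/bugcrowd/vulnerability-rating-taxonomy). The Python sketch below illustrates the general shape of such a taxonomy and how a triager might map a hacker submission onto a baseline priority; the entry ids, names and priorities here are illustrative assumptions, not verbatim VRT content.

```python
# A minimal sketch of a VRT-style taxonomy in Python.
# Entry ids, names and priorities are illustrative assumptions,
# not verbatim VRT content.

# Each node has an id and a human-readable name; leaf entries carry a
# baseline priority from 1 (P1, critical) to 5 (P5, informational).
TAXONOMY_SKETCH = {
    "id": "ai_application_security",
    "name": "AI Application Security",
    "children": [
        {
            "id": "llm_security",
            "name": "Large Language Model (LLM) Security",
            "children": [
                {"id": "prompt_injection",
                 "name": "Prompt Injection", "priority": 1},
                {"id": "llm_output_handling",
                 "name": "LLM Output Handling", "priority": 2},
                {"id": "training_data_poisoning",
                 "name": "Training Data Poisoning", "priority": 3},
            ],
        },
    ],
}


def baseline_priority(node: dict, target_id: str) -> int | None:
    """Depth-first search for an entry id; return its baseline priority."""
    if node.get("id") == target_id:
        return node.get("priority")
    for child in node.get("children", []):
        found = baseline_priority(child, target_id)
        if found is not None:
            return found
    return None


# A triager classifying an incoming hacker submission:
print(baseline_priority(TAXONOMY_SKETCH, "prompt_injection"))  # -> 1 (P1)
```

Because leaf entries carry a baseline priority, hackers and triagers can start a submission from the same severity expectation before any programme-specific adjustment.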

CWDN: How might this improve vulnerability reporting and general awareness of vulnerabilities in LLMs?

Ellis: We’re at a point where it’s broadly agreed that, alongside its utility, the power of AI introduces serious considerations around security and safety. The problem is that the potential scope is so vast that it’s difficult to know where to start attacking it. The VRT is designed to simplify conversations around scope and impact, help get people on the same page and make security conversations easier and more accessible. This last part, accessibility, definitely benefits general awareness. AI is here to stay and I’d like to see everyone in security have at least some taxonomical understanding of AI security; this release is a step towards that.

CWDN: Do experts see this as an effective approach to improving trust in LLMs? Is this a positive move?

Ellis: On its own it’s simply a taxonomy, but put in the hands of the white hat hacker community it will increase ongoing scrutiny of these models, improving security and transparency at the same time and thereby building trust.

CWDN: What are the advantages of a crowd-sourced, standardised approach to vulnerability reporting?

Bugcrowd founder Ellis: Let’s increase the awareness and accessibility of the VRT.

Ellis: Many eyes, given the right incentives and frameworks, make all bugs shallow. When you consider the crowd of adversaries and threat actors actively looking to exploit flaws and weaknesses in computer systems, engaging the help of an army of allies simply makes sense. On top of this, AI itself operates in ways that could be considered autonomous (even though, strictly speaking, they aren’t), so the broader the pool of defenders acting in the interest of public safety and security, the better.

CWDN: Does Bugcrowd see an open source approach as integral to building trust in rarely understood AI systems?

Ellis: 100%. Firstly, by demonstrably and transparently improving the security of those systems and having a truly positive impact on risk. Secondly, by socialising and demystifying some inherently complex, difficult-to-understand and in many ways “magical” technology for the average Internet user.

Bugcrowd has seen this phenomenon in many other verticals, from connected cars to medical devices to voting equipment. The average Internet-using layperson might never fully understand the technology that powers these systems, but they can easily grasp the concept of “Neighborhood Watch for the Internet”, which gives them a greater sense of confidence and trust.
