
Tenable opens playground for generative AI cyber tools

A set of generative AI cyber tools designed to help security researchers with reverse engineering, debugging and other areas of work has been made available for the community to experiment with

The security community is being invited to explore the potential of generative artificial intelligence (AI) to act as a useful tool in its research efforts, following Tenable’s release of a number of prototype tools, which are now available to check out on GitHub.

In an accompanying report titled How generative AI is changing security research, the firm’s research team shares how it has been experimenting with generative AI applications to create efficiencies in reverse engineering, code debugging, web application security and visibility into cloud-based tools.

Tenable, which describes itself as an “exposure management” company, said tools such as those based on OpenAI’s latest generative pre-trained transformer model, GPT-4, potentially now have abilities on par with those of a “mid-level security researcher”.

But, as Tenable director of security response and zero-day research Ray Carney explained in the report’s preamble, even OpenAI admits GPT-4 has similar limitations to earlier GPT models, particularly around reliability and biases that arise as a result of the model’s experiences, how it was trained, incomplete and imperfect training data, and cognitive biases among the model’s developers.

Added to this, he said, one must consider the cognitive biases of the people querying the model – asking the right questions becomes “the most critical factor” in how likely one is to receive a correct answer.

This matters to security researchers, said Carney, because their role is to offer timely and accurate data to decision-makers.

“In pursuit of this goal, the analyst must process and interpret collections of incomplete and ambiguous data in order to produce sound, well-founded analytical judgments,” he wrote. “Over the course of many years, and many failures, the analytical community has developed a set of tools commonly referred to as ‘structured analytic techniques’ that help to mitigate and minimise the risk of being wrong, and avoid ill-informed decisions.

“The warnings posed by OpenAI in its announcement of GPT-4 make a strong argument for the application of these techniques,” continued Carney. “In fact, it is only through the application of these types of techniques that we will ultimately produce a well-refined dataset to train future models in the cyber security domain.

“These types of techniques will also help researchers to ensure that they are tuning their prompts for those models – that they’re asking the right questions,” he said. “In the meantime, security researchers can continue to investigate how we leverage generative AI capabilities for more mundane tasks in order to free up time for researchers and analysts to invest their time on the more difficult questions that require their subject matter expertise to tease out critical context.”


The first tool the team came up with is called G-3PO. It builds on Ghidra, the NSA-developed reverse engineering framework that has become a perennial favourite among researchers since the agency released it publicly in 2019. Ghidra performs a number of crucial functions, including disassembling binaries into assembly language listings, reconstructing control flow graphs and decompiling assembly listings into something that at least resembles source code.

However, to use Ghidra, one still needs to be able to meticulously analyse the decompiled code by comparing it with the original assembly listing, adding comments, and assigning descriptive names to variables and functions.

Here, G-3PO picks up the baton, running the decompiled code through a large language model (LLM) to gain an explanation of what the function does along with suggestions for descriptive variable names.

Tenable said this functionality would allow an engineer to “gain a rapid, high-level understanding of the code’s functionality without having to first decipher every line”. The engineer can then zero in on the most concerning regions of code for deeper analysis.
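
To illustrate the general pattern, the sketch below shows roughly how decompiler output can be handed to an LLM. It is not Tenable’s published G-3PO code: it assumes the OpenAI Python client with an OPENAI_API_KEY set, a GPT-4 model, and a string of decompiled C copied from Ghidra; the function name is hypothetical.

# A minimal illustrative sketch, not Tenable's G-3PO plugin. Assumes the openai
# package is installed and OPENAI_API_KEY is set; the decompiled C comes from Ghidra.
from openai import OpenAI

client = OpenAI()

def explain_decompiled_function(decompiled_c: str) -> str:
    """Ask an LLM what a decompiled function does and for better variable names."""
    prompt = (
        "You are assisting a reverse engineer. Explain what the following "
        "decompiled C function does, then suggest descriptive names for the "
        "function and its variables:\n\n" + decompiled_c
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the explanation as stable as possible
    )
    return response.choices[0].message.content

# Example usage with a fragment copied from Ghidra's decompiler window
print(explain_decompiled_function("undefined4 FUN_00401000(int param_1) { ... }"))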

Two of the other tools, AI for Pwndbg and AI for GEF, are code debugging assistants that act as plugins for two popular GNU Debugger (GDB) extension frameworks, Pwndbg and GEF. These interactive tools receive various data points – such as registers, stack values, backtrace, assembly and decompiled code – that can help a researcher explore the debugging context. All the researcher has to do is ask it questions, such as “what is happening here?” or “does this function look vulnerable?”

Tenable said these tools would help solve the problem of navigating the steep learning curve associated with debugging, turning GDB into a more conversational interface where researchers can essentially discuss what is happening without having to decipher raw debugging data. The tools are by no means flawless, but they have shown promising results in reducing complexity and saving time, and Tenable hopes they could also be used as an educational resource.
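
The debugger side of the idea can be sketched in a similar, hedged way. The snippet below is not the AI for Pwndbg or AI for GEF code; it is an illustrative GDB Python command, assuming GDB’s built-in Python API and the OpenAI client, that gathers registers, a backtrace and nearby disassembly and forwards them to a model along with the researcher’s question.

# Illustrative only, not the AI for Pwndbg/GEF plugins. Run inside GDB's Python
# interpreter (for example via 'source ask_ai.py'); assumes the openai package.
import gdb
from openai import OpenAI

client = OpenAI()

class AskAI(gdb.Command):
    """ask-ai <question>: ask an LLM about the current debugging context."""

    def __init__(self):
        super().__init__("ask-ai", gdb.COMMAND_USER)

    def invoke(self, question, from_tty):
        # Collect the kinds of data points mentioned above: registers,
        # the backtrace and the instructions around the program counter.
        registers = gdb.execute("info registers", to_string=True)
        backtrace = gdb.execute("bt", to_string=True)
        disassembly = gdb.execute("x/16i $pc", to_string=True)
        prompt = (
            f"{question}\n\nRegisters:\n{registers}\n"
            f"Backtrace:\n{backtrace}\nDisassembly:\n{disassembly}"
        )
        response = client.chat.completions.create(
            model="gpt-4", messages=[{"role": "user", "content": prompt}]
        )
        print(response.choices[0].message.content)

AskAI()  # register the command: 'ask-ai does this function look vulnerable?'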

Other tools being made available include BurpGPT, a Burp Suite extension that lets researchers use GPT to analyse HTTP requests and responses, and EscalateGPT, an AI-powered tool that probes identity and access management (IAM) policies in cloud environments for misconfigurations – one of the most common and overlooked concerns among enterprises. EscalateGPT uses GPT to identify possible privilege escalation opportunities and suggest mitigations.
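
EscalateGPT’s approach can be approximated in the same spirit. The sketch below is not Tenable’s actual tool: it assumes boto3 with AWS credentials configured plus the OpenAI client, pulls customer-managed IAM policy documents, and asks a model to flag possible privilege escalation paths and mitigations.

# Illustrative only, not EscalateGPT itself. Assumes boto3 with AWS credentials
# configured and the openai package; reviews customer-managed policies only.
import json

import boto3
from openai import OpenAI

client = OpenAI()
iam = boto3.client("iam")

def review_iam_policies() -> str:
    """Ask an LLM to flag privilege escalation risks in IAM policy documents."""
    documents = {}
    for policy in iam.list_policies(Scope="Local")["Policies"]:
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )
        documents[policy["PolicyName"]] = version["PolicyVersion"]["Document"]

    prompt = (
        "Review these AWS IAM policy documents for misconfigurations that could "
        "allow privilege escalation, and suggest mitigations:\n"
        + json.dumps(documents, indent=2, default=str)
    )
    response = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

print(review_iam_policies())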

Silver lining

Tenable said that while threat actors can be expected to take advantage of generative AI themselves – and it is probably only a matter of time before the threat of reliable, AI-written malware is realised – there is a silver lining: defenders still have “ample opportunity” to harness generative AI, too.

Indeed, in some regards, such as log parsing, anomaly detection, triage and incident response, they could even get the upper hand.

“While we’re only at the start of our journey in implementing AI into tools for security research, it’s clear the unique capabilities these LLMs provide will continue to have profound impacts for both attackers and defenders,” wrote the research team.
