
How Proofpoint is helping to mitigate AI security threats

Proofpoint is offering monitoring tools to prevent the leakage of sensitive information into generative AI models, along with other capabilities to mitigate AI-mediated attacks

Artificial intelligence (AI) has the potential to augment human capabilities and transform industries, but the technology can also be used by cyber criminals to conduct targeted threat campaigns at scale.

These include the use of large language models (LLMs) to craft convincing phishing emails aimed at specific individuals, or to improve the code that goes into malicious software. Services such as DarkGPT and DarkBert, which emerged from the dark web, have also been offered for sale through Telegram.

Dan Rapp, global vice-president for AI and machine learning at Proofpoint, said AI should be a concern to any chief information security officer (CISO) who must balance the use of the technology among employees with the need to mitigate security risks.

That includes leveraging the technology without inadvertently exposing proprietary or sensitive information, which requires an understanding of the data governance practices of generative AI (GenAI) application providers, he said.

Rapp noted that in some cases, GenAI technology suppliers “have some pretty solid data governance”, while some newer startups have given little thought to data governance.

To that end, he said Proofpoint has been providing tools that enable CISOs to monitor GenAI applications and exert some control over what is allowed, ensuring there is no inadvertent loss of data, which can be as damaging as data lost through malicious activity.

“There are over 600 generative AI apps that are in common use, and we have a catalogue of those that we curate and update every week,” said Rapp. “We also have signals to determine who is accessing which applications, and we have the capability in our web security and data loss prevention products to ensure there’s no inadvertent data loss through different methods.”
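The article does not describe Proofpoint’s implementation, but the general pattern of such controls is straightforward to sketch. In the illustration below, the catalogue entries, domain names and regex classifiers are all hypothetical placeholders, not Proofpoint’s actual product logic:

```python
import re

# Hypothetical, simplified catalogue of GenAI app domains (Proofpoint's real
# catalogue reportedly tracks 600+ apps and is updated weekly).
GENAI_APP_CATALOGUE = {
    "chat.openai.com": {"allowed": True},
    "bard.google.com": {"allowed": True},
    "unvetted-genai-startup.example": {"allowed": False},
}

# Toy patterns standing in for real data loss prevention (DLP) classifiers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security number
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # possible payment card number
    re.compile(r"(?i)\bconfidential\b"),     # document classification marker
]

def inspect_upload(user: str, domain: str, payload: str) -> str:
    """Decide whether an outbound prompt/upload to a GenAI app should proceed."""
    app = GENAI_APP_CATALOGUE.get(domain)
    if app is None:
        return f"LOG: {user} accessed unknown GenAI app {domain}"
    if not app["allowed"]:
        return f"BLOCK: {domain} is not an approved GenAI app"
    if any(p.search(payload) for p in SENSITIVE_PATTERNS):
        return f"BLOCK: sensitive data detected in upload by {user}"
    return "ALLOW"

print(inspect_upload("alice", "chat.openai.com",
                     "Summarise our Q3 roadmap (confidential)"))
# BLOCK: sensitive data detected in upload by alice
```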


Proofpoint has also built threat summarisation capabilities powered by LLMs into its security dashboard, enabling security analysts to delve into specific security incidents in an easy-to-consume manner. “You can get a narrative explanation of what’s going on and it could tell you whether or not an incident is part of other incidents,” said Rapp.
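How this summarisation is built is not detailed in the article, but a minimal sketch of the general approach might look like the following, where call_llm is a hypothetical stand-in for any chat-completion endpoint:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; returns a canned reply so
    # the sketch runs end-to-end without credentials.
    return "Canned summary: credential-phishing email; likely part of a wider campaign."

def summarise_incident(incident: dict, related: list[dict]) -> str:
    """Ask the model for a narrative summary of an incident and whether it
    appears to be part of other incidents, as described above."""
    prompt = (
        "You are a security analyst assistant. Summarise this incident in "
        "plain language and say whether it appears related to the others.\n\n"
        f"Incident:\n{json.dumps(incident, indent=2)}\n\n"
        f"Other recent incidents:\n{json.dumps(related, indent=2)}"
    )
    return call_llm(prompt)

incident = {"id": 42, "type": "phishing", "target": "finance team"}
print(summarise_incident(incident, related=[{"id": 41, "type": "phishing"}]))
```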

Earlier in September 2023, Proofpoint added a new capability that lets security analysts interact with its security dashboards using natural language. “The dashboards then come back with the data and you’re able to, within that context, further refine your questions to get key insights,” he added.
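Again purely as an illustration, one common way to implement natural-language querying is to have an LLM translate the question into a structured filter that the dashboard then executes; the hard-coded translation below stands in for that step:

```python
INCIDENTS = [
    {"id": 1, "type": "phishing", "severity": "high", "user": "alice"},
    {"id": 2, "type": "malware",  "severity": "low",  "user": "bob"},
]

def nl_to_filter(question: str) -> dict:
    # In a real system an LLM would translate the question into this filter;
    # here we hard-code one plausible translation to keep the sketch runnable.
    # e.g. "Which high-severity incidents happened this week?" ->
    return {"severity": "high"}

def query_dashboard(question: str) -> list[dict]:
    criteria = nl_to_filter(question)
    return [i for i in INCIDENTS
            if all(i.get(k) == v for k, v in criteria.items())]

print(query_dashboard("Which high-severity incidents happened this week?"))
# [{'id': 1, 'type': 'phishing', 'severity': 'high', 'user': 'alice'}]
```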

Another concern that enterprises have with GenAI models is data poisoning, where threat actors try to inject malicious data into a model’s training data to drive specific responses or outputs. Rapp said that, as an organisation that builds AI models, Proofpoint is very cognisant of how it creates datasets to ensure it is practising AI responsibly and ethically.

“There’s an awareness that data poisoning is a thing,” he said. “Most vendors right now aren’t actively training models in a repetitive way for data poisoning to become an issue, but it’s something to be aware of as we move forward with this.”
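As a rough illustration of the kind of dataset hygiene Rapp alludes to, the sketch below filters training samples by provenance and de-duplicates them; the source names and checks are hypothetical, and real curation pipelines are considerably more involved:

```python
import hashlib

# Hypothetical allowlist of data sources with known provenance.
TRUSTED_SOURCES = {"internal-telemetry", "vetted-threat-feed"}

def vet_training_samples(samples: list[dict]) -> list[dict]:
    """Illustrative hygiene pass before training: keep only samples from
    trusted sources and drop duplicates that could bias the model."""
    seen: set[str] = set()
    clean = []
    for s in samples:
        if s.get("source") not in TRUSTED_SOURCES:
            continue  # unknown provenance is a potential poisoning vector
        digest = hashlib.sha256(s["text"].encode()).hexdigest()
        if digest in seen:
            continue  # repeated samples can skew the learned distribution
        seen.add(digest)
        clean.append(s)
    return clean

samples = [
    {"text": "benign email body", "label": "ham",  "source": "internal-telemetry"},
    {"text": "benign email body", "label": "spam", "source": "pastebin-scrape"},
]
print(len(vet_training_samples(samples)))  # 1 -- the untrusted sample is dropped
```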

With growing recognition that AI-powered security tools will be needed to fend off AI-mediated attacks, do security analysts need to pick up additional skills, perhaps to the extent of understanding how AI models work?

While Rapp does not believe AI will take away the jobs of security analysts, he said they will need to leverage LLMs to be more productive. “Otherwise, you will be at risk of being replaced by someone, whether that’s an attorney, software engineer or a security analyst,” said Rapp.
