
ChatGPT’s phishing ‘problem’ may not be overstated

Some data now suggests that threat actors are indeed using ChatGPT to craft malicious phishing emails, but the industry is doing its best to get out in front of this trend, according to the threat intelligence team at Egress

Earlier in the year, 72% of cyber security leaders acknowledged they were worried cyber criminals would use AI to enhance their phishing emails and campaigns. Chatbots, in particular, can be used by cyber criminals to quickly and easily create well-worded, grammatically correct phishing emails.

Soon after the launch of ChatGPT, one of our analysts put it to the test, asking it to write an email that requested someone’s personal details.

The resulting email was well-written, including a salutation and sign-off; had good grammar; and was typo-free. ChatGPT also humanised the subject matter, with phrases such as ‘It’s been so long since we saw each other!’, ‘the good old days’, and ‘I know it’s been a while’. You could easily believe this email was written by a person.

A screenshot of a ChatGPT-composed email in which the AI system convincingly masquerades as an old friend from university getting back in touch

The next prompt was more closely aligned with a phishing email, asking ChatGPT to create a promotional email offering the chance to win two sought-after prizes. The prompt was then refined to ask ChatGPT to insert a link to a website where people could enter. You can see how this could easily form the basis of a phishing campaign.

A screenshot of a ChatGPT-composed phishing email in which the AI poses as online retailer Amazon to tell the recipient they could win a gaming console
A refined version of the previous screenshot, with the addition of a phishing URL that, if clicked, will take the recipient to a malicious website where cyber criminals will steal their data

While chatbots like ChatGPT are programmed to warn users not to use their outputs for illegal activities, they can still be used to generate exactly this kind of content.

Unsurprisingly, then, headlines this year have included claims about surges in text-based phishing attacks. The reality? Currently, no one can say for sure whether any given attack was written by a chatbot or a person. While progress has been made on tools to detect AI-generated content, there is as yet no foolproof mechanism. OpenAI’s own AI text classifier, for example, needs a minimum amount of text – at least 1,000 characters – to run, so a short phishing email probably won’t be analysed at all. Nor is the tool 100% accurate.

The Egress Threat Intelligence team has also observed this rise – but it started before the launch of ChatGPT, with the power of social engineering resulting in larger paydays for cyber criminals. To corroborate this, we only have to look at the increase in business email compromise (BEC) attacks, which are predominantly text-based, relying on changes to processes (such as new bank details) and fraudulent invoices rather than the traditional payloads of malware and phishing websites. Statistics from the latest Verizon Data Breach Investigations Report show that BEC attacks have doubled since 2017.

Early indications suggest that generative AI is accelerating this trend, but for now we can only infer that from correlations – for example, a spike in text-based attacks in 2023 coinciding with the leap forward in generative AI that has taken place since November 2022.

So, yes, AI is likely being used by cyber criminals to create phishing emails and campaigns. However, the cyber security industry has been evolving its technology to detect and prevent these attacks, because they have been increasing in recent years anyway. And if the right technology is in place to detect an attack, does it ultimately matter whether the email was written by a human or a bot?

Integrated cloud email security (ICES) solutions are the latest evolution of anti-phishing detection, using AI models to identify advanced phishing attacks. Natural language processing (NLP) and natural language understanding (NLU), for example, can be used to analyse email messages for signs of social engineering. Similarly, machine learning can be used for anomaly detection, enabling intelligent anti-phishing solutions to catch heterogeneous phishing attacks, where small changes are made across otherwise similar phishing emails to bypass traditional email security tools.
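To make the principle concrete – and this is a minimal illustrative sketch, not any vendor’s actual implementation – here is how a simple text classifier can be trained to flag social engineering cues in message bodies. The toy dataset, the scikit-learn model and the example messages are all assumptions for illustration; commercial ICES solutions use far larger datasets and far more sophisticated NLP/NLU models.

```python
# Minimal, illustrative sketch of NLP-based phishing detection.
# The tiny training set and model choice are hypothetical; real ICES
# products use far larger datasets and much richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples: 1 = social engineering cues, 0 = benign
emails = [
    "Urgent: please update our bank details before the invoice is paid",
    "Your account will be suspended unless you verify your password now",
    "Attached is the agenda for Thursday's project meeting",
    "Thanks for the feedback, I'll send the revised draft tomorrow",
]
labels = [1, 1, 0, 0]

# Word/bigram features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Please action this urgent payment to our new account details"
score = model.predict_proba([incoming])[0][1]  # probability of class 1
print(f"Social engineering likelihood: {score:.2f}")
```

The same idea extends to anomaly detection: instead of scoring the language itself, the model learns what ‘normal’ traffic looks like for an organisation and flags deviations from it.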


A word of warning when it comes to AI-based cyber security solutions: they are not all created equal. Some models can be exploited by cyber criminals so that phishing emails ultimately bypass detection. In flood attacks, cyber criminals send high volumes of often benign emails to manipulate AI and machine learning models, including social graph and NLP models. Employees can also inadvertently poison AI models if user feedback influences the solution’s detection capabilities. It’s important, therefore, to evaluate how a solution uses AI and the safeguards it has in place to prevent these outcomes.
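To show why this matters, below is a deliberately naive Python sketch of how a flood attack can skew a self-updating model. The sender-trust heuristic, the threshold and the lookalike domain are all hypothetical; the point is simply that a detector which learns from every email it observes can be steered by an attacker.

```python
# Illustrative sketch of a flood attack against a naive, self-updating
# anomaly detector. All data and thresholds here are hypothetical.
from collections import Counter

class NaiveSenderModel:
    """Trusts a sender once it has appeared 'often enough' in recent traffic."""
    def __init__(self, trust_threshold: int = 5):
        self.counts = Counter()
        self.trust_threshold = trust_threshold

    def observe(self, sender: str) -> None:
        # Every observed email updates the model; this is the weakness exploited
        self.counts[sender] += 1

    def is_trusted(self, sender: str) -> bool:
        return self.counts[sender] >= self.trust_threshold

model = NaiveSenderModel()

# The attacker floods the model with benign emails from a lookalike domain...
for _ in range(10):
    model.observe("billing@paypa1-support.example")

# ...so a later phishing email from the same sender sails through
print(model.is_trusted("billing@paypa1-support.example"))  # True
```

A well-designed solution limits how much any single feedback source can shift its models – which is exactly what the questions below are designed to probe.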

Four initial questions you can ask when looking at solutions are:

  • How are the models protected from being manipulated?
  • How does the data input work and can you trust the data?
  • Are the outcomes explainable? (i.e., can someone understand the results and how the solution arrived at them?)
  • What training do individuals need to use the product?

Currently, AI is what we make of it – and can be used as a force for both good and bad. With the right detection capabilities in place, cyber security leaders need be no more concerned about the use of AI to create phishing emails than they are about phishing as an area of risk for their organisation generally.


Jack Chapman is VP of threat intelligence at Egress.
