Generative AI – the next biggest cyber security threat?

Following the launch of ChatGPT in November 2022, several reports have emerged that seek to determine the impact of generative AI in cyber security. Undeniably, generative AI in cyber security is a double-edged sword, but will the paradigm shift in favour of opportunity or risk?

ChatGPT is a large language model (LLM) falling under the broad definition of generative AI. The sophisticated chatbot was developed by OpenAI using the Generative Pre-trained Transformer (GPT) model to understand and replicate natural language patterns with human-like accuracy. The latest version, GPT-4, exhibits human-level performance on professional and academic benchmarks. Without question, generative AI will create opportunities across all industries, particularly those that depend on large volumes of natural language data.

Generative AI as a security enabler

Enterprise use cases are emerging with the goal of increasing the efficiency of security teams conducting operational tasks. Products such as Microsoft's Security Copilot draw upon the natural language processing capabilities of generative AI to simplify and automate certain security processes. This will alleviate the resource burden on information security teams, enabling professionals to focus on technically demanding tasks and critical decision-making. In the longer term, these products could be key to bridging the industry's skills gap.

While the benefits are clear, the industry should anticipate that mainstream adoption of AI is likely to occur at a glacial pace. Research by PA Consulting found that 69% of individuals are afraid of AI and 72% say they don't know enough about AI to trust it. Overall, this analysis highlights a reluctance to incorporate AI systems into existing processes.

Generative AI as a cyber security threat

In contrast, there are concerns that AI systems like ChatGPT could be used to identify and exploit vulnerabilities, given their ability to automate code completion, code summarisation, and bug detection. While concerning, the perception that ChatGPT and similar generative AI tools could be used for malware development is oversimplified.

In its current state, generative AI's programming capability is limited: it often produces inaccurate code or 'hallucinations' when writing functional programs. Even generative AI tools that are fine-tuned for programming languages show limited potential, performing well on easy Python coding interview questions but struggling with more complex problems.
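
To make the point concrete, consider what such a 'hallucination' typically looks like (a hypothetical illustration, not output from any particular model): generated code frequently calls plausible-sounding functions that simply do not exist in the library being used.

```python
import requests

# A model might confidently emit a call such as requests.fetch_json(...),
# which reads plausibly but is not part of the real requests API and
# would raise AttributeError if executed.
print(hasattr(requests, "fetch_json"))  # False: a hallucinated function
print(hasattr(requests, "get"))         # True: the genuine API
```

Such code can pass a casual review precisely because it mirrors the conventions of real APIs, which is why human verification of AI-generated code remains essential.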

And while there are examples of malware developed using generative AI, these programs are written in Python, which is impractical for real-world use. Ultimately, adversaries seeking to develop malware will not gain more of an advantage from generative AI than from existing tools and techniques. The technology is still in its infancy, but the AI arms race being waged by 'big tech' organisations is likely to result in more powerful and reliable models. Managing this shifting threat landscape requires a proactive and dynamic risk posture.

Organisations should not completely dismiss the security threats posed by ChatGPT and other generative AI models today. LLMs are extremely effective at imitating human conversation, making it difficult to differentiate generative AI-synthesised text from human discourse. Adversaries could deploy generative AI over WhatsApp, SMS, or email to automate conversations with targets, build rapport, and obtain sensitive information. This could be requested directly or gathered by persuading targets to click links to malware. Generative AI may also be used for fraudulent purposes, such as deepfake videos and AI-powered text-to-speech tools for identity spoofing and impersonation.

A proactive approach for organisations

In 2022, human error accounted for 82% of data breaches; with the advent of generative AI tools, this is likely to increase. But while people may be the weakest link, they can also be an organisation’s greatest asset.

In response to the changing threat landscape, organisations must ensure their employees are prepared for more convincing, more sophisticated attacks. Leaders must be visible advocates of change and ensure their people are well equipped and informed to manage threats. By building psychological safety into their cyber culture, organisations will empower individuals to report security events such as phishing without fear of retribution. This kind of inclusive, transparent cyber culture will be the key differentiator in achieving effective cyber security.

Regular corporate communications highlighting emerging threats, case studies, and lessons learned should be supported by regular training that reflects new trends. For example, now that generative AI can write error-free, colloquial prose, it is no longer possible to identify non-human communication by grammatical errors or robotic sentence structures. Organisations should therefore re-evaluate their approach to scam awareness training, teaching employees to verify the recipients of sensitive or personal information before sharing it.

It’s important to keep it simple. The key to a secure culture is implementing straightforward processes and providing accessible training and guidance. Practically, this includes automated nudges to warn colleagues of potentially unsafe actions and HR policies that support a culture of ‘better safe than sorry’.
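
To illustrate what such a nudge might look like in practice (a minimal sketch only; the gateway hook, domain list, and keyword list are all assumptions made for illustration), a simple check could flag outbound messages that combine external recipients with sensitive terms:

```python
# Minimal sketch of an outbound-email nudge (illustrative only).
# Assumes a mail gateway or client add-in can call check_outbound()
# before a message is sent; the domains and terms are placeholders.

INTERNAL_DOMAINS = {"example.com"}  # assumption: the organisation's own domains
SENSITIVE_TERMS = {"password", "payroll", "account number"}  # assumption: illustrative terms

def check_outbound(recipients: list[str], subject: str, body: str) -> list[str]:
    """Return human-readable warnings; an empty list means no nudge is shown."""
    external = [r for r in recipients
                if r.split("@")[-1].lower() not in INTERNAL_DOMAINS]
    text = f"{subject} {body}".lower()
    flagged = [t for t in SENSITIVE_TERMS if t in text]
    if external and flagged:
        return [f"This message mentions {', '.join(flagged)} and is addressed to "
                f"{len(external)} external recipient(s). Are you sure you want to send it?"]
    return []

if __name__ == "__main__":
    print(check_outbound(
        ["colleague@example.com", "contact@partner.org"],
        "Payroll query",
        "Please confirm the account number before Friday.",
    ))
```

The detection logic here is deliberately crude; the point is the pattern of a lightweight, explainable prompt at the moment of risk, rather than a blocking control.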

The way forward

Organisations are staring deeply into the generative AI kaleidoscope, but must keep a watchful eye on the potential security, privacy, and societal risks it poses. They must balance the benefits and threats of introducing AI into their processes and focus on the human oversight and guidelines needed to use it appropriately.

By Luke Witts, cyber security expert at PA Consulting. Special thanks to Cate Pye, Budgie Dhanda, Tom Everard and Alwin Magimay for contributing their insights to this article.
