Along with machine learning (ML), artificial intelligence (AI) represents both the answer to many cyber security challenges, and the dystopian future presented by all good apocalyptic sci-fi films and books.
Beyond its inclusion in many vendor slide decks as a way of positioning next-generation technology and a more advanced, automated approach, AI is now becoming increasingly mainstream as a concept. At the end of 2022, it crashed through into public awareness in a whole new way with the launch of ChatGPT, part of the OpenAI platform, which engages in conversational dialogue to answer questions and provide detailed responses.
This step up from a glorified Google query has fired the imagination – first among students looking to fast-track essay responses without the frustrating need to actually read source materials, closely followed by teachers looking to similarly automate marking.
Anyone who has suffered at the hands of a website chatbot when trying to get help or an answer to a vaguely complex question also saw hope in the chance of a more positive result when talking to a computer. Without doubt, it’s opened up a world of potential for AI to impact how we engage with technology on a day-to-day basis as individuals, rather than just a mysterious black box powering systems from weather predictions to space rockets.
Inevitably, the potential impact on cyber security also quickly became a key conversational topic – from both sides. For attackers, it instantly offered the chance to transform basic, often childlike phishing text into more professional prose, along with the ability to automate engagement with multiple potential victims trying to find their way out of the ransomware trap they’ve fallen into.
For defenders, could this also offer the chance to revolutionise processes, such as code verification or improving phishing education? It’s at a very early stage, and certainly has a number of flaws (not least the way you game the system to produce malicious responses despite the safeguards built in), but it has broadened the debate as to how AI can change the cyber security industry.
As a new tool, ChatGPT is fun to play with – and as Dave Barnett, head of secure access service edge (SASE) and email security for Europe, the Middle East and Africa (EMEA) at Cloudflare, says: “It’s a frighteningly good system – anyone who has asked ChatGPT to explain some obscure technical concept in the form of a poem will realise just how powerful it is.”
However, Barnett highlights the serious concerns that have arisen. “I think the information security establishment needs to take a bit of time to consider the implications – firstly, that it used to be pretty easy to understand when we were being fooled by 419 scams, delivery payment SMS, or business email compromise, as they all appeared fake to humans,” he says.
“Could synthetic intelligence fool us? We also need to be very careful about the data protection risks – such as where the data goes, and who the controller and processor is. Humans are naturally inquisitive, so if we start talking to computers as if they’re human, we’re highly likely to share information we probably shouldn’t. Finally, could this be a way to address the skill shortage in IT? If OpenAI can even write code in a long forgotten language, it will probably be of great help.”
For Ronnie Tokazowski, principal threat advisor at Cofense, the chance to be creative with the platform was to be enjoyed: “Creating AI-generated rap lyrics about world peace and UFO disclosures is fun and cheeky; however, it is possible to trick the AI into giving the information you’re looking for.”
Ensuring there are safeguards is essential with any application build, and security by design for AI will always be preferable to a mere post-build security wrapper. That’s clearly the aim of the developers – as Tokazowski puts it, “the AI’s intent is good and doesn’t want to create phishing simulations” – and asking in multiple ways (such as removing the word phish) still failed to produce one. However, being creative did deliver results.
“ChatGPT even returns education about how to stay protected and to verify before using a gift card as a form of payment,” he says.
Tokazowski concludes that “it all comes down to intent and how it’s used, as anything can be used for malintent.
“To share some happy thoughts in an email about the negative impacts of AI, I’m just going to ask the AI to help me close this email out,” he says.
And with ChatGPT’s perspective: “I personally have to agree with its sentiment. Emotionally and as a human, frankly I have no idea how I feel agreeing with a robot.”
The responses that were provided are good, but generic, with little of the complexity and nuance we would expect from a human writer. That’s a key challenge – it’s a blunt tool for defenders, but for attackers, that’s often all they need. Automation at scale helps them, and simple, corporate-style writing makes it easy to produce content that impersonates the dull, generic password reset or delivery emails we all receive.
Ultimately, AI will be used as a weapon by bad actors and cyber criminals, and organisations need to know how to defend against it – as well as leverage the potential benefits, whether it’s AI or ML built into commercial products, or using the power of tools like ChatGPT in user education, securing code or better understanding the threat.
But when I asked the tool how businesses can defend against AI, I received typically generic responses (highlighting its limitations as little more than a search engine) – yet it made one key point: “Use AI responsibly”. That’s the biggest takeaway we all need to remember.