AI cyber monitoring: A Computer Weekly Downtime Upload podcast

In this podcast, Darktrace’s Max Heinemeyer discusses the good – and the bad – to come out of artificial intelligence in IT security

The world of cyber security is being disrupted by artificial intelligence (AI). Max Heinemeyer, chief product officer at Darktrace, which uses AI to detect and respond to cyber attacks, is seeing hackers develop increasingly sophisticated attacks with the help of advanced AI.

Large language models such as ChatGPT mean it is no longer simply a case of teaching users the basics of good security hygiene, such as checking for spelling mistakes and bad grammar in email messages. Targeted email attacks can be very personalised, drawing on the huge volume of personal data that is readily available on the internet and social media.

One could very well argue the case for making email “less convenient”. For example, preventing links from directly opening if a user clicks on one in a message would limit the impact of a phishing attack. But Heinemeyer is not keen on such preventative measures. “I see where you’re coming from, but I think we’re putting the carriage before the horse if we go down that route,” he says.

Heinemeyer believes security should support businesses. “We want people to be astronauts, research law and access entertainment, and not be inconvenienced by security. We should not hinder their communication. Personally, I think security technology needs to step up and work seamlessly in the background,” he adds.

Given that email travels via a server to a client device, such as the user’s handset, there are plenty of opportunities to deploy gateway-based security tools that scan messages and perhaps even sandbox any suspicious links embedded in them. But this is not the approach Darktrace takes. Instead, the company uses machine learning to understand the normal business communications of individual employees.
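To make the baselining idea concrete, here is a minimal sketch of how a per-sender anomaly score might be computed. The message fields, history store and scoring weights are illustrative assumptions for this article, not a description of Darktrace’s actual models.

```python
# Minimal sketch of per-user baselining for email, as described above.
# All names here (SenderHistory, the weights) are illustrative assumptions.
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class SenderHistory:
    """Senders and link domains a given user has previously interacted with."""
    known_senders: set = field(default_factory=set)
    known_link_domains: set = field(default_factory=set)

def score_message(sender: str, link_urls: list[str], history: SenderHistory) -> float:
    """Return a rough anomaly score in [0, 1]: higher means the message
    looks less like this user's normal communication pattern."""
    score = 0.0
    if sender not in history.known_senders:
        score += 0.5  # first contact from this sender
    for url in link_urls:
        domain = urlparse(url).netloc
        if domain not in history.known_link_domains:
            # unfamiliar link destination, weighted per link
            score += 0.5 / max(len(link_urls), 1)
    return min(score, 1.0)

# Usage: flag for sandboxing or link-stripping rather than blocking outright.
history = SenderHistory(known_senders={"colleague@example.com"},
                        known_link_domains={"intranet.example.com"})
risk = score_message("stranger@example.net", ["https://evil.example/login"], history)
if risk > 0.7:
    print("quarantine, or rewrite links for a sandboxed preview")
```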

“It should be okay to click on a link you’ve never clicked on before, but if that link came from a sender that is trying to incite you to click on something and it is clearly weird in a business context, then something should be done,” says Heinemeyer.

He believes cyber security systems need to be sufficiently clever to allow normal business to continue, while blocking anything anomalous. For instance, he says: “If you drill down into the message, and find that the sender has never sent anything to the rest of your organisation, and notice that the syntax doesn’t reflect how people normally speak to you, then you’d better not click on the link.”

He says semantic analysis surfaces hundreds of tiny markers that set a suspect message apart from normal business language. Humans would never be able to pick these out, Heinemeyer says, but machine learning can learn what normal behaviour looks like for each user.
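As a hedged illustration of what such markers might look like, the sketch below extracts a handful of crude stylistic features and measures how far a message deviates from a sender’s historical profile. The features, profile format and word list are invented for the example; a real system would learn far richer signals.

```python
# Illustrative "tiny markers": simple stylistic features compared against
# a per-sender profile of historical (mean, stdev) values.
import re

URGENCY_WORDS = {"urgent", "immediately", "now", "asap", "verify"}

def style_features(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "urgency_ratio": sum(w in URGENCY_WORDS for w in words) / max(len(words), 1),
        "exclamations": text.count("!"),
    }

def deviation(message: str, sender_profile: dict) -> float:
    """Sum of absolute z-scores of each marker against the sender's history;
    a bigger number means 'this doesn't sound like them'."""
    total = 0.0
    for name, value in style_features(message).items():
        mean, stdev = sender_profile[name]
        total += abs(value - mean) / (stdev or 1.0)
    return total

# Usage with a made-up profile for one sender.
profile = {"avg_sentence_len": (14.0, 4.0),
           "urgency_ratio": (0.01, 0.01),
           "exclamations": (0.2, 0.5)}
msg = "Urgent! Verify your account now! Click immediately!"
print(round(deviation(msg, profile), 1))  # large score: unlike the sender's norm
```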

Heinemeyer believes automation and machine learning can help counter personalised, targeted cyber attacks. “We need to have clever enough systems that keep messages from cyber criminals away from the user, or strip out the links,” he adds. At the same time, machine learning should be smart enough not to disrupt normal business.
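For completeness, here is a rough sketch of the “strip out the links” mitigation, assuming the message body is plain HTML. A production gateway would more likely rewrite links through a sandboxed redirector than delete them outright.

```python
# Neutralise link targets in a suspicious HTML email body so a click
# cannot reach the original URL. Purely illustrative.
import re

def neutralise_links(html_body: str) -> str:
    return re.sub(r'href="[^"]*"',
                  'href="#link-removed-by-security-policy"',
                  html_body)

print(neutralise_links('<a href="https://evil.example/login">Reset password</a>'))
```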

But there is a very real sense that such monitoring recalls the Hollywood blockbuster Minority Report, in which machines watch people in the background to predict whether a crime is about to take place. While there are legal and ethical arguments over such technology being used in policing, questions also need to be asked about the societal impact of using AI to monitor email and other forms of electronic communication.
