Offensive AI unlikely in the near future, says Mikko Hypponen
Cyber criminals are not close to being able to use AI for attack purposes, according to industry veteran Mikko Hypponen, who has been at the forefront of AI use in cyber defence for several years
The world is unlikely to see artificial intelligence (AI) used by cyber attackers “any time soon”, according to Mikko Hypponen, chief research officer of F-Secure.
Speaking from the perspective of a researcher who has been involved in machine learning-based security for the past eight years, he said it is unlikely AI will be used to carry out attacks in the near future, which means defenders have the upper hand in this regard for now.
Hypponen admits that when F-Secure started exploring the potential for automating malware analysis, they did not realise that what they were developing was in fact machine learning.
“But in hindsight, that is exactly what we were doing when we started developing algorithms to tell the difference between a malicious and non-malicious program, which became necessary when we realised we could not keep up with the number of malware samples we were getting,” said Hypponen.
“The algorithms also had to be able to determine when they were not able to tell if a program was malicious or non-malicious and to refer that to a security analyst instead, who could then improve the algorithm to be able to deal with similar situations in future.”
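The triage pattern Hypponen describes can be sketched as a classifier with an abstention band: samples the model scores confidently are labelled automatically, and ambiguous ones are escalated to a human analyst whose verdict is used to improve the model. The following is a minimal illustrative sketch only; the feature names, weights and thresholds are invented for the example and are not F-Secure's.

```python
# Hypothetical sketch of classify-or-escalate malware triage.
# Features, weights and thresholds are illustrative, not a real model.

MALICIOUS, BENIGN, ESCALATE = "malicious", "benign", "escalate"

# Toy weights over a few made-up static features of a sample.
WEIGHTS = {
    "packed": 0.6,
    "writes_autorun_key": 0.8,
    "signed_by_known_vendor": -0.9,
    "imports_crypto_api": 0.3,
}

def score(features):
    """Sum the weights of the features present in the sample."""
    return sum(WEIGHTS.get(f, 0.0) for f in features)

def triage(features, hi=0.7, lo=-0.3):
    """Classify automatically, or escalate when the score is ambiguous."""
    s = score(features)
    if s >= hi:
        return MALICIOUS
    if s <= lo:
        return BENIGN
    # Ambiguous band: refer to an analyst, whose verdict can later
    # be fed back to adjust WEIGHTS for similar samples.
    return ESCALATE

print(triage({"packed", "writes_autorun_key"}))  # malicious (score 1.4)
print(triage({"signed_by_known_vendor"}))        # benign (score -0.9)
print(triage({"imports_crypto_api"}))            # escalate (score 0.3)
```

The key design point is the middle band between the two thresholds: rather than forcing a verdict on every sample, the system only automates the confident cases, which is what allows it to scale without silently misclassifying the hard ones.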
Since then, F-Secure has expanded this work so that its systems handle not only the analysis of malware samples, but also website reputation, email categorisation and the filtering of network streams, for example.
“Clearly machine learning can be used for security, but I don’t think we will see it being used for offence in the near future, because anyone with machine learning skills is unlikely to choose to use them for criminal purposes,” said Hypponen.
“The most common story I have heard from the cyber criminals I have met through my work with law enforcement is that they were unable to earn a living from the technical skills they had, and that cyber crime was the only way they could use them to make money.”
According to Hypponen, the cyber criminals he has interviewed would much rather have had a well-paying job than turn to crime, which means having to worry constantly about law enforcement catching up with them and ending up in prison.
“They tell me they would prefer to have a job in which they did not have to spend their life looking over their shoulder. I believe it is unlikely that anyone who would be capable of using machine learning technologies to carry out cyber attacks would be pursuing a career in cyber crime, because they would be able to get a well-paying legitimate job with those skills.
“We are struggling to find machine-learning specialists, artificial intelligence programmers and data analysts, and anyone with those skills does not have to go into a life of crime to make a living because companies like ours would import them from wherever they are,” he said.
Hypponen believes this situation is likely to continue for quite some time. “For this reason, I believe criminals using machine learning will be way in the future. It is unlikely we will see people with machine learning skills using those highly sought-after skills for criminal purposes, because they will be able to get a well-paying job,” he said.
Dismissing as “marketing hype” claims that AI technology is already in use by cyber criminals, Hypponen said he has yet to see blackhats using what he would define as AI to carry out cyber criminal activities.
“There is no doubt AI can be used for criminal purposes, but looking at organised cyber crime gangs, I have not seen a single example of them using AI from all the malware campaigns we have seen and reverse engineered so far,” he said. “We have not seen it yet, and I don’t think we will any time soon.”
However, Hypponen said if AI platforms and algorithms become widely available and easy to use, it is likely that cyber criminals will start tapping into this technology. “We could see such platforms first used for simple stuff like generating different kinds of phishing emails, but even that will require some specialised skills,” he said.
The benefits of machine learning to cyber defence capabilities are undeniable, said Hypponen. “Before we started using machine learning systems, we were analysing 10,000 to 15,000 malware samples a day, but now we are analysing up to 650,000 samples a day, which means performance is 50 times better.”
The only other way F-Secure could handle the number of malware samples it receives would be to have an army of analysts, according to Hypponen. “We would need more people with skills than there are available anywhere in the world, so using machine learning is the only way we are able to do this. It is not optional anymore,” he said.
Hypponen is to discuss this topic in more detail at Infosecurity Europe 2018 in London on 6 June during his keynote presentation entitled: Friend or foe? Can (and will) AI & machine learning stop hackers? Or will AI be the hacker?