
Security experts air concerns over hackers using AI and machine learning for phishing attacks
Security experts share views on risk of artificial intelligence being re-purposed by hackers for phishing attacks
The next 12 to 18 months will see an acceleration in the adoption of machine learning by hackers in an attempt to carry out increasingly sophisticated phishing attacks, it is claimed.



Anup Ghosh, chief strategist for next-generation endpoint at Sophos, made the prediction during a panel session at the NetEvents Press and Analyst Summit on 28 September, where he outlined the potential for machine learning to help craft compelling content for phishing campaigns.
“You can use machine learning to craft really good campaigns, whether it’s for Twitter, Facebook or email, to get humans to click on links. The evidence is out there that machine learning is far better at crafting emails and tweets that get humans to click on these,” he said.
To stay on the front foot against hackers, enterprises and security suppliers will also need to incorporate machine learning and artificial intelligence (AI) into their cyber security strategies, creating what Ghosh terms an “AI on AI” situation.
“Security companies that fight these bad guys will also have to adopt machine learning. Now you have an AI on AI scenario, and it will propel us forward to adopt machine learning for real time,” he said.
For enterprises, the technology comes into its own in the detection of cyber threats, he said. “The volume of data that’s available on certain types of threats like malware is effectively infinite,” he added.
“The problem with sticking humans on a malware detection problem is that it’s not a good fit. Humans are good at making decisions, and machine learning is very good at crunching very large data sets and recognising patterns.”
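As a rough illustration of the kind of pattern recognition Ghosh describes, the sketch below trains a classifier on a batch of labelled file samples. The features, synthetic data and library choice (Python with scikit-learn) are assumptions made for the sake of the example, not a description of any supplier's product.

```python
# A minimal sketch of ML-based malware classification: learn patterns from a large
# labelled dataset rather than asking humans to inspect each sample.
# Feature names and data here are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical static features per file: [size_kb, byte_entropy, imported_api_count, packed_flag]
n_samples = 10_000
X = np.column_stack([
    rng.uniform(10, 5_000, n_samples),   # file size in KB
    rng.uniform(0.0, 8.0, n_samples),    # byte entropy
    rng.integers(0, 500, n_samples),     # number of imported APIs
    rng.integers(0, 2, n_samples),       # whether the file appears packed
])
# Synthetic labels: in this toy data, "malware" loosely correlates with high entropy plus packing
y = ((X[:, 1] > 6.5) & (X[:, 3] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), target_names=["benign", "malware"]))
```

Real endpoint products use far richer features and far larger datasets, but the division of labour is the same: the model crunches the bulk data, while humans review what it flags.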
Fellow panel participant Oliver Tavakoli, CTO at AI threat detection and response firm Vectra, said getting humans to try to make sense of this data is often inefficient and impractical.
Read more about AI
- The UK is at risk of being left behind when it comes to artificial intelligence advances, with the current shortage of AI skills likely to get a lot worse.
- Microsoft SRD is a new cloud service that aims to detect vulnerabilities in software using artificial intelligence.
“It becomes impractical at a certain point to have a user stare at this data, squint at it, and try to find patterns in it,” he said.
“Machine learning can unlock patterns in large swathes of data, express it in a compact form and then hopefully allow you, in real time, to efficiently apply it to detecting something and making a decision.”
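Tavakoli's point about distilling large volumes of data into a compact model that can then be applied in real time might look roughly like the sketch below, which fits an anomaly detector offline and scores individual sessions as they arrive. The features, values and library (scikit-learn's IsolationForest) are illustrative assumptions, not a description of Vectra's implementation.

```python
# A minimal sketch: learn a compact model offline from a large batch of historical
# data, then apply it to new events in real time and surface only the outliers.
# Feature names and example values are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-session features: [bytes_out_mb, destinations_contacted, failed_logins]
historical_sessions = np.column_stack([
    rng.lognormal(0.0, 1.0, 50_000),   # outbound traffic volume (MB)
    rng.poisson(3, 50_000),            # distinct destinations contacted
    rng.poisson(0.2, 50_000),          # failed login attempts
])

# Offline: distil the bulk data into a compact anomaly model
detector = IsolationForest(contamination=0.01, random_state=1).fit(historical_sessions)

# Online: score each new session as it arrives and flag outliers for a human analyst
def looks_anomalous(session_features):
    """Return True if the session is unusual enough to alert on."""
    return detector.predict([session_features])[0] == -1

print(looks_anomalous([0.8, 2, 0]))       # typical session: unlikely to alert
print(looks_anomalous([900.0, 400, 25]))  # extreme session: likely to alert
```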