
AI-enhanced security tools necessary for today’s threats

Machine learning-enhanced tools are necessary to keep up with current threats, but are not perfect and will not solve the security skills gap problem, says KuppingerCole

Despite spending billions of dollars on cyber security products in 2017, cyber attacks continue to be successful, which means tools need to be updated, according to John Tolbert, lead analyst at KuppingerCole.

“Security tools have tended to be focused on prevention, but now we need to take a more realistic view and ensure we are focusing more time and tools on detection and response,” he told attendees of the KuppingerCole Cyber Security Leadership Summit in Berlin.

Defences based on Lockheed Martin’s cyber kill chain were mainly aimed at preventing the reconnaissance, weaponisation, delivery and exploitation phases, said Tolbert, with detection and response required only at the malware installation, callback and execution phases of the kill chain.

While this is still a valid approach, he said the MITRE ATT&CK framework was more up to date and more realistic, with prevention mentioned only in connection with the initial access and execution phases, while detection and response are specified for the eight other phases, including privilege escalation, credential theft, lateral movement and exfiltration.

“These frameworks are useful in helping organisations to plan where they need to do work, and while prevention always will be important, there has been a shift in emphasis to detection and response. We believe artificial intelligence [AI] and machine learning [ML] can help in making this shift,” said Tolbert.

However, he said the way these terms were used by security tool marketing teams could be confusing.

“When security suppliers use the term AI, they do not mean strong AI in the sense of a computer having the capability to think in the same way as a human being. They usually mean that their product uses a machine learning algorithm to solve particular problems,” said Tolbert.


“There are, however, several places where machine learning comes into play for cyber security, particularly in anti-malware tools, where ML is a must because there are now millions of malware variants being created every day and only ML-assisted malware prevention products can keep up,” he said.
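One static signal such ML-assisted malware tools commonly draw on is byte entropy, since packed or encrypted payloads tend to look near-random while benign text and code do not. The following is a minimal, illustrative sketch, not any supplier's actual product: the training samples are made-up toy data, and a real system would combine many features and a far more capable model than this nearest-centroid rule.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

# Hypothetical labelled samples: (entropy, flagged-as-malware) pairs.
# High entropy suggests packing/encryption; low entropy suggests plain content.
samples = [(2.1, False), (3.4, False), (7.6, True), (7.9, True)]

def centroid(label: bool) -> float:
    vals = [e for e, lbl in samples if lbl == label]
    return sum(vals) / len(vals)

def classify(data: bytes) -> bool:
    """Nearest-centroid decision: True means 'looks like packed malware'."""
    e = byte_entropy(data)
    return abs(e - centroid(True)) < abs(e - centroid(False))
```

The point of the sketch is the shape of the approach: features are extracted from the sample and compared against statistics learned from labelled examples, rather than matched against a fixed signature database, which is why such tools can generalise to previously unseen variants.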

Other areas where ML comes into play, said Tolbert, include firewalls, web application firewalls and application programming interface (API) gateways, where ML can be used to analyse traffic patterns. ML can also augment threat hunting to deal with huge volumes of data across thousands of nodes, and support data governance through the auto-classification of data objects. In authorisation and access control, ML can aid the analysis of access patterns and analyse regulations to auto-generate rules and policies, while in security information and event management (Siem) and user behaviour analytics, ML can be used for efficient baselining and anomaly detection.
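The baselining and anomaly detection mentioned for Siem and user behaviour analytics can be illustrated with a deliberately simple statistical sketch: learn what "normal" looks like from history, then flag values that deviate too far from it. The event counts and the three-sigma threshold below are hypothetical; production systems use richer models, but the baseline-then-deviate pattern is the same.

```python
import statistics

def is_anomalous(history: list[float], value: float,
                 threshold: float = 3.0) -> bool:
    """Flag `value` if it lies more than `threshold` standard deviations
    from the mean of the observed baseline `history`."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical baseline: daily login counts for one user over eight days.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

is_anomalous(baseline, 13)   # a typical day: not flagged
is_anomalous(baseline, 90)   # a sudden burst: flagged for investigation
```

Because the baseline is learned from the data itself rather than written as a fixed rule, the same detector adapts to each user or host, which is what makes the approach scale across thousands of nodes.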

“Current tools are not able to cope with unknown attacks, and this is where AI and ML can be used to augment those tools. At the same time, we are seeing the emergence of tools that can help organisations to comply with regulations by building policies that can be reviewed by humans,” he said.

The use of AI and ML is delivering better tools that can reduce the time spent on mundane tasks, freeing up staff to work on more likely threats and improve an organisation’s cyber security posture, said Tolbert. But although this is often cited as a reason for switching to AI and ML-enhanced security tools, he cautioned: “This might reduce the skills gap, but it will not eliminate it by replacing information security experts.”

While AI and ML tools can help, Tolbert said it should not be forgotten that they need models and quality data, and as a result they can be manipulated.

“While AI and ML can be used for good, they can also be used for bad. We have already seen attackers carrying out generative adversarial attacks for password cracking and steganography attacks that hide data inside images,” he said.
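The steganography Tolbert mentions can be made concrete with the classic least-significant-bit (LSB) technique: each hidden bit replaces the lowest bit of a cover byte (for example, a pixel value), changing the image imperceptibly. This is a bare-bones sketch of the general technique operating on raw bytes, not of any specific attack he described; real attacks typically work on image files and add encryption on top.

```python
def embed(cover: bytes, secret: bytes) -> bytes:
    """Hide `secret` in the least significant bit of each cover byte."""
    # Expand the secret into individual bits, most significant bit first.
    bits = [(b >> i) & 1 for b in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` hidden bytes from the LSBs of `stego`."""
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
```

Because each cover byte changes by at most one, the carrier looks statistically almost identical to the original, which is precisely why such channels are hard for conventional inspection tools to spot.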

In summary, Tolbert said ML-enhanced tools were necessary to keep up with threats and needed to be deployed across all layers of the IT stack, but warned that such tools were not perfect as they could be gamed and it was likely that AI and ML would be used to create cyber weapons.

