Intelligent ways to tackle cyber attack

Artificial intelligence-powered security tools should enable IT security teams to achieve more with less

In early March 2020, UK artificial intelligence (AI) security startup Darktrace was able to contain the spread of a sophisticated attack by Chinese cyber espionage and cyber crime group APT41 exploiting a zero-day vulnerability in Zoho ManageEngine.

In a blog post describing the attack, Max Heinemeyer, director of threat hunting at Darktrace, wrote: “Without public indicators of compromise (IoCs) or any open source intelligence available, targeted attacks are incredibly difficult to detect. Even the best detections are useless if they cannot be actioned by a security analyst at an early stage. Too often, this occurs because of an overwhelming volume of alerts, or simply because the skills barrier to triage and investigation is too high.”

Heinemeyer says Darktrace’s Cyber AI platform was able to detect the subtle signs of this targeted, unknown attack at an early stage, without relying on prior knowledge.

Such techniques can protect both large enterprises and smaller organisations, where IT security may be less sophisticated. Andrew Morris, managing consultant at Turnkey Consulting, says: “AI and machine learning are of strategic importance, especially in organisations that are scaling up and do not always have the capability to scale up back-office compliance and security teams at a rate that is proportional to their expansion.”

The monitoring and automated incident response offered by AI-powered security tools enables IT security teams to achieve more with less. Morris adds: “Automating wherever possible reduces pressures without compromising compliance.”

According to Morris, AI’s ability to identify multiple fraud indicators simultaneously – rather than requiring each potential incident to be investigated line by line – is hugely helpful in pinpointing malicious behaviour.
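
As a toy illustration of what Morris describes – correlating several weak signals at once rather than reviewing each in isolation – consider the following Python sketch. The indicator names, weights and threshold are entirely hypothetical, not drawn from any real fraud product:

```python
# Hypothetical weak fraud indicators for a single transaction. Each one,
# reviewed line by line, would look innocuous; scored together they stand out.
WEIGHTS = {
    "new_payee": 0.25,          # payment to a never-before-seen account
    "unusual_hour": 0.15,       # activity outside the user's normal hours
    "amount_above_norm": 0.30,  # value well above this user's average
    "geo_mismatch": 0.30,       # session location differs from usual region
}
FRAUD_THRESHOLD = 0.6

def fraud_score(transaction: dict) -> float:
    """Combine all indicators simultaneously into a single risk score."""
    return sum(weight for key, weight in WEIGHTS.items() if transaction.get(key))

txn = {"new_payee": True, "unusual_hour": True, "amount_above_norm": True}
score = fraud_score(txn)
print(f"score={score:.2f}",
      "-> flag for investigation" if score >= FRAUD_THRESHOLD else "-> pass")
```

No single indicator here crosses the bar on its own; it is the combination, evaluated in one pass, that triggers the flag.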

Integrating AI into security toolsets

There is plenty to be confused about when assessing where the IT security market is heading. For instance, as Petra Wenham, security expert and volunteer at BCS, The Chartered Institute for IT, points out, people want to know whether the analytics available in security information and event management (SIEM) products are akin to AI, or whether AI is just analytics rebranded for sales purposes.

She says: “If you do an internet search, you’ll find more than a few SIEM products and, without trying too hard, I found 16 from the usual suspects, such as Splunk, LogRhythm, McAfee, SolarWinds, Nagios and others, with some even claiming AI capabilities.”

Wenham describes SIEM product analytics as the correlation of events from different sources gathered over a relatively short period – typically hours and days, not months, quarters or years – which, when compared against an infrastructure’s baseline, produces a prioritised alert if set thresholds are exceeded. She says: “SIEM products will also generate a variety of daily and weekly reports and it can take upwards of a month to six weeks to bed down and tune a new SIEM system in order to establish an infrastructure’s baseline.”

This, in effect, sets up the system to tune out the noise of normal operation. In Wenham’s experience, over time, some retuning of a SIEM system may be needed, particularly if there have been upgrades or other changes to a company’s IT infrastructure.

Part of SIEM tuning is the adjustment of system event logging. Wenham says this includes establishing what needs to be logged by each system or process in an IT infrastructure, and then setting the required Syslog parameters. SIEM systems are certainly not fit-and-forget.
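
As a minimal sketch of the baseline-and-threshold behaviour Wenham describes – with invented source names and synthetic event counts, not the logic of any real SIEM product:

```python
from statistics import mean, stdev

# Hourly event counts per log source, gathered during the month-or-more
# "bedding down" period Wenham describes (synthetic, illustrative data).
baseline_counts = {
    "fw-01": [120, 135, 110, 128, 140, 125, 130],
    "vpn-gw": [40, 35, 45, 38, 42, 37, 41],
}

ALERT_SIGMA = 3  # alert when a count sits three standard deviations above normal

def check_threshold(source: str, current_count: int) -> None:
    """Compare the latest hourly count for a source against its learned baseline."""
    history = baseline_counts[source]
    mu, sigma = mean(history), stdev(history)
    threshold = mu + ALERT_SIGMA * sigma
    if current_count > threshold:
        print(f"ALERT [{source}]: {current_count} events/hour exceeds "
              f"baseline {mu:.0f} (threshold {threshold:.0f})")
    else:
        # Normal operating noise: fold it into the baseline - the ongoing
        # "retuning" the article mentions.
        baseline_counts[source].append(current_count)

check_threshold("vpn-gw", 95)   # anomalous spike -> prioritised alert
check_threshold("fw-01", 126)   # routine traffic -> absorbed into the baseline
```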

The role AIOps can play in IT security

AIOps is the use of AI for IT operations. Generally, such tools store event information gathered over a long period of time – perhaps years – in a database, and then apply analytics to that data. From a security perspective, Wenham believes AIOps can help IT managers to adjust the infrastructure baseline and alerting thresholds over time, as well as automatically undertake some remedial actions based on correlated events. “A valuable feature of using big data is the ability to detect very slow or stealth activities on a network that would otherwise be missed or dismissed as a one-off,” she says. “By detecting these slow or stealth activities, a security team is able to take action before a major security incident occurs.”
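
A minimal sketch of why that long-horizon storage matters, assuming events have already been parsed into (date, source, type) records – the address, counts and threshold are invented for illustration:

```python
from collections import defaultdict
from datetime import date, timedelta

# Simulated long-term event store: a "low and slow" attacker generates just
# two failed logins per day - below any hours-or-days SIEM threshold, but
# unmistakable once months of data are correlated in one place.
events = []
start = date(2020, 1, 1)
for i in range(90):
    day = start + timedelta(days=i)
    events.append((day, "203.0.113.7", "failed_login"))
    events.append((day, "203.0.113.7", "failed_login"))

LONG_WINDOW_THRESHOLD = 100  # cumulative failures that warrant investigation

totals = defaultdict(int)
for _, src, kind in events:
    if kind == "failed_login":
        totals[src] += 1

for src, count in totals.items():
    if count > LONG_WINDOW_THRESHOLD:
        print(f"Stealth activity suspected from {src}: "
              f"{count} failed logins over 90 days")
```

Each individual day looks like a user mistyping a password; only the aggregate view reveals the campaign.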


It is challenging to figure out that someone is trying to break in, says Turnkey Consulting’s Morris. However, he adds: “Machine learning can help enterprises to stay ahead of potential threats; using existing datasets, past outcomes and insight from security breaches at similar organisations all contribute to a holistic overview of when the next attack may occur.”

Although AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners, Richard Absalom, analyst with the Information Security Forum (ISF), warns that AI is not a cure-all. “Like humans, AI systems make mistakes and can be deliberately manipulated,” he says.

AI systems are liable to make mistakes and bad decisions, as a series of high-profile cases has shown – from sexist bias in recruitment tools to Twitter chatbots that learned to be racist.

These errors are typically accidental, caused by bias in the datasets used to train the system, or by models that fit the available data either too closely or too loosely. However, malicious parties can also target systems deliberately, “poisoning” them by inserting bad data into the training datasets. Or, if they don’t have access to the training data, attackers may tamper with inputs at run time to trick the system into making bad decisions.
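
The label-flipping variant of poisoning is easy to demonstrate on synthetic data. The following sketch assumes scikit-learn is available and uses a toy classifier – it illustrates the failure mode, not any production system:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic training set: class 1 = "malicious", class 0 = "benign".
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an attacker with access to the training pipeline flips the
# labels on 30% of malicious samples so the model learns them as benign.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
malicious_idx = np.where(y_poisoned == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.3 * len(malicious_idx)), replace=False)
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
# The poisoned model's recall on the malicious class drops - precisely
# the blind spot the attacker set out to create.
```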

Systems often require time to achieve a good level of decision-making maturity. The importance of AI in security is not necessarily overstated, but organisations will need to find a way of balancing the efficiencies of automation with the need for human oversight. This will ensure that such systems make good decisions and secure information rather than putting it at risk.

AI in defence: detect, prevent and respond

According to the ISF’s Absalom, current AI systems have “narrow” intelligence and tend to be good at solving bounded problems – those that can be addressed with one dataset or type of input, for example. He says: “No single AI system can answer every problem – this ‘general’ AI does not exist yet.” Instead, there are three ways in which different AI systems can improve cyber defences:

  • To detect cyber attacks, artificial intelligence can enhance defensive mechanisms such as network monitoring and analytics, intrusion detection/prevention, and user and entity behavioural analytics (UEBA).
  • To prevent cyber attacks, AI can be used to test for vulnerabilities during software development, improve threat intelligence platforms, and identify and manage information assets.
  • To respond to cyber attacks, AI tools can support security orchestration, automation and response (SOAR) platforms by pushing instructions to other security platforms, or force connections to drop when malicious network activity is identified (a minimal sketch of this kind of response rule follows this list).
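
The alert fields, confidence threshold and firewall command below are all illustrative; a real SOAR platform would call its integrations’ APIs rather than shell out to iptables:

```python
import subprocess

CONFIDENCE_THRESHOLD = 0.9  # only act autonomously on high-confidence alerts

def respond(alert: dict) -> None:
    """Push a containment action for a high-confidence network alert.

    `alert` is a hypothetical normalised alert record; anything below the
    confidence bar is left for human triage, reflecting the balance between
    autonomy and oversight discussed in the article.
    """
    if alert["type"] == "malicious_connection" and alert["confidence"] >= CONFIDENCE_THRESHOLD:
        # Illustrative only: requires root on a Linux host running iptables.
        subprocess.run(
            ["iptables", "-A", "INPUT", "-s", alert["src_ip"], "-j", "DROP"],
            check=True,
        )
        print(f"Dropped traffic from {alert['src_ip']}; raised ticket for analyst review")
    else:
        print(f"Alert {alert['id']} queued for human triage")

respond({"id": "a-101", "type": "malicious_connection",
         "confidence": 0.97, "src_ip": "198.51.100.23"})
```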

Absalom recommends that security practitioners balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. He says: “Such confidence will take time to develop, just as it will take time for practitioners to learn how best to work with intelligent systems.” 

Given time to develop and learn together, Absalom believes the combination of human and artificial intelligence should become a valuable component of an organisation’s cyber defences.

As Morris points out, fraud management, SIEM, network traffic detection and endpoint detection all make use of learning algorithms to identify suspicious activity. Based on previous usage data and shared pattern recognition, these tools establish “normal” patterns of use and flag outliers as potentially posing a risk to the organisation.
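
A minimal sketch of that outlier-flagging approach, assuming scikit-learn is available and using invented per-session features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins per day, MB downloaded,
# distinct hosts touched]. The training rows represent routine behaviour
# learned from previous usage data.
rng = np.random.default_rng(42)
normal_sessions = rng.normal(loc=[8, 200, 3], scale=[2, 50, 1], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_sessions)

new_sessions = np.array([
    [9, 210, 3],      # routine activity
    [7, 180, 2],      # routine activity
    [85, 9000, 40],   # mass download across many hosts
])
for session, verdict in zip(new_sessions, detector.predict(new_sessions)):
    label = "OUTLIER - investigate" if verdict == -1 else "normal"
    print(session, "->", label)
```

The model never needs a signature for the mass-download pattern; it is flagged simply because it sits far outside the learned norm.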

For companies with a relatively small and/or simple IT infrastructure, Wenham argues that an AI-enabled SIEM would probably be prohibitively expensive, while offering little or no advantage over a conventional SIEM coupled with good security hygiene. On the other hand, for an enterprise with a large and complex IT infrastructure, Wenham says the cost of an AI-enabled SIEM might well be justified. However, she warns: “Beware the snake oil salesman and undertake a detailed evaluation of the products available. SIEM products and many of their suppliers have been around for a long time and their capabilities have not stood still.”

While AI and machine learning are ideal for a predictive IT security stance, Morris warns that they cannot eliminate risk. He says this is especially true when there is an over-reliance on the capabilities of the technology, while its complexities are under-appreciated. He adds: “Risks such as false positives, as well as failure to identify all the threats faced by an organisation, are ever-present within the IT landscape.”

Morris recommends that organisations deploying any automated responses need to maintain a balance between specialist human input and technological solutions, while appreciating that AI and machine learning are evolving technologies. “Ongoing training enables the team to stay ahead of the threat curve – a critical consideration given that attackers also use AI and machine learning tools and techniques,” he says. “Defenders need to continually adapt in order to mitigate.”

Heinemeyer adds: “Businesses need to implement cyber AI for defence now, before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks.”

Defensive AI versus malicious AI

Max Heinemeyer, director of threat hunting at Darktrace, says AI can be used for good or evil. “We use it to catch hackers, stop ransomware,” he says. “It absolutely works. Attackers always used to be one or two steps ahead. Now AI has given security professionals the upper hand. What used to take 99% of the time was finding a needle in the haystack. Automation reduces this.”

But the situation is changing, says Heinemeyer. “We are seeing the first signs of offensive AI. It is something that has been growing over the last five years and has been ramping up in the last two to three years.” Heinemeyer says any low-level hacker can easily pick up open source code that uses AI to run highly targeted and personalised attacks. These, he says, may work as software bots, sending out tweets for spear phishing attacks, with targeted tweets based on what someone is talking about on social media. Heinemeyer adds: “The barrier to entry has been lowered tremendously.”

Security professionals have traditionally used the power of machines to fight human hackers. Heinemeyer says: “Now there is defensive AI versus malicious AI and winning depends on whose AI is better.” He believes security professionals will have a significant role to play. “It is almost like a fighter jet. You still need a human pilot, but AI automates a lot of the repetitive work.”
