
Security Think Tank: Balancing human oversight with AI autonomy

Artificial intelligence and machine learning techniques are said to hold great promise in security, enabling organisations to adopt a predictive IT security stance and automate reactive measures when needed. Is this perception accurate, or is the importance of automation gravely overestimated?

Security practitioners are always fighting to keep up with the methods used by attackers, and artificial intelligence (AI) – defined as systems that can learn, reason and act independently of a human programmer – can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms.

AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. By protecting against known threats more efficiently, AI can put defenders a step ahead.

However, artificial intelligence is not a cure-all. Like humans, AI systems make mistakes and can be deliberately manipulated. They often require time to achieve a good level of decision-making maturity. The importance of AI in security is not necessarily overstated, but organisations will need to find a way of balancing the efficiencies of automation with the need for human oversight. This will ensure that such systems make good decisions and secure information rather than putting it at risk.

AI in defence: detect, prevent and respond

Current AI systems have “narrow” intelligence and tend to be good at solving bounded problems – those that can be addressed with a single dataset or type of input, for example. No single AI system can solve every problem – “general” AI does not yet exist. Instead, there are a number of specific ways in which different AI systems can improve cyber defences:

  • To detect cyber attacks, AI can enhance defensive mechanisms such as network monitoring and analytics, intrusion detection/prevention, and user and entity behavioural analytics (UEBA) – a brief sketch of this idea follows the list.
  • To prevent cyber attacks, AI can be used to test for vulnerabilities during software development, improve threat intelligence platforms, and identify and manage information assets.
  • To respond to cyber attacks, AI tools can support security orchestration, automation and response (SOAR) platforms by pushing instructions to other security platforms or forcing connections to drop when malicious network activity is identified.
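To make the detection bullet concrete, the following is a minimal sketch of the kind of behavioural anomaly detection that underpins UEBA-style tooling. It assumes scikit-learn is available, and the session features, values and thresholds are entirely hypothetical – real systems model far richer telemetry.

```python
# Minimal anomaly-detection sketch in the spirit of AI-enhanced network
# monitoring / UEBA. Features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [bytes_sent, login_hour, failed_logins]
normal = np.column_stack([
    rng.normal(5e4, 1e4, 1000),   # typical upload volume
    rng.normal(10, 2, 1000),      # logins cluster around working hours
    rng.poisson(0.2, 1000),       # occasional failed login
])
# A couple of suspicious sessions: large transfers at 3am, many failures
suspicious = np.array([[9e5, 3, 12], [7e5, 2, 8]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)  # learn a baseline of "normal" behaviour

# predict() returns 1 for inliers, -1 for anomalies
for session in suspicious:
    label = model.predict(session.reshape(1, -1))[0]
    print("ALERT" if label == -1 else "ok", session)
```

The point of the sketch is the workflow rather than the particular model: a baseline of normal behaviour is learned from historical data, and sessions that deviate from it are flagged for attention.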

How AI systems make errors

AI systems are liable to make mistakes and bad decisions, as a series of high-profile cases has shown – from sexist bias in recruitment tools to Twitter chatbots that learned to become racist in the space of 24 hours.


These errors are typically accidental, caused by bias in the datasets used to train the system, or by models that fit the available information too closely or too loosely (overfitting or underfitting). However, malicious parties can also target systems, “poisoning” them by inserting bad data into the training datasets. Or, if they don’t have access to the training data, attackers may tamper with inputs to trick a trained system into making bad decisions.
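As a toy illustration of the poisoning idea, the sketch below (again assuming scikit-learn, with synthetic data) flips the labels on a fraction of a training set and shows the resulting drop in accuracy. Real attacks are typically stealthier and more targeted than blunt label flipping.

```python
# Sketch of training-data "poisoning" by label flipping. Toy data only;
# real poisoning attacks are subtler than this.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips the labels of 30% of the training set
rng = np.random.default_rng(1)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```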

Success will require a combination of human and artificial intelligence

Until a system has demonstrated maturity and trustworthiness, organisations are rightly unwilling to give it a high level of autonomy and responsive capability – whether it is deployed for information security or any other type of business function. The risk of AI systems making bad decisions means that organisations are likely to always require the presence of a human who can take control and press the off switch when necessary.

However, the desire to keep humans in the loop creates its own challenges. Placing too much emphasis on the need for human oversight can reduce the effectiveness of the AI system, leading to a deluge of notifications and alerts rather than letting the AI take automatic responsive measures.
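One common way to strike this balance is a confidence-and-risk gate: the system acts autonomously only on high-confidence detections with reversible, low-risk responses, and routes everything else to an analyst. The sketch below illustrates such a policy in plain Python; the threshold, action names and risk tiers are hypothetical, not drawn from any particular product.

```python
# Illustrative human-in-the-loop triage policy. Thresholds, action names
# and risk tiers are hypothetical, not from any real product.
from dataclasses import dataclass

AUTO_THRESHOLD = 0.95   # act autonomously only when the model is very sure
LOW_RISK_ACTIONS = {"block_ip", "quarantine_file"}  # reversible responses

@dataclass
class Detection:
    description: str
    confidence: float   # model's confidence that the activity is malicious
    action: str         # the response the AI recommends

def triage(d: Detection) -> str:
    # Autonomy is earned: only high-confidence, reversible actions run
    # without a human; everything else becomes an alert for review.
    if d.confidence >= AUTO_THRESHOLD and d.action in LOW_RISK_ACTIONS:
        return f"AUTO: {d.action} ({d.description})"
    return f"HUMAN REVIEW: {d.action}? ({d.description})"

print(triage(Detection("beaconing to known C2 host", 0.98, "block_ip")))
print(triage(Detection("unusual admin logon", 0.70, "disable_account")))
```

Raising the threshold or shrinking the set of auto-approved actions moves the dial back towards human oversight; loosening them grants the system more autonomy as trust grows.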

Security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, just as it will take time for practitioners to learn how best to work with intelligent systems. Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organisation’s cyber defences.

