
Can security operations ever be fully autonomous?
Focusing on targeted improvements, not full automation, is key to scaling security operations with AI
Monitoring the wide range of security issues thrown at organisations continues to create scaling headaches for many security operations teams. The more issues that need to be monitored, the more data, people and technology are required. The sheer number of alerts, and the work required to handle them, causes many security operations teams to stall.
Automation promises to solve some of these challenges, but it isn’t necessarily a cure-all by itself. Organisations can’t simply “deploy automation” in their security operations centres (SOCs) and solve the scale problem. As new threats evolve and technologies emerge, the requirement for manual intervention will change, but it will never be removed entirely.
Achieving complete automation is impossible because it would require flawless accuracy, which is unobtainable. The fast pace of change means that by the time teams have managed to automate something, 10 new problems will have arisen. The notion that the human contribution to SOC responsibilities can be replaced entirely isn’t realistic.
Using artificial intelligence (AI) alongside automation within the SOC is inevitable, as vast volumes of log data, complex investigation methods, difficult-to-spot patterns and wide-ranging skills requirements continue to get harder to manage.
While AI and automation are unlikely to deliver full autonomy, they will create much-needed scalability, particularly to augment roles within the SOC, and to adapt to future requirements without necessitating significant overhauls.
Realising more impactful results
Automation and AI are already changing executive perceptions around what is plausible or practical within security operations teams. However, it would be a mistake to make them the focus of your security operations strategy.
Instead, developing strategies that focus on improving the efficiency of the team through targeted projects, implementing both AI and automation, will be more impactful. Gartner predicts that 25% of common SOC tasks will become 50% more cost-efficient by 2027 due to automation enhancements and hyperscaling strategies.
SOC initiatives that focus on task and workflow optimisation, as opposed to end-to-end automation, realise more impactful results. It’s more realistic to aim to scale SOC roles, such as a level 1 incident investigator, or to improve process areas like alert contextualisation, rather than replace an entire workflow or role.
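To make “alert contextualisation” concrete, the sketch below shows what such an optimised task might look like as code. It is a hypothetical illustration, not any vendor’s API: the field names, lookup tables and priority rule are all assumptions.

```python
# Hypothetical alert-contextualisation step: enrich a raw alert with
# asset and identity context so a level 1 investigator (or an automation)
# can triage it faster. All names and data here are illustrative.

ASSET_CRITICALITY = {"payroll-db": "high", "dev-laptop-17": "low"}
PRIVILEGED_USERS = {"svc_backup", "admin_jane"}

def contextualise(alert: dict) -> dict:
    """Attach context and a simple priority to a raw alert."""
    enriched = dict(alert)
    enriched["asset_criticality"] = ASSET_CRITICALITY.get(alert["host"], "unknown")
    enriched["privileged_user"] = alert["user"] in PRIVILEGED_USERS
    # Naive priority rule: a high-criticality asset or a privileged
    # account escalates the alert; otherwise keep the original severity.
    if enriched["asset_criticality"] == "high" or enriched["privileged_user"]:
        enriched["priority"] = "escalate"
    else:
        enriched["priority"] = alert.get("severity", "low")
    return enriched

alert = {"host": "payroll-db", "user": "admin_jane", "severity": "medium"}
print(contextualise(alert)["priority"])  # escalate
```

The point of the example is its scope: one task is enriched and prioritised, while the decision to investigate remains with the analyst, which is exactly the task-level optimisation the paragraph describes.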
Budget allocations are highly likely to shift from operational security tools to automation-driven solutions and those that promote the use of AI. So, seek vendors that can deliver both automation and AI within a single solution.
The rise of generative AI
Security operations teams have been inundated with the promises of generative AI (GenAI), particularly how it will help close staffing shortfalls and gaps in detection and response capabilities. However, Gartner predicts that 30% of SOC leaders will have failed in their efforts to integrate GenAI into production processes by 2027 due to inaccuracies and hallucinations in outputs.
Very few organisations have factored in the additional verification processes required when using GenAI in security operations tools. This lack of planning means there is inadequate compensation for outputs, and actions taken on them, that may be influenced by GenAI’s inaccuracies.
There is also a lack of focus on measuring the value and impact of GenAI in SOC processes. This means organisations can’t justify their spend on GenAI capabilities, leading to significant time spent retroactively analysing the value provided.
It’s important to measure existing SOC processes, staffing and technology costs before embarking on an AI SOC initiative. Understanding the potential savings will strengthen any business case and provide a benchmark against which to compare future value gained by using AI.
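The benchmark described above can be as simple as a cost-per-alert baseline and a modelled saving. The sketch below shows the arithmetic; every figure in it is a placeholder assumption, not Gartner data.

```python
# Illustrative baseline for an AI SOC business case: estimate today's
# cost per handled alert, then model the saving under an assumed
# efficiency gain. All figures are placeholders.

def cost_per_alert(annual_staff_cost: float, annual_tooling_cost: float,
                   alerts_per_year: int) -> float:
    """Blended cost of handling one alert under the current operation."""
    return (annual_staff_cost + annual_tooling_cost) / alerts_per_year

def projected_annual_saving(baseline_cost_per_alert: float,
                            alerts_per_year: int,
                            efficiency_gain: float) -> float:
    """efficiency_gain is a fraction, e.g. 0.25 for a 25% cost reduction."""
    return baseline_cost_per_alert * alerts_per_year * efficiency_gain

baseline = cost_per_alert(1_200_000, 300_000, 500_000)  # 3.0 per alert
print(round(projected_annual_saving(baseline, 500_000, 0.25)))  # 375000
```

Captured before any AI spend, a figure like this gives the business case a defensible before-and-after comparison rather than a retroactive justification.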
Build versus buy AI
Security operations teams have seen lacklustre results from mature AI-based technologies like user and entity behaviour analytics. This is highly likely to lead organisations to abandon these products and opt to build specific detection capabilities to suit their needs.
Gartner predicts 45% of SOCs will reevaluate their build-versus-buy decisions for AI detection technology by 2027, with an emphasis on enhancing security analyst capabilities.
Evaluating the financial, operational and strategic implications of building versus buying AI-driven technology is essential; so too is having the requisite skill sets for the undertaking.
Erosion of security skills
Business leaders’ push for more automation and AI within security operations, to reduce cost and gain efficiencies, will lead to a decline in foundational security analysis skills. As a result, Gartner predicts that 75% of SOC teams will experience erosion of these skills by 2030.
Security learning, development and certification programmes will also shift away from on-the-job training to offer more data entry and automation process development. In addition, human threat and incident analysis expertise will be concentrated at vendors and outsourced providers.
Organisations must engage with their security analysts now to determine what they want their roles to be if AI and automation security solutions are implemented. Discuss the real possibility of skill erosion due to AI, identify where human-led SOC functions will persist, and plan how to transition SOC analysts into roles that require more human-in-the-loop decision making.
In addition, a business case should be prepared for maintaining an insourced SOC, to counter pressure from business leadership, on the grounds that humans retain an advantage over machines in organisational knowledge, intangible office dynamics and intuitive decision making.
Pete Shoard is vice-president analyst at Gartner, covering security operations trends and technologies