AI is currently at the centre of countless discussions, as likely to be negative as positive. We’ve covered the risks this technology introduces; this time we’ll look at the opportunities ground-breaking AI offers cyber security professionals, and the strategies and tools it enables for managing the security of the huge volumes of data that are a feature of daily life.
Data analysis and threat detection
AI excels at rapidly analysing extensive data sets – and doing this in the context of the wider environment. Its ability to continually learn also enables it to very quickly adapt to the changing landscape, meaning it is more accurate, and more likely to protect an organisation from an attack than traditional software that requires frequent patching and updating.
The human factor is a well-documented cyber security risk, but one that can be tackled with AI’s ability to analyse patterns in user behaviour. Continuous monitoring of human activity, along with systems, endpoints and networks, can identify patterns and detect anomalies that could indicate potential security risks such as insider threats and unauthorised access attempts.
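The idea behind this kind of behavioural anomaly detection can be illustrated with a minimal sketch. Real products use far richer models, but the following hypothetical example captures the core logic: build a statistical baseline of a user's normal behaviour (here, login hours) and flag events that deviate sharply from it.

```python
from statistics import mean, stdev

def is_anomalous(baseline_hours, new_hour, threshold=3.0):
    """Flag a login whose hour deviates strongly from a user's baseline.

    baseline_hours: historical login hours (0-23) for one user.
    new_hour: the hour of the login being checked.
    Returns True if the z-score of new_hour exceeds the threshold.
    """
    mu = mean(baseline_hours)
    sigma = stdev(baseline_hours)
    if sigma == 0:
        sigma = 1e-9  # avoid divide-by-zero for perfectly regular users
    return abs(new_hour - mu) / sigma > threshold

# A user who always logs in around 9am suddenly appears at 3am:
office_hours = [9, 9, 10, 8, 9, 10, 9, 8]
print(is_anomalous(office_hours, 3))   # flagged
print(is_anomalous(office_hours, 10))  # within normal range
```

In practice, commercial tools learn multidimensional baselines (location, device, data volumes, access patterns) and update them continuously, but the detect-deviation-from-baseline principle is the same.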
AI’s capabilities also include the analysis of email and network traffic, thereby encompassing phishing and fraud detection and reducing the risk of successful attacks. Meanwhile, machine learning algorithms can recognise advanced and previously unseen threats, making them highly effective at detecting malware.
As well as reducing the time taken to identify a threat, AI may be able to play a part in the initial response – for example, triggering automatic fixes, isolating an affected machine, or preventing a particular user account from accessing a system.
In effect, AI provides IT security teams with the insights that enable them to mount quick and targeted reactions to cyber-attacks. Microsoft is one of the companies at the forefront of using AI technology to solve security and productivity issues, announcing the forthcoming launch of its Security Copilot earlier this year. This tool explores and analyses any security-related event based on an organisation’s pre-defined security procedures and policies. It can also summarise a security issue, explain the root cause, and suggest mitigations to an analyst in plain terms.
Authentication and access control
Behavioural analytics and anomaly detection as described above, along with tools such as biometrics, are some of the AI techniques that can strengthen authentication systems and processes. This can help cyber security professionals improve identity verification, reduce the risk of unauthorised access, and enhance overall access control.
In addition, the extensive number of roles and job responsibilities that exist within many organisations increases the risk of loopholes that a user can exploit to escalate their privileges in order to access systems for which they are not authorised. This issue is currently managed by manually developed policies, but looking ahead, AI could potentially design these – again, saving time and improving accuracy.
SailPoint provides a good example here. The company’s identity solution uses AI to efficiently manage user access and access governance by continuously monitoring behaviour patterns to provide the right access at the right time to the right identities. Actionable insights help security teams spot risky access in a user profile and provide automated remediation recommendations to managers.
AI’s ability to accurately automate repetitive jobs such as log analysis and incident detection and response enables it to identify and prioritise high-severity incidents that require urgent action. It provides real-time feedback on what is happening in the background to help organisations make faster, better-informed decisions; it also frees up cyber security professionals’ time for more strategic and complex activities.
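The triage step described above can be sketched in a few lines. This is a deliberately simplified stand-in for the ML scoring a real AI tool would apply – here, hypothetical log lines are ranked by a fixed severity mapping so the most urgent events surface first.

```python
import re

# Simple keyword-to-score mapping; a real system would learn these weights.
SEVERITY_SCORES = {"CRITICAL": 4, "ERROR": 3, "WARNING": 2, "INFO": 1}

def triage(log_lines):
    """Rank raw log lines so the highest-severity events come first."""
    scored = []
    for line in log_lines:
        match = re.search(r"\b(CRITICAL|ERROR|WARNING|INFO)\b", line)
        score = SEVERITY_SCORES[match.group(1)] if match else 0
        scored.append((score, line))
    # Sort descending by score; unrecognised lines sink to the bottom.
    return [line for score, line in sorted(scored, key=lambda t: -t[0])]

logs = [
    "INFO  user jdoe logged in",
    "CRITICAL possible data exfiltration from host db-01",
    "WARNING repeated failed logins for svc-account",
]
for line in triage(logs):
    print(line)
```

An analyst reviewing the sorted output sees the possible exfiltration first, which is the whole point of automated prioritisation: the queue is ordered by risk rather than arrival time.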
In addition, AI enhances vulnerability management by automating scanning, assessment, and prioritisation processes.
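Prioritisation in vulnerability management typically means weighing technical severity against business context. As a minimal sketch – with hypothetical field names, not any vendor's schema – vulnerabilities can be ordered by CVSS score multiplied by a criticality weight for the affected asset:

```python
def prioritise(vulns):
    """Order vulnerabilities by risk = CVSS score x asset criticality.

    vulns: list of dicts with hypothetical keys 'id', 'cvss' (0-10)
    and 'asset_weight' (1 = low-value asset, 3 = crown jewel).
    """
    return sorted(vulns, key=lambda v: v["cvss"] * v["asset_weight"],
                  reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_weight": 1},  # severe, minor asset
    {"id": "CVE-B", "cvss": 6.5, "asset_weight": 3},  # moderate, critical asset
]
print([v["id"] for v in prioritise(findings)])
```

Note that the moderate flaw on the critical asset outranks the severe flaw on the minor one (19.5 vs 9.8) – context-aware ranking of this kind, done at scale and with learned weights, is where AI adds value over a raw CVSS sort.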
AI could prove to be a game changer in terms of an enterprise’s IT operations and service management (ITOM/ITSM) capabilities, with the implementation of the correct tools and techniques dramatically improving incident and service-level management.
AI-powered tools can identify potential incidents and issues in their early stages, before they become critical, while AI-driven cyber security solutions save time and enhance the accuracy of risk assessments in ITSM tools.
For example, BMC has embedded AI in its Helix Operations Management product, which proactively responds to real-time telemetry (events, incidents, logs and metrics), thereby preventing adverse impacts on service performance and the availability of an organisation’s IT infrastructure.
Effective AI requires human input
It’s fair to say that AI’s current ‘hot’ status leaves it open to being exaggerated by some players for marketing purposes; it’s also relatively early days in terms of its use becoming widespread and it will be some time before the solutions that have been thrown together to keep up with the trends are weeded out.
That said, genuine AI-powered security solutions do exist, and their ability to rapidly analyse vast amounts of data, detect complex patterns and anomalies, and automate many security processes, has the potential to transform the role of cyber security professionals.
However, AI must complement human expertise rather than replace it. Cyber pros should keep themselves informed on the latest AI developments in their fields and evaluate AI solutions based on their specific needs and requirements. It’s also important to maintain human expertise, oversight, and ethical considerations to ensure the responsible use of AI throughout the organisation.