
Vigilance advised if using AI to make cyber decisions

The AI arms race is heating up, and the battle lines are being redrawn. Still, organisations should proceed cautiously and remain vigilant, scrutinising AI's output to ensure accurate, safe and informed decision-making.

As tech ecosystems become more interconnected and data is exchanged more frequently, the complexity of the cyber landscape has significantly grown, making it incredibly challenging to manage using existing cyber security tools and resources. Consequently, the task of enhancing cyber security has surpassed human-scale capabilities.

Tools and technologies driven by artificial intelligence (AI) and machine learning (ML) are emerging to help organisations address these challenges, strengthening their security posture and improving both the accuracy and the speed of their response.

When a vendor says AI is in their product, what does that mean?

A vendor’s claim that its product incorporates AI can mean many things. The vendor may have utilised AI at various stages of product development. For instance, AI could have been employed to shape the requirements and design of the product, review its design or even generate source code. Additionally, AI might have been used to select relevant open-source code, develop test plans, write the user guide or create marketing content. In some cases, AI could be a functional component of the product itself. Sometimes, however, a claimed AI capability is really just machine learning.

Determining the legitimacy of AI claims can be challenging: the vendor’s transparency and supporting evidence are crucial. Weighing the vendor’s reputation, expertise and track record in AI development is vital for distinguishing authentic AI-powered products from “snake oil.”

What are the current challenges cyber security teams are facing?

As the digital realm continues to expand at an unprecedented pace, the challenges faced by cyber security teams have multiplied exponentially:

Massive amounts of data and data flow. One of the biggest hurdles for cyber security teams is dealing with the sheer amount of data and increasingly complex data flows. Organisations struggle to track the locations, uses and protections of their data.

More complex information processing ecosystems. Organisations use numerous cloud-based services, including IaaS, PaaS and SaaS, and APIs connecting them, with complex data flows and architectures that change so frequently that no one in the organisation fully understands them.

Broader attack surface. The ever-growing attack surface, coupled with the emergence of shadow IT, BYOD and novel attack vectors, has left organisations vulnerable and in need of robust defences.

Increased reliance on third parties. The increased reliance on third parties introduces additional vulnerabilities requiring constant vigilance. Third parties are more difficult to assess for risk, as they are less transparent than internal IT departments. Often, we are offered only a SOC 2 audit report or an ISO 27001 certification, neither of which provides the detail that cyber security teams need to understand a third party’s defences.

Highly skilled hackers. The formidable presence of highly skilled offenders and hackers poses a persistent threat, exploiting weaknesses and breaching even the most fortified systems.

Mature cyber criminal ecosystems. The cyber criminal industry is teeming with service providers of all kinds: exploits as a service, ransomware as a service, DDoS as a service and the sale of stolen valuable information. We daresay that cyber crime is healthier and growing faster than the world’s national economies, and criminals’ profit margins can be better than those of the organisations they attack.

Amidst this dynamic landscape, organisations worldwide are racing to harness AI’s transformative power. Those who fail to embrace AI may fall behind and be disadvantaged in the relentless battle against cyber threats.

What is special about AI?

AI consists of powerful components, including machine learning, deep learning and natural language processing, allowing it to transcend human limitations. What sets AI apart is its unparalleled ability to swiftly analyse colossal volumes of data, surpassing the capabilities of human minds and existing technologies, and to continuously evolve and become more capable over time. Furthermore, AI is a force multiplier: it amplifies the intentions of its users, good or bad, offensive or defensive, irrespective of how well they understand cyber security principles or how carefully its use has been designed and planned.

What is AI good for in the cyber security field?

AI is making significant strides in cyber security, transforming how we defend against cyber threats. With its remarkable capabilities, AI is an invaluable ally, bolstering security measures and safeguarding critical information.

AI is revolutionising cyber security by improving information security awareness, quantifying risks, identifying malware, enhancing vulnerability management, enabling behavioural analysis, detecting fraud and anomalies, accelerating incident response, strengthening identity and access management, protecting privacy, and supporting cyber security audits.
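To make the anomaly detection point concrete, the short Python sketch below shows a common pattern behind behavioural analysis: train an unsupervised model on historical activity and score new events against that learned baseline. It is an illustrative sketch only; the choice of scikit-learn's IsolationForest, the login features and the numbers are all assumptions for the example, not a description of any particular product.

# Minimal sketch: ML-based anomaly detection over login telemetry.
# Feature names and values below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour_of_day, failed_attempts_last_hour, megabytes_transferred, new_device_flag]
baseline_logins = np.array([
    [9, 0, 12.4, 0],
    [10, 1, 8.1, 0],
    [14, 0, 20.3, 0],
    [11, 0, 5.7, 0],
    [16, 2, 15.0, 1],
    [13, 0, 9.9, 0],
])

# Train on (assumed) mostly benign historical behaviour.
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(baseline_logins)

# Score a new event: 3 a.m. login, many failures, large transfer, new device.
suspicious_event = np.array([[3, 9, 850.0, 1]])
label = model.predict(suspicious_event)             # -1 = anomaly, 1 = normal
score = model.decision_function(suspicious_event)   # lower = more anomalous

print(f"label={label[0]}, anomaly score={score[0]:.3f}")

In practice, a flagged event would feed an analyst's queue rather than trigger an automatic response, which is the human-in-the-loop point discussed under the limitations below.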


In software and systems development, AI-powered tools will enhance developers’ ability to produce software free from vulnerabilities, and will help researchers spot exploitable vulnerabilities more easily.

Current limitations of AI

AI does not replace a human worker, although it may augment a human worker’s capabilities. Large language model AI systems are not entirely trustworthy, as they are known to “make up” facts in what is called AI hallucination. Further, AI systems can be biased: all humans are biased, and humans train AI systems. Malicious persons and organisations can also deliberately poison AI systems with false or highly biased information, which can result in AI systems reflecting those motives in their results.

All of these factors make it imperative that any AI system be coupled with human inspection of AI results, to ensure that systems continue to operate correctly and that people make sound decisions, not purely because “AI said so.”
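As a purely illustrative sketch of that principle, the Python below shows one way an AI recommendation might be gated behind human review: each suggested action carries a model-reported confidence and rationale, and anything disruptive or low-confidence is queued for an analyst instead of being applied automatically. The Recommendation type, action names and threshold are assumptions made for the example, not any vendor’s API.

# Minimal sketch of a human-in-the-loop gate for AI-generated actions.
# All types, action names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "isolate_host", "block_ip"
    target: str
    confidence: float    # model-reported confidence, 0.0 to 1.0
    rationale: str       # model-generated explanation for the analyst

def requires_human_review(rec: Recommendation, threshold: float = 0.95) -> bool:
    # Disruptive actions always go to an analyst; so do low-confidence ones.
    disruptive = rec.action in {"isolate_host", "disable_account", "block_ip"}
    return disruptive or rec.confidence < threshold

def handle(rec: Recommendation) -> None:
    if requires_human_review(rec):
        print(f"Queued for analyst review: {rec.action} on {rec.target} "
              f"(confidence {rec.confidence:.2f}) - {rec.rationale}")
    else:
        print(f"Auto-applied low-risk action: {rec.action} on {rec.target}")

handle(Recommendation("isolate_host", "srv-042", 0.97, "Beaconing to known C2 domain"))
handle(Recommendation("tag_for_watchlist", "user-17", 0.99, "Unusual login hour"))

The design choice is deliberate: the model can recommend and explain, but a person remains accountable for consequential decisions.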

Conclusion

Both sides in the cyber war are using AI to enhance their capabilities. The AI arms race is heating up, and the battle lines are being redrawn. Still, organisations should proceed cautiously and remain vigilant, scrutinising AI’s output to ensure accurate, safe and informed decision-making.

Cyber security organisations such as ISACA are even taking proactive steps to establish professional credential programs focused on AI. These programs aim to enhance people’s understanding of AI concepts, principles and applications, equipping them with the knowledge needed to navigate this evolving field.

In this era of rapid technological advancement, adopting AI is not merely a choice but a necessity. The future of cyber security hinges on embracing AI’s potential to revolutionise threat detection, response and system fortification.
