How can security vendors react to the growing AI threat?

Some of the great and the good of the cyber security vendor world share their predictions about what technologies will dominate this year

To get an idea of what 2024 might hold, distributor e92plus quizzed vendors about their expectations for the year ahead, focusing on artificial intelligence (AI) and the growing concerns around its effect on cyber security.

The automation of AI in cyber attacks

Dirk Schrader, field CISO for EMEA and vice-president of security research at Netwrix

The use of automation will continue to increase, heavily affecting the way attackers plan and execute their activities. End-to-end automation will encompass artificial intelligence deployment for creating phishing emails, fake websites and responses to foreseeable circumstances.

This escalation will accelerate the pace of cyber crime. The entire sequence – from reconnaissance, weaponisation, delivery, infiltration, installation, command and control, to actions on targets – will advance to a new level.

Nevertheless, automation won’t eliminate the necessity to exploit credentials, privileges and vulnerabilities. Addressing the challenge posed by AI-powered attacks requires proactive defence measures: identifying the sensitive data the organisation holds and protecting it through the least-privilege approach and zero-trust principles, all while ensuring the resilience of the existing infrastructure.

Any organisation heavily reliant on reactive defence methods and lacking governance over identities and infrastructure is likely to be overwhelmed by the repercussions of attacks.

The response from AI

In 2024, cyber security heavily relies on automation to combat sophisticated automated cyber threats. Automated systems detect anomalies, analyse behaviours and respond rapidly to incidents. AI-driven tools enable real-time threat detection, incident response and patch management. These systems continuously learn, adapt and autonomously fortify defences, enhancing security postures.

Automation facilitates efficient penetration testing, vulnerability assessments and adaptive authentication. Its pivotal role lies in empowering cyber security teams to detect, mitigate and proactively defend against evolving automated cyber attacks, ensuring robust protection for digital infrastructures.
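
As an illustration of the kind of automated anomaly detection described above, the sketch below trains scikit-learn’s IsolationForest on invented login telemetry. It is a minimal sketch, not any vendor’s product; the feature names and values are assumptions made for the example.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Invented telemetry: one row per login - [hour of day, MB transferred, failed attempts]
    rng = np.random.default_rng(0)
    baseline = np.column_stack([
        rng.normal(10, 2, 500),   # logins clustered around office hours
        rng.normal(50, 10, 500),  # typical data volumes in MB
        rng.poisson(0.2, 500),    # occasional failed attempts
    ])

    # Fit an unsupervised model of "normal" behaviour, then score a new event
    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
    suspicious = np.array([[3.0, 900.0, 8.0]])  # 3am login, huge transfer, many failures
    print(model.predict(suspicious))            # -1 flags an anomaly, 1 means normal

In a real deployment, the features would come from authentication logs or EDR telemetry, and a -1 verdict would trigger an automated response playbook rather than a print statement.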

The use of personalisation and social media in cyber attacks

Allen Manville, channel director, strategic partners at Cofense

Despite a tough economic climate throughout 2023, cyber security has moved up the priority scale for organisations large and small, with more budget being made available. This reflects a continued increase in the volume and sophistication of email threats, with 85% more indicators of compromise observed than in 2022.

In 2024, we don’t predict this trend slowing as threat actors look to use the capabilities of AI to advance email threats that exploit known weaknesses in traditional security. Creative applications of AI will enable the development of highly personalised threats that exploit traditional technology’s inability to read and identify hidden malicious content.

We expect the rise in image-based attacks to continue, following a 2,400% increase in QR code phishing alone last year, driven by their effectiveness at evading traditional technology. Combine this with AI’s deepfake capabilities, which can exploit personal information and online behaviour or even impersonate senior members of staff, and campaigns will become increasingly hard to distinguish from genuine communications, with a high penetration rate.
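
One practical countermeasure is to extract the destination of an embedded QR code before it reaches a user. Below is a minimal sketch using the pyzbar and Pillow libraries; the filename is hypothetical, and a production pipeline would feed the recovered URL to reputation checks rather than printing it.

    from PIL import Image
    from pyzbar.pyzbar import decode

    # "attachment.png" stands in for an image lifted from an inbound email
    for symbol in decode(Image.open("attachment.png")):
        if symbol.type == "QRCODE":
            url = symbol.data.decode("utf-8", errors="replace")
            print("QR code resolves to:", url)  # hand off to URL reputation scanning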

This could also be applied to the exploitation of trusted resources that sit outside an organisation’s control. Social media scams, vishing and smishing have all been on the rise, and highly targeted campaigns delivered through channels with little to no security controls could pose a high risk to an organisation’s security. As a result, more value will need to be placed on the last line of defence: employee resilience.

AI could also exploit traditional security’s inability to deal with the sheer volume and speed of the campaigns it can deliver. AI will enable threat actors to automate high-volume campaigns at speed, and with these evasive techniques we could easily see email servers and security teams overwhelmed in 2024.

Employees will become an integral part of any security stack, and conditioning them to accurately identify the evolving threats that get through will need to be a priority. Security teams will also need to be enabled to proactively identify and remove threats before they become a problem, and this will rely on access to vital live threat intelligence.

The response from AI

Personalisation and social media are powerful tools that cyber criminals can exploit to conduct more targeted and effective cyber attacks. Here’s how they might be used:

  • Spear phishing and social engineering: Cyber criminals leverage personalisation to craft sophisticated spear-phishing emails or messages.
  • Identity theft: Cyber criminals can harvest social media to create detailed profiles for identity theft.
  • Social engineering attacks: Using personalised information gathered from social media, cyber criminals can manipulate victims into divulging sensitive information or performing actions that compromise security.
  • Physical security risks: Posting specific personal details, travel plans, or location information on social media platforms can create physical security risks.
  • Fake profiles and social engineering scams: Cyber criminals create fake profiles mimicking legitimate individuals or entities to conduct social engineering scams.
  • Reconnaissance and social engineering pretexting: Social media can be a treasure trove for cyber criminals conducting reconnaissance.

The use of shadow AI

Kunal Agarwal, CEO and founder of dope.security

Shadow AI, an iteration of shadow IT, is going to be a growing problem in 2024 and beyond. People see the labour savings that generative AI (GenAI) can provide, and many are going to take advantage regardless of what their company policy is. This has the potential to expand an organisation’s vulnerable surface area in a way that it is somewhat blind to.

Thinking back 10 years, organisations faced similar problems with cloud apps such as Box or Dropbox. It’s going to be rinse and repeat: organisations will look to understand what apps are being used in the company and evaluate whether to pay to control them, accept the risk, or block the app. Hopefully this time we can skip the ambulance chasers and the fearmongering used to scare businesses, though – all you really need is visibility.

If your teams are using ChatGPT or Bard, you should be aware of it. But rather than intercepting and recording all requests and chats, we would recommend a different strategy: the company should choose to either educate (through a warning page) or block the app entirely. This gives employees a clear-cut answer rather than creating grey areas and raising privacy concerns.
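
In secure web gateway terms, that clear-cut approach reduces to a small per-app decision table rather than full traffic interception. A minimal sketch, with illustrative hostnames and policies:

    # Illustrative per-app decisions: educate via a warning page, or block outright
    GENAI_POLICY = {
        "chat.openai.com": "warn",
        "bard.google.com": "block",
    }

    def decide(host: str) -> str:
        """Return the action for a requested host; unlisted apps pass through."""
        return GENAI_POLICY.get(host.lower(), "allow")

    print(decide("chat.openai.com"))  # "warn": show an education page, then allow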

The response from AI

Cyber criminals may leverage shadow AI to craft more sophisticated cyber attacks, utilising AI-generated deepfakes for social engineering, automated malware development for tailored exploits, or AI-driven evasion tactics to bypass security measures.

Additionally, AI-powered reconnaissance can enhance targeted attacks, and adversarial machine learning can create stealthier threats. Cyber criminals could misuse AI to amplify phishing, create convincing fake content, or enhance evasion techniques, heightening the challenge for cyber security defences, as AI-driven attacks become more complex and difficult to detect.

Using edge AI and quantum computing

Dave Barnett, head of SASE at Cloudflare

The year 2024 will bring us more experiences driven by AI. For instance, although today you can ask Siri in a smart car interface to “find” a specific restaurant as you’re driving, that isn’t an AI experience. Instead, an AI-fuelled experience would have Siri actually “recommend” restaurants, based on data inferred from your past preferences, real-time location, time of day, and other unique data points.

AI inference on the edge (sometimes also referred to as “edge AI”) is now becoming more affordable and accessible. This will be a game changer in 2024. Hyper-local AI inference will let manufacturers more easily deploy AI-trained models as close as possible to the user. Ultimately, that opens up a whole new world of AI-powered, internet-based experiences.

On the security side, while AI holds incredible promise, it’s also terrifying for data control policies. Team members treat AI like a sounding board for painful problems, and it’s far too easy to upload sensitive data to AI tools. Unless every organisation places strict guardrails around GenAI usage, we’ll see security incidents caused by oversharing information in the process. Those that allow third-party AI services but have not yet adopted zero-trust security will be particularly vulnerable to data breaches and intellectual property losses.

Finally, there is a warning rather than a traditional prediction. While we all know about zero-day vulnerabilities, I believe more organisations and governments will start preparing for the threat of “Q-day” – an attack that exploits quantum computing to crack cryptographic algorithms.

Quantum computing is far from mainstream, as very few quantum computers exist today. But a sufficiently advanced quantum computer will, in future, be able to decrypt information that is currently considered securely encrypted.

Deloitte and others have previously warned that adversaries are targeting organisations now in these harvest now, decrypt later (HNDL) attacks. Why wouldn’t an attacker acquire sensitive data now, patiently “wait” until they have a machine that can read it, and then use AI to help them interpret it? We’ve heard repeatedly of breaches where attackers “only” accessed email addresses or account records – with some small comfort that more sensitive data such as bank account numbers and passwords were encrypted.

But now imagine that the attackers who have stolen (and continue to steal) data from organisations such as the NHS, Facebook, Microsoft, LinkedIn, British Airways and Equifax eventually gain access to tools (i.e. quantum computing) that let them read all that encrypted data.

While that scenario is at least several years away, alarm bells will really start ringing this year around HNDL and Q-day. Organisations should pay far more attention to protecting today’s communications from future decryption.

The response from AI

Quantum computing poses a potential threat to existing encryption methods by exploiting its ability to quickly solve complex mathematical problems. Quantum algorithms, such as Shor’s algorithm, could decipher widely used cryptographic algorithms, such as RSA and ECC, undermining data security.
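
To make the point about Shor’s algorithm concrete: the quantum speed-up is confined to finding the period of a^x mod N, and everything after that is classical number theory. The toy sketch below substitutes a brute-force loop for the quantum period-finding step, so it only works for tiny moduli, but the reduction from period to RSA factors is the genuine one.

    import math
    import random

    def order(a: int, N: int) -> int:
        """Brute-force the multiplicative order (period) of a mod N.
        Exponential-time stand-in for the quantum subroutine in Shor's algorithm."""
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    N = 3233  # toy RSA modulus: 61 * 53
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g != 1:
            factor = g  # lucky guess: a already shares a factor with N
            break
        r = order(a, N)
        if r % 2:
            continue  # need an even period
        half = pow(a, r // 2, N)
        if half == N - 1:
            continue  # a^(r/2) = -1 mod N yields only trivial factors
        factor = math.gcd(half - 1, N)
        if 1 < factor < N:
            break

    print(N, "=", factor, "*", N // factor)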

Cyber criminals foresee exploiting quantum computers’ immense processing power to break encryption, compromising sensitive information. As quantum computing advances, it threatens the current cryptographic infrastructure, necessitating the development of quantum-resistant encryption methods to safeguard against potential cyber threats that leverage quantum computing’s capabilities to undermine encryption protocols.

Assume breach vs prevention first

Yariv Fishman, chief product officer of Deep Instinct

The “assume breach” mindset will finally start to shift to prevention-first.

Since the dawn of cyber security, organisations have operated with a “detect and respond” approach to breaches, which also meant an “assume breach” mentality. The rapid advances in AI mark a turning point for organisations and the end of the endpoint detection and response (EDR) honeymoon period, making way for a focus on prevention. It’s high time to eliminate the “assume breach” status quo.

In 2024, organisations will start shifting towards a prevention-first mindset, utilising deep learning (DL), the most advanced form of AI, which allows for increased prevention capabilities, reduction of total cost of ownership (TCO), and tools to block and prevent future adversarial AI cyber attacks.

Large language models (LLMs) hold a lot of promise, but they are nowhere near their maximum potential. In 2024, as public LLMs become more accurate and powerful, we’ll simultaneously see an uptick in adversarial AI. LLMs can already perform standalone vulnerability research, implement exploits and execute attacks, including custom obfuscation and malware builders like we’ve never seen before.

Furthermore, existing tools have proven they cannot address zero-day threats, creating the need to fight AI with AI. Deep learning, the most sophisticated form of AI, is best positioned to combat these zero-day adversarial AI threats.

The response from AI

Organisations should adopt an “assume breach” mindset alongside preventive measures. While preventive measures are crucial, assuming a breach acknowledges the inevitability of potential security incidents. This approach emphasises continuous monitoring, rapid detection, and response readiness, assuming attackers might already be inside the network. 

Implementing robust preventive strategies, such as firewalls and access controls, remains vital, but coupling these with proactive threat hunting, real-time monitoring, and incident response plans prepares organisations to swiftly contain and mitigate breaches, minimising potential damage and improving overall cyber security resilience in an ever-evolving threat landscape.

Continuing social engineering threats 

Comment from the ZeroFox Threat Intelligence Team

The threat from social engineering will likely remain on an upward trajectory in 2024. Threat actors continue to evolve traditional phishing techniques such as the use of malicious attachments, delivered by email or popular messaging applications such as Zoom and MS Teams.

Microsoft’s disabling of default VBA macros in its Office program will very likely continue driving an increase in both the use of file types omitting mark-of-the-web controls, such as archive files (RAR) and Windows Shortcut files (LNK), and the use of Adobe, Google and Dropbox files to facilitate HTML smuggling.
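
The mark-of-the-web these file types omit is, on Windows, an NTFS alternate data stream named Zone.Identifier that is stamped onto internet downloads. A minimal, Windows-only sketch of checking for it (the path is hypothetical):

    def has_mark_of_the_web(path: str) -> bool:
        """Return True if the file carries the Zone.Identifier alternate data
        stream that Windows stamps on internet downloads (NTFS only)."""
        try:
            with open(path + ":Zone.Identifier", encoding="utf-8", errors="ignore") as ads:
                return "ZoneId" in ads.read()
        except OSError:  # stream absent, or a filesystem without ADS support
            return False

    # Files extracted from a RAR archive typically return False here, which is
    # why SmartScreen and Protected View never engage on them.
    print(has_mark_of_the_web(r"C:\Users\me\Downloads\invoice.lnk"))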

Attacks associated with search engine optimisation (SEO) poisoning are very likely to remain a threat, as threat actors continue to find success in leveraging SEO cloaking – the manipulation of search engine web crawlers, malicious redirects and website compromise attacks.

When conducting look-alike domain and email spoofing attacks, threat actors are likely to increasingly harness the perceived authenticity offered by the use of paid top-level domains, such as .com, rather than free ones, such as .tk and .ga.
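
Many look-alike registrations can be caught with a cheap similarity check against the domains an organisation owns. A minimal sketch using Python’s standard library follows; the brand list and threshold are illustrative, and real monitoring would add homoglyph and keyboard-distance checks.

    from difflib import SequenceMatcher

    BRAND_DOMAINS = ["example.com"]  # hypothetical list of domains you own

    def looks_alike(candidate: str, threshold: float = 0.8) -> bool:
        """Flag a domain that is suspiciously similar, but not identical, to ours."""
        candidate = candidate.lower()
        return any(
            candidate != legit
            and SequenceMatcher(None, candidate, legit).ratio() >= threshold
            for legit in BRAND_DOMAINS
        )

    print(looks_alike("examp1e.com"))  # True: digit 1 substituted for the letter l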

The use of real-time, multifactor authentication (MFA)-bypassing techniques is likely to continue on an upward trajectory, as threat actors circumvent the popular and fast-proliferating tool often considered secure. MFA fatigue and OAuth consent phishing are likely to remain threats, and various types of “in-the-middle” attacks capable of token theft and session hijacking are very likely to become increasingly sophisticated and more difficult to detect.

Phishing-as-a-Service operations are likely to continue proliferating in dark web marketplaces, with off-the-shelf packages offering services across an increasingly wide range of prices and sophistication levels.

These services will continue to lower the barriers to entry for threat actors, enabling less technically skilled individuals to conduct man-in-the-middle, MFA-bypassing and session-stealing attacks.

The response from AI

Cyber criminals exploit search engine poisoning by manipulating search results to drive users to malicious websites. They employ black hat SEO tactics, injecting malicious links or keywords into legitimate sites, aiming to rank high in search results for popular queries. These poisoned links lead to malware distribution, phishing sites or scams. Victims unknowingly click these manipulated search results, exposing themselves to malware downloads, credential theft or scams.
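
Cloaking of this kind can often be spotted by fetching the same page as a crawler and as a browser and comparing the responses. A hedged sketch using the requests library follows; the URL is hypothetical, and a real check would render pages and follow redirects.

    import requests

    URL = "https://example.com/landing"  # hypothetical page under investigation
    USER_AGENTS = {
        "crawler": "Googlebot/2.1 (+http://www.google.com/bot.html)",
        "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    }

    # Cloaked pages serve different content depending on who appears to be asking
    bodies = {
        name: requests.get(URL, headers={"User-Agent": ua}, timeout=10).text
        for name, ua in USER_AGENTS.items()
    }
    if bodies["crawler"] != bodies["browser"]:
        print("Content differs by user agent: possible SEO cloaking")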

Cyber criminals continuously adapt these techniques, leveraging current events, popular topics or trending keywords to deceive users, making search engine poisoning a significant threat to unwary internet users’ cyber security.
