
Attacks against AI systems are a growing concern

European research group says attacks against AI systems are already occurring, difficult to identify, and could be far more common than currently understood

Cyber attackers currently focus most of their efforts on manipulating existing artificial intelligence (AI) systems for malicious purposes, instead of creating new attacks that use machine learning.

That is the key finding of a report by the Sherpa consortium, an EU-funded project founded in 2018 to study the impact of AI on ethics and human rights, supported by 11 organisations in six countries, including the UK.

However, the report notes that attackers already have access to machine learning techniques, and AI-enabled cyber attacks will soon be a reality, according to Mikko Hypponen, chief research officer at IT security company F-Secure, a member of the Sherpa consortium.

The continuing game of “cat and mouse” between attackers and defenders will reach a whole new level when both sides are using AI, said Hypponen, and defenders will have to adapt quickly as soon as they see the first AI-enabled attacks emerging.

But despite the claims of some security suppliers, Hypponen told Computer Weekly in a recent interview that no criminal groups appear to be using AI to conduct cyber attacks.

The Sherpa study therefore focuses on how malicious actors can abuse AI, machine learning and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are already within attackers’ reach, including the creation of sophisticated disinformation and social engineering campaigns.

Although the research found no definitive proof that malicious actors are currently using AI to power cyber attacks, consistent with Hypponen’s observation, the researchers highlighted that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites and more.

Andy Patel, a researcher with F-Secure’s AI centre of excellence, said that instead of AI attacking people, the current reality is that humans are regularly attacking AI systems.

“Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out-of-control computers,” said Patel.

“But human attacks against AI actually happen all the time. Sybil attacks designed to poison the AI systems that people use every day, such as recommendation systems, are a common occurrence. There are even companies selling services to support this behaviour. So, ironically, today’s AI systems have more to fear from humans than the other way around.” 


Sybil attacks involve a single entity creating and controlling multiple fake accounts in order to manipulate the data that AI uses to make decisions. A popular example of this attack is manipulating search engine rankings or recommendation systems to promote or demote certain pieces of content. However, these attacks can also be used to socially engineer individuals in targeted attack scenarios.

“These types of attack are already extremely difficult for online service providers to detect, and it is likely that this behaviour is far more widespread than anyone fully understands,” said Patel, who has conducted extensive research on suspicious activity on Twitter.
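To make the mechanism concrete, the following is a minimal, purely illustrative Python sketch, not taken from the Sherpa report, of how a block of sybil accounts could tilt a naive recommender that ranks items by average rating. The products, ratings and number of fake accounts are all hypothetical.

```python
# Illustrative sketch: a coordinated block of fake accounts skews a naive
# average-rating recommender. All names and numbers are hypothetical.
from statistics import mean

# Genuine users rate two competing products on a 1-5 scale
ratings = {
    "product_a": [4, 5, 4, 5, 4],   # genuinely well-liked item
    "product_b": [2, 3, 2, 3, 2],   # poorly rated item
}

def top_recommendation(ratings_by_item):
    """Naive recommender: suggest the item with the highest mean rating."""
    return max(ratings_by_item, key=lambda item: mean(ratings_by_item[item]))

print(top_recommendation(ratings))   # product_a

# Sybil attack: one operator registers many fake accounts and floods the
# system with 5-star ratings for the inferior product.
sybil_votes = [5] * 20
ratings["product_b"] = ratings["product_b"] + sybil_votes

print(top_recommendation(ratings))   # product_b - the ranking has been manipulated
```

In practice the same pattern applies to search rankings, review sites and social media trends: the attacker never touches the model itself, only the data it learns from or aggregates, which is part of what makes such activity hard for service providers to spot.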

According to the researchers, potentially the most useful application of AI for attackers in the future will be helping them to create fake content. The report noted that AI has advanced to a point where it can fabricate extremely realistic written, audio and visual content. Some AI models have even been withheld from the public to prevent them from being abused by attackers.

Patel added: “At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And there are many different applications for convincing, fake content, so I expect it may end up becoming problematic.”

Other key findings of the report include:

  • Adversaries will continue to learn how to compromise AI systems as the technology spreads.
  • The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against (a simple illustration follows this list).
  • Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an “AI arms race”.
  • Securing AI systems against attacks may cause ethical issues, such as privacy infringements through increased monitoring of activity.
  • AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries.
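
As a simple illustration of manipulating a model’s output, the sketch below, which is not drawn from the Sherpa report and uses entirely hypothetical weights and inputs, shows how a small, targeted perturbation can flip the decision of a toy linear classifier standing in for, say, a spam filter.

```python
# Illustrative sketch: a tiny, targeted change to the input flips the output
# of a simple linear classifier. Weights, bias and feature values are hypothetical.
import numpy as np

# Pretend these parameters came from a trained filter
w = np.array([0.8, -1.2, 0.5])
b = -0.1

def classify(x):
    """Return 1 (blocked) if the linear score is positive, else 0 (allowed)."""
    return int(np.dot(w, x) + b > 0)

x = np.array([0.9, 0.1, 0.8])          # malicious input the filter catches
print(classify(x))                      # 1 - blocked

# Evasion: nudge each feature slightly against the direction of its weight,
# lowering the score while keeping the input superficially similar.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(np.max(np.abs(x_adv - x)))        # perturbation bounded by eps (0.4)
print(classify(x_adv))                  # 0 - near-identical input now slips through
```

Real attacks against deployed systems are more sophisticated, but the underlying point is the same: because there are many such levers, defenders struggle both to detect the manipulation and to harden models against it.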

Sherpa project coordinator Bernd Stahl, a professor at De Montfort University, Leicester, said F-Secure’s role as the sole partner from the cyber security industry is helping the project account for how malicious actors can use AI to undermine trust in society.

“Our project’s aim is to understand the ethical and human rights consequences of AI and big data analytics to help develop ways of addressing these,” he said. “This work has to be based on a sound understanding of technical capabilities as well as vulnerabilities. We can’t have meaningful conversations about human rights, privacy or ethics in AI without considering cyber security.”
