
Davos 2024: AI-generated disinformation poses threat to elections, says World Economic Forum

Disinformation and misinformation are the top risks facing businesses, governments and the public over the next two years

This article can also be found in the Premium Editorial Download: Computer Weekly: Davos 2024: AI disinformation tops global risks

Artificial intelligence (AI)-generated disinformation and misinformation pose risks to upcoming elections in the US, the UK, Asia and South America over the next two years.

Attempts to undermine the democratic process by spreading false narratives could erode confidence in governments and lead to civil unrest.

The World Economic Forum (WEF) warned today that AI-generated online misinformation and disinformation are the top short-term risk facing countries.

With three billion people expected to vote in elections worldwide between now and 2026, the WEF ranks the risk posed by disinformation and misinformation ahead of severe weather events, social polarisation and cyber security.

AI also poses new risks to computer systems by allowing hostile states and hacking groups to automate cyber attacks, while in the longer term, dependence on AI for decision-making will create further risks, the WEF predicted in its Global Risks Report 2024, published today.

The vulnerability of governments, businesses and society to AI-generated fake narratives will be one of the key risks under discussion when business leaders, politicians, academics and non-government organisations meet at the World Economic Forum in Davos from 15-19 January 2024.

The World Economic Forum’s Global Risks Report 2024, which draws on the views of 1,200 risk experts, policy-makers and industry leaders around the world, paints a gloomy picture, predicting a tough outlook for the next two years that is expected to worsen over the longer term.

Some 30% of the experts consulted by the WEF said the world was on the precipice of catastrophic risks over the next two years; 60% predicted catastrophic risks over the next decade.

Saadia Zahidi, managing director of the WEF, described the situation as “an unstable global order characterised by polarising narratives and insecurity, the worsening impacts of extreme weather and economic uncertainty”, which was “causing accelerating risks – including misinformation and disinformation”.

Election risks

With elections looming, the WEF warns that social media companies could be overwhelmed by multiple overlapping misinformation campaigns, making attempts to manipulate elections difficult to police.

Deep-fake AI-generated campaign videos, podcasts or websites could influence voters and lead to protests, or in more extreme scenarios, lead to violence or radicalisation, according to the WEF’s analysis.

False narratives will be increasingly personalised and targeted to specific groups, and will spread through less open channels such as the WhatsApp messaging service or China’s WeChat, it predicts.

Misinformation campaigns could destabilise newly elected governments, potentially leading to political unrest, violence and terrorism, according to the WEF.

“The potential impact on elections worldwide over the next two years is significant, and that could lead to elected governments’ legitimacy being called into question,” said Carolina Klint, chief commercial officer for Europe at Marsh McLennan. “This, in turn, could impact the democratic process, leading to further social polarisation, riots, strikes or even increasing violence.”

Cyber security risks of AI

The WEF warns that AI will expose businesses and organisations to new cyber security risks by providing cyber criminals with new pathways to hack companies.

AI can be used to create advanced malware that can impersonate people to win their trust and trap them into revealing their passwords during phishing attacks.


When North Korean hackers attacked the Bangladeshi central bank in 2016, it took them two years to map out the bank’s computer networks and figure out how to attack the system. “Had the attack been powered by AI, it would have taken two days,” Klint told a press conference today.

Businesses will need to respond by using artificial intelligence to automate defences against cyber attacks, automatically patch vulnerable systems and close security gaps.

“We have to recognise that everything we use – such as water and electricity, the financial system, the communication system – is dependent on the integration of an incredibly complex network of systems,” said Klint.

“Cyber security is not about protection of a computer or protecting a file, it’s more about making sure supply chains work and that society as a whole is up and running,” she added.

In the case of the Bangladeshi central bank, Klint said it took investigators months to find out what had happened and put a stop to the attack, but if AI had been available, it could have detected the intrusion within two days.

AI and social media regulation

Despite the increasing isolation of many countries, the WEF said businesses and governments would need to collaborate to find solutions to AI-generated disinformation campaigns and rising cyber risks.

One answer is to regulate technology companies to require AI-generated articles and images to include a watermark that would identify them as artificially generated.
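To make the watermarking idea concrete, the sketch below shows one simple way provenance tagging could work: a provider attaches a keyed tag to AI-generated content, which a verifier holding the key can check. This is a hypothetical illustration only; the key name, tag format and metadata fields are assumptions, and real proposals (such as statistical watermarks embedded in generated text, or cryptographically signed metadata) are considerably more elaborate.

```python
import hashlib
import hmac

# Assumption for illustration: provider and verifier share this key.
SECRET_KEY = b"provider-signing-key"

def watermark(content: bytes) -> dict:
    """Attach a provenance tag marking content as AI-generated."""
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "provenance_tag": tag}

def verify(content: bytes, metadata: dict) -> bool:
    """Check the tag matches the content, i.e. the label was not forged
    and the content was not altered after tagging."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata.get("provenance_tag", ""))

article = b"An AI-generated news article..."
meta = watermark(article)
print(verify(article, meta))         # True: tag matches the content
print(verify(article + b"x", meta))  # False: content altered after tagging
```

The limitation, of course, is that such a scheme only identifies content whose producer cooperates; it cannot flag content generated by actors who simply omit the watermark.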

Greater regulation may also be needed for social media companies, which amplify the spread of disinformation and misinformation by feeding people more articles on topics they “like”.

John Scott, head of sustainability risk at Zurich Insurance Group and one of the contributors to the report, said the lack of editorial selection on social media could lead to a world where no one knows who to trust and what content is reliable.

“Somehow, we have got to create a veracity, some sort of arbiter of truth, that we can understand individually and collectively,” he said.

Proposed measures include digital literacy campaigns on misinformation and disinformation, and international agreements to limit the use of AI in conflict decision-making.

Risks are interlinked

According to the risk report, concerns about the risks of AI-driven misinformation will dominate 2024, along with the cost-of-living crisis and social polarisation.

The risks are interlinked and may be exacerbated by geopolitical tensions, which could mean conflicts underway in Ukraine, Israel and elsewhere lead to further conflicts in other parts of the world.

Over the next decade, environmental risks will continue to dominate, with extreme weather, critical changes to the Earth’s systems, loss of biodiversity, pollution and shortages of natural resources featuring in the top 10 risks.

The next few years will be characterised by economic uncertainty creating growing economic, technological and social divides, the WEF predicts.

Marsh McLennan’s Klint said breakthroughs in artificial intelligence would cause radical disruption for organisations, with many struggling to react to threats from misinformation alongside other risks.

“It will take a relentless focus to build resilience at organisational, country and international levels – and greater cooperation between the public and private sectors – to navigate this rapidly evolving risk landscape,” she said.

Scott added: “Collective and coordinated cross-border actions play their part, but localised strategies are critical for reducing the impact of global risks.”
