
How GCHQ proposes to implement and use ethical AI

The rise of cyber crime and the escalating threat vectors facing the UK have led GCHQ to invest in automated threat detection and response systems to meet this challenge, as well as to liaise with the private sector for the first time

In early March 2021, GCHQ released a whitepaper outlining its plans for the future. In response to the escalating threats faced online, the government intelligence and security organisation will increasingly use automated systems.

The amount of data generated, in particular from social media posts and online forums, is increasing exponentially. It has reached levels at which it is impossible for humans, no matter how large the team, to monitor the online world effectively.

Conventional artificial intelligence (AI) tools were initially used to support GCHQ staff, but these carried historical biases from the data they were trained on. Such concerns are widespread: in Deloitte’s recent study, State of AI in the enterprise, a significant number of respondents expressed concerns about the ethical risks associated with AI.

“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ,” says Jeremy Fleming, director of GCHQ.

Ethical AIs are intelligent systems governed by codes of conduct that ensure automated decisions are made in a fair and unbiased way. An ethical AI unimpeded by historical bias produces outcomes that are fair and rational, giving GCHQ a clearer picture of what is happening within the UK.

“When you have any part of government that is dealing in things like counter-terrorism and cyber security, it is important to stay within the boundaries of what we understand as basic frames of legality, privacy, data protection, non-discrimination and human rights,” says David Leslie, ethics theme lead and ethics fellow at the Alan Turing Institute.

The UK is currently facing escalating threats, both within and outside its borders. From external threats to democratic processes, to the growing problem of human trafficking, recent events have illustrated the scale of threats that GCHQ now faces.

GCHQ is currently focusing on ethical AI for use in three key areas:

  • Foreign state disinformation.
  • Child abuse.
  • Human trafficking.

Foreign state disinformation

Hostile actors sponsored by other countries have used AI to conduct disinformation campaigns, typically through automating the production of fake content to undermine public discourse. Such content can include deepfake videos and audio material designed to mislead listeners, as well as automated social media accounts (bots). AI can also be weaponised to analyse specific users, thereby enabling personalised political targeting, as happened in the Cambridge Analytica scandal.

As a defence against these disinformation campaigns, AI-enabled tools can be deployed for automated fact-checking, verifying information against known trusted sources. Social media companies use such tools to flag content that is deliberately misleading or false, such as some posts recently published by anti-vaccine campaigners.
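
By way of illustration, the simplest form of this lookup can be sketched in a few lines of Python: a new claim is compared against a database of already-verified statements, with a toy TF-IDF similarity search standing in for the large claim databases and trained language models that production fact-checkers use. All names and data below are illustrative.

```python
# Toy claim lookup: TF-IDF similarity against verified statements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of claims already checked by trusted sources.
verified_claims = [
    ("Vaccines are rigorously tested before approval", True),
    ("The vaccine contains a tracking microchip", False),
]

def check_claim(claim, threshold=0.4):
    """Return (matched claim, verdict) for the closest verified claim,
    or None if nothing is close enough to count as a match."""
    texts = [claim] + [text for text, _ in verified_claims]
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    best = int(sims.argmax())
    if sims[best] < threshold:
        return None  # no close match: route to a human fact-checker
    return verified_claims[best]

print(check_claim("The vaccine contains a microchip"))
```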

Ethical AI provides GCHQ with the ability to detect botnets of machine-generated social media accounts. Through this, it can identify the sources generating misinformation, thereby allowing online operations to directly counteract these malicious accounts.
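
One signal such systems rely on is coordination: many accounts pushing identical content within seconds of one another. The sketch below shows that single heuristic in isolation, with invented data and thresholds; real detection combines many such signals with trained models.

```python
# Flag clusters of accounts posting identical text in a short window.
from collections import defaultdict

# (account, unix_timestamp, text) records - illustrative data
posts = [
    ("acct_01", 1000, "Candidate X rigged the vote!"),
    ("acct_02", 1003, "Candidate X rigged the vote!"),
    ("acct_03", 1004, "Candidate X rigged the vote!"),
    ("acct_99", 5000, "Lovely weather in Cheltenham today."),
]

def coordinated_clusters(posts, window=30, min_accounts=3):
    """Group identical texts posted within `window` seconds and flag
    clusters involving at least `min_accounts` distinct accounts."""
    buckets = defaultdict(set)
    for account, ts, text in posts:
        buckets[(text, ts // window)].add(account)
    return {key: accts for key, accts in buckets.items()
            if len(accts) >= min_accounts}

for (text, _), accounts in coordinated_clusters(posts).items():
    print(f"Possible botnet: {sorted(accounts)} pushed: {text!r}")
```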

Child abuse

Child abuse is one of the most insidious threats facing society. In the UK alone, it has been estimated that there are 300,000 people who present a threat to children.

Through analysing past accounts, AI can be trained to identify potential grooming behaviour within direct messages and chat rooms. It can also be used to detect the exchange of illegal images and to track the disguised identities of offenders across multiple accounts. AI can likewise search for illegal activities on the dark web, enabling law enforcement agencies to infiltrate rings of offenders.
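
The underlying pattern is ordinary supervised learning, sketched below with scikit-learn: a classifier trained on past, labelled conversations scores new messages for human review. The training examples here are invented stand-ins, and no real system would act on such scores without an analyst’s judgement.

```python
# Toy message classifier: flag conversations for human review.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative labelled examples (1 = flag for review, 0 = benign)
messages = ["what school do you go to? don't tell your parents we talk",
            "this chat is our secret, delete it after reading",
            "anyone up for the football game on saturday?",
            "great match last night, see you at training"]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(messages, labels)

# New messages get a risk score; only high scores reach an analyst.
score = model.predict_proba(["keep this a secret from your parents"])[0, 1]
print(f"Risk score: {score:.2f}")
```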

AI tools trained to analyse intercepted imagery and messages, and to detect chains of contact, can support investigators in identifying and protecting victims and in discovering accomplice offenders. The use of AI to analyse content and metadata also protects GCHQ’s analysts from unnecessary exposure to disturbing material.
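
One concrete mechanism for sparing analysts is hash-list matching: files whose fingerprints appear on a curated list of known illegal images are flagged without anyone opening them. The sketch below uses exact SHA-256 hashes for simplicity; production systems use perceptual hashes, which survive resizing and re-encoding. The hash list and directory are hypothetical.

```python
# Flag files matching a curated hash list, without viewing them.
import hashlib
from pathlib import Path

# Hypothetical hash list supplied by a child-protection body.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan(directory):
    """Yield paths of files whose SHA-256 is on the known list."""
    base = Path(directory)
    if not base.is_dir():
        return
    for path in base.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_HASHES:
                yield path  # route to evidence handling, unviewed

for match in scan("./intercepted_media"):  # hypothetical directory
    print(f"Flagged without human viewing: {match}")
```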

Trafficking

More than 350,000 individuals are estimated to be involved in serious organised crime (SOC) in the UK, at huge cost to the economy. Most SOC groups are involved in multiple types of trafficking, such as drugs, weapons and human trafficking, which in turn enable other crimes, such as identity theft and bribery.

SOC’s use of technology is increasingly sophisticated, involving the use of encryption tools, the dark web and virtual assets, such as bitcoin, to conceal transactions.

AI can help GCHQ map the complex networks, both nationally and internationally, that enable trafficking. This is achieved by identifying individuals, as well as their associated accounts and transactions, to reveal criminal groups and their associations. It can also be used to analyse large-scale chains of financial transactions, as payments are made and received online.
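
In graph terms, the task looks like the sketch below: accounts become nodes, payments become edges, and standard algorithms then surface candidate groups and their hubs. The library is the open source networkx; the transaction records are invented.

```python
# Map accounts and payments as a directed graph, then surface groups.
import networkx as nx

# (payer, payee, amount) - illustrative transaction records
transactions = [
    ("acct_A", "acct_B", 9500), ("acct_B", "acct_C", 9000),
    ("acct_D", "acct_B", 4000), ("acct_X", "acct_Y", 120),
]

graph = nx.DiGraph()
for payer, payee, amount in transactions:
    graph.add_edge(payer, payee, amount=amount)

# Each weakly connected component is a candidate network to examine;
# the highest-degree node in it is a candidate coordinator.
for component in nx.weakly_connected_components(graph):
    if len(component) > 2:
        sub = graph.subgraph(component)
        hub = max(sub.degree, key=lambda kv: kv[1])[0]
        print(f"Candidate network {sorted(component)}, hub: {hub}")
```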

Ethical AI could also yield geographical insights into illicit activity, enabling the analysis of multiple sources of imagery, messaging and sensor data to predict when and where deliveries of illegal cargo will take place.

“It is undeniable that these data-driven technologies are being used on all sides right now,” says Leslie. “In order to carry out their responsibility, they are going to need to understand and combat the various threats that are out there, using these types of digital data-driven technologies.”

A common misconception about the use of AI within the intelligence services is that it will replace human analysts, but this has never been the intention. Rather, AI would augment existing teams by automating processes, allowing analysts to focus on the important tasks that require human judgement. AI can trawl masses of data for patterns of suspicious activity, a task that would be impossible for human analysts using conventional methods.
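
A minimal sketch of that trawling, assuming an unsupervised anomaly detector such as scikit-learn’s IsolationForest, shows the division of labour: the machine scores every record, and only the outliers reach a human. The features and data below are synthetic stand-ins.

```python
# Score activity records so analysts inspect only the outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: logins per day, data transferred (MB) - mostly routine users
routine = rng.normal(loc=[5, 50], scale=[2, 20], size=(500, 2))
suspicious = np.array([[90, 4000]])  # one extreme record
activity = np.vstack([routine, suspicious])

detector = IsolationForest(random_state=0).fit(activity)
flags = detector.predict(activity)  # -1 marks anomalies

print(f"{(flags == -1).sum()} of {len(activity)} records flagged for review")
```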

However, training AI is challenging, and more so when it needs to be ethical. Advanced AI methodologies can make it impossible for humans to fully assess the factors the AI took into account. This carries the risk of the system becoming a black box: the AI produces an answer, but we have no way of understanding how it reached that conclusion. This has significant implications for accountability in the overall decision-making process.
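
One widely used response to the black-box problem is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, revealing which factors actually drove its decisions. A minimal sketch, with an illustrative model and synthetic data:

```python
# Measure which features a trained model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```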

“When you have these massive machine learning and AI programmes churning away, they are working at such a high dimension of variables that simply out-distances human-scale understanding,” says Leslie.

Data fairness

One of the core challenges in developing an AI system is ensuring it is fair, in that it makes equitable and reasonable decisions, and is not biased. Key to this is data fairness, as historically, many organisations have inadvertently relied on skewed datasets for training AI systems. A consequence of this is that the AI either unfairly supports a particular demographic or actively discriminates against others. A prime example is how speech and facial recognition systems fail to work effectively for significant parts of society.

There also needs to be outcome fairness, in that an AI can be biased if it fails to treat individuals equally. If an AI were given an unacceptable goal based on unsupported assumptions (such as the ethnicity or gender of targets), the system would achieve that goal, but in doing so would reinforce discrimination.
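
Both kinds of fairness reduce, in part, to measurable parity checks. The sketch below computes one of the simplest, the selection rate per demographic group, on invented audit data; a ratio between groups below roughly 0.8 (the “four-fifths” rule of thumb) is commonly treated as a warning sign.

```python
# Compare a model's selection rate across demographic groups.
from collections import defaultdict

# (group, model_decision) pairs - illustrative audit sample
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 0), ("group_b", 0), ("group_b", 1)]

totals, flagged = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    flagged[group] += decision

rates = {g: flagged[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Four-fifths rule of thumb: ratios below 0.8 warrant investigation.
ratio = min(rates.values()) / max(rates.values())
print(f"Parity ratio: {ratio:.2f}")
```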

Finally, there needs to be design fairness, particularly in providing data. Regardless of good intentions, if everyone on the design team comes from a western, educated, industrialised, rich, democratic (WEIRD) background, the AI will pick up their subconscious biases and assumptions.

As part of its approach to AI ethics, GCHQ will be informed by the framework created by the Alan Turing Institute in 2019 (published as Understanding artificial intelligence ethics and safety). This framework is designed to help organisations create ethical AI capabilities that are fair, non-discriminatory and justifiable to stakeholders. It will also take into account the findings from the recent RUSI project into national security and AI.

Because of the sensitive nature of its work, GCHQ has traditionally been reticent about working with external parties. But due to the inherent challenges of training ethical AI, GCHQ has made the ground-breaking decision to approach the UK’s AI sector to work alongside it.

Central to this will be its new office in Manchester, which will incorporate an “industry-facing” lab, dedicated to prototyping projects to help keep the country safe. GCHQ will also mentor and support startups based around its other offices, through accelerator schemes.

Little has been announced about how the industry-facing laboratory or accelerator schemes will operate, or what GCHQ is looking for. However, GCHQ has said interested companies should “follow us on Twitter”.

Ultimately, by using ethical AI and collaborating with the UK tech industry, GCHQ aims to pioneer a new kind of security. Instead of reacting to national security threats as they arise, it wants to use new insights to prepare for them in advance.

“We keep exponentially increasing the amount of data we are producing with the accelerating development of cyber physical systems,” says Leslie. “Responsibly utilising auxiliary tools, such as AI systems, to help us manage the corresponding risks seems to be the right direction.”
