
Hungry for data: Inside Europol’s secretive AI programme

The EU’s law enforcement agency has been quietly amassing data to feed an ambitious but secretive artificial intelligence development programme that could have far-reaching privacy implications for people across the bloc

Critics describe it as a data grab and a strategy of surveillance creep. Europol calls it Strategic Objective 1: to become the European Union’s (EU) “criminal information hub” through mass data acquisition.

Europol officials have rarely hidden their appetite for access to as much data as possible. At the core of the agency’s hunger for personal data lies a growing artificial intelligence (AI) ambition.

The ambition is stated openly in the agency’s 2024 to 2026 strategy document: absorb data on EU citizens and migrants from European law enforcement databases, and analyse the mass of data using AI and machine learning (ML).

Since 2021, the Hague-based EU law enforcement agency has embarked on an increasingly ambitious, yet largely secretive, mission to develop automated models that will affect how policing is carried out across Europe. 

Based on internal documents obtained from Europol and analysed by data protection and AI experts, this investigation raises serious questions about the implications of the agency’s AI programme for people’s privacy. It also raises questions about the impact of integrating automated technologies into everyday policing across Europe without adequate oversight.

Responding to questions about the story, Europol said it maintains “good contacts” with a range of actors, and holds “an impartial position” towards each of them to fulfil its mandate of supporting national authorities in combating serious crime and terrorism: “Europol’s strategy ‘delivering security in partnership’ sets out that Europol shall be at the forefront of law enforcement innovation and research.”

Europol added that its “approach of cooperation is guided by the principle of transparency”.

Since 2021, Europol has embarked on an increasingly ambitious, yet largely secretive, mission to develop automated models that will affect how policing is done across Europe

Mass data leads to new opportunities

Europol’s critical role in three mega-hack operations that dismantled encrypted communication systems EncroChat, SkyECC and Anom in 2020 and 2021 landed the agency with enormous volumes of data. 

During these cross-border operations, Europol mainly served as a data transit hub, playing middleman between the policing authorities that had obtained the data and those that needed the data to pursue criminals in their jurisdictions. 

But Europol went further. Instead of limiting itself to a mediating role, it discreetly copied the datasets from the three operations into its repositories and tasked its analysts with investigating the material. 

The impossibility of the task – EncroChat alone comprising more than 60 million message exchanges, and Anom more than 27 million – sharpened the agency’s interest in training AI tools to expedite its analysts’ work. The motive was clear: to prevent criminals escaping and lives being lost.

In September 2021, 10 inspectors from the European Data Protection Supervisor (EDPS) parachuted into Europol’s Hague headquarters to inspect what was set to be Europol’s first attempt to train its own algorithms, which the agency planned to do with data harvested from EncroChat. 

According to an EDPS document, Europol’s aim was to “develop seven machine learning models … that would be run once over the whole Encrochat dataset” to help analysts reduce the volume of messages they had to check.
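Europol has not published the design of these models, but the stated goal – scoring messages so that analysts manually review only a fraction of the dataset – corresponds to a standard supervised text-triage pattern. The sketch below is a minimal illustration of that pattern only; the classifier choice, training examples and threshold are invented and do not reflect Europol’s actual systems.

```python
# Illustrative only: a generic supervised triage classifier of the kind
# the EDPS documents describe - scoring messages so analysts review a
# fraction of the full dataset. None of this reflects Europol's models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = relevant to an investigation.
train_messages = [
    "shipment lands friday, usual spot",
    "happy birthday mate, see you sunday",
    "move the product before the raid",
    "can you pick up milk on the way home",
]
train_labels = [1, 0, 1, 0]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(train_messages, train_labels)

# "Run once over the whole dataset": score every message, queueing only
# high scorers for human review.
REVIEW_THRESHOLD = 0.5  # assumed value, purely for illustration
dataset = ["the package is with our friend", "dinner at eight?"]
scores = triage.predict_proba(dataset)[:, 1]
for message, score in zip(dataset, scores):
    verdict = "queue for analyst" if score >= REVIEW_THRESHOLD else "deprioritise"
    print(f"{verdict} ({score:.2f}): {message}")
```

As the EDPS findings that follow make clear, the concern was less the technique itself than the absence of documentation, bias testing and accuracy checks around it.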

However, the development of these models was paused after the EDPS initiated a consultation procedure to review the agency’s proposed data processing operations. 

Europol initially resisted, arguing that its use of machine learning did “not amount to a new type of processing operation presenting specific risks for the fundamental rights and freedoms”, but the EDPS pressed ahead regardless.

This forced the agency to supply a set of internal documents, including copies of the data protection impact assessments (DPIAs). The subsequent inspection brought to light a serious disregard for safeguards and highlighted the shortcuts Europol was willing to take in developing its own AI models. 

For example, according to an EDPS document, a “flash inspection” in September 2021 found that almost no documentation for monitoring the training had been “drafted during the period in which the models were developed”.

EDPS inspectors also noticed that “all the documents the EDPS received about the development of models were drafted after the development stopped (due to the prior consultation send to the EDPS) and only reflect partially the data and AI unit’s developing practices”. The EDPS also mentioned that “risks related to the bias in the training and use of ML models or to statistical accuracy were not considered”.

For Europol’s analysts, though, there seemed to be no reason to worry, as they considered the risk of the machines wrongly implicating an individual in a criminal investigation to be minimal. 

Europol’s empire-building masterminds saw a new opportunity in the potential use of AI for scanning every EU citizen’s digital communication device

The models developed during this period were never operationally deployed, as the agency’s legal framework did not provide a mandate for developing and deploying AI in criminal investigations. That changed when Europol’s mandate was expanded in June 2022.

By then, the agenda on AI and the importance of access to data had shifted to the issue of online child sexual abuse material (CSAM). The European Commission’s proposal a month earlier to introduce so-called client-side scanning algorithms for detecting abusive material had thrust the issue to the forefront of the political agenda, sparking a polarising debate about the threat of breaking end-to-end encryption and the dangers of mass surveillance.

Europol’s empire-building masterminds saw a new opportunity in the potential use of AI for scanning every EU citizen’s digital communication device. 

Doubling down during a meeting with a commission home affairs director, Europol proposed that the technology could be repurposed to look for non-CSAM content.

The agency’s message was clear. “All data is useful and should be passed to law enforcement,” according to minutes of a 2022 meeting between Europol and a senior official from the European Commission’s directorate-general for home affairs. “Quality data was needed to train algorithms,” the Europol official said.

Europol officials also urged the commission to ensure that all European law enforcement bodies “can use AI tools for investigations” and avoid the limitations imposed by the AI Act, the EU’s then-forthcoming law to rein in intrusive and risky uses of algorithms.

In response to this investigation, Europol claimed that its operational use of personal data is subject to tight supervision and control, and that under the Europol regulation, documents shared with the EDPS for oversight purposes should not be shared publicly.

“Accordingly, Europol considers that the DPIA documentation is not subject to general public disclosure, including given that the knowledge of the specifics set out in a DPIA can position crime actors in an advantageous situation over public security interest.”

In bed with private agendas

Europol’s concerns about the restrictive regime set by the AI Act echoed those of major AI players. The shared interest of Europol and private actors in developing AI models is hardly a secret. On the contrary, the agency’s documents frequently note that maintaining close contact with AI developers is considered of strategic importance.

One important point of contact has been Thorn, a US non-profit developer of an AI-powered CSAM classifier that can be deployed by law enforcement agencies to detect new and unknown abuse images and videos.

Since 2022, Thorn has been at the forefront of an advocacy campaign in favour of the CSAM proposal in Brussels, pushing to mandate the obligatory use of AI classifiers by all companies offering digital communication services in the EU.

What is less well known, however, is the close contact or coordination between Thorn and Europol. A cache of emails exchanged between the company and Europol from September 2022 to May 2025, obtained via a series of Freedom of Information (FOI) requests, lays bare how closely Europol’s plan to develop a classifier has been tied to the company’s advice.

In April 2022, in anticipation of Europol’s expanded mandate entering into force, which would allow the agency to exchange operational data directly with private entities, a Europol official emailed Thorn “to explore the possibility for the Europol staff working on CSE area … to get access” for a purpose that remains redacted. 

Thorn responded by sharing a document and advised Europol that further information was needed to proceed. However, that document has not been disclosed, while details of the information needed are redacted. “I have to stress out this document is confidential and not for redistribution,” Thorn’s email concluded.

Five months later, Europol contacted Thorn for help in accessing classifiers developed in a project the non-profit had taken part in, so the agency could evaluate them. 

According to machine learning expert Nuno Moniz, the exchanges raise serious questions about the relationship between the two actors. “They are discussing best practices, anticipating exchange of info and resources, essentially treating Thorn as a law enforcement partner with privileged access,” said Moniz, who is also associate research professor at the Lucy Family Institute for Data & Society at the University of Notre Dame.

Udbhav Tiwari, vice-president for strategy and global affairs at Signal, said: “These interactions point towards a potentially dangerous nexus of conflicted interests that could circumvent important democratic safeguards designed to protect civil liberties.”

The intimate collaboration between Europol and Thorn has continued ever since, with a planned “catchup over lunch” in one instance and, in another, Thorn presenting its CSAM classifier at Europol’s headquarters.

In the most recent correspondence obtained by this investigation, an email exchange from May 2025 reveals Thorn discussing its rebranded CSAM classifier with the agency.

Although much of Europol’s correspondence with Thorn remains heavily redacted, some emails have been withheld in full – disregarding the European Ombudsman’s call, prompted by complaints filed in this investigation, to provide wider access to the exchanges.

Europol claims that some of the undisclosed documents “contain strategic information of operational relevance regarding Europol’s working methods in relation to the use of image classifiers, whereby specific such classifiers are mentioned concretely and have been the subject of internal deliberations but also external discussions with Thorn”. 

In response to this investigation, a Thorn spokesperson said: “Given the nature and sensitivity of our work to protect children from sexual abuse and exploitation, Thorn does not comment on interactions with specific law enforcement agencies. As is true for all of our collaborations, we operate in full compliance with applicable laws and uphold the highest standards of data protection and ethical responsibility.”

Europol told this investigation that “to date, not a single AI model from Thorn has been considered for use by Europol”, and hence, “there is no collaboration with developers of Thorn for AI models in use, or intended to be made use of by Europol”.

It added that the consultation process of the EDPS “requires significant time and resources before deployment”, and that “any output generated by AI tools is subject to human expert control before being used in analysis or other support activities”.

Patchy scrutiny 

It is not only Europol’s deliberations with Thorn that remain opaque. The agency has doggedly refused to disclose a range of critical documents regarding its AI programme, from data protection impact assessments and model cards to minutes from board meetings. 

Disclosed documents often remain heavily redacted on questionable legal grounds. In many instances, Europol has missed statutory deadlines for responding to requests by weeks.

Europol has doggedly refused to disclose a range of critical documents regarding its AI programme: from data protection impact assessments and model cards, to minutes from meetings of its management board

In most cases, the agency has cited “public security” and “internal decision-making” exemptions to justify withholding information. The European Ombudsman, however, has repeatedly questioned the vagueness of those claims in preliminary findings, noting that Europol has failed to explain how disclosure would concretely endanger its operations.

Five transparency complaints filed by this investigation are currently pending before the European Ombudsman.

But Europol’s apparent aversion to transparency is but one aspect of a failing accountability architecture that is, on paper, meant to ensure that all of Europol’s activities, including the roll-out of AI tools, comply with fundamental rights obligations. 

Inside Europol, that task falls mainly on the shoulders of the agency’s fundamental rights officer (FRO), an internal watchdog position introduced with Europol’s 2022 mandate to allay concerns that its greatly expanded powers lacked strong enough guardrails.

Put in place in 2023, the position has not addressed concerns about the lack of robust oversight.

“Europol’s fundamental rights officer does not function as an effective safeguard against the risks posed by the agency’s increasing use of digital technologies. The role is institutionally weak, lacking internal enforcement powers to ensure that its recommendations are followed,” said Bárbara Simão, an accountability expert at Article 19, a London-based international human rights organisation that tracks the impact of surveillance and AI technologies on freedom of expression. Simão reviewed several FRO “non-binding” assessments of Europol’s AI tools obtained by this investigation. 

“To fulfil its role as an internal oversight mechanism, it must move beyond a symbolic function, properly scrutinise the technologies being deployed and be given genuine authority to uphold fundamental rights,” she added.  

Many of the non-binding reports issued by the FRO contain a copy-and-pasted admission that this capacity to robustly review Europol’s AI tools is not in place. 

“At this moment, no tools exist for the fundamental rights assessment of tools using artificial intelligence. The assessment methodology the FRO uses is inspired by a document edited by the Strategic Group on Ethics and Technology, and on a methodology to deal with dilemmas,” the reports noted. 

External oversight does not appear much stronger. The principal mechanism – the so-called Joint Parliamentary Scrutiny Group (JPSG), which brings together national and European parliamentarians to monitor Europol’s activities – is a body that can ask questions and request documents, without any enforcement powers. 

Ironically, Europol, responding to the European Ombudsman’s inquiries about the agency’s questionable transparency practices, claims that its “legitimacy and accountability” is “already largely and necessarily being fulfilled by the statutory democratic scrutiny carried out by the European Parliament together with national parliaments through the Joint Parliamentary Scrutiny Group (JPSG)”.

Europol’s fundamental rights officer does not function as an effective safeguard against the risks posed by the agency’s increasing use of digital technologies. The role is institutionally weak, lacking internal enforcement powers to ensure that its recommendations are followed
Bárbara Simão, Article 19

It is left to the EDPS – with limited resources and a data protection-focused mandate that does not fully capture the range of human rights harms presented by Europol’s AI efforts – to scrutinise the agency’s hasty expansion.

‘Severe consequences’ 

By summer 2023, developing its own CSAM classifier was a top priority for Europol’s AI programme. A two-page advisory document issued by the agency’s FRO indicates the aim was to develop “a tool that uses artificial intelligence (AI) to classify automatically alleged child sexual abuse (CSE) [child sexual exploitation] images and video”. 

In just four lines, Europol FRO Dirk Allaerts addressed the issue of bias, indicating that a balanced data mix in age, gender and race was necessary “to limit the risk the tool will recognise CSE only for specific races or genders”.

The development phase would happen in a controlled environment to further limit any risks of data protection violations. To train the tool, the project would use both CSE and non-CSE material. While it is unclear how Europol would obtain the non-CSE material necessary for training the algorithm, the CSE material would mostly be provided by the National Center for Missing and Exploited Children (NCMEC), a US-based non-profit closely aligned with the federal government and its law enforcement agencies. 

Although Europol had already put plans to train a classifier on the back burner by late 2023, data delivered by NCMEC was ingested into the agency’s first in-house AI model, deployed in October 2023.

Named EU Cares, the model is tasked with automatically downloading CSE material from NCMEC, cross-checking it with Europol’s repositories, and then disseminating the data in near real time to member state law enforcement authorities. 

The volume of material ingested – mainly from US-based digital giants like Meta, which are obliged to report any potential CSAM to NCMEC – became so large that the manual processing and dissemination Europol relied on before deploying AI was no longer feasible.

Europol’s own assessment of the system had identified risks of “incorrect data reported by NCMEC” and “incorrect cross-match reports” that may wrongfully identify people as “a distributor or owner” of CSAM.

Still, according to the EDPS, the agency “failed to fully assess the risks” linked to its automation of these processes.

In an EDPS opinion obtained via FOI requests by this investigation, the data protection watchdog underlined the “severe consequences” that data inaccuracies could cause. 

It requested that the agency implement additional mitigation measures to tackle errors that can occur by automating the process. In response, Europol committed to marking suspect data as “unconfirmed”, adding “enhanced” trigger alerts for anomalies, and improving its system for removing retracted referrals. Among other measures, the agency said these steps would address the EDPS’s concerns about accuracy and cross-match errors.  
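Europol has not disclosed how EU Cares performs these cross-checks. Referral pipelines of this kind conventionally compare hashes of incoming media against hashes of material already held, so the Python sketch below illustrates only that generic pattern, along with the kind of “unconfirmed” marking Europol committed to; the use of SHA-256 and every name in it are assumptions, not details from the agency’s documentation.

```python
# Illustrative sketch of a referral cross-checking step, assuming plain
# cryptographic hashing. Operational systems typically also use
# perceptual hashes so near-duplicate media still match; EU Cares'
# actual internals are not public.
import hashlib

# Hypothetical repository of hashes from previously processed material.
known_hashes = {hashlib.sha256(b"previously seen media").hexdigest()}

def cross_check(referral: bytes, repository: set) -> dict:
    """Hash an incoming referral and flag whether it matches the repository."""
    digest = hashlib.sha256(referral).hexdigest()
    return {
        "hash": digest,
        "matched": digest in repository,
        # Mirrors the mitigation Europol committed to after the EDPS
        # opinion: automated results stay unconfirmed until a human
        # expert verifies them.
        "status": "unconfirmed",
    }

print(cross_check(b"previously seen media", known_hashes))  # matched: True
print(cross_check(b"never seen before", known_hashes))      # matched: False
```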

In February 2025, the agency’s executive director, Catherine De Bolle, said EU Cares “had delivered 780 thousand referrals in total with enrichment packages until January 2025”. The question remains: how many of these are false positives or redundant leads? The German federal agency, which receives reports directly from NCMEC without using Europol’s system, told this investigation that of 205,728 reports received in 2024, 99,375 (48.3%) were not “relevant under criminal law”.

The next frontier: facial recognition

Even as the EU’s privacy regulators pressed for safeguards on EU Cares, Europol was expanding automation into another sensitive field: facial recognition.

Since 2016, the agency has tested and purchased several commercial tools. Its latest acquisition, NeoFace Watch (NFW) from Japanese software firm NEC, was meant to eventually replace or complement an earlier in-house system known as Face, which could already access about one million facial images by mid-2020.

Heavily redacted correspondence shows that by May 2023, Europol was discussing the use of NeoFace Watch. When it later submitted the new programme for review, the EDPS warned of the “risk of lower accuracy processing for the faces of minors (as a form of bias)” and “of incoherent processing” if old and new systems (the existing Face and NeoFace Watch) were run in parallel.

After the consultation, Europol decided to exclude the data of minors under the age of 12 from being processed, as a precaution. 

The watchdog asked Europol to run a six-month pilot to determine an acceptable accuracy threshold and minimise false positives. 
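Neither the EDPS nor Europol has published how such a threshold would be derived, but the exercise described is essentially a calibration problem: score supervised pilot searches against ground truth, then pick the most permissive match threshold that keeps false positives within an agreed bound. The sketch below illustrates that logic; every number in it is invented.

```python
# Illustrative calibration of a face-match threshold from pilot data.
# Each pair is (similarity score, ground truth: same person?).
pilot_results = [
    (0.97, True), (0.93, True), (0.91, False), (0.88, True),
    (0.86, False), (0.81, False), (0.79, True), (0.60, False),
]

MAX_FALSE_POSITIVE_RATE = 0.05  # an assumed, policy-set bound

def false_positive_rate(threshold: float) -> float:
    """Share of true non-matches the system would wrongly accept."""
    negatives = [score for score, same in pilot_results if not same]
    return sum(score >= threshold for score in negatives) / len(negatives)

# Scan candidate thresholds from lowest to highest and keep the first
# (most permissive) one that respects the false-positive bound.
candidates = sorted({score for score, _ in pilot_results})
threshold = next(t for t in candidates
                 if false_positive_rate(t) <= MAX_FALSE_POSITIVE_RATE)
print(f"operating threshold: {threshold}")  # 0.93 on this toy data
```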

Europol’s submission to the EDPS referenced two studies by the National Institute of Standards and Technology (NIST), a US government body. While the studies were intended to support Europol’s choice of NeoFace Watch as its new go-to system, NIST specified in one report that it did not use “wild images” sourced “from the internet nor from video surveillance” – precisely the kinds of sources Europol would use.

In a related report, NIST’s evaluation of NEC’s algorithm documented an identification error rate of up to 38% for photos taken in poor lighting conditions.

However, Europol signed a contract with NEC in October 2024. The agency confirmed to this investigation that the software is used within a specific CSE unit by trained expert staff. 

Similar deployments of NeoFace Watch in the UK have faced legal challenges over bias and privacy. 

In a non-binding advisory opinion in November 2024, Europol’s FRO described the system as one that “raises risks of false positives that can harm the right of defence or of fair trial”. The system is considered high risk under the new EU AI Act. Nonetheless, the FRO cleared it for use, merely urging the agency to acknowledge when the tool is used in cross-border investigations to “enhance transparency and accountability, key to keep the trust of the public”.  

The EDPS told this investigation that it is currently preparing an inspection report and cannot communicate further at this stage about Europol’s use of NEC software.

NEC also told this investigation that NeoFace Watch was ranked as “the world’s most accurate solution” at NIST’s most recent testing round. It added that its product “has undergone extensive independent testing by the National Physical Laboratory (NPL) and was found to have zero false-positive identifications when used live in typical operational conditions”.

High accuracy figures alone do not make facial recognition safe or address the legal and rights concerns documented in this case. Experts including Luc Rocher, an associate professor at the Oxford Internet Institute, have demonstrated that facial recognition evaluation methodologies still fail to fully capture real-world performance, where factors like image quality, population scale and demographic diversity cause accuracy to degrade significantly, particularly for racial minorities and young people. 
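The effect of population scale, in particular, is simple arithmetic. In a one-to-many search, every query image is compared against every face in the gallery, so even a tiny per-comparison false-match rate multiplies across the database. The figures below are illustrative assumptions – the gallery size echoes the roughly one million images Europol’s earlier Face system could access – not measurements of NeoFace Watch.

```python
# Illustrative arithmetic: why benchmark accuracy degrades at scale in
# one-to-many face search. Both figures are assumptions.
gallery_size = 1_000_000   # order of the earlier Face system's holdings
false_match_rate = 1e-4    # assumed per-comparison rate (0.01%)

# Each query is compared against every stored face, so the expected
# number of wrong candidate matches per single search is:
expected_false_matches = gallery_size * false_match_rate
print(expected_false_matches)  # 100.0 innocent candidates per query
```

Lab benchmarks reporting near-zero error on clean test sets say little about this operating regime – the gap Rocher and others highlight.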

Simão, the Article 19 expert, noted that emphasising technical performance “tends to downplay risks associated with facial recognition technologies”, including the bias against minors flagged by the EDPS and threats to fair trial rights identified by Europol’s own watchdog.

The bigger picture 

A binding internal roadmap drawn up by Europol in 2023 outlines the true scale of the agency’s ambition: 25 potential AI models, ranging from object detection and image geolocation to deep-fake identification and biometric personal feature extraction. The vision would place the agency at the centre of automated policing in the EU, as tools deployed by Europol could be used by virtually all law enforcement bodies across the bloc.

In February 2025, Europol’s De Bolle told European lawmakers that the agency had submitted 10 DPIAs to the EDPS – seven were updates for models already being developed and three for new ones.

Members of the JPSG asked Europol to provide a detailed report of its AI programme. When the agency delivered, it sent lawmakers a four-page paper with generic descriptions of its internal vetting processes, without any substantive information on the AI systems themselves.

Green MEP Saskia Bricmont, part of the JPSG, told this investigation that because AI being developed by Europol “can entail very strong risks and consequences for fundamental rights”, strong and effective supervision must be ensured. 

“In spite of the information provided, it remains very complex for MEPs to fulfil their monitoring task and fully assess the risks associated with the use of AI-based systems by the agency,” said Bricmont.

At the same time, the European Commission is preparing to present a new, comprehensive reform to turn Europol into “a truly operational agency”. 

The precise meaning of this transformation remains unclear. However, the European Commission has proposed doubling Europol’s budget for the next financial term to €3bn of taxpayers’ money.


This investigation was supported by IJ4EU and Lighthouse Reports.
