
Thales: Trust in AI for critical systems needs to be engineered

Confidence in AI capabilities that power critical systems must go beyond words and be built on a hybrid model that combines data with physics and logic to prove reliability, according to Thales’ chief scientist

Trust in the artificial intelligence (AI) capabilities employed by critical systems, such as air defence or flight control, must be proven rather than just declared, according to a top scientist from French aerospace and defence giant Thales.

During a media briefing in Singapore, Thales’s chief scientist Marko Erman emphasised that the AI used for music recommendations is significantly different from the AI required to manage critical systems, where lives, infrastructure, and national security could be jeopardised if something fails.

“A critical system is a system where things that are very critical are at stake,” Erman said. “It could be personal data, life, health, infrastructure and peace. If the question is whether something that you observe is a friend or foe, and you have one millisecond to react, you better be sure that the answer is the one that you expect.”

Erman stressed that trust in AI systems must be earned through engineering, not marketing. “It’s not the words – you’d have to prove trust,” he said, adding that this requires organisations to go beyond AI models that rely solely on data, which can lead to misleading correlations without understanding the cause.

For example, Erman observed that the number of shark bites correlates with the quantity of ice cream sold, yet neither causes the other: warm weather sends more people swimming at the same time that more people eat ice cream. “There’s no causal link between having ice cream and being bitten by a shark, but that’s what the data tells you.”
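The shark-bite example can be reproduced in a few lines. The sketch below is purely illustrative: it invents a hidden confounder (temperature) that drives both ice cream sales and the number of swimmers, and shows that the two downstream variables end up correlated even though neither influences the other. All variable names and coefficients are assumptions for the demonstration.

```python
import numpy as np

# Hypothetical simulation of the confounder effect Erman describes:
# warm weather (the hidden cause) drives both ice cream sales and
# swimming, producing a correlation with no causal link between them.
rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, 1000)                      # confounder
ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, 1000)
swimmers = 1.5 * temperature + rng.normal(0, 3, 1000)
shark_bites = 0.1 * swimmers + rng.normal(0, 1, 1000)      # depends only on swimmers

# The data alone show a clear positive correlation between the two.
r = np.corrcoef(ice_cream_sales, shark_bites)[0, 1]
print(f"correlation(ice cream, shark bites) = {r:.2f}")
```

A purely data-driven model would happily learn this relationship; only the causal structure (the confounder) explains why it is spurious.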

To address the pitfalls of data-only analysis, Thales has been championing hybrid AI, which combines data-driven machine learning with established knowledge such as physics, operational rules, and logic. This provides causality, not just correlation, allowing the company to scientifically prove that a system is reliable.
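The hybrid idea can be sketched in miniature. The example below is not Thales code; it is a hedged illustration of how a learned score might be combined with a known physical rule in a radar-style classifier, so that the final decision rests on causality (a bird physically cannot fly that fast) rather than on the model's correlations alone. The speed threshold and function names are assumptions invented for the sketch.

```python
# Illustrative hybrid-AI sketch: a data-driven score constrained by
# established physical knowledge, as described in the article.

BIRD_MAX_SPEED_MS = 45.0  # assumed physical bound on bird flight speed

def classify_track(ml_drone_score: float, speed_ms: float) -> str:
    """Combine a (hypothetical) learned drone probability with a physics rule."""
    # Data-driven component: trust the model when it is confident.
    if ml_drone_score > 0.5:
        return "drone"
    # Knowledge-driven component: no bird exceeds the physical bound,
    # so a fast track is a drone even if the learned score is low.
    if speed_ms > BIRD_MAX_SPEED_MS:
        return "drone"
    return "bird"

print(classify_track(0.3, 60.0))  # fast track: the physics rule overrides the model
```

The design point is that the rule-based branch is provable: its behaviour can be verified against physics independently of the training data, which is the kind of guarantee a data-only model cannot offer.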

Hybrid AI is already being applied in the development of Thales’s products. During the briefing, Thales executives showed off an AI-powered classifier for its radars that improves drone identification by 35% while reducing false alarms from birds or urban clutter by 70% – a critical capability for protecting airports and key infrastructure.

Another system, an AI-enhanced tool for managing air traffic, can optimise flight approaches to airports. By more accurately predicting aircraft arrival times, it not only enables airports to open up more landing slots but also reduces fuel consumption and increases runway capacity, directly addressing environmental concerns and improving operational efficiency.

For Thales, which has been working on neural networks since 1989, well before the current AI boom, these advancements are part of a long-term effort to embed AI deeply and responsibly into all of its products.

In fact, Erman predicted that AI will eventually become so integrated into technology that it will no longer be a topic of conversation, similar to how electricity is now used in home appliances. “Nobody will talk about AI in 30 years because it will be everywhere,” he said.

Thales has been in Singapore since 1973, expanding from its initial focus on avionics to a major hub with 2,000 employees working across areas such as aerospace, defence, cyber security, and digital identity. In May 2025, it announced plans to open an AI accelerator centre in the city-state, extending its global AI network of such facilities in the United Kingdom, France, and Canada to Asia.
