
Why AI reliability is the next frontier for technical industries
AI is no longer a futuristic idea — it’s embedded in the core operations of many of today’s industries. But can we trust its outputs?
Despite advanced system designs, many companies report AI accuracy rates as low as 75%. In high-stakes settings like financial institutions, government agencies and hospitals, a 25% error rate isn’t just inefficient — it’s dangerous. Consequently, there’s a growing push for organisations to move beyond generic AI models and invest in domain-specific, semantically rich systems that promote cross-industry collaboration and information sharing.
The limits of generic AI in complex domains
According to Gartner, 62% of CFOs and 58% of CEOs believe AI will significantly impact their industries over the next three years. Yet their optimism is tempered by the challenges of implementation — especially in regulated or technically complex sectors.
Generic AI models, including large language models (LLMs), are powerful but lack the deep knowledge needed in specialised industries. As a result, they rely heavily on surface-level keyword matching and broad training data. In technical fields like telecom, healthcare and manufacturing — where layered systems, specialised terminology and nuanced workflows are standard — this semantic deficiency becomes a major flaw.
To bridge this gap, companies like Nvidia are enabling customised LLMs trained on proprietary, domain-specific data. This helps organisations overcome a long-standing barrier to AI adoption: access to high-quality, usable training data.
Closing the semantic gap through collaboration
The World Economic Forum’s (WEF) 3C Framework — Combination, Convergence, and Compounding — suggests that emerging technologies create the greatest value when they are deeply integrated across systems and industries. This integration occurs when different technologies are brought together, transforming operations and creating exponential impacts across ecosystems.
Therefore, for AI to be truly reliable in technical fields, it must be part of this broader, collaborative evolution — not just a standalone tool, but a component of a smarter, interconnected infrastructure. Some businesses are already forming strategic partnerships that demonstrate how open ecosystems and shared infrastructure can accelerate enterprise AI transformation. By pooling expertise, data and platforms, these alliances help close the semantic gap and develop AI systems that are more accurate, explainable, scalable, and aligned with real-world complexity.
Implementing these strategies not only reduces risk but also improves performance. Companies focusing on fewer, high-priority AI projects, especially those tailored to their industry, can expect more than double the ROI compared to their peers. Additionally, the WEF notes that AI’s convergence with technologies like automation and quantum computing is reshaping value chains and generating exponential returns. For example, healthcare companies deploying domain-specific generative AI are already seeing improvements in operational efficiency and patient outcomes.
The cost of inaccuracy
For certain sectors, ensuring depth of knowledge and reliable data isn’t just desirable — it’s essential. A 2024 report from Boston Consulting Group (BCG) on AI risk management highlights that in industries like healthcare, banking, and insurance, errors caused by AI can result in regulatory violations, misdiagnoses, or financial losses.
In fields with such slim margins for error, trust and accuracy are non-negotiable. Inaccurate outputs can erode stakeholder confidence, delay ROI and increase legal exposure.
The future of AI isn’t about broad, unchecked use — it’s about precision. As industries deepen their reliance on AI, the importance of semantically aware, domain-specific systems becomes increasingly evident. Companies that lead this transformation by investing in tailored models, aligning cross-functional strategies, and engaging in collaborative ecosystems will unlock AI’s full potential. The next frontier isn’t about doing more — it’s about doing it right.
Chris Bennett is global vice president of the AI & machine learning (ML) practice at Unisys
Read more stories about AI training
- Mixed-precision training in AI: Everything you need to know: Training AI models can be expensive and time-consuming. Mixed-precision training uses both FP16 and FP32 to lower memory use and reduce costs without sacrificing model accuracy.
- 6 steps in fact-checking AI-generated content: Generative AI tools are now prevalent, yet the extent of their accuracy isn't always clear. Users should question an AI's outputs and verify them before accepting them as fact.