Use second-layer tools for AI safety
While some enterprises find it difficult to adopt AI, a growing set of second-layer AI tools now enables businesses to implement the technology safely
As businesses accelerate artificial intelligence (AI) adoption, they often face technical obstacles such as poor data quality, data silos and integration issues with legacy systems. Ironically though, many of these challenges – from automated data cleaning and scaling cloud platforms to monitoring and maintaining the performance of AI models – are increasingly being addressed through AI tools themselves.
Sometimes called second-layer AI, these tools can play a crucial role in making AI more accessible and safer by also incorporating explainability and governance to aid compliance with evolving AI regulation. By strategically applying this second layer of AI support tools, companies can better manage the complexity of AI adoption and speed up deployment of the primary AI tools that will enhance business performance.
According to EY’s Responsible AI pulse survey, released in June 2025, seven in 10 organisations said they are already using, or plan to use within the next year, newer AI technologies such as agentic AI, multimodal AI and synthetic data. But fears persist.
A February 2025 survey by IBM shows what these fears include: poor data quality and too little proprietary data to train customised generative AI (GenAI) models; a lack of in-house expertise; worries over data security; concern that expensive AI models, trained to spot fraud or to manage customer relationships, might lose relevance as a business scales; and difficulty integrating new AI agents and other AI tools with legacy systems.
The oldest law in information technology holds: “garbage in, garbage out”. If a company lacks clean data, or if the data it holds on customers or counterparties is scattered inside multiple siloed systems across different divisions, then there are AI tools that can extract this data, scan it for anomalies, resolve these, then normalise and standardise the data and tag it correctly.
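To make the idea concrete, below is a minimal sketch of what such an automated cleaning pass might look like. The two source tables, field names and normalisation rules are all hypothetical; a production second-layer tool would learn or configure these rather than hard-code them.

```python
import pandas as pd

# Hypothetical customer records scattered across two siloed systems.
crm = pd.DataFrame({
    "customer_id": ["C001", "C002", "C003"],
    "email": ["a@example.com", "B@EXAMPLE.COM", None],
    "country": ["KR", "Korea, Rep.", "kr"],
})
billing = pd.DataFrame({
    "customer_id": ["C003", "C004"],
    "email": ["c@example.com", "d@example.com"],
    "country": ["KR", "South Korea"],
})

# 1. Extract: pull the siloed data into one view.
merged = pd.concat([crm, billing], ignore_index=True)

# 2. Scan for anomalies: here, simply records with missing emails.
anomalies = merged[merged["email"].isna()]
print(f"{len(anomalies)} record(s) need attention")

# 3. Resolve and normalise: lowercase emails, map country variants to one code.
country_map = {"korea, rep.": "KR", "south korea": "KR", "kr": "KR"}
merged["email"] = merged["email"].str.lower()
merged["country"] = merged["country"].str.lower().map(country_map).fillna(merged["country"])

# 4. Standardise and tag: deduplicate (preferring the later system's record)
#    and label each row's quality for downstream AI tools.
clean = merged.drop_duplicates(subset="customer_id", keep="last")
clean = clean.assign(quality=clean["email"].notna().map({True: "complete", False: "needs_review"}))
print(clean)
```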
Do all this basic work first, automated by second-layer AI tools, and the time and money then invested in AI agents, whether to deliver personalised product nudges to customers or to negotiate terms, might generate a strong return. Fail to do it, and even investment in the right new AI tools for a business might be wasted.
GenAI is still a very new technology. The tech giants themselves are in a war for talent against the venture capital-funded start-ups. That makes it tough even for large corporations with hefty IT budgets to compete. But remember that the key challenge for most businesses is not so much recruiting top-ranked AI scientists as it is upskilling large numbers of the existing workforce to engage with the technology, instead of dismissing it out of fear it will replace them.
Again, the so-called second-layer AI can help. Large language models (LLMs) can tutor business users, help them interact with AI tools and explain their decisions. Low-code or no-code AI platforms allow staff with limited formal IT training to learn by experimenting with the technology. Many will already be using basic AI tools at work, brought in from their personal lives to improve their productivity. A global study from the University of Melbourne shows that these often go unvetted by IT security.
It is better to provide simple AI platforms for employees to train on. These might become a pathway for development of new AI-powered workflows. Certainly, they will reduce the risk of employees putting sensitive customer and company data into unapproved AI tools that then might disclose it to outside parties.
When it comes to integrating agentic and other cutting-edge AI tools into existing processes, it soon becomes apparent that legacy systems do not speak the same language as the new technology. The good news is that LLMs can be deployed as translation layers, generating interfaces to legacy databases. This middleware can extend to wrapping application programming interfaces (APIs) originally developed to connect to legacy systems for connection to AI tools.
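As an illustration of the translation-layer pattern, here is a simplified sketch in which the LLM call is stubbed out with a hard-coded response so the example runs on its own. The legacy table, the question and the generated SQL are all hypothetical; a real system would call a model API at that point.

```python
import json
import sqlite3

# A stand-in "legacy" database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Acme", 120.0), (2, "Globex", 75.5), (3, "Acme", 30.0)])

def llm_translate(question: str) -> str:
    """Stand-in for an LLM call that turns a business question into SQL.
    A real translation layer would call a model API here; the response
    is hard-coded so the example runs without one."""
    return "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer"

def ask_legacy_system(question: str) -> str:
    """Middleware: the LLM generates the interface, the legacy system answers."""
    cur = conn.execute(llm_translate(question))
    cols = [c[0] for c in cur.description]
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

print(ask_legacy_system("What has each customer spent in total?"))
```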
A big worry for businesses looking to deploy AI is that it might embed bias against certain customer segments, inviting regulatory scrutiny, lowering customer satisfaction and inhibiting growth. To guard against this, there are now AI explainability tools that maintain transparency around outcomes and model decisions, as well as AI-driven fairness audits and policy engines that flag high-risk cases for human review.
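The policy-engine idea can be boiled down to a few lines. The sketch below uses a deliberately simple rule, flagging any customer segment whose approval rate strays more than 20 percentage points from the overall rate; the data, segments and threshold are illustrative, not drawn from any particular tool.

```python
import pandas as pd

# Hypothetical model decisions on a loan book; segments and data are made up.
decisions = pd.DataFrame({
    "segment":  ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   1,   0,   1,   0,   1],
})

# Fairness audit: compare each segment's approval rate with the overall rate.
overall = decisions["approved"].mean()
by_segment = decisions.groupby("segment")["approved"].mean()

# Policy engine: a simple rule that routes outlier segments to human review.
THRESHOLD = 0.20
flagged = by_segment[(by_segment - overall).abs() > THRESHOLD]

print(f"Overall approval rate: {overall:.0%}")
print("Segments flagged for human review:")
print(flagged)
```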
Another fear is that if a company embeds an AI model into the core of its operations, the model’s utility may decay over time: as the business develops new offerings, for different customer segments or in new geographic markets, the data the model sees drifts away from the data it was trained on.
Meta-AI orchestration platforms can sit atop AI tools and monitor them for performance degradation as distribution patterns shift. They can flag model drift and automatically retrain AI tools in step with changes in the scale of a business. Autonomous AI platforms will be able to automate selection, deployment, retraining and eventual retirement of the AI tools a company uses in the core of its operations.
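One common drift check such platforms run is the population stability index (PSI), which scores how far the distribution of live inputs has shifted from the training data. The sketch below uses synthetic score data, and the thresholds in the comment are conventional rules of thumb rather than a universal standard.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    Common rule of thumb (a convention, not a standard): under 0.1 is
    stable, 0.1-0.25 is moderate drift, above 0.25 suggests retraining."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip away empty bins to avoid division by zero in the log term.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 5_000)  # what the model was trained on
live_scores = rng.normal(0.5, 1.2, 5_000)      # shifted live traffic

score = psi(training_scores, live_scores)
print(f"PSI = {score:.3f}")
if score > 0.25:
    print("Drift detected: flag the model for retraining.")
```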
AI investment, like any other, is all about risk-adjusted return. The scale of returns from GenAI is still uncertain. But most business leaders, especially in Asia, now believe the risk of not adopting it is growing alarmingly. According to Microsoft’s 2025 Work Trend Index, over 80% of Asia-Pacific leaders see 2025 as the pivotal year to rethink core strategies and operations, and are confident in using AI agents to expand workforce capacity. Yes, there are obstacles. But there are now tools to step over them.
Sooyeon Kim is AI leader at EY Korea. The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organisation or its member firms.
