Why traditional automation is key to avoiding the AI solution trap
Boards are pushing for AI, but Nintex CTO Niranjan Vijayaragavan warns that without a foundation of traditional automation and clean data, AI projects are destined to fail
Many enterprises risk falling into the “solution trap” when they deploy new technologies such as artificial intelligence (AI) without giving sufficient thought to whether they address real business problems.
That was according to Niranjan Vijayaragavan, chief technology officer at workflow automation specialist Nintex, who called for organisations to apply the right tools to the right problems. But with growing interest in generative AI driven by tools like ChatGPT, many corporate boards pushed their organisations to adopt AI immediately, often without knowing where it could be usefully applied.
“Experimentation is good,” Vijayaragavan conceded, but he stressed the importance of establishing where the real bottlenecks lie in any process. IT leaders must have a hypothesis about how AI can deliver value in those specific situations, particularly given that 80-90% of tasks can be automated using traditional technology.
The “shotgun approach” taken by some organisations has led to fragmented projects that deliver little to no return on investment. Vijayaragavan suggested it was time for “pause and reflection,” noting that while “there is value there, the structure is lacking to derive that value.”
Key considerations for this period of reflection include security and privacy, specifically, whether an underlying large language model (LLM) is absorbing the data it accesses.
There is also the cost of rectification. If an AI agent saves an employee one hour a day, but that time is lost unwinding the agent's mistakes, the value proposition collapses. Then, there’s also the opportunity cost: money spent experimenting with undefined AI projects is money not spent on initiatives of known value.
A recent study by Nintex found that 84% of CIO and chief financial officer (CFO) respondents now believe automation is a necessary precursor to successfully implementing AI in business processes. Nintex’s position is that automation acts as the muscle to complement AI’s brain. Automation supports the scaling of AI projects by standardising processes, ensuring data quality, and providing a foundation for governance.
Structured vs unstructured data
Making data AI-ready was a major topic throughout 2025. Vijayaragavan observed that while progress has been made, success depends on clear use cases.
Despite earlier concerns regarding unstructured data, LLMs have proven efficient at processing it. Nintex, for example, has used LLMs to make its technical documentation and related material more accessible.
However, when it comes to structured data, such as that stored in SQL databases, siloed, unclean data “is still the state of the union,” Vijayaragavan said. It will take time to resolve these legacy issues, even with the aid of modern data platforms like Snowflake and Databricks.
He pointed out that combining text from different sources is a non-deterministic process, whereas combining database records should be deterministic. Because LLMs are inherently probabilistic, IT teams will achieve more accurate results by using SQL statements to combine records. “As a general rule, do not use inferencing models where a deterministic result is expected,” he said.
However, AI can assist in mapping records from different tables where column names are inconsistent, for example, matching a column named ‘Customer’ with one named ‘Cust’.
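The pattern can be illustrated with a short, hypothetical sketch rather than anything drawn from Nintex's products: the join between two tables is carried out deterministically in SQL, while the column-mapping step, which an LLM could propose, is simulated here with a simple string-similarity match between a 'Customer' column and a 'Cust' column.

```python
# Illustrative sketch only: combine records deterministically with SQL,
# and use a fuzzy string match as a stand-in for LLM-assisted column
# mapping (e.g. matching 'Customer' with 'Cust'). Table names and data
# are invented for the example.
import sqlite3
from difflib import get_close_matches

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crm (Customer TEXT, region TEXT);
    CREATE TABLE billing (Cust TEXT, amount REAL);
    INSERT INTO crm VALUES ('Acme', 'APAC'), ('Globex', 'EMEA');
    INSERT INTO billing VALUES ('Acme', 1200.0), ('Globex', 800.0);
""")

# Step 1: propose a column mapping. In practice an LLM could suggest this;
# here simple string similarity illustrates the idea.
crm_cols = [row[1] for row in conn.execute("PRAGMA table_info(crm)")]
billing_cols = [row[1] for row in conn.execute("PRAGMA table_info(billing)")]
mapping = {c: get_close_matches(c, billing_cols, n=1, cutoff=0.5) for c in crm_cols}
join_key_crm = next(c for c, m in mapping.items() if m)   # 'Customer'
join_key_billing = mapping[join_key_crm][0]               # 'Cust'

# Step 2: the join itself stays deterministic -- plain SQL, no inference.
query = (
    f"SELECT crm.{join_key_crm}, crm.region, billing.amount "
    f"FROM crm JOIN billing ON crm.{join_key_crm} = billing.{join_key_billing}"
)
for row in conn.execute(query):
    print(row)   # ('Acme', 'APAC', 1200.0), ('Globex', 'EMEA', 800.0)
```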
There are also scenarios where the two approaches work in tandem. An HR system might use an LLM to interpret a natural language question, such as “how many days leave have I got left?” and convert it into a specific database query. This query would then deterministically look up the leave entitlement and subtract days taken. “[That] is a great use of LLMs,” he suggested.
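A minimal sketch of that hybrid pattern, using an invented HR schema, might look like the following: a stub parser stands in for the LLM that interprets the question, while the leave balance itself comes from a fixed, deterministic SQL query.

```python
# Illustrative sketch of the hybrid pattern, with an invented HR schema.
# An LLM would turn the natural-language question into a structured intent;
# the leave balance is then computed by a deterministic SQL lookup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leave_entitlement (employee_id TEXT, annual_days INTEGER);
    CREATE TABLE leave_taken (employee_id TEXT, days INTEGER);
    INSERT INTO leave_entitlement VALUES ('E001', 25);
    INSERT INTO leave_taken VALUES ('E001', 7), ('E001', 3);
""")

def parse_question(question: str, employee_id: str) -> dict:
    """Stand-in for the LLM step: map free text to a structured intent.
    A real system would call a language model here."""
    if "leave" in question.lower() and "left" in question.lower():
        return {"intent": "leave_balance", "employee_id": employee_id}
    raise ValueError("unrecognised question")

def answer(intent: dict) -> int:
    """Deterministic step: a fixed SQL query, no inference involved."""
    if intent["intent"] == "leave_balance":
        (balance,) = conn.execute(
            """
            SELECT e.annual_days - COALESCE(SUM(t.days), 0)
            FROM leave_entitlement e
            LEFT JOIN leave_taken t ON t.employee_id = e.employee_id
            WHERE e.employee_id = ?
            GROUP BY e.annual_days
            """,
            (intent["employee_id"],),
        ).fetchone()
        return balance
    raise ValueError("unsupported intent")

print(answer(parse_question("How many days leave have I got left?", "E001")))  # 15
```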
Consequently, enterprises need a platform that supports both deterministic and inferencing technologies to use the right technology for the right context.
When it comes to the application of generative AI in customer support and software development, Vijayaragavan believes “the promises have largely materialised.” While AI-powered customer support still makes errors or occasionally adopts the wrong tone, it is generally effective enough for deployment.
AI serves as a capable assistant, he added, but it is not yet trusted to operate autonomously. In the coming year, Vijayaragavan expects to see more examples of agentic AI entering production. These will not be fully autonomous, with humans continuing to provide oversight.
The governance of AI agents should mirror that of human workers, he argued. Both require clear roles, responsibilities, and access rights. Agents must keep records of their actions and be able to explain what they did and why.
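As an illustration only, such guardrails might be expressed along these lines, with the role name, permitted actions and log fields all assumed for the example rather than drawn from any particular product.

```python
# Illustrative sketch: give an AI agent the same guardrails as a human worker,
# i.e. a defined role with explicit access rights, plus an audit trail
# recording what was done and why. All names here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRole:
    name: str
    allowed_actions: set[str]          # explicit access rights

@dataclass
class AuditedAgent:
    role: AgentRole
    audit_log: list[dict] = field(default_factory=list)

    def act(self, action: str, reason: str) -> bool:
        permitted = action in self.role.allowed_actions
        # Record every attempt -- permitted or not -- with its justification.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "role": self.role.name,
            "action": action,
            "reason": reason,
            "permitted": permitted,
        })
        return permitted

agent = AuditedAgent(AgentRole("invoice-processor", {"read_invoice", "flag_exception"}))
agent.act("read_invoice", "Monthly reconciliation run")
agent.act("approve_payment", "Attempted without authority")   # denied, but still logged
for entry in agent.audit_log:
    print(entry)
```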
Vijayaragavan noted a “healthy acknowledgement” in the industry that traditional automation remains more appropriate than agents in many contexts. However, as foundation models become more accurate and specialised models improve, the scope for agents will grow, provided organisations invest in context engineering.
“Currently, a lot of context is required to get the quality of responses that are required,” he said. Enterprises are in a unique position to build that context into their agents, rather than leaving it to individual users.
Ultimately, organisations must weigh the total costs of deploying AI, including the impact on employee morale, against their risk appetite.
“A multi-year investment may be needed before you see the returns, so it is important to think carefully why you are embarking on a project,” he said. “‘The board thought it was a good idea’ should not be sufficient reason.”
Read more about AI in APAC
- SK Telecom is leading efforts to turn South Korea into a global heavyweight in AI with a sovereign foundation model that not only excels in Korean-based tasks, but also mathematics and coding.
- Japanese banking giant MUFG aims to transform into an AI-native company by using agentic AI, changing how it handles data, and inking key partnerships with OpenAI and Sakana AI.
- Malaysia’s Ryt Bank is using its own LLM and agentic AI framework to allow customers to perform banking transactions in natural language, replacing traditional menus and buttons.
- Singapore researchers show how adapting pre-trained AI models can solve data scarcity issues in countries with limited resources.
