Agentic AI exposes flaws in enterprise workflows, highlighting weak data, unclear ownership and undefined processes, but better governance and integration can help
When DevRev co-founder Manoj Agarwal told an audience in London’s Canary Wharf last November that “work is broken”, he captured a frustration that many CIOs recognise.
Years of fixes, plug-ins and new platforms have left behind sprawling software estates, complexity and workflows that depend as much on manual oversight as automation. Enterprise software may be more powerful than ever, but the flow of work remains fragmented and opaque in many organisations.
The suggestion now that agentic AI is a panacea is understandably being met with some scepticism. And yet forecasts point to rapid expansion over the next few years as organisations continue to increase spending on AI and automation. Global technology investment is expected to rise steadily through the end of the decade, driven in part by AI, according to Forrester.
Gartner predicts that a majority of brands will be using agentic AI in customer interactions within the next few years. At the same time, it has warned that more than 40% of agentic AI projects could be cancelled by the end of 2027 as governance, cost and execution challenges become clearer.
Recent analysis from McKinsey underscores the point. While agentic capabilities are advancing quickly, most organisations remain in experimentation mode, struggling to scale beyond tightly defined pilots without addressing deeper operating-model and data issues.
All of which raises a more fundamental question – is enterprise work itself structurally sound enough to support autonomy at scale?
Ankur Anand, global CIO at Nash Squared, suggests not. He says that when agents are allowed to orchestrate real workflows across systems, they become “brutally honest about what is actually there”, adding: “It will happily follow the process you have designed, and in many enterprises that means it faithfully reproduces the chaos.”
CIOs describe agents stalling on access controls, escalating exceptions that were never formally defined, or generating outputs that reveal how fragmented the underlying data landscape really is. The technology does what it is designed to do. It executes against the rules and information available. Where those rules are unclear or the information incomplete, the weaknesses become difficult to ignore.
In practice, this means stalled workflows, duplicated records and unclear ownership surface quickly. Gartner, in its warning about agentic projects being scaled back, suggests that governance and execution challenges have emerged as key reasons. Meanwhile, survey data from Camunda suggests that many organisations remain stuck in the pilot phase despite widespread experimentation.
Joe Turner, global director of research at Context, sees a familiar pattern. He compares the current phase of agentic experimentation to the early days of private cloud adoption, when powerful infrastructure was layered onto operating models that had not materially changed. The result, he argues, was predictable: sophisticated platforms built on fuzzy governance and manual ticket queues. “Putting a high-speed engine onto a weak architecture,” he adds, “is rarely a recipe for efficiency.”
The parallel extends beyond process design to cost discipline. With private cloud, many organisations discovered they had built flexible environments without the consumption controls to manage them. Agentic AI carries a similar risk. Unbounded model calls across poorly defined workflows can turn into significant token expenditure, particularly when agents are allowed to iterate or escalate repeatedly. As Anand notes, cost itself becomes a diagnostic tool. If the economics do not hold, it often indicates that the workflow being automated was never stable enough to begin with.
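The consumption controls described above can be as simple as a hard cap on cumulative token spend per workflow run. The sketch below is purely illustrative – the class, exception and per-call figures are hypothetical, not part of any vendor's API – but it shows the basic shape of such a guard: every model call is charged against a budget, and the run halts once the cap is crossed rather than iterating indefinitely.

```python
# Hypothetical token-budget guard for an agent loop.
# All names and numbers here are illustrative assumptions.

class TokenBudgetExceeded(Exception):
    """Raised when a workflow run exceeds its token allowance."""


class TokenBudget:
    """Caps cumulative token spend for a single workflow run."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record one model call's usage; halt the run if over budget."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise TokenBudgetExceeded(
                f"run used {self.used} tokens against a budget of {self.max_tokens}"
            )


# Example: three model calls stay within a 50,000-token allowance;
# a fourth would raise and stop the loop.
budget = TokenBudget(max_tokens=50_000)
for step_cost in [12_000, 18_000, 15_000]:  # usage reported per call
    budget.charge(step_cost)
```

In practice the per-call figures would come from the usage metadata most model APIs return, and the exception would route to a human rather than simply abort.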
The governance challenge runs deeper than cost alone. Agentic systems are not passive tools – they take actions, call APIs and move data across boundaries. That is fine when everything works well, but it also widens the blast radius when something goes wrong.
Sam Sutherland, principal software engineer at tech consultancy Parallax, argues that the engineering discipline required for agentic deployments is often underestimated. “It’s fairly easy to get something working,” he says. “It’s much harder to make it safe, reliable and governable.”
In several projects, he says, teams have deliberately avoided building a single, all-powerful agent. Instead, they design smaller, narrowly scoped agents with tightly defined remits. The aim is containment. Limiting access reduces the impact of mistakes and improves reliability, particularly where long reasoning chains can degrade performance.
Visibility is another concern. Without detailed telemetry showing which tools an agent has called, what decisions it has taken and where it has escalated, systems quickly become opaque – a serious liability in regulated sectors.
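The kind of telemetry Sutherland describes can start as little more than a structured audit trail. The sketch below is a minimal, hypothetical illustration – the class, tool names and outcomes are invented for the example – showing each tool call being recorded with its arguments and result, so a run can be reconstructed and escalations audited after the fact.

```python
# Minimal, hypothetical audit trail for an agent's tool calls.
# Tool names and outcomes below are illustrative, not a real system's API.
import json
import time


class AgentAuditLog:
    """Records every tool invocation so a run can be replayed and audited."""

    def __init__(self):
        self.events = []

    def record(self, tool: str, args: dict, outcome: str) -> None:
        """Append one tool call: timestamp, tool name, arguments, result."""
        self.events.append({
            "ts": time.time(),
            "tool": tool,
            "args": args,
            "outcome": outcome,
        })

    def export(self) -> str:
        """Serialise the trail for storage or regulatory review."""
        return json.dumps(self.events, indent=2)


# Example run: a lookup succeeds, a refund is escalated to a human.
log = AgentAuditLog()
log.record("crm_lookup", {"customer_id": "C-123"}, "ok")
log.record("refund", {"amount": 250}, "escalated: above approval limit")
```

A production system would add correlation IDs, immutable storage and retention policies, but even this shape answers the basic questions of what the agent did and why.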
AI trap
Adam Low, chief technology officer at secure communications platform Wire, cautions against what he calls the “AI trap” – deploying agentic systems where more conventional workflow tooling would suffice. Autonomous agents excel in dynamic, unstructured environments; in static, repeatable processes, they can introduce unnecessary complexity. Autonomy expands capability, but it also expands responsibility.
If early deployments are exposing weakness, they are also forcing organisations to be clearer about how work gets done. Anand says many enterprise processes were never properly defined end to end, but instead evolved over time, shaped by workarounds and individual judgement. Agents remove that flexibility.
The architecture underneath is what determines whether [an agent] adds value or becomes another failed pilot
Steve Januario, Bill.com
“The value comes when you treat those failures as telemetry,” he says. A stalled action or repeated escalation highlights a gap in data, ownership or governance. For CIOs prepared to confront it, the friction becomes useful as it shows where process discipline is thin. That may help explain why some projects are moving forward while others are being paused.
At Bill.com, a DevRev customer, CIO Steve Januario says five internally built agents are already supporting customers in production. The focus, he says, was not simply on model capability but on the architecture underneath. “You can’t just throw an agent out there,” he adds. “The architecture underneath is what determines whether it adds value or becomes another failed pilot.”
The roll-out took seven weeks from contract to production – speed made possible because teams embedded together and worked through the practical details of integration and control. The contrast is straightforward. Where autonomy is layered onto unclear workflows, it exposes disorder. Where processes are mapped and ownership defined, it can remove friction.
Data flaws
Where there is a problem, data sits at the heart of it. Ravi Malick, global CIO at Box, says many organisations still struggle to create a single, reliable source of truth. Data is spread across multiple applications and varies in quality from team to team. Agents cannot reason effectively if the underlying information is inconsistent or incomplete.
“Businesses need to focus on data unification and curation to ensure agents have the correct, up-to-date context to do the work,” he says.
Malick draws a parallel with earlier cloud migrations. Some companies moved infrastructure without changing how they worked – costs shifted, but processes did not. The same risk applies to agentic AI, as threading agents into existing silos without redesigning the operating model is unlikely to deliver the expected gains.
Jon Bance, chief operating officer at consultancy Leading Resolutions, sees a similar pattern. Most CIOs he works with are not rushing towards fully autonomous agents; instead, they are focused on cleaning up data, simplifying workflows and reducing operational noise.
“Without stable data foundations, clear guardrails and well designed workflows, AI agents simply amplify existing problems rather than solve them,” he says.
In early pilots, he argues, weaknesses surface quickly, so the work now is less about expanding autonomy and more about strengthening the basics. That does not mean organisations are stepping back from agentic AI altogether – in many cases, they are becoming more selective.
Readiness
Arthur Hu, senior vice-president and global CIO at Lenovo, says the issue is about readiness. In research the company conducted with IDC, it found that only a minority of organisations report significant agentic usage today, and many expect it will take more than a year before they are ready to scale. The barriers tend to centre on governance maturity, integration complexity and unclear ownership rather than model performance.
Early enthusiasm encouraged broad experimentation. Teams tested agents across multiple functions, often without fully defining where autonomy began and where human oversight ended. That approach is changing – decision rights are being made explicit; audit requirements are being defined earlier; and autonomy is introduced in stages, starting with observation and supervised execution before moving to constrained action.
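The staged progression described above – observation, then supervised execution, then constrained action – can be expressed as an explicit gate in code. The sketch below is a hypothetical illustration of that idea; the level names, function and allow-list mechanism are assumptions for the example, not any vendor's implementation.

```python
# Illustrative staging of agent autonomy: observe -> supervised -> constrained.
# Levels, function and allow-list are hypothetical examples of the pattern.
from enum import IntEnum


class Autonomy(IntEnum):
    OBSERVE = 0      # agent proposes actions but never executes them
    SUPERVISED = 1   # agent executes only with explicit human sign-off
    CONSTRAINED = 2  # agent executes alone, but only allow-listed actions


def may_execute(level: Autonomy, action: str, approved: bool,
                allow_list: set[str]) -> bool:
    """Gate an action on the current autonomy stage and decision rights."""
    if level is Autonomy.OBSERVE:
        return False
    if level is Autonomy.SUPERVISED:
        return approved
    return action in allow_list


# Example: a refund is blocked in observation mode, permitted under
# supervision once approved, and self-serve only if allow-listed.
may_execute(Autonomy.OBSERVE, "refund", approved=True, allow_list=set())
may_execute(Autonomy.SUPERVISED, "refund", approved=True, allow_list=set())
may_execute(Autonomy.CONSTRAINED, "refund", approved=False, allow_list={"refund"})
```

Making the gate a single, testable function is one way of making decision rights explicit rather than leaving them implicit in prompts.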
The pattern is familiar – a new capability arrives, the first phase is exploration, and the second is discipline. For CIOs, it's another wave of investment layered onto an already crowded estate. The risk is obvious – more tooling, more integration, more complexity.
Agentic AI will not fix broken work by itself. It cannot compensate for weak data, unclear ownership or undefined processes. What it can do is expose where work depends on human intervention rather than design. It forces organisations to define what they previously left implicit. That pressure is uncomfortable but perhaps it is also overdue. DevRev and its customers certainly think so – and by the look of it, they are not alone.
Read more about agentic AI in enterprise IT
The agentic AI future of enterprise architecture: As agentic AI changes business processes, it will also redraw the role of enterprise architecture.