Agent wars sound exciting? The reality is more complex
This is a guest blog post by John Bates, CEO of Doxis. In it, he expresses concern about the naivety of allowing AI free rein over business information and processes.
Remember Robot Wars? Maybe you caught it during its original 1998–2004 heyday, or perhaps you’re a fan of the 2016–2018 revival, but either way, you’ll remember the idea: tournaments where engineers pitted their remote-controlled mechanical warriors against each other.
Recent discussions about the rapid rise of agents in business make me genuinely fear we’ll soon see the same kind of android aggression, but in the back office. Are we unwittingly setting ourselves up for a dangerous era of not Robot, but Agent Wars?
Too many agents
Imagine the enterprise as a smart home full of helpful robots: one’s cleaning, one’s vacuuming, another is cooking. That could save a lot of labour, but without a smart-home brain coordinating the domestic automatons, things could descend into chaos very quickly.
With multiple vendors encouraging us to buy their agents, and often not making it that easy to use other suppliers’ wares, workflow chaos could soon be the norm. Agents moving across your application catalogue without full visibility, sometimes competing with other bots, could mean dialogues like this, potentially multiple times an hour:
Agent 1: I’ve created some valuable new reports. Do you want them?
Business User: Yes, please!
Agent 2: I did not authorise these items; I’m purging them at once.
Business User: !!?!
We naively left ChatGPT in charge
Of course, Agent Wars may not be as savage as the Robot version, but accepting agents into business processes carries potential risks. Bots interacting with other bots will produce unexpected side effects and, in some cases, cause damage.
A parallel is the perhaps premature way many organisations let ChatGPT run aspects of their businesses. There was an assumption that LLMs (large language models) would keep improving indefinitely, becoming ever more reliable. In reality, many models are now trained on AI-generated data, degrading their ‘DNA’. LLMs are also non-deterministic by design: asking the same question 100 times will not give you the same 100 answers.
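To see why, here is a toy Python sketch of how sampling works. It uses a hand-made next-token distribution rather than any real model or vendor API, but the mechanism is the same: at a temperature above zero the model samples from a probability distribution, so repeated runs of the same prompt diverge.

```python
import random

# Toy stand-in for an LLM's next-token probabilities for one prompt.
# In a real model these come from the softmax layer; the values here
# are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "positive": 0.5,
    "uncertain": 0.3,
    "negative": 0.2,
}

def sample_answer(probs: dict[str, float]) -> str:
    """Sample one token, as an LLM does when temperature > 0."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Ask the same 'question' 100 times: the tally differs on every run.
answers = [sample_answer(NEXT_TOKEN_PROBS) for _ in range(100)]
print({token: answers.count(token) for token in NEXT_TOKEN_PROBS})
```

Only at temperature zero (greedy decoding) does the output become repeatable, and even then identical outputs are not guaranteed across model versions.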
This creates increasing uncertainty within your systems, with unpredictable consequences. Amplify that with semi- or fully autonomous agents running around and there’s a clear risk an enterprise’s knowledge ecosystem could degrade under a mix of random, competing, and black-box agents.
Think Shadow IT, only worse: your organisation has installed a new enterprise software suite with an embedded agent, generating outputs that could have unforeseen side effects. Have these bots been tested? Do they comply with ESMA rules, MAR or MiFID II, as well as the new EU AI Act? Has the equivalent of the car industry’s crash-test-dummy certification process been carried out?
Open source and potential back doors
I’m sure the major vendors will do all they can to protect users and avoid releasing products that could cause harm. Still, commercial software is never released with malicious intent (hackers aside), yet unintended consequences occur (e.g., the market flash crash caused by algorithmic trading). And as anyone familiar with open source can attest, we need to be careful about introducing new software into corporate environments.
There’s also the issue of data protection. By design, AI will search everywhere it can access, which means it could expose sensitive information that no one even knew existed. It’s easy to imagine information leaks occurring without anyone knowing how or why, because there is no system to check against, and no record of the original data before the agents acted.
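One practical mitigation, sketched below with entirely made-up document names and groups, is a permission-aware retrieval layer between the agents and the corpus: an agent can only surface what the requesting user is already entitled to see, and every disclosure traces back to an explicit grant.

```python
# Hypothetical access-control list: document -> groups allowed to see it.
ACL = {
    "salary-review.xlsx": {"hr_team"},
    "q3-roadmap.docx": {"hr_team", "product_team"},
}

def retrieve_for(user_group: str, doc_ids: list[str]) -> list[str]:
    """Return only the documents this group is explicitly granted."""
    return [d for d in doc_ids if user_group in ACL.get(d, set())]

# The agent asks for everything it can reach; the control layer
# narrows the view to what the requesting user may actually see.
print(retrieve_for("product_team", list(ACL)))  # ['q3-roadmap.docx']
```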
I’m not suggesting we face a Cyberdyne Systems scenario, where systems turn hostile, but there are likely risks we haven’t yet anticipated. AI governance is a non-negotiable priority. Organisations must not take unnecessary risks; they need to fully understand what’s in the contract and be certain that the terms match exactly what has been agreed upon.
Agents are exciting: they can streamline and optimise business processes. But there must be rules governing when generative or agentic AI is applied and when more rule-based approaches are used. The advantage of rule-based systems is that they are fast and deterministic: the same input reliably produces the same outcome.
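As an illustration only (the task names, threshold and helper functions below are hypothetical), such a governance rule could be as simple as a router that sends high-stakes tasks down the deterministic path and lets generative agents handle only open-ended work, with their output flagged for human review:

```python
from decimal import Decimal

# Hypothetical set of tasks that must never be delegated to generative AI.
RULE_BASED_TASKS = {"invoice_approval", "retention_schedule", "trade_report"}

def apply_rules(task: str, payload: dict) -> str:
    # Example deterministic rule: auto-approve small invoices only.
    if task == "invoice_approval" and Decimal(payload["amount"]) < Decimal("1000"):
        return "approved"
    return "escalate_to_human"

def call_agent(task: str, payload: dict) -> str:
    # Placeholder for a generative agent call; its output is a draft,
    # never a direct write to a system of record.
    return f"draft produced for {task} (pending human review)"

def route(task: str, payload: dict) -> str:
    """Send regulated or financial tasks to rules; the rest to an agent."""
    if task in RULE_BASED_TASKS:
        return apply_rules(task, payload)
    return call_agent(task, payload)

print(route("invoice_approval", {"amount": "250.00"}))  # approved
print(route("summarise_meeting", {"notes": "..."}))     # draft, pending review
```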
Can we afford to wait while this escalates?
There are already efforts underway to head off the risks of potential Agent Wars, most notably initiatives such as A2A and the Model Context Protocol (MCP). However, I would argue that we already possess a more immediate and practical mechanism for controlling agents in modern document management, which I term ‘document intelligence’. By that, I mean the next-generation version of a tried-and-true backbone of IT and business governance: document management. In essence, organisations must maintain a single source of truth above their agents, grounded in an accurate, well-managed, searchable, and interrogatable repository of the business’s current and past state as represented by its documents.
I feel sorry for the humble business document. It arrives, gets filed away somewhere unseen, and is eventually recycled. Yet these documents are the living currency of an organisation—rich with insight and institutional knowledge that, if properly managed, can deliver immense value.
That is why document intelligence offers a practical way to avoid the dangers of agent wars and unreliable LLMs. Think of it as your internal MI6 or CIA: a control layer that ensures oversight, mitigates rogue AI risks, and maximises the value embedded in your existing documents.
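To make the control-layer idea less abstract, here is a minimal and entirely hypothetical Python sketch: a repository that fingerprints each version of record, so any agent-proposed change can be detected and held for review before it ever touches the source of truth.

```python
import hashlib

class DocumentRepository:
    """Minimal single-source-of-truth sketch: each managed document is
    fingerprinted on check-in, so agent-made drift is detectable."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}
        self._audit: list[tuple[str, str, str]] = []  # (actor, doc, sha256)

    def check_in(self, doc_id: str, content: bytes, actor: str) -> None:
        self._store[doc_id] = content
        self._audit.append((actor, doc_id, hashlib.sha256(content).hexdigest()))

    def matches_record(self, doc_id: str, content: bytes) -> bool:
        """Does this content match the current version of record?"""
        return self._store.get(doc_id) == content

repo = DocumentRepository()
repo.check_in("policy-042", b"Retention: 7 years", actor="records_team")

# An agent proposes an edit; the control layer sees it does not match
# the version of record and flags it instead of silently accepting it.
agent_output = b"Retention: 1 year"
print(repo.matches_record("policy-042", agent_output))  # False: hold for review
```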
Agents proliferating and running unchecked across your company’s processes cannot be a good idea. A smart document management and control structure could be your best defence.
