Rogue agents: how can organisations manage thousands of new micro decision makers?

This is a guest blogpost by Markus Mueller, global field CTO APIM at Boomi.

AI agents are autonomous by definition, which makes them ideal for automating data analysis, orchestrating workflows, and enforcing checks on data quality. A recent McKinsey study found that 78% of companies are using AI in at least one business function, while more than one-in-five (21%) have redesigned workflows around GenAI and agentic AI.

These streamlined workflows can create significant efficiency gains, reducing the cost of repeatable manual tasks. This is particularly important in industries such as manufacturing and logistics, where firms are grappling with volatile cost surges and thin margins. For example, supply chain solutions provider Crane Logistics has used Boomi’s Scribe agent to reduce the work involved in its partner documentation processes by 60-70%.

Yet, as AI agents are plugged into business processes and systems, many organisations underestimate the agents’ role as decision-making entities. As a result, they overlook the security and governance risks that agents can introduce when no controls are in place.

Enterprises need to impose clear guardrails on their AI agents now. If they don’t, they will find themselves fighting a flood of shadow AI decisions that expose them to serious security risks, from leaked sensitive data to vulnerabilities introduced through AI-generated code.

Action without instinct

Currently, too many organisations have no visibility into what their AI agents are doing: how they work, which data they access, or how they reach decisions.

What’s more, AI agents aren’t deterministic, which limits their predictability and repeatability. Just because an agent performed well in a test environment doesn’t mean it will behave the same way in production.

Some might argue that AI is no riskier than an employee who is capable of human error. But there’s a critical difference: humans have instincts. If a user unwittingly gains access to sensitive data, they will likely have a gut feeling that something is wrong and flag it to the IT or support team.

AI agents don’t have the same sense. They execute tasks with laser precision, acting without hesitation and treating any access as a given. This makes them incredibly powerful assistants, but without the right guardrails that same single-mindedness can lead to inaccurate outputs, exposed sensitive data, or even breaches.

Balancing risk with reward

In risk-averse sectors like manufacturing and logistics, letting AI agents loose on company data can feel like too large a risk. Many factories have spent the past decade just deciding whether to integrate data across shop-floor systems and back-end IT infrastructure, never mind entrusting that data to autonomous agents.

When it comes to agentic AI, firms can’t afford to be over-cautious. The gains on offer – from faster workflows to lower costs and improved insights – are too significant to ignore. If they can’t keep up, organisations will be leapfrogged by the competition. Every new technology carries risk as well as reward. An IT leader’s job is to drive innovation and unlock those benefits with the right controls in place, not to avoid the risk altogether.

A controlled approach to data access

To reap the rewards of agentic AI without the risk, organisations must grant agents access to their data on a need-to-know basis, giving each one only what it needs to function. This least-privilege approach keeps the impact as small as possible if something does go wrong.
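
As a rough illustration of what a need-to-know grant can look like in practice, here is a minimal sketch in Python. It assumes a hypothetical per-agent scope object; the agent, dataset, and action names are invented for the example and are not tied to any particular platform.

```python
# A minimal, deny-by-default permission scope for a single agent.
# The agent, dataset, and action names are invented for illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentScope:
    agent_id: str
    allowed_datasets: frozenset = field(default_factory=frozenset)
    allowed_actions: frozenset = field(default_factory=frozenset)

    def permits(self, dataset: str, action: str) -> bool:
        # Anything not explicitly granted is denied.
        return dataset in self.allowed_datasets and action in self.allowed_actions

# Example: a documentation agent that may only read shipping manifests.
docs_agent = AgentScope(
    agent_id="partner-docs-agent",
    allowed_datasets=frozenset({"shipping_manifests"}),
    allowed_actions=frozenset({"read"}),
)

assert docs_agent.permits("shipping_manifests", "read")
assert not docs_agent.permits("customer_records", "read")  # denied by default
```

The point of the deny-by-default shape is that an agent picking up a new task never silently inherits extra access; someone has to widen its scope deliberately.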

To support this, organisations must focus on building a single source of truth for their AI. Teams need a central hub that they can use to track every agent’s permissions, performance, and behaviour. They should be able to see every agent at a glance, understand in real time which systems and APIs agents are interacting with, and determine which datasets they are accessing, so they can course-correct when required.
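
One way to picture such a hub is as a simple registry that records every interaction an agent has with a system or dataset, so its footprint can be reviewed at a glance. The sketch below is only a conceptual outline with hypothetical names, not a description of any specific product.

```python
# Conceptual sketch of a central agent registry: each interaction an agent has
# with a system or dataset is recorded, so teams can review activity per agent
# and spot anything outside its expected footprint.
from collections import defaultdict
from datetime import datetime, timezone

class AgentRegistry:
    def __init__(self):
        self._activity = defaultdict(list)  # agent_id -> list of recorded events

    def record(self, agent_id: str, system: str, dataset: str, action: str) -> None:
        self._activity[agent_id].append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "dataset": dataset,
            "action": action,
        })

    def activity_for(self, agent_id: str) -> list:
        # At-a-glance view of what a single agent has been doing.
        return list(self._activity[agent_id])

registry = AgentRegistry()
registry.record("partner-docs-agent", "erp-api", "shipping_manifests", "read")
print(registry.activity_for("partner-docs-agent"))
```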

This hub should also enforce tight governance and security controls. Access policies can be enforced to ensure data pipelines are secured and AI agents adhere to the same privacy guidelines as their human counterparts. Low-code integration tools are essential to this capability, offering standardised templates and natural language prompts that teams can use to create secure, compliant, and effective AI agents from the outset.
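
To make the enforcement point concrete, here is one hedged sketch of a privacy guard applied at the pipeline boundary: before a record reaches an agent, fields the agent has no business seeing are stripped out. The field names and the per-agent "visible fields" list are assumptions for illustration only; a real platform would enforce this centrally rather than in application code.

```python
# Illustrative pipeline guard: records are filtered down to the fields an agent
# is cleared to see before they are handed over. Field names are made up here.
VISIBLE_FIELDS = {
    "partner-docs-agent": {"shipment_id", "carrier", "eta"},
}

def redact_for_agent(agent_id: str, record: dict) -> dict:
    visible = VISIBLE_FIELDS.get(agent_id, set())
    # Drop anything the agent has not been explicitly cleared to see.
    return {key: value for key, value in record.items() if key in visible}

record = {
    "shipment_id": "SHP-1042",
    "carrier": "ExampleFreight",
    "eta": "2025-07-01",
    "customer_email": "buyer@example.com",  # never reaches the agent
}

print(redact_for_agent("partner-docs-agent", record))
# {'shipment_id': 'SHP-1042', 'carrier': 'ExampleFreight', 'eta': '2025-07-01'}
```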

Building trust in AI

Ultimately, the speed of innovation is determined by the speed of trust. If business leaders can’t trust AI agents, they will not deploy them at scale – no matter how capable they are. And the truth is agentic AI is already capable of so much more than it’s being used for.

To unlock its potential, organisations need to build their confidence in AI. This can only be achieved by seeing more proven deployments and public success stories. At the moment, no one wants to go out into the world with a previously unexplored agentic AI use case and see things go wrong. To overcome these concerns, organisations should focus on small wins and simple use cases to demonstrate the value of agentic AI. This will help to win over hearts and minds across the organisation and increase adoption.

Balancing innovation with oversight

With a mature AI, API and data management strategy in place, even the most risk-averse industries can benefit from agentic AI without compromising on security, compliance, or control.

By unifying data, connecting systems, and supervising AI agents holistically, organisations can enforce privacy rules and limit the chances of errors creeping into outputs. Focusing their early efforts on smaller use cases that deliver quick wins will help them capture the efficiency gains of agentic AI, and then realise those gains at scale as their confidence grows.