
Why OpenClaw agents are the next big enterprise challenge

As users flock to deploy OpenClaw agents for everything from gig work to shopping, IT leaders warn that bringing these autonomous systems into the enterprise will require strict guardrails and a mix of AI models

From hunting for shopping deals to seeking employment, OpenClaw has seen a wave of viral adoption, particularly in China, where users are reportedly lining up to have the software installed on their personal computers.

The phenomenon has been dubbed “Claw mania”, with users referring to their personal OpenClaw agents as lobsters. Since its debut last November, OpenClaw’s GitHub star count – a measure of an open-source project’s popularity – has matched that of established projects such as React and Linux.

OpenClaw, an orchestration framework for long-running, self-evolving artificial intelligence (AI) agents, essentially functions as a modern computer system in its own right. It has working memory, access to file systems, the ability to schedule tasks, and an application programming interface (API)-like set of skills that allows it to operate various software applications.

At GTC 2026 in San Jose last week, Nvidia CEO Jensen Huang discussed the scale of the OpenClaw frenzy, noting that users are creating OpenClaw agents to interact with other agents, form virtual companies, and even seek out gig work to offset the compute costs required to keep them running.

“I’ve actually heard that somebody’s lobster consumed 50 million tokens in a day; at $1 per million tokens, that’s 50 bucks,” said Huang. “And so that lobster has got to go out and at least get a job that pays more than 50 bucks. It’s got to pay 51 bucks to stay alive.”
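Huang’s back-of-the-envelope arithmetic is easy to check. A minimal sketch, using only the illustrative figures he quoted (not published pricing):

```python
# Back-of-the-envelope cost check for Huang's example:
# 50 million tokens in a day at $1 per million tokens.
TOKENS_PER_DAY = 50_000_000
PRICE_PER_MILLION = 1.00  # USD -- the illustrative rate Huang quoted

daily_cost = TOKENS_PER_DAY / 1_000_000 * PRICE_PER_MILLION
print(f"Daily compute cost: ${daily_cost:.2f}")  # $50.00

# To "stay alive", the agent must earn more than it burns.
required_earnings = daily_cost + 1  # Huang's "51 bucks"
print(f"Minimum daily earnings: ${required_earnings:.2f}")  # $51.00
```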

But as these new OpenClaw agents impress users with their ability to execute complex, multi-step tasks independently, IT leaders are faced with a slew of security, governance and infrastructure challenges should these systems find their way into the enterprise.

During a panel discussion with AI technology leaders hosted by Huang, Arthur Mensch, co-founder and CEO of Mistral AI – Europe’s biggest AI company – noted that deploying OpenClaw at scale can quickly expose cracks in data and governance as adoption expands across an organisation. “You need primitives to have the right governance and scalability and to host everything in the same control plane,” said Mensch, referring to the foundational building blocks of software development. “That’s actually harder to do than just buying a computer and setting up OpenClaw.”

The core issue is granting AI agents the authority to act on a user’s behalf. Huang distilled the security challenges down to three capabilities: accessing sensitive information, executing code and communicating with the outside world. “If we want to be secure as an enterprise, you should allow someone, including an AI, any two of those three things at one time, but not all at one time, unless it’s the CEO,” said Huang. “All of it should be governed.”
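Huang’s two-of-three rule can be expressed as a simple policy check. The sketch below is hypothetical – the capability names and the `is_ceo` override are illustrative, not taken from any real governance framework:

```python
from dataclasses import dataclass, field

# The three sensitive capabilities Huang named.
CAPABILITIES = {"access_sensitive_data", "execute_code", "external_comms"}

@dataclass
class AgentRequest:
    requested: set            # capabilities the agent is asking for
    is_ceo: bool = False      # Huang's tongue-in-cheek exception

def is_allowed(req: AgentRequest) -> bool:
    """Permit any two of the three capabilities at once, never all three."""
    if req.is_ceo:
        return True
    sensitive = req.requested & CAPABILITIES
    return len(sensitive) <= 2

# Reading data plus running code is permitted...
print(is_allowed(AgentRequest({"access_sensitive_data", "execute_code"})))  # True
# ...but adding outbound communication trips the guardrail.
print(is_allowed(AgentRequest(set(CAPABILITIES))))  # False
```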

To safely deploy AI agents, Harrison Chase, co-founder and CEO of AI tooling provider LangChain, said enterprises can adopt harness engineering – the practice of building guardrails, tools and connections around the core model that powers an AI agent to ensure it behaves correctly in a specific domain.

Additionally, organisations can look to open models whose underlying code and weights are publicly accessible and modifiable, rather than relying solely on proprietary, closed models from suppliers like Google and OpenAI.

Privacy is a key driver for the use of open models. Hanna Hajishirzi, a senior director at the Allen Institute for AI, noted that for an AI agent to be truly useful, it must be able to access sensitive corporate and personal data. “I feel more comfortable letting an open model access my private data,” she said.

Large, closed models are also often too generalised for specialised workloads. Daniel Nadler, CEO of healthcare AI firm OpenEvidence, likened massive, multi-trillion-parameter closed models to an 800-year-old parent who is incredibly smart but difficult to retrain for niche tasks.

For example, in healthcare, where an AI agent might be tasked with battling an insurance company over a denied claim or filing a prior authorisation letter, hospitals need agents trained strictly on medical claims procedures, not generalists.

“You can’t start with these 800-year-old models that are set to think about pattern recognition in a certain way. You actually want to train in the tails,” Nadler said, referring to the long tail of specialised use cases. “And open models are the requisite foundation for that.”


Anjney Midha, founder of AMP PBC, which provides compute and capital to frontier AI firms, added that open models are also essential for mission-critical deployments where lives or core infrastructure are at stake.

“If it’s not an open model, you can’t introspect it or host it – you’re dependent on third parties for that,” he said. “If we want to welcome agents into the most mission-critical parts of our lives, we’re going to have to find a way to trust them. Open models are one of the fastest ways to trust a system.”

However, Huang believes the future of enterprise AI isn’t a strict dichotomy between open and closed models. “Even for a closed-model company, I believe open models will be used as part of the agentic system where the closed model is your crown jewel,” he said.

Aravind Srinivas, co-founder and CEO of AI-powered search tool Perplexity, agreed. He noted that enterprises will likely use highly capable proprietary models as reasoning engines, along with open models to handle specific formatting, routing and tool-use tasks efficiently.
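The division of labour Srinivas describes can be pictured as a thin routing layer. A hypothetical sketch in which the task type decides whether a request goes to a proprietary reasoning model or a cheaper open model – the model calls are stand-in functions, and the task categories are illustrative only:

```python
# Hypothetical model router: heavy reasoning goes to a proprietary model,
# mechanical tasks (formatting, routing, tool use) go to an open model.
# Both "models" here are placeholder functions, not real APIs.

OPEN_MODEL_TASKS = {"formatting", "routing", "tool_use"}

def call_proprietary_model(prompt: str) -> str:
    return f"[reasoning-engine] {prompt}"

def call_open_model(prompt: str) -> str:
    return f"[open-model] {prompt}"

def dispatch(task_type: str, prompt: str) -> str:
    """Route a request by task type, per the mixed-model pattern."""
    if task_type in OPEN_MODEL_TASKS:
        return call_open_model(prompt)
    return call_proprietary_model(prompt)

print(dispatch("formatting", "reformat this table"))   # handled by the open model
print(dispatch("planning", "draft a migration plan"))  # handled by the proprietary model
```

Treating models as interchangeable tools behind a dispatch function is what lets the rest of the agent operate, as Srinivas puts it, at an abstraction above models.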

“Models are essentially becoming just tools, like file systems and connectors, and we’re finally able to operate at an abstraction above models – and that’s very exciting,” he said.

Whether through lobsters paying their own server fees or hospital claim-processing agents filing medical paperwork, the tech industry is bracing for a future where AI agents operate alongside humans as colleagues.

“I think the economy-wide story of AI is this: what started working in coding last year is going to work in all other domains,” said Michael Truell, CEO of AI coding startup Cursor. “We’re going to see agents become co-workers that take on incredibly complex workloads and tasks that take many hours or days.”
