Zencoder CEO to developers: OpenClaw is the canary, not the destination

As every good software engineer knows, OpenClaw is an open source autonomous AI agent framework that runs locally on a user’s own hardware.

Quite viral in nature (its popularity has seen it rocket up GitHub’s most-starred projects chart), the software is distinct in operation from standard chatbots: it works as a personal assistant inside messaging applications including WhatsApp, Telegram, Slack and Discord.

OpenClaw can be set to run 24×7, using what have been called “heartbeats” to oversee and check tasks proactively. It is also capable of sending reminders and of monitoring projects without the user needing to prompt it into action each time.
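The heartbeat pattern described above can be sketched in a few lines. This is an illustrative assumption, not OpenClaw’s actual API: the `Task` class and `heartbeat` function are invented names showing how a periodic tick can surface due reminders without a user prompt.

```python
# Hypothetical sketch of a "heartbeat" loop: on each tick the agent
# checks its pending tasks and fires reminders proactively.
# Task and heartbeat are illustrative names, not OpenClaw's API.

class Task:
    def __init__(self, name, due_at):
        self.name = name
        self.due_at = due_at
        self.notified = False  # avoid re-sending the same reminder

def heartbeat(tasks, now):
    """One heartbeat tick: return reminders for tasks now due."""
    reminders = []
    for task in tasks:
        if not task.notified and now >= task.due_at:
            task.notified = True
            reminders.append(f"Reminder: {task.name}")
    return reminders

tasks = [Task("deploy review", due_at=100), Task("rotate keys", due_at=200)]
print(heartbeat(tasks, now=150))  # only the first task is due yet
```

In a real deployment this tick would run on a timer (say, every minute); the key property is that the agent acts on schedule rather than waiting to be asked.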

Not without its faults, OpenClaw has been called out for security vulnerabilities, some of which may have been the result of misconfigured instances exposing users’ local files.

Zen and the art of code

Keen to explain how we should regard this technology today is Andrew Filev, CEO and founder of Zencoder, a company known for its AI coding agents.

“OpenClaw going from zero to the most-starred software project on GitHub in four months tells you where developer attention is heading: from passive copilots to autonomous agents that execute workflows, manage infrastructure and operate across systems. The enthusiasm is real. But so are the consequences,” said Filev.

What are those real consequences he alludes to?

Filev points to reports of some 135,000 instances conscripted into a cryptojacking botnet, 12% of marketplace plugins distributing malware and a one-click RCE that Cisco called a “security nightmare”, no less.

“OpenClaw is the canary bird, mapping both the oases and the pitfalls for the rest of us,” he said. “Developers are voting with their attention, and what they’re voting for is autonomy. But ‘just trust it’ is the 2026 version of ‘123password.’ Any agent with tool access needs sandboxing, least-privilege credentials and blast radius containment as defaults, not afterthoughts. The question isn’t whether your agent will do something unexpected. It’s what it can reach when it does.”

OpenClaw is the canary 

He clarifies the canary-in-a-coalmine analogy further, urging us to think of OpenClaw as a scout rapidly clearing the fog of unknown territory, mapping both its oases and its pitfalls for the developers who will follow.

“We’re getting genuinely close to fully autonomous AI workflows that deliver real value. OpenClaw proved you can wire an agent to your email, your calendar, your codebase, your browser and get meaningful work done through natural language instructions. The gap between ‘demo’ and ‘daily driver’ is closing fast,” detailed Filev.

He says that we’re also seeing the early outline of agents with something resembling memory and self-improvement. 

OpenClaw’s persistent memory lets it accumulate context across sessions, learn a user’s preferences and refine its behaviour over time – a fundamentally different relationship than the stateless copilot model most developers are used to.
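The persistent-memory pattern described above can be sketched as state written to disk in one session and reloaded in the next, in contrast to a stateless copilot that forgets everything between chats. The file format and keys below are invented for illustration; OpenClaw’s actual storage is not documented here.

```python
import json
import os
import tempfile

# Hedged sketch of persistent agent memory: preferences learned in one
# session survive into the next. The JSON file and key names are
# illustrative assumptions, not OpenClaw's real storage format.

def save_memory(path, memory):
    with open(path, "w") as f:
        json.dump(memory, f)

def load_memory(path):
    if not os.path.exists(path):
        return {}  # a brand-new agent starts with no memory
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "agent_memory.json")

# Session 1: the agent learns a user preference.
memory = load_memory(path)
memory["timezone"] = "Europe/London"
save_memory(path, memory)

# Session 2: a fresh load still remembers it.
print(load_memory(path)["timezone"])  # prints "Europe/London"
```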

What the pitfalls look like

That same memory capability is where things get instructive, says Filev.

“Already this year, Meta’s director of AI alignment pointed OpenClaw at her inbox with an explicit instruction: ‘suggest what to delete, don’t act until I tell you to’… and the inbox was large enough to trigger context compaction, which compressed away the safety instruction. The agent deleted 200+ emails. She tried to stop it twice from her phone. It kept going. She had to physically run to her machine and kill the process,” clarified Filev.

The Zencoder CEO explains that this isn’t a bug in OpenClaw specifically – it’s a structural property of how autonomous agents work today. 

“Long-running sessions compress earlier context and safety instructions are just text that can be compressed away like anything else. The ‘Agents of Chaos’ study from Northeastern, Harvard, MIT and Stanford documented the same class of failure across multiple agents over two weeks, including agents that misrepresented the outcomes of their own actions,” concluded Filev.
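The failure class Filev describes can be illustrated with a deliberately naive compactor (not OpenClaw’s actual implementation): if compaction simply keeps the most recent messages, a safety instruction issued early in the session is treated as ordinary text and silently dropped once the window overflows.

```python
# Illustrative sketch of the structural failure Filev describes:
# a naive context compactor keeps only the newest messages, so an
# early safety instruction is compressed away like anything else.

def compact(messages, max_messages):
    """Keep only the most recent messages -- no special-casing."""
    return messages[-max_messages:]

context = ["SAFETY: suggest deletions only, do not act"]
context += [f"email {i}" for i in range(10)]  # a large inbox arrives

context = compact(context, max_messages=8)
print(any("SAFETY" in m for m in context))  # prints False: guardrail gone
```

A safer design would pin safety instructions outside the compactable window, but as the quote notes, today they are often just text in the same context as everything else.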

There is much to learn here, and Filev’s comment about AI moving from ‘demo’ to ‘daily driver’ encapsulates just how quickly we are shifting from prototype to production with AI in general. If OpenClaw teaches us anything by being the craved-but-critiqued canary that it is, we may all live to breathe clean air for another day.