Virtue AI: New programming paradigms command AI-native security
Software application development is code-native, cloud-native… sometimes mobile-native and now AI-native, especially given the fact that code assistants and agentic programming functions have been permeating the software application development landscape for many months now.
Logically, then (if we accept the above truisms), we should also now move all the other essential ancillary disciplines within the software application development lifecycle to a state of AI-nativeness. But, remembering how cloud computing initially grew without enough robustness provisioning, let’s start with security.
Virtue AI thinks this is a good idea.
The company offers an AI-native security and compliance platform designed to help a business to deploy and manage secured generative AI systems and agents alongside an organisation’s cadre of enterprise applications. The company provides a suite of tools that it says are capable of addressing “unique risks of the agentic era”… so this therefore spans vulnerability zones such as prompt injections, hallucinations and data leakage
Security, for AI
This is not security for run-of-the-mill software systems per se, this is Virtue AI explaining that it builds security and governance capabilities specifically for AI systems, especially those with more complex operational abilities that can act autonomously to make decisions about executing code, calling services or interfacing with business systems. The company argues that this is a critical part of AI evolution as artificial intelligence moves above and beyond simple prompts to become part of operational workflows.
Virtue AI’s Virtue AI’s AgentSuite is described as a service that brings high-performance AI risk detection into any workflow. The technology is designed for production environments that require low latency, high accuracy and continuous adaptation to evolving threat models. The company explains its platform’s core function as an infrastructure layer that allows businesses to move from experimental AI to production-ready deployments by ensuring their AI agents act responsibly, follow policies and also remain secure from adversarial attacks.
The company’s key industry verticals include a focus on highly regulated sectors such as healthcare, financial services, healthcare, IT, retail and insurance. When we get to the point where AI is being applied to applications in these sectors (which, as we know, is now) we need to make sure that compliance and security are upheld at a mission-critical level.
Virtue AI’s AgentSuite has been built and engineered to plug directly into existing development workflows, embedding into integrated development environments (IDEs) such as VS Code. It also enjoys smooth integration with AI model providers, including Anthropic, OpenAI and Google.
Core capabilities
On offer here are core functions, including continuous, automated “red-teaming” (use of 100+ simulated proprietary cyberattack techniques used to find and fix security vulnerabilities and provide risk assessment) and VirtueRed continuously tests models and systems against hundreds of risk categories (including hallucinations, data leakage, prompt injection, IP theft) with automated algorithms to find vulnerabilities before they’re exploited.
VirtueGuard also enforces customised, policy-aligned safety and security controls in real-time across multiple input/output types (which here includes text, images, audio, video and code) to block harmful or non-compliant actions with very low latency.
A governance layer (VirtueGov) helps businesses enforce standards, uphold compliance requirements and meet internal policies across AI deployments and agents. The platform also includes tools to secure the full lifecycle of AI agents, from pre-deployment evaluation through runtime enforcement.
Recent news from the company has seen it note that VirtueGuard, Virtue AI’s enterprise-ready AI guardrail model, is now available in Google Cloud’s Model Garden on the Vertex AI platform. This integration enables enterprises to deploy generative AI systems with real-time content security, policy enforcement and regulatory compliance and they can do that from within Google Cloud’s managed AI platform.
The word from the CEO
Keen to know more about how Virtue AI is actually operating today, the Computer Weekly Developer Network (CWDN) sat down with Bo Li, CEO of Virtue AI.
CWDN: As agentic functions now really start to take hold and appear throughout the modern stack in the developer workflow and act to execute code (performing functions like calling MCP servers), how does Virtue AI secure the non-deterministic nature of these autonomous loops without breaking the developer’s intent?
Bo Li: Behind all of this is the depth of our research foundation. Virtue AI is built by an AI and AI security research team that has been working on adversarial machine learning, model robustness and system-level AI security for decades i.e. long before agents became mainstream. The challenges posed by non-deterministic, tool-using autonomous agents are not new to us; they are problems we have studied, stress-tested and formalised over many years.
On top of this fundamental research, we build production-grade systems that protect real-world agent deployments. This includes VirtueRed, our automated red-teaming platform that delivers risk assessment for AI agents across prompts, tools, environments and multi-step behaviours; VirtueGuard, which provides real-time, multimodal and multilingual runtime protection aligned with enterprise policies; and our Virtue AgentSuite, which delivers end-to-end security and governance for agents across the prompt, action, tool and network layers. Together, these products form a unified security architecture designed specifically for complex, non-deterministic agentic systems – all allowing enterprises to deploy agents with confidence, without constraining innovation or developer intent.
CWDN: We see Virtue AI now integrated directly in VS Code and Gitpod. Does that represent a mission to shift ‘security-as-code’ further left in the software development lifecycle?
Bo Li: At Virtue AI, we believe AI security has to span the entire lifecycle of an AI system, not just a single checkpoint in development or deployment. Our integrations with VS Code and Gitpod reflect this philosophy. With VirtueGuard-Code, we protect AI-generated code at the moment it is written – whether it comes from copilots, agents, or autonomous coding workflows. It integrates as a lightweight plugin into any IDEs, so developers get immediate, low-friction security feedback without changing how they build. That’s very much a “security-as-code” and “shift-left” capability.
Bo Li, CEO of Virtue AI: The reality is that many of the most serious AI risks only emerge after deployment, when systems become non-deterministic, tool-using and autonomous.
At the same time, we don’t stop at development. The reality is that many of the most serious AI risks only emerge after deployment, when systems become non-deterministic, tool-using and autonomous in real environments. That’s why our platform AI also provides end-to-end runtime protection and governance for deployed AI systems and agents, covering prompts, actions, tools, networks and multi-agent interactions. Virtue AI will close this last mile, which is critical and it’s where traditional AppSec and DevSecOps tools fall short.
So the short answer is: yes, we are pushing AI security further left, but we’re also extending it forward. Virtue AI protects AI systems before, during and after deployment, giving enterprises a continuous security posture as AI moves from code, to agents, to real-world autonomous systems.
Continuous red-teaming
CWDN: AgentSuite claims to be able to cover the full agent lifecycle, a universe that we can take to include continuous red-teaming, MCP server and tool validation, runtime alerts for insecure or out-of-policy actions and visibility, access control and audit trails as agent usage scales. How does the platform break down (and achieve) such a big list of tasks?
Bo Li: Yes, that’s a fair question. Because if you look at the surface area of the agent lifecycle, it does sound like an impossibly large scope for a single platform. The way AgentSuite achieves this is by standing on deep foundational research and a rigorously modular architecture.
First, this platform is backed by long-term research. For every major capability AgentSuite provides, continuous red-teaming, tool and MCP validation, runtime policy enforcement and scalable governance, we have years of prior work behind it, often represented by multiple peer-reviewed papers, including best papers in top AI conferences and the National Security Agency per function that formalise the threat models, attack surfaces and enforcement mechanisms. AgentSuite is essentially the productisation of that research into a coherent enterprise system.
Second, the platform is deliberately modular, with each layer responsible for a specific class of risk, while sharing a common policy and telemetry backbone. For example, VirtueRed handles continuous and automated red-teaming of agents as full systems; PromptGuard focuses on prompt- and instruction-level risks; MCPGuard validates MCP servers and tools, including schemas, behaviours and injection risks; ActionGuard reasons over full agent trajectories to prevent insecure or out-of-policy actions at runtime; and NetGuard enforces network-level constraints and detects anomalous external interactions. On top of this, we provide enterprise-grade access control, observability and audit trails, which become essential as agent usage scales across teams, workflows and environments.
What ties all of this together is a shared policy layer and execution gateway. Policies are customisable and defined once and enforced consistently across testing, deployment and runtime. Signals from every module feed into a unified view of agent behaviour, giving security and compliance teams both real-time protection and long-term visibility.
So the way we “break down” such a large problem is by combining research depth with architectural clarity. Each module is independently strong, grounded in rigorous research and production-ready on its own, but when composed through AgentSuite, they deliver continuous protection across the entire agent lifecycle, from pre-deployment testing to live, large-scale enterprise operation.
