Virtue AI PolicyGuard stacks custom AI guardrails with compliance frameworks
There’s a whole lot of discussion around the need to give enterprises control over how AI policy is defined and enforced across agents, models and applications.
Beyond those core parameters, that control needs to produce explainable, audit-ready decisions.
Aiming to deliver this layer of agentic management is Virtue AI.
The company’s newly announced PolicyGuard is a software offering designed to enable enterprises to define, edit and enforce custom AI runtime protection guardrails across – exactly as we initially tabled above – models, agents and applications.
AI acceptable use policies
A growing number of organisations now have “AI acceptable use policies” in motion, but (and it’s a big but) when they need to enforce those policies, the tooling is static, fragmented and generic.
What makes implementation even tougher is that these tools are built for no industry in particular and no organisation specifically.
Policies vary across teams and are hard to translate into adaptive, enforceable controls.
Flaky text-level guidelines
At the same time, AI behaviour has outpaced text-level guidelines: it now spans agents, API calls and multi-step agent workflows, where the risk is not just what AI says, but what models and agents do.
Without AI-native policy enforcement, enterprises carry an uneven AI risk posture: tight in some places, absent in others and nowhere strong enough to contain an incident or satisfy an audit.
According to Virtue AI, PolicyGuard puts an end to AI policies that exist on paper but can’t be enforced in practice, giving enterprises a single enforcement layer across models, agents and applications.
It allows teams to define their own policies in natural language without relying on engineering teams; it also supports regulatory compliance with more than 30 stackable security and regulatory frameworks, including GDPR, the EU AI Act and FINRA. Users can extract policies from existing policy documents and make them enforceable – and automatically refine policies over time using Policy Lab to improve coverage and reduce gaps.
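To make the idea concrete, here is a minimal sketch of what turning a natural-language acceptable-use rule into an explainable runtime check might look like. PolicyGuard’s actual API is not public, so every name below is an assumption, and simple term matching stands in for whatever model-based classification the product uses.

```python
from dataclasses import dataclass

# Hypothetical illustration only: class names, fields and matching logic are
# assumptions, not Virtue AI's implementation.

@dataclass
class Policy:
    category: str            # e.g. "pii-disclosure"
    description: str         # the rule as written, in natural language
    blocked_terms: tuple     # stand-in for a learned classifier
    action: str = "block"

def evaluate(policies, text):
    """Return (decision, reasons) so every allow/block is explainable."""
    reasons = []
    lowered = text.lower()
    for p in policies:
        if any(term in lowered for term in p.blocked_terms):
            reasons.append(f"{p.category}: {p.description}")
    return ("block" if reasons else "allow"), reasons

policies = [
    Policy("pii-disclosure",
           "Do not reveal customer account identifiers",
           ("account number", "iban")),
]
decision, why = evaluate(policies, "Your IBAN is on file with us")
```

The point of returning the matched rule descriptions alongside the decision is the audit trail: every block is traceable back to the specific policy that triggered it.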
The promise from Virtue AI is the power to deploy in an organisation’s own environment (on-prem, cloud, or SaaS) without major changes – and PolicyGuard helps teams keep policies aligned with how their AI systems actually operate, while maintaining consistent enforcement as things evolve.
“Enterprises already have AI policies. The challenge is enforcement,” said Bo Li, CEO and co-founder of Virtue AI. “With a simple PDF upload, or a few lines of natural language, PolicyGuard defines and enforces AI policies, tailor-made for an organisation.”
Users can define and enforce AI policy without bottlenecks: they can define risk categories, behaviours and enforcement criteria in natural language, aligned to a team’s, department’s or organisation’s standards… and they can convert existing PDFs, websites and JSON into enforceable controls.
Further, users are able to layer regulatory frameworks such as the EU AI Act, GDPR, FINRA and MLCommons alongside internal policies into a single, traceable enforcement layer. They can use Policy Lab to continuously close gaps and reduce blind spots without retraining, while also enforcing policies in real time.
“By combining policy definition, enforcement, and continuous optimization, PolicyGuard enables enterprises to quickly and easily align AI security enforcement to their enterprise needs,” stated Li and team. “[Users can] evaluate content in its original language to eliminate translation blind spots and reduce false positives; operate with low latency using lightweight infrastructure; and extend policies to agent traces, tool calls, and multi-step workflows so policy follows actions, not just text.”
She notes that every decision is clear, traceable and audit-ready, which means users can see detailed explanations for every allow or block decision and monitor violations (by user, API key and latency) through centralised dashboarding.
Audit-ready enforcement
This technology provides a way to maintain audit-ready enforcement by default and to deploy without reworking infrastructure; it supports on-premises, cloud and SaaS environments.
Virtue AI CEO Bo Li
Keen to find out more, the Computer Weekly Developer Network spoke to CEO Li for the inside track.
CWDN: How does PolicyGuard actually intercept and evaluate agent tool calls in real time without slowing everything down?
Li: We built a dedicated, purpose-built model for agent action identification and intent prediction. Instead of relying on generic models, we generate the long-tail distribution of agent behaviours and train specifically against that. On top of that, we’ve optimised the inference layer so decisions happen with extremely low latency, which lets us evaluate calls in real time without introducing meaningful overhead.
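The interception pattern Li describes can be sketched as a thin wrapper around each tool call: classify the intended action first, then either execute or block. The classifier here is a trivial stand-in (the real product uses a purpose-built model), and all names are hypothetical.

```python
import time

# Hypothetical sketch of guarding agent tool calls at runtime; the deny list
# and classifier are invented stand-ins for a trained action/intent model.

DENIED_TOOLS = {"delete_records", "wire_transfer"}

def classify(tool_name):
    """Stand-in for a low-latency action-identification model."""
    return "deny" if tool_name in DENIED_TOOLS else "allow"

def guarded_call(tool_name, fn, *args, **kwargs):
    """Evaluate the call before execution; measure guardrail overhead."""
    start = time.perf_counter()
    verdict = classify(tool_name)
    latency_ms = (time.perf_counter() - start) * 1000
    if verdict == "deny":
        return {"blocked": True, "tool": tool_name, "latency_ms": latency_ms}
    return {"blocked": False, "result": fn(*args, **kwargs),
            "latency_ms": latency_ms}

out = guarded_call("wire_transfer", lambda amount: amount, 100)
```

Measuring the guardrail’s own latency per call, as above, is what makes the “no meaningful overhead” claim verifiable in practice.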
CWDN: Is PolicyGuard itself powered by a language model? If so, how do you avoid it becoming another attack surface?
Li: PolicyGuard is powered by a lightweight model with an architecture we’ve optimized for strong reasoning and runtime efficiency. The key is that parts of the architecture are designed to improve resilience against new and evolving attacks, so it’s not just another exposed layer. It’s built to withstand the same kinds of threats it’s evaluating.
CWDN: What happens when different regulatory frameworks conflict with each other? How do you decide which rule wins?
Li: By default, we apply the most conservative set of rules across regulatory frameworks. That gives users a safe baseline. From there, teams can configure how strict or flexible they want enforcement to be based on their specific requirements.
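A “most conservative rule wins” default reduces to picking the strictest verdict among all applicable frameworks. This toy resolver illustrates the idea; the verdict labels and strictness ordering are invented for the example, not PolicyGuard’s actual scheme.

```python
# Illustrative only: resolving conflicting framework verdicts by taking
# the most conservative one, per Li's description of the default behaviour.

STRICTNESS = {"allow": 0, "flag": 1, "block": 2}

def resolve(verdicts):
    """Return the strictest verdict across all applicable frameworks."""
    return max(verdicts, key=lambda v: STRICTNESS[v])

framework_verdicts = {"GDPR": "flag", "EU AI Act": "block", "FINRA": "allow"}
decision = resolve(framework_verdicts.values())
```

From this safe baseline, per-team configuration would amount to overriding the ordering or exempting specific categories, which matches Li’s point about tuning strictness to requirements.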
CWDN: AI systems change fast. How do you handle policy drift as agent workflows evolve?
Li: That’s what Policy Lab is designed for. It learns from failure cases and updates the model harness itself, so enforcement improves over time. The system can close gaps automatically through self-learning, without requiring full retraining or backend changes.
