
How channel vendors are responding to the EU AI Act
Gary Morris, director, Climb Channel Solutions, shares thoughts from a number of vendors about the regulation and its impact on the industry
Since the EU AI Act took effect, the usual talk about AI innovation in the IT channel has started to run alongside bigger questions about compliance and trust. For resellers, MSPs, and partners, conversations include how to meet new legal requirements, build in responsibility from the start, and make sure risk is shouldered together.
At Climb, we see an opportunity to help set clear expectations for AI usage in the channel, beginning with honest conversations about what’s actually happening on the ground. So, I asked vendors across our ecosystem: What does high-risk AI mean in practice? Where are organisations falling short, and what can partners do right now? Most importantly, how can vendors and partners build the kind of trust that turns compliance into real progress? Here’s what they shared.
Defining “high-risk AI”
The conversation begins with a deceptively simple question: what is “high-risk AI”? For Superna, the answer cuts to the heart of data integrity and confidentiality. Any system touching critical data, whether through access controls, intrusion detection, or automated remediation, falls into the high-risk category. But Steve Arlin, vice president of global technical alliances, Superna, cautions: “Many AI systems aren’t equipped with the continuous monitoring or transparent auditing that the EU AI Act requires. Without these, customers can easily fall short of compliance.”
Yet the risks aren’t only technical. Larissa Schneider, Co-Founder & COO, Unframe, observes: “Responsible AI innovation starts with infrastructure that respects agility and accountability.” How the architecture is designed matters as much as any compliance effort. And when responsibility is engineered into systems from the start, regulation shifts from box-ticking to proof of credibility.
Risk reduction strategies
Moving beyond the legal specifics of the EU AI Act, I wanted to ask our vendors a more fundamental question: what does AI risk really mean for the channel? The answer, echoed by every vendor I spoke to, always circles back to data—and not just its quality and origin but also the complex realities of stewardship, interpretation, and trust. Is risk simply about compliance, or is it about whether we can truly understand and control the data that underpins every AI decision?
Matthias Nijs, VP of EMEA Sales, Datadobi, pointed out that unstructured data, if ignored, is a major risk: outdated files, ownerless datasets, and redundant content quietly erode model integrity. To mitigate this, he explains, organisations should map and classify the data landscape, then establish “golden copies”, meaning secure, immutable datasets that serve as the only trusted input for AI. This shields models from ransomware, data drift, and accidental corruption.
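To make the idea concrete, here is a rough, illustrative Python sketch of what a golden-copy workflow can look like: classify files from a source location, copy the approved ones to a separate trusted area, record a checksum for each so drift or corruption can be detected, and mark the copies read-only. The paths and the classification rule are placeholders invented for the example, not Datadobi functionality, and true immutability would rely on storage-level controls.

    import hashlib, json, shutil, stat
    from pathlib import Path

    SOURCE = Path("/data/raw")        # hypothetical source data landscape
    GOLDEN = Path("/data/golden")     # hypothetical golden-copy location
    MANIFEST = GOLDEN / "manifest.json"

    def classify(path: Path) -> bool:
        # Placeholder rule: only curated CSV files qualify as trusted input.
        return path.suffix == ".csv" and "tmp" not in path.parts

    manifest = {}
    GOLDEN.mkdir(parents=True, exist_ok=True)
    for src in SOURCE.rglob("*"):
        if src.is_file() and classify(src):
            dst = GOLDEN / src.name
            shutil.copy2(src, dst)
            # Record a checksum so later drift or corruption is detectable.
            manifest[dst.name] = hashlib.sha256(dst.read_bytes()).hexdigest()
            # Make the copy read-only; storage-level controls would enforce real immutability.
            dst.chmod(stat.S_IREAD)

    MANIFEST.write_text(json.dumps(manifest, indent=2))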
Rosa Lenders, Marketing Manager EMEA, Cloudian, highlighted a risk that is often overlooked: the origins and history of unstructured data. If you can’t trace where training data came from, who has accessed it, or how it has changed, your models can’t be trusted. As she puts it: “If you can’t explain your data, you can’t explain your model.”
Data is just the beginning. Natalie Spence, Senior Partner Marketing Manager, EMEA, Sonatype, moved the conversation to model risk itself and the threat from bias or vulnerabilities. Few organisations can pinpoint which AI models are running where, or what risks those models introduce. “Instant visibility into what AI is being used” is essential, she argues, but it must be paired with continuous assessment for bias, malware, and compliance.
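As a simple illustration of what that visibility can look like in practice, the sketch below keeps a register of deployed models and flags any whose last bias and vulnerability review is older than an assumed quarterly policy. The model names, owners, and review interval are invented for the example; this is not Sonatype tooling.

    from datetime import date, timedelta

    REVIEW_INTERVAL = timedelta(days=90)   # assumed policy: reassess quarterly

    # Hypothetical inventory of AI models in use across deployments.
    inventory = [
        {"model": "support-chat-llm", "owner": "service-desk", "last_review": date(2025, 1, 10)},
        {"model": "invoice-classifier", "owner": "finance", "last_review": date(2024, 6, 2)},
    ]

    today = date.today()
    for entry in inventory:
        overdue = today - entry["last_review"] > REVIEW_INTERVAL
        status = "reassess for bias, malware and compliance" if overdue else "up to date"
        print(f"{entry['model']:<20} {entry['owner']:<14} {status}")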
How, then, can partners move beyond box-ticking and genuinely mitigate the risks that come with deploying AI tools and solutions? VimalRaj Sampathkumar, Technical Director for UK & Ireland, ManageEngine, argues that risk-aware practices, such as clear documentation, audit trails, and explainability, need to be “embedded early in the AI adoption journey.”
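Embedding those practices early can be as simple as writing an audit record alongside every model decision from day one, rather than retrofitting logging later. The sketch below is a generic illustration, not ManageEngine functionality; the field names and the example call are invented.

    import hashlib, json
    from datetime import datetime, timezone

    AUDIT_LOG = "ai_audit.jsonl"   # append-only log, one JSON record per decision

    def log_decision(model_id, model_version, inputs, output, rationale):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "model_version": model_version,
            # Hash rather than store raw inputs, keeping personal data out of the log.
            "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
            "output": output,
            "rationale": rationale,   # human-readable explanation for reviewers
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_decision("credit-scorer", "1.4.2", {"income": 42000}, "approve",
                 "score above threshold; income weighted highest")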
What comes through in all these conversations is that AI risk isn’t a narrow technical issue or something that can be crossed off a to-do list. It’s multi-dimensional, spanning data integrity, model behaviour, transparency, and human oversight. And everywhere there’s AI, there’s AI risk.
Collaboration powers compliance
What stood out in these conversations was how few vendors believe responsible AI can be achieved through technology or compliance alone. There’s broad agreement: no single solution, tool, or regulation can answer every question about AI risk. Instead, it’s partnership and collaboration that shape not only how AI is developed and adopted but also how compliance becomes an ongoing, shared commitment rather than a box-ticking exercise.
VimalRaj explains that ManageEngine believes vendors should guide, empower, and share accountability with channel partners on AI compliance and risk. “Ultimately, it’s about empowering our partners with the tools, training, and clarity to offer AI solutions that are not only intelligent but also responsible, ethical, and ready for regulatory scrutiny.”
Rosa (Cloudian) sees the vendor’s role as helping partners make compliance an embedded strength. “That’s why our solutions include built-in governance—data immutability, metadata tagging, versioning, and audit logging—so traceability and compliance are standard.” By offering these capabilities natively, Cloudian enables MSPs and resellers to deliver transparent, trustworthy infrastructure that supports long-term customer trust and adoption.
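Because Cloudian exposes an S3-compatible API, controls like these can be exercised with standard S3 tooling. The sketch below uses boto3 against a hypothetical endpoint and bucket: it enables versioning, then writes a dataset with a compliance-mode retention period and user-defined metadata for traceability. It assumes the bucket was created with Object Lock enabled and that the endpoint supports these S3 features; the endpoint, bucket, and object names are invented.

    from datetime import datetime, timedelta, timezone
    import boto3

    # Hypothetical S3-compatible endpoint and bucket.
    s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example")

    # Versioning: keep every revision of the training data objects.
    s3.put_bucket_versioning(
        Bucket="ai-training-data",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Immutability: write the dataset with a compliance-mode retention period
    # (requires a bucket created with Object Lock enabled).
    with open("customers-2025q1.parquet", "rb") as body:
        s3.put_object(
            Bucket="ai-training-data",
            Key="golden/customers-2025q1.parquet",
            Body=body,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365),
            # User-defined metadata recorded with the object for traceability.
            Metadata={"source": "crm-export", "approved-by": "data-governance"},
        )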
For Unframe, the path to compliance starts with secure-by-design architecture and a commitment to trust: “We always respect our customers’ underlying permissions, and data never leaves their perimeter unless they choose to share it.” That foundation of control, Larissa explained, allows partners to move fast, scale with confidence, and adapt as regulation evolves.
The message, reinforced repeatedly: trust isn’t built through one-off audits or checklists, but through transparency, guidance, and a willingness to build resilient, innovative, risk-aware systems together.
Compliance vs innovation
When it comes to AI, the line between compliance and innovation isn’t always clear. Are organisations forced to choose between the two?
RealVNC and Panzura made it clear the answer is no.
For Chris Beagle, Senior Partner Manager, RealVNC, “Security and compliance can enable innovation, rather than inhibit it.” Real-time performance data and robust security create the ideal environment for AI to drive progress without compromising compliance. With the right technology, AI’s full potential can be realised while still meeting regulatory demands.
Nicola Houghton, Partner Manager International, Panzura, explains that robust metadata management makes it possible to meet regulatory requirements without holding back innovation. “Leveraging metadata to gain deep understanding means organisations don’t have to choose between innovation and regulatory requirements; they can have both.”
There’s no doubt that the EU AI Act has moved compliance and risk management from the sidelines to the centre of the channel conversation. But as our vendor insights show, the organisations that will thrive aren’t those that treat compliance as a tick-box exercise; they’re the ones that make transparency, partnership, and accountability part of their everyday operations.
Three things stand out to me:
Data matters more than ever. Quality, traceability, and stewardship are non-negotiable for anyone deploying AI.
Compliance is not a one-off project or an annual review. It’s a continuous process built into architecture, documentation, and daily practice.
None of this happens in isolation. AI success in the channel is a collaborative, iterative process—no single vendor, partner, distributor, or technology holds all the answers for embedding AI responsibly and sustainably. Getting this right means learning together, adapting together, and being prepared to challenge one another along the way.