
Is AI dragging security back to the 1990s?

The introduction of artificial intelligence promises much, but the industry must prevent it from becoming a hacker’s paradise

At the Black Hat USA 2025 conference, experts argued that artificial intelligence (AI) defences are taking cyber security back to the 1990s, with Wendy Nather, senior research initiatives director at 1Password, comparing AI agents to toddlers. “You have to follow them around and make sure they don’t do dumb things,” she said.

Nathan Hamiel, senior director of research at Kudelski Security, claimed many old vulnerabilities were resurfacing because the generative artificial intelligence (GenAI) arena is full of security bad practices.

“When you deploy these tools, you increase your attack surface. You’re creating vulnerabilities where there weren’t any,” he said.

Hamiel suggested we should treat AI agents like “drunk robots”, adding: “Vulnerabilities are showing up at alarming rates because of these tools. It’s just getting worse due to vibe coding and AI coding assistants. If you wanted to know what it was like back in the 1990s, now’s your chance.”

In September, IBM’s latest Cost of a data breach report added substance to those fears, stating that “AI adoption is greatly outpacing AI security and governance” and noting that “AI is already an easy, high-value target”.

It argued: “Organisations are bypassing security and governance for AI in favour of do-it-now AI adoption. Ungoverned systems are more likely to be breached – and more costly when they are.”

So, what can vendors and channel partners do to ensure customers don’t put their organisations at risk in the rush to adopt AI? And are they guilty of endangering clients through the IT industry’s relentless – some might say reckless – promotion of the technology while neglecting the risks that come with it?

Trust is a must

Richard Eglon, chief marketing officer (CMO) at Nebula Global Services, acknowledges that AI’s reliance on data introduces significant risks.

“Without robust governance and security frameworks, AI can become a liability, exposing firms to regulatory breaches, reputational damage and ethical pitfalls. In this new era, trust is not a byproduct of innovation; it’s a prerequisite,” he says.

We wouldn’t hire a team of people to build or do something without getting all the requisite business practices in place. Implementing AI without all of those usual business practices leaves clear room for exploitable areas for threat actors
Lance Williams, Offerlogic

Eglon sees the current situation as a risk and opportunity for channel firms. “As trusted architects, they are uniquely positioned to guide end-user clients through secure AI adoption,” he argues. “However, doing so requires a shift from reactive support to proactive strategy.”

This means establishing clear AI usage policies, defining what data can be used, how it should be processed and who has access. Eglon stresses that security policies must be embedded from the outset. “Channel partners can assist clients in deploying secure architectures, implementing encryption, access controls and anomaly detection systems to protect sensitive data. They can also help clients monitor for insider threats, particularly those arising from unsanctioned AI use.”
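In concrete terms, a usage policy like that often starts with a check on what is allowed to leave the organisation at all. The short Python sketch below is illustrative only – the classification labels and the send_to_model client are hypothetical stand-ins, not drawn from any firm quoted here – but it shows the shape of a control that blocks restricted data from reaching an external AI service and records every attempt.

import logging

logger = logging.getLogger("ai_usage_policy")

# Hypothetical labels; a real policy would map to the organisation's own classification scheme.
ALLOWED_CLASSIFICATIONS = {"public", "internal"}

def submit_to_ai(document, send_to_model):
    """Send a document to an AI service only if its classification permits it."""
    classification = document.get("classification", "unclassified")
    if classification not in ALLOWED_CLASSIFICATIONS:
        logger.warning("Blocked AI submission: classification=%s", classification)
        return None  # Refuse by default: unlabelled data is treated as sensitive
    logger.info("AI submission allowed: classification=%s", classification)
    return send_to_model(document["text"])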

Lance Williams, founder and director at Offerlogic, describes AI as a rapid acceleration agent for data access and logical action. “We wouldn’t hire a team of people to build or do something without getting all the requisite business practices in place – background checks, references, contracts, business plan, product vision, secure access and working, etc. Implementing AI without all of those usual business practices leaves clear room for exploitable areas for threat actors.”

Innovate securely

There needs to be a focus on education and awareness about security by design, with clear guidelines and best practices for AI deployment. Partners should also help customers to establish roadmaps that incorporate security and governance from the outset.

It’s not a matter of blaming vendors or partners for the promotion of AI, but about “emphasising the need for a more balanced approach that prioritises security alongside innovation”, Williams argues, adding: “It’s time to stop acting like excited kids with a new fad and to return to a responsible, adult state.”

Camden Wooliven, group head of AI product marketing at GRC International Group, says the onus is on partners and vendors to stop just pushing AI and start helping customers to use it safely.

“Too many firms are wiring these tools into core systems without thinking about data access, APIs [application programming interfaces] or third-party risks. That means tightening contracts, monitoring usage properly, and giving customers the training to know what the tech can and can’t do,” she says.

Wooliven warns that their job “isn’t to fuel the rush, it’s to make sure adoption doesn’t backfire”.

Wooliven accuses parts of the industry of being reckless. “AI tools are being rolled out faster than the guardrails needed to secure them. Most companies don’t have mature AI security policies, yet employees are already using these tools daily, often without oversight,” she says.

It is vital, she argues, to treat AI systems like critical infrastructure from day one. “Know exactly what they’re plugged into, what data they touch and what actions they can take. Build proper access controls, log everything and have someone clearly responsible for keeping it in check,” she says.

She stresses that “the goal isn’t to slow down innovation but to make sure you’re not blindly introducing new risk while trying to move forward”.
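As a rough illustration of what “log everything” can look like in practice, the Python sketch below – with hypothetical function and field names rather than anything from the firms quoted here – wraps each AI-connected tool call so that what was invoked, with what arguments, and who is accountable for the integration all land in an audit trail.

import json
import time
from functools import wraps

AUDIT_LOG = "ai_audit.jsonl"  # Hypothetical destination; in practice this would feed a SIEM or log pipeline

def audited(tool_name, owner):
    """Record every invocation of an AI-connected tool, with a named owner."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            entry = {
                "timestamp": time.time(),
                "tool": tool_name,
                "owner": owner,                  # The person accountable for keeping this integration in check
                "arguments": repr(args)[:200],   # Truncated so full payloads are not written to the log
            }
            with open(AUDIT_LOG, "a") as fh:
                fh.write(json.dumps(entry) + "\n")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@audited(tool_name="crm_lookup", owner="security-team@example.com")
def crm_lookup(customer_id):
    ...  # The actual integration the AI agent is allowed to call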

Kelvin Lim, senior director, head of security engineering at Black Duck, says AI agents are “changing how cyber security works, with both cyber criminals and security teams using these advanced tools”.

But while security professionals are using AI agents, they are “not yet fully reliable, as they can make errors, overlook important details, and sometimes behave unpredictably, especially when they operate without close oversight”.

He stresses that AI agents “need controls”. If given too much freedom, they might “accidentally open up new security gaps or worsen existing problems”. To counter this threat, organisations should look at giving AI agents “only the minimal access required for their tasks, restricting their reach across sensitive parts of the network”. They should also “require human review for high-stakes decisions or actions initiated by AI agents, keeping people actively involved in oversight”.
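Lim’s two controls – minimal access and human review for high-stakes actions – reduce to a simple pattern. The Python sketch below is an illustration under assumed names (the tool allowlist, risk tiers and approval step are all hypothetical), not a reference implementation of any product mentioned in this article.

# Least privilege: the agent can only ever call tools on this allowlist.
AGENT_TOOL_ALLOWLIST = {"search_kb", "draft_email"}

# Anything listed here is treated as high-stakes and needs a human sign-off.
HIGH_RISK_ACTIONS = {"draft_email"}

def require_human_approval(action, payload):
    """Placeholder for a real approval workflow (ticket, chat prompt, change board)."""
    answer = input(f"Approve agent action '{action}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute_agent_action(action, payload, tools):
    if action not in AGENT_TOOL_ALLOWLIST:
        raise PermissionError(f"Agent is not permitted to call '{action}'")
    if action in HIGH_RISK_ACTIONS and not require_human_approval(action, payload):
        return "Action declined by reviewer"
    return tools[action](payload)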

Dangerous imbalance

Kevin Curran, IEEE senior member and professor of cyber security at Ulster University, believes that by prioritising speed over governance, many organisations risk undermining the very benefits they are hoping for. “Without appropriate security and oversight, AI systems are more likely to be breached and the consequences far more costly both financially and reputationally,” he says.

While AI is enhancing defences, over-reliance is becoming a problem. “We must remember that human judgement, strong access controls and staff training are critical. Vendors and channel partners need to make sure that adoption doesn’t bypass these fundamental safeguards,” adds Curran. “Their role should go beyond initial promotion to providing ongoing guidance that helps customers establish AI governance frameworks and align with industry standards.”

AI tools are being rolled out faster than the guardrails needed to secure them. Most companies don’t have mature AI security policies, yet employees are already using these tools daily, often without oversight
Camden Wooliven, GRC International Group

Laura Ellis, vice-president of data and AI at Rapid7, says vendors and partners need to do more than push AI. “They need to help customers keep it under control. That means visibility into how AI is being used, clarity on the risks in their business and the ability to act fast. Kick off a workflow. Lock down access. Remove risky assets. If customers can see it, understand it and respond, they can adopt AI without putting the whole organisation at risk,” she says.

Ellis warns that too much of the industry is selling AI as a “limitless opportunity” while ignoring the risks. “That imbalance is dangerous. AI is still emergent and unpredictable. It can take unintended actions or be manipulated into harmful ones. Without oversight and governance, customers are exposed. Vendors chasing deals without guardrails are putting their own clients in harm’s way,” she adds.

Ellis focuses on the importance of governance, observability and oversight. “Governance sets the rules for what is allowed and who has access. Observability shows what is really happening, not just what should be happening. Proactive defences and human oversight keep AI in check. As I see it, governance is the guardrail, observability is the rear-view mirror, and oversight is the human at the wheel,” she says.

Like others, she highlights that the human side matters as well. “Employees need advanced AI skills to innovate while managing risk,” Ellis notes. “Training and hands-on experience build confidence and awareness. Pair that with strong oversight, and organisations can reap the benefits of AI without sacrificing security or trust.”

Jeff Schwartzentruber, senior machine learning scientist at eSentire, makes an interesting point about the risks associated with AI, pointing out that they would affect companies even if they weren’t implementing the technology themselves. “We’re seeing hyper-personalised phishing and deepfake social engineering at scale; automated reconnaissance for exploitation of vulnerabilities; and malware that mutates on demand,” he says.

Schwartzentruber adds that because of the hype around AI, teams that are not ahead of things will potentially face “shadow AI” deployments with unsanctioned employee use of AI tools. “AI can also have opaque third-party model supply chains, leaving the organisation flying blind around their actual risks,” he says.

Security teams may not want to stand in the way of the business, preferring simply to have a secure-by-design mindset in place. “But when those developing around AI are moving so fast, they are moving ahead without all the necessary guardrails in place,” he adds.

Partner potential

The value partners provide can lie in helping to deliver AI deployments that follow a secure-by-design approach, giving the business speed without losing sight of safety.

Customers should be able to “adopt AI at pace, while knowing exactly what is in use, where data flows, and how incidents would be contained”, according to Schwartzentruber.

Ian Ashworth, senior director of partners and alliances EMEA at Qualys, says the current perception that companies “need to invest in AI now, and ask security questions later” doesn’t mean they should ignore security.

“There are steps around the infrastructure and architecture that companies should take to secure their systems, and they should not be ignored in the rush to keep up with demand from the business,” he says.

Vendors and partners should demonstrate how AI acts as a decision support technology rather than pushing it as a cure-all for every security challenge
Alex Glass, Expel

Partners should start by ensuring customers have the infrastructure security basics completed – access control, patching and network security should all be in place and enforced as standard. “This is particularly important when you have agentic AI systems that might include multiple AI models all working as part of one workflow, and where you will find it harder to get that consistency of approach in place,” he adds.
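One way to read that consistency point is as a single baseline applied to every model in the chain, rather than per-tool exceptions. The fragment below is a hypothetical Python illustration: a workflow definition is checked against one shared set of controls before any step of it is allowed to run.

# A shared baseline every step in an agentic workflow must meet.
BASELINE = {"auth_required": True, "logging_enabled": True, "network_segment": "ai-restricted"}

# Example workflow with two model-backed steps; the second has drifted from the baseline.
workflow = [
    {"step": "classifier", "auth_required": True, "logging_enabled": True, "network_segment": "ai-restricted"},
    {"step": "summariser", "auth_required": True, "logging_enabled": False, "network_segment": "ai-restricted"},
]

def check_workflow(steps, baseline):
    """Return the steps that fall short of the shared baseline, and how."""
    failures = []
    for step in steps:
        gaps = {key: value for key, value in baseline.items() if step.get(key) != value}
        if gaps:
            failures.append((step["step"], gaps))
    return failures

for name, gaps in check_workflow(workflow, BASELINE):
    print(f"Step '{name}' does not meet baseline: {gaps}")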

Dan Jones, senior security advisor at Tanium, says organisations rushing to adopt AI without guardrails create blind spots that attackers can – and will – exploit. “Security needs to keep pace with innovation,” he says, adding that channel partners can play a critical role in preparing customers “by helping leverage automation capabilities from existing and new vendors”.

He acknowledges that the hype around AI “is creating familiar problems: speed without safeguards”, adding: “We saw this before with cloud and mobile, where innovation raced ahead of security and created costly blind spots.”

Jones believes that AI presents a chance to reset: “The implementation of AI should force businesses to rethink how they secure data, endpoints and users. Done right, you avoid risk and build stronger, more adaptive defences.”

Apply patience

Alex Glass, global head of channels and alliances at Expel, stresses the value of patience in ensuring that customers don’t rush to adopt AI purely for the sake of it. “Instead of encouraging customers to run full-throttle into an AI-dominated future, vendors and channel partners must encourage patience and balance. Gradually and deliberately rolling out AI initiatives alongside human-led checks and updates will ensure any new automated systems are not left exposed as the threat landscape shifts on its axis.”

He accepts that it can be easy to get carried away with shiny, new tools. “AI is no different, with vendors and partners promoting new AI capabilities that may risk prioritising hype over safety. This headlong rush could endanger customers if the technology is adopted without consideration for integration and human oversight. In other words, encouraging customers to hop on the AI hype train could leave vendors and partners red-faced, exposing them to uncharted vulnerabilities,” says Glass.

The danger isn’t from AI itself, but from the careless application of it. “Vendors and partners should demonstrate how AI acts as a decision support technology rather than pushing it as a cure-all for every security challenge,” he adds.

Patricia Murphy, vice-president of EMEA and LatAm ecosystems, strategic alliances and channel at Palo Alto Networks, accepts there are some “vendors and partners out there who are hastily pushing out AI offerings without an adequate focus on security, which is clearly dangerous for customers”.

“This could cause some AI-related risks to go under the radar, like data poisoning, model theft and prompt injection. Shadow AI also adds another layer of concern, as unsanctioned tools bypass security oversight and expose sensitive data,” adds Murphy.

She warns that “the dual nature of AI – its ability to both strengthen cyber security and create new vulnerabilities – requires careful consideration”.

“While the technology can drive competitive advantage, poorly implemented AI can lower the barrier for successful breaches,” says Murphy.

Too often, we see a ‘just switch it on’ approach, where security and assurance get sidelined in favour of speed. The result is ungoverned systems which are harder to protect and more costly to fix when something goes wrong
Fred Tromp, UBDS Digital

The answer is to “balance speed with safety in AI adoption by embedding ethical considerations and compliance into existing structures and conducting impact assessments … aligning AI governance with current policies such as data privacy, information security and ethical conduct”.

Fred Tromp, chief security architect at UBDS Digital, believes partners and vendors need to accept that every new AI tool increases the attack surface, and if governance is skipped, sensitive data can quickly end up in the wrong place.

“Too often, we see a ‘just switch it on’ approach, where security and assurance get sidelined in favour of speed. The result is ungoverned systems which are harder to protect and more costly to fix when something goes wrong,” he says.

Tromp notes that the IT industry has been very good at celebrating what AI can do, but sometimes too quiet about the risks. “The answer isn’t to hold back adoption, but to make sure it’s done with security and governance in mind from the start. The businesses that will get the most out of AI are those that embrace it with both enthusiasm and care,” he says.

Christina Decker, director of strategic channels, Europe, at Trend Micro, says that while vendors and partners have a responsibility to drive innovation and make new technologies accessible to their customers, they are also obliged to clearly address the associated risks and put responsible AI adoption at the centre.

“Those who focus only on speed and competitive advantages without embedding security and governance act short-sightedly,” she says, adding that vendors and partners must operate with transparency and provide customers with the right guardrails. “Innovation should never be pushed blindly – it always needs a safety net,” she concludes.

Danny Jenkins, CEO of ThreatLocker, observes that in the modern world, “speed is everything – we want it all and we want it now”.

“But when it comes to AI adoption, moving too fast creates real dangers, especially when handling sensitive data,” he adds. “People need to slow down and apply the same security principles we know that work, rather than being blinded by the shiny novelty of AI.”

Jenkins believes vendors and partners can be guilty of promoting AI at the expense of the risks that come with it: “The rush to implement AI to be utilised as a differentiator means the risks are often downplayed or ignored altogether. Speed to market trumps security, and customers pay the price later.”

Jenkins is clear that vendors and partners have a duty of care to their customers. “That means being honest about risk and ensuring AI adoption is done safely, with suitable guidelines in place. Overselling the promise while glossing over the pitfalls isn’t just irresponsible – it creates the same conditions that made the 1990s a golden age for hackers,” he says.
