Smaller, safer AI models may be key to unlocking business value

While AI presents a significant opportunity to further the way we do business, what if it’s time to consider a new direction? What if the safest and most effective path for AI isn’t to go larger, but smaller instead?

Artificial intelligence (AI) hasn’t just hit the big leagues; it is the big leagues. Over the course of 2025, AI was embedded into every workflow, leveraged across IT operations, and relied upon to build out a wide variety of content touching every corner of the web. In essence, large language models (LLMs) have taken centre stage, with businesses investing heavily in them in pursuit of autonomous gains.

Users, for the most part, have grown more wary of both the power and the limitations of AI. Organisations, meanwhile, continue to grapple with the challenge of managing it while struggling to elicit meaningful gains; shadow AI and vibe coding are just two of the more precarious trends to surface in the last few years, threatening data leakage and wider software supply chain issues when adopted en masse.

However, the greatest challenge with autonomous, generalist AI lies within. If not properly managed or configured, these systems’ broad remit makes them more likely to overreach, make critical errors and then defend those errors, all while adding complexity to governance.

The large AI problem

Large models, such as general-purpose LLMs, aren’t specialists; they generalise. They link disparate data points together to provide answers, sifting through vast datasets to do so. While broad knowledge is helpful in many areas, including research and content generation, it also leaves far more room for error. Hallucinations from these tools are common and often baffling. While these errors might be trivial in day-to-day life, they have the potential to create nightmarish scenarios once the tools are integrated into broader business workflows.

Faulty AI can have repercussions beyond inaccuracy. A recent survey found that 80% of firms have seen AI agents take rogue actions, including accessing unauthorised systems or resources, or undermining IT systems.

Additionally, large AI models are resource-intensive (and more costly as a result). They demand significant compute power, integration layers, and data pipelines to function. These dependencies can be inefficient and can obscure what data is being accessed, shared, or exposed. As new threats and AI-driven exploits emerge, these blind spots have the potential to evolve into hostile attack vectors. In short, the more power we give all-access AI, the more risk organisations inadvertently inherit.

Specific models for specific challenges

The surest way to make AI safer and more effective is to make it smaller. Task-specific AI models operate within tightly defined boundaries, performing one function exceptionally well rather than attempting to handle everything at once. That narrow focus makes them easier to secure and manage: access rights are limited, data exposure is reduced, and behaviour is more predictable as a result.  

These smaller models can be more easily audited, governed, and isolated, aligning with zero-trust security principles. They are also faster to deploy in controlled environments, meaning IT teams can maintain oversight of them easily while reaping the productivity benefits of automation.  
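
As a rough illustration of the idea (a minimal, hypothetical sketch rather than any particular vendor’s implementation; every name in it is invented), a task-specific model can sit behind a deny-by-default allowlist, with every request logged for audit:

```python
# Hypothetical sketch: a single-purpose model behind a deny-by-default
# allowlist, with every request logged for audit. All names are illustrative.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

@dataclass
class ScopedModel:
    """Wraps one narrow-task model behind an explicit action allowlist."""
    name: str
    allowed_actions: set[str] = field(default_factory=set)

    def request(self, action: str, payload: str) -> str:
        # Every call is logged, so behaviour can be audited later.
        audit_log.info("%s requested action=%s", self.name, action)
        if action not in self.allowed_actions:
            # Fail closed: anything outside the task boundary is refused
            # and recorded, rather than silently escalating.
            audit_log.warning("%s denied action=%s", self.name, action)
            raise PermissionError(f"'{action}' is outside {self.name}'s scope")
        # Stand-in for the underlying small-model call.
        return f"{self.name} handled {action}: {payload}"

# A ticket-triage model can classify tickets, and nothing else.
triage = ScopedModel("ticket-triage", allowed_actions={"classify_ticket"})
print(triage.request("classify_ticket", "VPN drops every hour"))
# triage.request("delete_user", "jsmith") would raise PermissionError.
```

Deny-by-default scoping of this kind is what makes the behaviour predictable: the audit trail records what was asked, and the allowlist bounds what can actually happen.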

In regulated sectors such as healthcare, finance, or government, visibility and containment are invaluable. Instead of giving an all-knowing model the “keys to the kingdom,” smaller AI systems act as expert assistants. They can offer accurate, auditable insights while keeping humans firmly in the loop, and more importantly, in control.
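
The same principle can be expressed in code. In this hypothetical sketch (the function names are invented for illustration), the model only ever proposes an action; nothing executes without an explicit, recorded human sign-off:

```python
# Hypothetical sketch of a human-in-the-loop gate: the model proposes,
# a person signs off before anything executes. All names are illustrative.
def propose_action(ticket: str) -> str:
    # Stand-in for a small, task-specific model's recommendation.
    return f"Restart VPN service for: {ticket}"

def handle(ticket: str, human_approved: bool) -> str:
    proposal = propose_action(ticket)
    if human_approved:
        # The action runs only with an explicit, recorded sign-off.
        return f"Executed with sign-off: {proposal}"
    # Without approval, the proposal is held for review, not acted on.
    return f"Held for review: {proposal}"

print(handle("VPN drops every hour", human_approved=False))
```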

Efficiency and security in tandem

Security and efficiency shouldn’t be opposing forces. With smaller AI models, both can be realised at once: where large models require constant tuning and extensive integration work, smaller models sidestep much of that cost and risk.

Because they focus on a single task, they deliver more consistent outcomes without the risks that arise from unpredictable leaps of logic. Their simplicity becomes an advantage: fewer assumptions, fewer permissions, a smaller margin of error, and, ultimately, fewer headaches for the IT teams charged with managing them.

Organisations can also chain small models together to automate workflows without creating a single point of failure. If something misfires, the impact is contained. That modularity gives IT teams the freedom to scale AI capabilities thoughtfully and intelligently, without exposing their organisation to unnecessary risks or incurring additional costs. 
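
A loose sketch of that modularity (with invented names and a deliberately failing step) might look like this: each small model handles one stage, and a misfire pauses that stage alone rather than cascading through the workflow:

```python
# Hypothetical sketch of chaining small, single-task models so a failure
# in one step is contained rather than cascading. Names are illustrative.
from typing import Callable

Step = Callable[[str], str]

def summarise(text: str) -> str:
    # Stand-in for a small summarisation model.
    return f"summary({text})"

def classify(summary: str) -> str:
    # Deliberately simulate a misfire in one narrow step.
    raise RuntimeError("classifier unavailable")

def route(label: str) -> str:
    # Stand-in for a small routing model.
    return f"routed to {label}"

def run_pipeline(steps: list[tuple[str, Step]], payload: str) -> str:
    for name, step in steps:
        try:
            payload = step(payload)
        except Exception as exc:
            # The blast radius is one step: pause here and flag for a
            # human, leaving the rest of the workflow untouched.
            return f"pipeline paused at '{name}': {exc}"
    return payload

print(run_pipeline(
    [("summarise", summarise), ("classify", classify), ("route", route)],
    "Ticket: VPN drops every hour",
))
# -> pipeline paused at 'classify': classifier unavailable
```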

2026 belongs to small AIs

In 2026, AI adoption will be defined by precision – and we’ll see organisations opt for smaller, more targeted AI use cases to fuel growth. Organisations need systems that are as transparent as they are capable, and smaller models naturally suit this demand. Plus, AI should be used as a lever for human productivity and decision-making, not a replacement for them.

As organisations continue to move towards more targeted AI deployments and smaller purpose-built use cases, we'll see more effective results across the board. In the long term, it’s the smaller wins that will lead to much larger leaps, and more intentional, AI-enabled gains. Not the other way around. 

Joel Carusone is senior vice president of data and AI at NinjaOne, a specialist in secure unified endpoint management.
