
Why IoT grows in agriculture but needs tonic for healthcare

Deployment environments matter far more than technological sophistication when it comes to IoT success. That includes scaling costs, on-the-ground network realities and the difficulty of integrating new technology into legacy systems

According to research from connectivity firm Eseye, internet of things (IoT) technology deployment has seen significant uptake in the past few years, with implementations at mid-sized companies in particular rising from 51% in 2021 to 76% in 2025. Furthermore, according to management consultancy McKinsey, IoT technology is expected to generate around $5.5tn to $12.6tn of global economic value by 2030.

However, growth across sectors has not been uniform. Agricultural IoT deployments, for example, have scaled across millions of acres, using low-cost sensors to cut chemical use, improve irrigation and boost yields despite patchy connectivity in remote areas. In contrast, similar deployments in healthcare and home care have often struggled to scale beyond pilots, slowed by stringent security requirements and integration challenges.

“IoT scales in agriculture and logistics because those environments can usually absorb some delay, packet loss and partial visibility. A soil sensor can miss a reading and the farm still functions. A pallet tracker can reconnect later and the shipment still arrives,” says Leid Zejnilovic, co-academic director at Nova SBE’s Digital Data Design Institute.

“Healthcare is different because the cost of a bad assumption is not inconvenience but harm. The hard problem in healthcare is not connectivity alone, but trustworthy operation inside a safety-critical workflow.”

Where IoT works 

One of the sectors where IoT has scaled most effectively is agriculture, transforming traditional farming into smart farming. Sensors offer real-time data on temperature, soil moisture and nutrient levels, allowing hyper-targeted input applications of water and fertilisers.

Advanced data analytics predict plant disease and harvest quality through crop growth monitoring, whereas automation and robotics enable self-driving machinery to plough, plant and harvest autonomously. Connected wearables such as smart collars and ear tags enable remote health monitoring, enhancing productivity and decreasing animal losses.

The main reason IoT works so well in agriculture is the environment rather than the technology. Agricultural systems have low bandwidth requirements and can tolerate intermittent connectivity, packet loss and latency without major consequences. 

“Agriculture and logistics are forgiving environments. If a soil sensor misses a reading for 15 minutes, the crop doesn’t die. If a fleet tracker drops a signal in a tunnel, the driver is still driving. It’s not that there’s no business impact, but in most cases, slight failures can be absorbed,” says Pratik Mistry, executive vice-president of technology consulting at custom software development company Radixweb.

This makes agriculture well-suited to low-power wide-area network (LPWAN) technologies such as NB-IoT and LoRaWAN, which provide long battery life, wide coverage and lower costs, even in rural areas with patchy connectivity. 

Similarly forgiving conditions are seen in supply chains and logistics, where IoT is used to monitor storage conditions and track shipments, and where systems can still function efficiently despite incomplete or delayed data. As such, IoT works best in settings where connectivity does not need to be continuous or fully reliable. 

Where IoT breaks down 

Yet despite IoT’s progress in various sectors, the technology still struggles to scale in some areas, especially healthcare. Even though IoT has significantly helped to advance remote patient monitoring, many deployments have not scaled beyond pilots, especially in care homes. This is mainly because healthcare environments work under zero-tolerance conditions.

“When you’re monitoring a patient’s cardiac rhythm or managing a connected insulin pump, the tolerance for failure just disappears. It doesn’t shrink, it just isn’t there,” Mistry points out. “When that happens, all the assumptions you built your system on – that the network is stable, that latency is acceptable, that the device will behave the same way in ward four as it did in your controlled pilot – get stress-tested in ways they never were in a greenhouse.”

High security and compliance requirements further complicate deployments in environments with sensitive patient data, as does the risk of cyber attacks. 

“Many IoT and OT devices in healthcare are not secured to the same standard as mainstream IT systems. They often have longer lifecycles, weaker patching routines, limited update capability and poor visibility once deployed,” adds Martin Butler, professor of digital transformation at Vlerick Business School. “This creates a serious problem in healthcare. A compromised device can expose sensitive patient data, disrupt care processes, or create a route into wider clinical systems.”

Growing healthcare deployments across entire regions or trusts involves high costs and operational roadblocks. This makes funding harder to obtain, especially when return on investment (ROI) cannot be easily quantified in clinical terms.

Integrating new sensors with legacy electronic patient record systems further adds to costs and holds back deployment cycles. When pilots do show early promise, healthcare staff may resist IoT systems that introduce extra steps into workflows, reducing day-to-day clinical adoption.

At the organisational level, a lack of clear outcome ownership constrains pilot scaling, which cannot roll out fully without procurement alignment, governance structures and accountability. Projects are thus left without actionable roadmaps and stumble between experimentation and production. 

Connectivity choices: the real make or break 

Connectivity choices are often underestimated in IoT deployments, despite being a key cause of scaling failures, meaning many deployments are set up incorrectly from the outset.

“A large share of failure does come from unrealistic assumptions about connectivity, latency and reliability. Many projects are designed around average conditions, when high-stakes environments are governed by exceptions,” says Zejnilovic. “If the workflow needs deterministic behaviour, ‘usually connected’ and ‘about right’ is not enough. That is why successful deployments push more resilience to the edge through local decision-making, store-and-forward logic and explicit fallback paths.”
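The store-and-forward logic Zejnilovic mentions can be sketched in a few lines. The following is an illustrative sketch, not a production design; the `StoreAndForwardSensor` class and its `send` callback are hypothetical names:

```python
import collections

class StoreAndForwardSensor:
    """Buffers readings locally when the uplink is down, drains when it recovers."""

    def __init__(self, send, max_buffer=1000):
        self.send = send  # callable(reading) -> True if the upload succeeded
        # Bounded queue: when full, the oldest readings are dropped first
        self.buffer = collections.deque(maxlen=max_buffer)

    def record(self, reading):
        # Queue first, then try to drain, so a reading taken during an
        # outage is never lost; it is simply delivered late.
        self.buffer.append(reading)
        self.flush()

    def flush(self):
        # Drain oldest-first; stop at the first failed upload and retry later
        while self.buffer:
            if not self.send(self.buffer[0]):
                break
            self.buffer.popleft()
```

Dropping the oldest readings when the buffer fills is exactly the kind of trade-off the forgiving environments described above can absorb; a safety-critical workflow would need a very different fallback policy.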

Different connectivity types are optimised for different use cases, which can lead to struggles if applied elsewhere. A key mistake that enterprises often make in connectivity choices is choosing what is available, rather than what the use case requires, which can cause higher maintenance, weaker data quality and shorter battery life at scale.

LPWAN technologies work best for low-data, low-power deployments and long-lasting battery-operated devices, such as in smart cities or agriculture. However, high latency and limited payloads make them unsuitable in environments that need high-volume or timely data, such as critical care.

By contrast, cellular technology provides mobility and scale, ideal for fleet management and consumer wearables, but is very expensive and complex to manage at scale. Security concerns also make it unsuitable for highly regulated environments dealing with sensitive data such as healthcare or national industrial systems. Wi-Fi can be relatively cost-effective, especially for smart homes, but risks instability when deployed across distributed or dense environments such as city-wide applications. 

“Choosing the right connectivity is critical, as poor decisions can lead to exponential deployment costs, typically from underestimating the project’s full scale,” says Iker Mayordomo, solutions consultant at Zebra Technologies. “To avoid this, you must answer key questions from the start: How many assets need tracking? What level of accuracy is required? Is point-in-time location sufficient, or is real-time visibility necessary? How frequently is position data needed?”
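Mayordomo’s questions can be framed as a simple requirements filter. The sketch below is illustrative only: the `PROFILES` figures are rough orders of magnitude chosen for the example, not vendor specifications:

```python
# Illustrative figures only: rough orders of magnitude, not vendor specs.
PROFILES = {
    "lpwan":    {"max_payload_bytes": 240,       "typical_latency_s": 10.0, "battery_years": 10.0},
    "cellular": {"max_payload_bytes": 100_000,   "typical_latency_s": 0.2,  "battery_years": 1.0},
    "wifi":     {"max_payload_bytes": 1_000_000, "typical_latency_s": 0.05, "battery_years": 0.5},
}

def viable_options(payload_bytes, max_latency_s, min_battery_years):
    """Filter connectivity profiles against what the use case actually requires."""
    return [
        name for name, p in PROFILES.items()
        if p["max_payload_bytes"] >= payload_bytes
        and p["typical_latency_s"] <= max_latency_s
        and p["battery_years"] >= min_battery_years
    ]
```

Under these assumed figures, a soil-moisture sensor (tiny payloads, relaxed latency, years on a battery) matches LPWAN, while a continuous patient monitor rules it out immediately. Choosing “what is available” skips this filtering step entirely.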

Why so many IoT deployments fail to scale beyond pilots

Over 70% of IoT deployments never grow beyond the pilot stage, despite showing promise, according to IoT solutions company Metadesk Global. This is mainly due to the “pristine environment” fallacy, as pilots run in ideal conditions that often do not reflect practical deployment conditions. 

“Pilots are almost designed to succeed. You’ve got a controlled environment, a motivated team, the supplier is on-site, leadership is paying attention, everything is freshly configured. Your network, data governance, even device lifecycle management is all for 50 devices, 50 people and 50-minute tests – times it by 10 and your system comes crashing down,” says Mistry.

The key reason for this pilot purgatory is that enterprises still treat IoT pilots as technical experiments rather than strategic operational initiatives, leaving them underprepared for sharp scaling costs and challenges.

“Many IoT projects stall after pilot because the pilot proves possibility, not operational scale. It can hide manual provisioning, extra engineering attention and a narrow estate,” says Zejnilovic. “The real roll-out exposes simple things like battery replacement cycles, or firmware updates, to more complex socio-technical challenges like support ownership, dead zones, identity management and integration with legacy systems. That is where organisations discover they funded a demo rather than an operating model.”

Similarly, total cost of ownership (TCO) is underestimated, as long-term maintenance, data storage and support costs quickly add up. This results in funding and resource mismatches, inviting more C-suite hesitancy. The labour and resource cost of device management and security at scale is a challenge too, with thousands of devices needing provisioning, firmware updates and security patching. Keeping track of device battery lifecycles adds another layer of complexity. 
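How those recurring costs compound can be seen in a back-of-envelope TCO model. Every figure in the sketch below is a hypothetical assumption for illustration; the point is the shape of the curve, not the numbers:

```python
def fleet_tco(devices, years, hw_cost, conn_per_year,
              support_per_year, battery_cost, battery_life_years):
    """Back-of-envelope total cost of ownership for an IoT fleet.

    All inputs are assumptions supplied by the caller.
    """
    per_device = (
        hw_cost
        + years * (conn_per_year + support_per_year)    # recurring costs dominate
        + (years // battery_life_years) * battery_cost  # scheduled battery swaps
    )
    return devices * per_device
```

With assumed figures of a $40 device, $12 a year connectivity, $8 a year support and a $5 battery every two years, a 50-device pilot costs $7,500 over five years, while a 5,000-device roll-out costs $750,000: the same per-device economics, but a budget two orders of magnitude larger than the pilot suggested, and most of it recurring rather than hardware.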

What enterprises get wrong about IoT

Despite IoT’s widespread uptake in recent years, organisations still consistently misunderstand what drives success. Misconceptions about IoT’s strategic value are the biggest issue: many enterprises assume the technology will directly improve efficiency and deliver predictable ROI and cost savings at scale. 

This feeds the expectation that a successful pilot in one tightly controlled environment will scale effortlessly across others. In reality, IoT’s value is highly dependent on deployment context. Returns often appear only in specific, narrow environments, while scaling unveils considerable operational and cost factors that were invisible at the pilot stage.

Many organisations also make flawed architectural and design assumptions, chiefly by treating IoT as a one-time modular hardware roll-out that can simply be plugged into existing infrastructure, rather than as a continuously managed system. In doing so, enterprises also tend to defer legacy-system integration problems to be solved later. 

However, legacy systems are brittle, not modular, and integration costs and hurdles that are ignored early compound at scale. This has significant consequences where systems must support continuous data flows, interoperability across environments and device management. 

Enterprises also widely assume that connectivity will be stable enough everywhere – however, practical variations such as density, buildings and geography can fragment connectivity. 

“The assumption I see most often is that because a hospital has Wi-Fi everywhere, connectivity is solved, which is understandable. The coverage maps look great and the signal strength looks fine,” says Mistry. “But coverage and reliability are completely different things. Imaging equipment, patient monitors, visitor phones and staff tablets all compete on the same infrastructure. IoT devices, especially continuous biometric streaming ones, tend to be the lowest-priority traffic on that network. When things get congested, they get throttled first.”

Organisations also often overestimate what pilot success proves, buoyed by confidence from artificially controlled wins. However, in practical environments, system dependencies, cost pressures and variability change outcomes significantly. As a result, enterprises risk building growth strategies based on limited evidence and treating systemic problems as edge cases, which allows confidence to grow faster than capability. 

The key decider: environments, not technology

Deployment conditions matter far more than sophisticated technology when it comes to IoT success, especially as the technology continues to be applied in a variety of industries globally.

“Some applications scale well because the data is useful, the costs are manageable and the system can be integrated into everyday operations,” says Butler. “Others struggle, not because the sensors are inadequate, but because governance, integration, liability, reliability or operating costs make scale difficult for a specific use case.”

As such, organisations that design for practical network realities rather than perfect conditions will be best placed to scale IoT deployments successfully.
