Internet of Things - Architectures of Jelly


In today's world of acronyms and jargon, there are increasing references to the Internet of Things (IoT), machine-to-machine (M2M) or a 'steel collar' workforce. It doesn't really matter what you call it, as long as you recognise it's going to be BIG. That is certainly the way the hype is heading - billions of connected devices all generating information - no wonder some call it 'big data', although volume is really only part of the equation.


Little wonder that everyone wants to be involved in this latest digital gold rush, but let's look a little closer at what 'big' really means.


Commercially it means low margins. The first wave of mobile connectivity - mobile email, delivered to a device like a BlackBerry and typically carried by a 'pink collar' executive (so called because they bought their stripy shirts at Thomas Pink in London or New York) - was high margin and simple. Mobilising white-collar knowledge workers with their Office tools was the next surge, followed by mobilising the mass processes and tasks that support blue-collar workers.


With each wave, volumes rise, but so too do the challenges of scale - integration, security and reliability - whilst the technology commoditises and the margins fall. Steel collar will only push this trend further.


Ok, but the opportunity is BIG, so what is the problem?


The problem is right there in the word 'big'. IoT applications need to scale - sometimes preposterously - so much so that many of the application architectures that are currently in place or being developed are not adequately taking this into account.


Does this mean the current crop of IoT/M2M platforms are inadequate?


Not really; the design fault is generally not in the platforms themselves, but further up, in the application architectures. IoT/M2M platforms are designed to support the management and deployment of huge numbers of devices, with cloud, billing and other services that support mass rollouts, especially for service providers.


Reliably scaling the data capture and its usage is the real challenge, and if or when it goes wrong, "Garbage in, Garbage out" (GiGo) will be the least of all concerns.


Several 'V's are mentioned when referring to big data: volume, of course, is top of mind (some think that's why it's called 'big' data), generally followed by velocity for the real-timeliness and trends, then variety for the different forms of data and media that will be mashed together. Sneaking along in last-but-one place is the one most often forgotten - veracity - without which the final 'V', value, is lost altogether. The data has to be accurate, correct and complete.


When scaling to massive numbers of chattering devices, poor architectural design will mean that messages are lost, packets are dropped and the resulting data may end up not quite right.
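
To make that concrete, here is a minimal, purely illustrative sketch in Python (the names - Reading, GapAwareIngest and so on - are assumptions for the example, not taken from any particular product) of one way an ingest layer can at least notice the loss: each device stamps its readings with a per-device sequence number, and the receiving side records gaps and rejects duplicates rather than pretending the stream is complete.

    # Illustrative sketch: sequence numbers make lost and duplicated messages
    # visible at the point of ingest, instead of silently becoming
    # "not quite right" data further downstream.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        device_id: str
        seq: int          # per-device sequence number set by the device
        value: float

    class GapAwareIngest:
        def __init__(self):
            self.last_seq = {}   # device_id -> last sequence number seen
            self.gaps = []       # (device_id, first_missing, last_missing)

        def accept(self, reading: Reading) -> bool:
            last = self.last_seq.get(reading.device_id)
            if last is not None:
                if reading.seq <= last:
                    return False   # duplicate or replayed message: ignore it
                if reading.seq > last + 1:
                    # messages were lost in transit: record the gap so that
                    # downstream consumers know the record is incomplete
                    self.gaps.append((reading.device_id, last + 1, reading.seq - 1))
            self.last_seq[reading.device_id] = reading.seq
            return True

    ingest = GapAwareIngest()
    for r in (Reading("meter-1", 1, 10.2), Reading("meter-1", 2, 10.4),
              Reading("meter-1", 5, 11.0)):   # seq 3 and 4 never arrived
        ingest.accept(r)
    print(ingest.gaps)   # [('meter-1', 3, 4)]

The point is not the handful of lines of code, but that the loss is detected and recorded at all; an architecture that cannot say where its gaps are cannot honestly claim veracity.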


Ok, so my fitness band lost a few bytes of data, big deal, even if a day is lost, right? Or my car tracking system skipped a few miles of road - what's the problem?


It really depends on the application, how it was architected and how it deals with exceptions and loss. This is not even a new problem in the world of connected things - supervisory control and data acquisition (SCADA) systems have been grappling with it since well before the internet and its things.


The recent example of problem data from misaligned electro-mechanical electricity meters in the UK shows just how easily this can happen, and how quickly the numbers can get out of hand. Tens of thousands of precision instruments had inaccurate clocks, but consumers and suppliers alike thought they were fine, until a retired engineer discovered a fault in his own home - a discovery that revealed thousands of people had been overcharged for their electricity.


And here is the problem: it's digital now and therefore perceived to be better. Companies think the data is fine, so they extrapolate from it and base decisions on it - and in the massively connected world of IoT, so perhaps does everyone else. The perception of reality overpowers the actual reality.


How long ago did your data become unreliable? Do you know? Did you check? Who else has made decisions based on it? The challenge of car manufacturers recalling vehicles will seem tiny compared to the need for terabyte recalls.


Most are rightly concerned about the vulnerability of data on the internet of people, and how that will become an even bigger problem with the internet of things. That aside, however, there is a pressing need to get application developers thinking about resilient, scalable and error-correcting architectures; otherwise the IoT revolution could have collars of lead, not steel, and its big data could turn out to be really big GiGo.
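
What might 'error-correcting' look like in practice? As a purely illustrative sketch (the field names, units and limits below are assumptions for the example, not taken from any real deployment), a basic 'veracity gate' range- and completeness-checks every reading on arrival and quarantines anything suspect, rather than letting it flow silently into the figures that decisions are later based on.

    # Illustrative sketch: quarantine implausible or incomplete readings at
    # ingest so they cannot quietly contaminate downstream aggregates.
    from datetime import datetime, timezone

    VALID_RANGE_KWH = (0.0, 100.0)   # assumed plausible range per interval

    def validate(reading: dict) -> list:
        """Return the reasons a reading looks suspect (empty list if it looks ok)."""
        problems = []
        for field in ("device_id", "timestamp", "kwh"):
            if field not in reading:
                problems.append("missing field: " + field)
        if "kwh" in reading and not VALID_RANGE_KWH[0] <= reading["kwh"] <= VALID_RANGE_KWH[1]:
            problems.append("kwh out of plausible range: " + str(reading["kwh"]))
        if "timestamp" in reading and reading["timestamp"] > datetime.now(timezone.utc):
            problems.append("timestamp in the future (device clock adrift?)")
        return problems

    def ingest(reading: dict, good: list, quarantine: list) -> None:
        problems = validate(reading)
        (quarantine if problems else good).append((reading, problems))

    good, quarantine = [], []
    ingest({"device_id": "meter-7", "kwh": 3.2,
            "timestamp": datetime(2014, 4, 17, tzinfo=timezone.utc)}, good, quarantine)
    ingest({"device_id": "meter-7", "kwh": 950.0}, good, quarantine)  # no timestamp, implausible value
    print(len(good), len(quarantine))   # 1 1

None of this is difficult, but it has to be designed in from the start; retrofitting veracity onto data that is already suspect is the terabyte recall described above.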



