After a recent bad experience I decided to run a little experiment in my local supermarket. This was the outcome, and it has implications for the internet of things.

I got to the checkout and there were two people in the queue. It took me 16 minutes to get through and pay. The two people in front of me bought around 80 items and I bought around 30.

12 items failed to scan first time. Three had to be entered manually. One person used several vouchers, three of which failed to scan. A supervisor had to be called twice. By my reckoning, around five minutes of my queuing time was caused by failures of the technology, systems or processes.

That’s quite an overhead: five minutes of a 16-minute wait is around 30 per cent. Now, bar-coding technology is mature and has created many benefits. Last year was the 60th anniversary of its invention and 40 years since its major rollout began. I will try the same experiment in a different store soon to see what the overhead is there.

Personally, I still find the self-checkout systems to be more work than value.

Now, with the internet of things (IoT) we want to put a tag on everything from cars to domestic appliances to clothes and so on. The potential is enormous, of that I have no doubt.

However, if it is to create the social and economic benefits envisaged, what level of reliability will we need from the infrastructure, the applications and services, and the business processes for those benefits to be realised? And what kind of overheads will we find in practice, as opposed to lab-based theory?

The early Bluetooth gadgets were pretty unreliable in my experience. I had an earpiece for my mobile phone, but it was flaky enough that I gave up on it. Now Bluetooth works to a level where it is only noticeable on the rare occasions that it fails.

Consumer acceptance of IoT technologies could be severely damaged if the early experience is poor, which could set the field back or even kill off some useful applications and their potential.

Imagine a home with, say, 1,000 IoT-enabled objects. Now imagine my supermarket experience above repeated across them.
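
As a rough illustration, here is the back-of-the-envelope arithmetic, treating my checkout figures as a stand-in for device reliability. The 110-item total and the one-to-one extrapolation are assumptions for the sake of the sketch, not measured IoT data.

```python
# Back-of-the-envelope sketch: checkout failure rates applied to a
# hypothetical 1,000-object smart home. Purely illustrative.
items_in_queue = 110      # roughly 80 (the two people ahead) + my 30
first_time_failures = 12  # items that failed to scan first time

failure_rate = first_time_failures / items_in_queue
home_objects = 1_000

print(f"Observed first-time failure rate: {failure_rate:.0%}")
print(f"Objects misbehaving at that rate: {failure_rate * home_objects:.0f}")
```

More than a hundred misbehaving objects would be a very different proposition from one stubborn barcode at the till.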

There is a precedent that may hold a clue. It was in the late 90s, during the dotcom bubble, that I first heard of the 'home hub' and the 'battle for the living room'. The idea was that the big suppliers were fighting it out to sell us the one device that would be a TV, radio, game machine, internet...box. Well, it hasn’t happened in the way the advocates suggested.

In principle it sounds great, but what I don’t want are weekly software updates and hardware breakages. I prefer to have separate devices so that if one goes I don’t lose the lot.

Comparing it with fridges and other household devices gives me a clue as to why.

My fridge has run for five years without a single incident. My TV is 10 years old and has never failed. My digital camera is four years old and has not broken once. My central heating has needed one visit in three years.

So, for me, the dream of a home hub has to meet certain criteria to be acceptable:

  1. No more than one software update per year, or less than one hour of outage per upgrade.
  2. A mean time between failures, or forced restarts, of at least two years (a quick sketch of what this implies follows this list).
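
Here is that sketch, assuming criterion 2 applies per device and using the same hypothetical 1,000-object home as above; both figures are my own assumptions.

```python
# What a two-year per-device MTBF means across a hypothetical
# 1,000-object home. Both numbers are assumptions from the text.
home_objects = 1_000
mtbf_years = 2  # proposed mean time between failures per device

failures_per_year = home_objects / mtbf_years
print(f"Expected failures across the home per year: {failures_per_year:.0f}")
print(f"Roughly {failures_per_year / 365:.1f} failures per day")
```

Even a two-year MTBF per device leaves something in the house failing more than once a day once the object count climbs, which is why the per-device bar has to be set so high.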

For me, functionality only gets you part of the way. Before we put a tag or an address on everything, we need to look beyond the technology benefits to the consumer and citizen experience.

I’ve seen so many 'x of the future' presentations and demonstrations, be it cities or schools or cars, and too often watched early hopes go unfulfilled.

Can we get it right this time with the internet of things?