Internet marketeers appear to have only just discovered that half their audience “won’t even wait three seconds” for a website to load. I was, however, more interested to learn that average response times are now 4.5 seconds and growing. Back in 1971, I was taught that an average response time of over 4 seconds was unacceptable for what were then called “on-line transaction processing systems”. The users would get pissed off and rebellious. If my system could not respond regularly within 2–4 seconds I had to disguise the delay, e.g. by overlaying requests for more information. So why are we going backwards on one of the key metrics for user satisfaction?
Excuses of complexity will not wash. Yes, I was only putting a couple of dozen VDUs on a mainframe which could not handle more than half a dozen “apps” at the same time. But technology has supposedly moved on in 45 years – even if user expectations regarding acceptable response time have not! In the 1980s, when I was running the NCC Microsystems Centre, our yardstick for testing response times and reliability, including for pre-Internet on-line systems, was “If this was a life support system, how many times a day would you be dead?”. Over the next twenty years both power and reliability improved dramatically. A decade later I could use the measure of “How many times a month or year …”
Over the past couple of years we have indeed been going backwards
The same BBC article gives one of the reasons why: “it’s mainly because of all the third party connections … ” The article mentions those to Google, Facebook and Twitter and the latency delays while waiting for responses from the US. Apparently Australian load times have increased from 5.4 to 8.2 seconds. The cost in lost business has also been measured: 10% for an extra half a second. Hence the growing pressure for high speed broadband (backhaul, not just local access), and the growing US investment in European data centres and internet exchanges.
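The latency cost of waiting on a slow third party is easy to demonstrate for yourself. Below is a minimal sketch (my own illustration, not from the article) in Python: a small `timed` helper that measures the wall-clock cost of any call, here simulating a third-party tracker that takes 200 ms to answer before the page can finish loading.

```python
import time
from typing import Callable

def timed(action: Callable[[], None]) -> float:
    """Return the elapsed wall-clock time, in seconds, of one call to action()."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

# Simulate a third-party call (e.g. a tracking beacon) with ~200 ms latency.
slow_tracker = lambda: time.sleep(0.2)

delay = timed(slow_tracker)
print(f"Third-party call added {delay:.1f}s to the page load")
```

In a real page the same effect can be seen, per request, in the browser's network panel: each synchronous third-party connection adds its full round-trip time to the load.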
But who are the real culprits?
I now look to see who is to blame when my system slows or stops dead, usually while I am following up news stories, visiting a wide variety of sources. The culprit is nearly always some piece of cloud-based monitoring or tracking software which is trying to record what I visit and is waiting for a response or has crashed. I recently set about deleting the cookies of unknown origin on the system I most commonly use for web-browsing, leaving only those from sources I could recognise and felt likely to use again. A thankless task. I am still tempted to delete the lot, block cookies entirely and see what happens.
Then I read that, according to Kaspersky, 38% of targeted cyber attacks involve the employees of Telcos and ISPs. I began to wonder how many of these involve “unauthorised access” to analyses of tracking software. The delays caused by monitoring bloatware are not only costing you more sales than the analyses gain. The data collected for those analyses, of such dubious value, may also be about to cost you massive fines under the GDPR when the breaches are finally detected.
I say dubious value because I am not interested in adverts for what I bought last week or hotel offers from towns I have just visited. Moreover, now that I have got into the habit of ringing to check supposedly confirmed hotel bookings, I might as well ring those I have visited before and save them the on-line booking fee. I also avoid the risk of turning up unexpectedly because their Internet has been down, or so slow they have given up, and, either way, they have not received anything for several days. Then there are all the bargains not available on-line. Recently, after two days of fruitlessly hunting on-line for a fridge to fit an unfashionable space (you try getting an old one repaired), I gave up and visited John Lewis to talk through my problem with a human being. We found what I wanted. While it was on display, it was not in the on-line system because it was discontinued, due to be replaced (probably by something that would not fit!). There is much to be said for spending more time off-line.