The issue with the Hamley’s website mis-pricing goods (see Computer Weekly 19 Dec) is the sort of embarrassing, costly, and totally avoidable glitch that basic common sense processes can prevent. Without more information from Hamley’s we can only speculate about the underlying cause. But my guess is that the failure lay in the process behind one or other of the following:
– a change was implemented without being properly tested
– some new code was written that contained an error (malicious or accidental)
The result has been public loss of face, financial loss, and the operational cost of fixing the underlying problem. Not a nice Christmas present for them, and a reminder to the rest of us that stories such as this continue to make the news and cost money (it also says a lot about the people who took advantage of the error).
What should Hamley’s do next? There are many common sense measures that help mitigate the risk of this sort of event occurring. Some or all of the following areas are probably applicable:
– documentation of security requirements and development standards
– separation between development and production domains (i.e. we don’t want development code ending up on the production servers, and we don’t want development occurring against the live product)
– security of back-end databases
– test plans and testing
– vulnerability testing
– change control
– use of encryption
– management of outsourced development
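To make the “test plans and testing” and “change control” points concrete: one cheap safeguard is an automated price sanity check that runs before any catalogue change is published. The sketch below is purely illustrative; the function name, data shapes, and the 50% threshold are my own assumptions, not anything known about Hamley’s actual systems.

```python
# Hypothetical release-gate check: flag any product whose new price is
# non-positive or has fallen suspiciously far below its last known good
# price. A change-control process could refuse to publish the catalogue
# while this returns any suspects.

def find_suspect_prices(catalogue, reference_prices, max_drop_ratio=0.5):
    """Return (sku, new_price, reason) tuples for prices that look wrong.

    catalogue        -- {sku: proposed new price}
    reference_prices -- {sku: last known good price}
    max_drop_ratio   -- flag prices below this fraction of the old price
    """
    suspects = []
    for sku, new_price in catalogue.items():
        old_price = reference_prices.get(sku)
        if new_price <= 0:
            suspects.append((sku, new_price, "non-positive price"))
        elif old_price is not None and new_price < old_price * max_drop_ratio:
            suspects.append(
                (sku, new_price,
                 f"dropped below {max_drop_ratio:.0%} of {old_price}"))
    return suspects


if __name__ == "__main__":
    # A teddy bear accidentally re-priced from 29.99 to 2.99 gets caught;
    # the unchanged train set passes.
    catalogue = {"TEDDY-01": 2.99, "TRAIN-02": 24.99}
    reference = {"TEDDY-01": 29.99, "TRAIN-02": 24.99}
    print(find_suspect_prices(catalogue, reference))
```

A check like this costs minutes to write and would have turned a public incident into a failed build. The threshold is deliberately crude; the point is that any automated comparison against the last known good state beats none.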
In the past, when I’ve encountered similar problems – fortunately before they were discovered by the public – the underlying cause could always be mitigated through a combination of the controls listed above.
It would be interesting for someone from the Hamley’s team to comment so that we can all learn from the experience. Now, what are the chances of that?