Libor also broke the first rule of Information Governance

Until two days ago I was among those who believed that “Libor” was an authoritative index based on actual transactions, not on unchecked estimates collated once a day from traders with a vested interest, personal as well as organisational.

I did my original systems analysis training during the run-up to Decimalisation in 1971: the first nation-wide opportunity for large-scale computer-assisted fraud.

We spent more time learning about how to ensure that the data going into the system was accurate and that what was reported was fit for purpose than about the technologies we would use to process it.

The first “rule” was that unless the data was provided by those who had both a vested interest in its accuracy and the knowledge and opportunity to check that it was indeed accurate, it was likely to be at best full of random errors and at worst systemically misleading.

I had already (in 1969) had occasion to see the truth of that statement with regard to the statistics used by the government of the day, when a blip caused by the inclusion of three years’ exports for my previous employer in a single month’s balance of payments led the then Chancellor of the Exchequer to say, erroneously, that Britain had “turned a corner”.

If it is correct that LIBOR was indeed based on subjective inputs from traders as to what they would like to have seen, as opposed to being an objective by-product of processing actual transactions, then the real question is: “how on earth was that allowed to happen at all, let alone go on for so long?”

The honesty or otherwise of those involved in the process is almost irrelevant by comparison with the cavalier attitude to information governance. We can see that elsewhere, with regulators obsessing over data protection as opposed to accuracy and integrity. It is as though the errors in the patient record that led to the treatment that caused your death did not matter, provided the Caldicott Guardian was happy with its security.

Some of the assumptions underlying the Open Data White Paper mean this is not an academic, “post-mortem” question. Accurate and timely data is of great value. But a mash-up of garbage is toxic sludge.

Unless we once again take seriously the issues of data quality (and the disciplines of information governance) we are in danger of building a future based not merely on sand (silicon) but on quicksand (silicon-processed sludge).

Hence the critical importance of the recommendations in the EURIM report “Improving the Evidence Base: the Quality of Information”, published this time last year.
