The UK public sector spends several billion pounds each year on acquiring and maintaining software systems – nobody seems to know the actual amount – but this infrastructure investment is extraordinarily badly managed.
Policy discussions on any subject you can think of – health, education and crime, for example – are informed by masses of statistics, yet central government has no coherent data repository on the performance of the supply of its software systems, whether internally or externally sourced.
Some individual departments, such as HM Revenue & Customs and parts of the Ministry of Defence, have good data and use it, but there is no overall knowledge base to inform policy formation and to support organisational learning.
Frustration with the performance of major IT suppliers, described as a “recipe for rip-offs” in 2011 by members of Parliament (MPs) on the Public Administration Select Committee, resulted in current policies to replace long-term IT outsourcing deals with shorter-term contracts, with more work for small to medium-sized enterprises (SMEs).
Fixed-price contracts for waterfall-style software development projects are giving way to “time and materials” (T&M) contracts for agile delivery.
When contract managers responsible for controlling supplier performance could negotiate fixed-price deals, they had at least one point of leverage. With T&M contracts that leverage is gone and, with no supplier performance data and more complex supply chains, their control task is nigh-on impossible.
The recently published supplier standard for government technology purchasing aims to help contract managers.
Open-book contract management (OBCM) will be introduced for all IT contracts so that “outcomes should be a fair price for the supplier, value for money for the client and performance improvement for both over the contract life… (and) can lead to better outcome in terms of both delivery and better value for money”.
But OBCM has no answers for the problem of how to measure performance on the supply of software systems.
For example, under OBCM, T&M contracts with two different suppliers may offer the same, well-justified day-rates. But the customer will still have no idea whether one supplier is more productive than the other, nor whether their estimates are reasonable and likely to be deliverable.
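To make the point concrete, a minimal sketch of the kind of comparison the customer currently cannot make. It assumes delivered output is measured in function points; all figures here are purely hypothetical, for illustration only:

```python
# Illustrative sketch: two suppliers with identical, well-justified day rates
# can have very different real costs once delivered output is measured.
# All figures are invented for illustration.

def cost_per_function_point(day_rate, person_days, function_points):
    """Total spend divided by delivered functionality."""
    return (day_rate * person_days) / function_points

# Same day rate, same effort billed -- but Supplier B delivers half as much.
supplier_a = cost_per_function_point(day_rate=850, person_days=400, function_points=500)
supplier_b = cost_per_function_point(day_rate=850, person_days=400, function_points=250)

print(f"Supplier A: £{supplier_a:.0f} per function point")  # £680
print(f"Supplier B: £{supplier_b:.0f} per function point")  # £1360
```

Without performance data of this kind, identical day rates tell the customer nothing about which supplier offers better value.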
Given the difficulties of controlling agile/T&M projects, it is interesting to see how this policy arose. It seems to have partly originated from a 2011 Institute for Government report.
The report gives only one quantification of the expected benefits, namely: “The benefits of this change (adopting agile) can improve delivery performance, in terms of cost, quality and speed, by a factor of 20.”
Now it may be that performance can be improved using agile methods, but anyone with real experience of software project performance measurement and benchmarking would know that, as a generality, this claim is absurd.
The same report noted: “At the Department for Work and Pensions, the timeline for delivering the new Universal Credit system using a traditional approach was expected to be 2015. By adopting a more agile approach, the programme managers now hope to deliver the essential functionality by 2013.”
It’s now 2016 and Universal Credit is still some years away.
Agile vs waterfall
Agile processes can, when properly managed, undoubtedly deliver a better service for customers than waterfall methods. Gone are the fat, indigestible and out-of-date statements of requirements, replaced by user-defined “stories” with rapid implementation and frequent deliveries of working software.
But agile processes for project estimation, being non-repeatable, non-comparable across projects and non-traceable, seem to be designed for unaccountability. Combine that with T&M contracts and we are back to another recipe for rip-offs.
Data suggests that savings from any gain in productivity of agile over waterfall processes may be offset by the additional costs of dealing with code delivered at high speed that may not be maintainable in a few years’ time. Finding the right balance of trade-offs for these various performance parameters is complex and cannot be done without reliable data.
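A simple whole-life cost sketch shows why the trade-off matters. The figures below are entirely hypothetical, chosen only to illustrate how a cheaper, faster build can still cost more over the system's lifetime:

```python
# Hypothetical whole-life cost comparison: a cheaper initial delivery can be
# outweighed by higher maintenance costs over the system's lifetime.
# All figures are invented for illustration.

def whole_life_cost(build_cost, annual_maintenance, years):
    """Initial build spend plus cumulative maintenance over the system's life."""
    return build_cost + annual_maintenance * years

# Rapid delivery: lower build cost, but harder-to-maintain code.
fast_delivery = whole_life_cost(build_cost=2_000_000, annual_maintenance=600_000, years=8)
# Slower delivery: higher build cost, cheaper to maintain.
slower_delivery = whole_life_cost(build_cost=2_500_000, annual_maintenance=400_000, years=8)

print(f"Fast delivery:   £{fast_delivery:,}")    # £6,800,000
print(f"Slower delivery: £{slower_delivery:,}")  # £5,700,000
```

Which side of this trade-off a real project falls on can only be established from reliable data on build and maintenance costs, which is precisely what is missing.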
What we need is an IT equivalent of Nice, the Department of Health’s National Institute for Health and Care Excellence.
Nice evaluates proposed new drugs and treatments based on hard evidence. If a treatment is approved, it is then the task of NHS administrators to decide how much should be spent on it, and of individual doctors to decide whether to prescribe it for a particular patient.
This analogy helps to sort out the different roles needed in central government IT. Policies and standards for new software technology, data and supply contracts should be set by an IT equivalent of Nice, based on hard evidence from research and feedback from public sector projects.
Decisions on proposals and contracts for new systems should be left to individual departments, within their Treasury-approved budgets. The Government Digital Service (GDS) currently undertakes both of these roles – without the benefit of hard data – as well as developing new digital services. This is a confused set of responsibilities. GDS could, however, function well as an IT equivalent of Nice.
Other governments, notably Brazil, China, Finland, Mexico, the Netherlands and Poland, are starting to collect hard data in this sphere. When will we learn?