The agency broke down the $26bn IT budget for the US Department of Defense into the following categories: business systems, $5.2bn; business infrastructure, $12.8bn; mission support (including its own separate infrastructure), $8bn.
Spending that directly and visibly supports users accounts for less than half of the total. The lion's share goes toward the "infrastructure" - the hole from which bugs, disruptions and mysterious failures emerge.
Here we have an audit confirming what has been creeping up on IT for more than 20 years: it isn't the applications but the cost of supporting the infrastructure that has been draining the funds available for technological innovation.
You can always get votes for adding another attractive application. But hardly anybody will sign up to support an infrastructure that may be serving customers who aren't paying their way. Selling tickets for seats in fancy rail carriages was always easy. Finding money to pay for the track, switches, signal equipment and the fuel depot was always much harder.
The root cause of IT failures and excessive IT costs in large organisations lies in rickety infrastructures put in place one project at a time. What you usually have in large organisations is not a secure, low-cost and reliable infrastructure, but a patchwork of connections cobbled together without sufficient funding and rushed to completion without sufficient safeguards.
The fashionable approach is to impose centrally dictated "architectures" to cure the pains from incompatible and redundant systems. Such architectures are just another way of achieving order through centralisation and consolidation. Unfortunately, under rapidly changing conditions, such a cure may be worse than the original disease.
Invariably, centralisation means awarding a huge outsourcing contract to a supplier, for whom a critical piece of the infrastructure - such as the management of desktops - is carved out. Associated servers, switches and data centres may also be included in the IT territory ceded to the outsourcer, while the resident IT bureaucracy keeps tight control of a few fatally critical components to retain its absolute power.
This approach to fixing infrastructure deficiencies is flawed because its sequence is backward. Contracting for an infrastructure should be the last - not the first - step in putting improved systems in place.
First, IT managers should focus on determining which applications must be delivered immediately. The reliability, affordability and timing of application services will dictate which one of the many conceivable infrastructures would work best to solve high-priority problems.
Second, the organisation's management structure and business goals must be set. I don't see how one can get funding for overhauling infrastructure as a separate investment; such investments are notoriously sterile ground for a credible business case.
Infrastructures must be designed so that each step can be financed with incremental funding. Such economics make outsourcing of infrastructure services to a computing "utility" the preferred solution. The recent huge wins by a computer services firm offering "on-demand" usage pricing are a good sign that customers are ready to buy computing "by the quart" instead of owning a farm.
Third, a feasible transition plan for legacy applications must be developed and tested; only then can the least risky technical choices be made.
Only after the completion of this sequence would it be safe to proceed with outsourcing. Precipitous contracting for infrastructure services is only for the hasty and the impatient (who will be long gone when the auditors finally show up).
The best way of presenting a business case for IT investments is to make them a discretionary, variable expense.
Paul Strassmann is an IT consultant and former chief information officer at the US Department of Defense, space agency NASA and Xerox.