Calculating the cost of failed software projects

The need to adopt a consistent approach to software development and implementation cannot be overstated. Vague specifications, insufficient planning and poor project management are common causes of software failure.

There are many factors that can compromise software development to the point where investment loss and exposure to risk may dwarf the cost of the software itself.

Imprecise client specifications, insufficient planning and analysis, poor project management, continually moving goalposts, unrealistically short timescales, weak quality assurance and underestimated costs are all common causes of failure.

Faced with tightening deadlines, spiralling costs and pressure to get the 'product' to market more quickly, it is often software testing and validation processes that are sacrificed. In our experience, only 30% of organisations allocate a separate testing budget when implementing new technology, even though more than 70% of them recognise the crucial role of testing in the development process.

Without rigorous debugging, projects may be delivered with error levels as high as 45%. If the application works at all (and often it doesn't, in which case it is quietly shelved), errors may only be discovered when transactional breakdowns occur.

The high cost of bugs

No-one is certain of the real cost of failed software projects, but in the US alone it is estimated to be upwards of $75bn a year in re-work costs and abandoned systems. A few years ago, poorly tested software caused transaction processing problems for millions of online customer accounts at one large multinational bank, leading to wide-scale email phishing attacks that cost the bank more than £50m.

Then there was the Federal Bureau of Investigation initiative in the US to enable agents to share case files electronically. The computer code was so bug-ridden that the bureau was forced to abandon the project, losing $170m. There is also the famous case in 1999 when a $125m NASA spacecraft was lost in space because of a single unit conversion error.

Where good quality assurance procedures are in place, a newly completed application may still contain 10% to 15% errors. These are usually passed on to the customer, who then has the burden of finding and correcting them - ideally before they cause any major risk events. Even with testing procedures, a best-of-breed product typically still contains a margin of error of around 5%. At this point both vendor and customer may decide that trying to reduce this percentage further is a case of diminishing returns and not worth the additional investment.

Steps to success

Customers and suppliers need to adopt a consistent approach to software development and implementation. They need to be prepared to invest sufficient resources, optimise pre-existing modules and factor the necessary testing procedures into the development lifecycle. Strong project management, detailed specifications and thorough documentation must be provided upfront, and should make allowance for the fact that things can take longer than expected.

Rework can account for up to 40% of the project cost, so the earlier in the lifecycle that defects or potential problems are identified, recorded and rectified, the lower the cost.
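As a rough illustration of why early detection lowers the rework bill, the sketch below compares two projects that ship the same 100 defects but discover them at different stages. The per-phase fix-cost multipliers and defect counts are assumptions chosen for illustration, not figures from this article.

```python
# Illustrative sketch only: the relative fix costs and defect counts below are
# assumed figures, not data from the article.

# Assumed relative cost of fixing one defect, by the phase in which it is found.
FIX_COST = {
    "requirements": 1,    # cheapest: amend a specification
    "design": 5,
    "coding": 10,
    "testing": 50,
    "production": 150,    # most expensive: live incident, patch, re-release
}

def rework_cost(defects_found: dict) -> int:
    """Total relative rework cost for a given distribution of defect discovery."""
    return sum(FIX_COST[phase] * count for phase, count in defects_found.items())

# The same 100 defects, discovered early versus late in the lifecycle.
early = {"requirements": 40, "design": 30, "coding": 20, "testing": 10, "production": 0}
late = {"requirements": 5, "design": 10, "coding": 15, "testing": 30, "production": 40}

print(rework_cost(early))  # 890 relative cost units
print(rework_cost(late))   # 7705 - nearly nine times the rework bill
```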

Pricing a project accurately cannot be a 'guesstimate' based on the loose assumption that any unforeseen cost overruns can simply be passed on to the client. When estimating software development costs, accurate metrics are crucial. These include quality assurance requirements, offshore sourcing variables, identifying reusable code components, factoring in the complexity of existing infrastructures, and much more.
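A minimal sketch of what a bottom-up, metrics-based estimate might look like is shown below. All of the rates, percentages and adjustment factors are hypothetical placeholders (none of the numbers or parameter names come from the article); the point is only to contrast a calculated estimate with a guesstimate.

```python
# Illustrative sketch only: all rates, percentages and factors are hypothetical
# placeholders, shown simply to contrast a metrics-based estimate with a
# 'guesstimate'.

def estimate_cost(
    gross_effort_days: float,    # estimated build effort before any reuse
    reuse_savings_days: float,   # effort avoided by reusing existing code components
    onshore_rate: float,         # daily rate for onshore staff
    offshore_rate: float,        # daily rate for offshore staff
    offshore_share: float,       # fraction of work sourced offshore (0.0 to 1.0)
    qa_overhead: float,          # quality assurance effort as a fraction of build effort
    integration_factor: float,   # multiplier for the complexity of existing infrastructure
    contingency: float,          # buffer because things can take longer than expected
) -> float:
    build_days = (gross_effort_days - reuse_savings_days) * integration_factor
    total_days = build_days * (1 + qa_overhead)
    blended_rate = offshore_share * offshore_rate + (1 - offshore_share) * onshore_rate
    return total_days * blended_rate * (1 + contingency)

# Example with assumed figures; the result is roughly 436,000 in whatever
# currency the daily rates are expressed in.
print(estimate_cost(
    gross_effort_days=600, reuse_savings_days=150,
    onshore_rate=700, offshore_rate=300, offshore_share=0.4,
    qa_overhead=0.3, integration_factor=1.2, contingency=0.15,
))
```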


Paul Michaels is director of consulting at Metri Measurement Consulting, UK.
