The software industry can't get a break these days. As if plummeting sales weren't enough, bad publicity about shoddy products plagues vendors such as Microsoft and JD Edwards. A new study last month pegged the annual cost of software bugs at a staggering $59.5bn. Where does this money go?
Buggy software crashes - and when it is an enterprise app like JD Edwards OneWorld or SAP R/3, businesses dependent on that app shut down. Meanwhile, IT and vendor service reps scramble to apply a fix. This downtime can cost companies thousands of dollars per hour in lost revenue.
Unreliable software dramatically increases the likelihood of user error - or worse, security holes. With Microsoft IIS, for example, 625 combinations of patches were required to fix the Nimda worm. To cope, even midsize firms typically spend millions on additional training and application repair.
Users of Oracle 11i complained that the earliest releases shipped modules whose transactions did not always work properly, forcing customers to implement workarounds while they waited for patches. As employees inadvertently undermine customer relations through data management errors, hidden costs skyrocket. If software bugs pollute just two of the 50 contacts a call centre agent handles each day, a sizeable centre racks up more than a quarter of a million botched calls a year.
Software execs acknowledge that products are shipped long before they've been properly tested and debugged. Why do they get away with it? User companies let them. The same buyer who would refuse to bring home a car with 800 defects out of 8,000 components routinely signs purchase orders for applications that are at least that defective. But firms can force vendors to change the way they do business if they:
Identify clear measures of quality
Firms that are specific about feature lists for both new versions and upgrade releases are surprisingly vague about what constitutes tolerable quality. Yet vendors' bug classification systems pave the way for such specifics: a level 0 bug crashes the system, a level 1 bug renders a module unworkable, and a level 2 defect can be worked around. Firms can make it an RFP requirement that new software ship with no known level 0 or level 1 problems, period.
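Expressed as a minimal sketch, the severity taxonomy and RFP gate might look like the following. The class names and the acceptance check are illustrative assumptions, not any vendor's actual bug-tracking schema:

```python
# Sketch of the article's severity taxonomy and the "no known level 0
# or level 1 bugs" RFP gate. Names and structure are hypothetical.
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    LEVEL_0 = 0  # crashes the system
    LEVEL_1 = 1  # renders a module unworkable
    LEVEL_2 = 2  # can be worked around

@dataclass
class KnownBug:
    bug_id: str
    severity: Severity

def meets_rfp_bar(known_bugs):
    """Accept a release only if every known bug can be worked around."""
    return all(bug.severity >= Severity.LEVEL_2 for bug in known_bugs)

bugs = [KnownBug("B-101", Severity.LEVEL_2),
        KnownBug("B-102", Severity.LEVEL_1)]
print(meets_rfp_bar(bugs))  # B-102 is level 1, so the release fails: False
```

The point of encoding the bar this way is that "tolerable quality" becomes a yes/no test a buyer can write into an RFP, rather than a matter of negotiation after the fact.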
Incorporate quality measures into buying decisions
The decision-making buyers of software are not always hands-on users. Why is that a problem? Representing the broad needs of the overall firm, they compare SAP with Oracle, or Siebel Systems with PeopleSoft, feature by feature. Buyers must instead move software reliability to the top of the feature list and eliminate any vendor that can't meet minimum quality standards.
Bring the contract to centre stage
Buyers typically spend a year looking for the right application, then hand the vendor's contract over to their legal department with a demand that it be signed by Friday. But with quality benchmarks built into the RFP process, companies can hold vendors accountable. Buyers must rev up their legal departments and write quality clauses that represent their interests - rights of refusal, payment holdbacks, expectations about response times - into their contracts.
Deploy technology to detect and repair problems
Firms should employ Net services like Bugtoaster for logging system crashes or Keynote Systems for measuring Web transaction reliability. Why? These configurable outside resources help IT monitor operational software and free up employees to work on testing scenarios for incoming releases. For firms that need the highest levels of uptime, Geodesic offers a fault-tolerant software framework that automatically corrects memory leaks, buffer overflows, and deadlocks - at runtime.
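To illustrate the last defect class: a classic deadlock arises when two threads take the same pair of locks in opposite order. The sketch below uses plain Python threading - it is not Geodesic's mechanism - and shows the lock-ordering discipline that removes the circular wait:

```python
# Illustrative only: two threads request the same locks in opposite
# order, a classic deadlock recipe. Acquiring them in one global order
# (here, by object id) removes the circular-wait condition.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def worker(first, second, label):
    # Sort the locks into a single global order regardless of how the
    # caller passed them in, then acquire in that order.
    ordered = sorted((first, second), key=id)
    with ordered[0]:
        with ordered[1]:
            done.append(label)

# Without the ordering discipline, this opposite-order interleaving
# could leave each thread holding one lock and waiting forever.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # both threads finish: ['t1', 't2']
```

A runtime framework of the kind described above would catch such hangs in deployed code; the discipline sketched here is what developers apply to prevent them in the first place.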
Laurie Orlov is a research director at Forrester Research