Corporate backlash emphasises importance of software testing

As firms rail against "buggy" code, testing is a growth area

What is it?

Testing used to be the bit that got left out when software projects overran. The search for year 2000 date-change problems turned testing into big business, and it received a further boost when a number of large financial institutions suffered the embarrassment of having their online account services crash because they had not been load-tested for scalability.

Software testing is now in the news again, as the implications of Microsoft's decision to release monthly batches of patches sink in. A change to any part of any system can have an unforeseen impact on any other part. The risk is multiplied when, as happened with Microsoft last month, a service pack includes 14 different patches.

The only way to ensure there are no unintended consequences is thorough regression testing, which essentially means putting the whole system through its paces before it goes back online.
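For illustration only (the article names no specific tools), a regression test of this kind simply re-runs the same checks after every change. The minimal Python sketch below assumes a hypothetical apply_discount business rule whose behaviour must not drift when unrelated parts of the system are patched.

```python
import unittest


def apply_discount(price, rate):
    """Hypothetical business rule: reduce a price by a fractional discount rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)


class RegressionTests(unittest.TestCase):
    """Re-run after every patch or service pack to catch unintended changes."""

    def test_standard_discount_unchanged(self):
        self.assertEqual(apply_discount(100.00, 0.15), 85.00)

    def test_invalid_rate_still_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)


if __name__ == "__main__":
    unittest.main()
```

If either assertion starts failing after a patch, the change has altered behaviour the business depends on, which is exactly what regression testing is meant to catch.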

Where did it originate?

Testing as an independent discipline, with packaged tools and external verification and validation services, began in the mid-1960s. Automated tools, a humane alternative to the slog of working through printouts to find and fix bugs, became widely available in the early 1990s, but as Y2K revealed, a lot of companies did not invest.

What is it for?

Functional testing makes sure the software does what it is supposed to, and reliability testing makes sure the software does not fall over as it is doing it.
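As a rough sketch of that distinction (using a hypothetical transfer function, not anything from the article), a functional test asserts the correct result, while a crude reliability check repeats the operation many times to confirm it keeps working and stays responsive.

```python
import time
import unittest


def transfer(balance, amount):
    """Hypothetical account operation: withdraw an amount from a balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


class FunctionalTests(unittest.TestCase):
    """Functional check: the software does what it is supposed to."""

    def test_transfer_reduces_balance(self):
        self.assertEqual(transfer(200, 50), 150)


class ReliabilityTests(unittest.TestCase):
    """Crude reliability check: repeated calls complete without falling over."""

    def test_many_transfers_stay_fast(self):
        start = time.time()
        for _ in range(10_000):
            transfer(200, 50)
        self.assertLess(time.time() - start, 1.0)  # generous budget for a toy example


if __name__ == "__main__":
    unittest.main()
```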

The British Standards Institution defines testing as "the process of exercising software to verify that it satisfies specified requirements and to detect errors".

What makes it special?

Ovum, which publishes reviews of all the major testing suites, said, "The cost of fixing an error increases exponentially as development proceeds and the error becomes more ingrained in subsequent work. If an error is present in a delivered application, the consequential loss amounts to many times the cost of actually repairing the software."

There has been a growing backlash against buggy software from groups such as The Corporate IT Forum. Many companies have deferred their software purchases for a year or two until new releases have had the bugs thrashed out of them. With suppliers losing revenue as a result, more thorough testing could help restore customer trust.

Where is it used?

Testing should be built into the development cycle from the outset and continued for as long as systems are operational.

How difficult is it to master?

According to risk management company Vizuri, 70% of software testers are not worthy of the title "professional" because they are not only non-accredited, but also under-qualified and lacking in industry experience.

What systems does it run on?

Test suites need to be able to handle all the different environments your application needs to interact with.

Not many people know that . . .

About 50% of testers are women, and that number is expected to grow to 75% by 2006.

What is coming up?

Automated testing tools for web services.
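To give a flavour of what such automation might look like (the endpoint URL and JSON field below are invented for illustration), an automated web-service check can be as simple as calling a deployed service and asserting that it answers with a valid response.

```python
import json
import unittest
from urllib.request import urlopen

SERVICE_URL = "http://localhost:8080/api/status"  # hypothetical endpoint


class WebServiceSmokeTest(unittest.TestCase):
    """Automated check that a deployed web service answers and returns valid JSON."""

    def test_status_endpoint_responds(self):
        with urlopen(SERVICE_URL, timeout=5) as response:
            self.assertEqual(response.status, 200)
            payload = json.loads(response.read())
            self.assertIn("status", payload)  # assumed field name


if __name__ == "__main__":
    unittest.main()
```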


Training

You could learn a single testing suite, but in recent years there have been a lot of mergers and acquisitions among testing tool suppliers and some tools have been discontinued as product sets are consolidated.

You are probably better advised to take a generic course from the BCS, the University of Sheffield or other supplier-independent sources.

www1.bcs.org.uk

www.shef.ac.uk


Rates of pay

For junior testers, rates can be as low as £15,000, although £23,000 to £26,000 is more typical. Senior testers and team leaders can get £35,000 to £43,000.
