Developers need to focus on decreasing the risk in new releases to an acceptable level. Rather than trying to tackle every single flaw, they should address the riskiest areas first, says Sarah Saltzman
Recent events have only served to highlight the flippant attitude some companies take towards flaws, which have gone on to cause untold business problems. These problems are magnified if your company operates on a global scale, and end-user organisations are starting to take a stand. Their message is clear: bugs are unacceptable, whether they appear in an ERP application, a bespoke banking system or in the software that drives hardware on a daily basis.
This has sparked concern that unless the industry starts to get its act together over the number of bugs that are still prominent in many software releases, it will be subject to a form of regulation similar to that imposed on the financial sector.
I am pretty sure IT directors don't want to be saddled with yet another regulation to navigate - they already have enough goalposts to run around. The onus is therefore on organisations to tackle the issue of software quality proactively by adopting a best practice approach to development and testing.
Often people equate quality with the number of bugs in an application, but this doesn't make sound business sense. Time pressures and budgets make testing the whole application impractical, and it is unrealistic to think you can eliminate every element of risk - but this is no excuse for poor software quality. The industry needs to encourage developers to focus on decreasing risk to an acceptable level, rather than striving for the impossible.
IT staff need to prioritise their testing efforts rigorously, concentrating on the parts of the software or hardware that carry the biggest business risk.
This may sound like an admirable aspiration. Perhaps a few years ago it would have been just that, but in today's hostile and competitive marketplace, prioritising testing effort based on risk is a must. A risk-based approach enables testing teams to identify the less critical parts of the application, where defects will be more acceptable, rather than taking a blanket approach and declaring that a certain number of defects can be tolerated without knowing which part of the application - and, more importantly, which business process - they might affect. If organisations know the repercussions of going live with an application, or of going to market with a piece of hardware that may have software flaws, then they can make informed decisions about whether the risk is worth taking.
So how exactly is this approach going to stave off regulation? Let's consider an example. Due to a partnership with a new back-office credit card processing provider, a retailer has had to make significant changes to the payment processing modules in its online order processing system. Each payment method presents a potentially critical failure point that should be tested, but the testing team has only five days to complete the 140 test cases needed to validate the entire checkout process - something that typically takes eight days. A decision must be made: hold up the release, risking lost revenue and a loss of face with the new partner, or blindly choose a few tests to run and deploy "with fingers crossed".
Neither option is appealing, but a risk-based approach produces a more predictable and manageable outcome. Risk-based testing ensures that the most important functions of an application are tested first, however limited the testing timeframe may be. The results of a test phase can then be communicated to the management team in terms of "risk mitigation" rather than technical facts about "how many tests have been run". Rather than experiencing untold havoc, the organisation knows what the repercussions of going live will be and can put processes in place to minimise the impact, should the decision be taken to release software with known bugs.
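In practice, the prioritisation described above can be as simple as scoring each test case by business risk and greedily filling the available testing time with the highest-risk cases first. The sketch below is purely illustrative - the case names, risk scores and hour estimates are invented assumptions, not the retailer's actual test plan or any particular vendor's method:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int      # hypothetical score: business impact x likelihood of failure
    hours: float   # estimated execution time

def prioritise(cases: list[TestCase], budget_hours: float) -> list[TestCase]:
    """Greedily select the highest-risk tests that fit within the time budget."""
    selected, used = [], 0.0
    for case in sorted(cases, key=lambda c: c.risk, reverse=True):
        if used + case.hours <= budget_hours:
            selected.append(case)
            used += case.hours
    return selected

# Illustrative checkout test cases (invented for this example)
cases = [
    TestCase("card-payment happy path", risk=25, hours=4),
    TestCase("card declined", risk=20, hours=3),
    TestCase("gift-voucher payment", risk=8, hours=2),
    TestCase("cosmetic receipt layout", risk=2, hours=1),
]
chosen = prioritise(cases, budget_hours=8)
```

With an eight-hour budget, the two high-risk payment paths are selected first and the low-risk gift-voucher case is squeezed out, which is exactly the trade-off a testing manager can then report in terms of risk mitigation.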
Bugs have long been a cause for concern, but now the gloves are coming off. End-users have made it clear that they will no longer tolerate poor quality, and neither should they. Risk-based testing is a viable best practice approach that allows organisations to make go-live decisions based on sound business facts. If the developer industry wants to avoid regulation, then the time to act is now.
Sarah Saltzman is solutions manager at Compuware