We all know that any good developer will test their software before delivering it to a testing team or to a client, but how should they test and what should they test?
There are plenty of available software testing techniques, but some need specialist tools, and some need quite a lot of test design to make them effective. What is needed is a set of techniques that are quick and easy to implement, and that give some confidence in the software that is being handed over to a testing team or to users.
The most easily implemented technique is experience-based testing. This is especially useful for developers working in an environment where they are also providing customer support or designing maintenance updates for a product.
All you have to do is look back over reported customer support issues, defect logs or whatever records you have of past problems – they will give you a list of the issues that have caused trouble in the recent past. You can then check that what you are delivering deals with all the obvious – and not so obvious – issues that would upset a customer.
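As a sketch of the idea – with an invented defect-log format and component names – you could mine past records for the areas that have failed most often and put them at the top of your pre-delivery checklist:

```python
# Minimal sketch of experience-based test selection: rank the areas
# that have bitten customers before. The log format is hypothetical.
from collections import Counter

DEFECT_LOG = [  # invented records from support tickets / bug tracker
    {"id": 101, "component": "date parsing"},
    {"id": 102, "component": "export"},
    {"id": 103, "component": "date parsing"},
    {"id": 104, "component": "login"},
    {"id": 105, "component": "date parsing"},
]

# The most frequently failing component goes to the top of the
# "check before delivery" list:
checklist = Counter(d["component"] for d in DEFECT_LOG).most_common()
assert checklist[0] == ("date parsing", 3)
```

Even this crude ranking is enough to decide where to spend your limited pre-delivery checking time.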
A second simple technique is use case testing. Chances are you have designed your software around use cases or something equivalent, so you may already have a flow for each one. A simple flow chart that identifies everything a use case should enable – and everything it should not allow – gives you a list of test cases you can apply.
All you need is a sheet of paper and a pencil for your sketch flow chart, and a second sheet to record the inputs and outputs for your test cases – or you could be really slick and use a spreadsheet, so you can save those test cases and reuse them or build new ones from them.
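To illustrate, here is a minimal sketch of a hypothetical "log in" use case: the `login` function and its outcomes are invented for the example, and each row of the table is one path through the flow chart – the rows play the role of that second sheet of paper:

```python
# Hypothetical "log in" use case: the flow chart branches on whether
# the username exists, the account is locked, and the password matches.
def login(users, username, password):
    """Return an outcome string for the login use case (illustrative only)."""
    user = users.get(username)
    if user is None:
        return "unknown user"
    if user["locked"]:
        return "account locked"
    if user["password"] != password:
        return "wrong password"
    return "logged in"

USERS = {
    "ada": {"password": "s3cret", "locked": False},
    "bob": {"password": "hunter2", "locked": True},
}

# One (input, expected output) row per path through the sketch flow chart:
CASES = [
    (("ada", "s3cret"), "logged in"),
    (("ada", "wrong"), "wrong password"),
    (("bob", "hunter2"), "account locked"),
    (("eve", "x"), "unknown user"),
]
for args, expected in CASES:
    assert login(USERS, *args) == expected
```

Saving the `CASES` table is the code equivalent of keeping the spreadsheet: the rows can be rerun unchanged or extended as the use case grows.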
Boundary values are always worth checking – at the user level, checking that dates and addresses are in the correct format, and at the code level, the number of times an iterative process repeats under given circumstances, for example. These can all be exercised with a simple set of inputs and expected outputs. How deeply you explore is up to you – some boundaries may be more important than others.
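As a toy example – assuming a hypothetical `valid_day` check – each boundary is exercised from just below to just above:

```python
def valid_day(day):
    """Accept a day-of-month in the range 1..31 (illustrative boundary)."""
    return 1 <= day <= 31

# Probe each edge of the valid range: just outside, on the boundary,
# on the far boundary, just outside again.
for day, expected in [(0, False), (1, True), (31, True), (32, False)]:
    assert valid_day(day) is expected
```

The same pattern – value just below, value on, value just above each boundary – applies equally to loop counts, field lengths and date ranges.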
Decision tables can be useful to check that all the different alternatives are covered when you are presenting users with options. All you need is a simple spreadsheet that shows all the combinations of choices that a user could make and identifies the expected outcome for each, including the ones that will generate an error message.
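A decision table like that translates almost directly into code. The sketch below assumes a made-up checkout with two choices – gift wrap and express delivery – and checks every combination against its expected outcome, including the one that generates an error:

```python
# Decision-table testing for a hypothetical checkout with two choices.
import itertools

def checkout_outcome(gift_wrap, express):
    """Illustrative code under test: one outcome per combination of choices."""
    if gift_wrap and express:
        return "error: gift wrap unavailable on express orders"
    if express:
        return "express"
    if gift_wrap:
        return "standard + wrap"
    return "standard"

# The decision table: every combination of the two choices and the
# outcome each should produce, including the error row.
TABLE = {
    (False, False): "standard",
    (False, True):  "express",
    (True, False):  "standard + wrap",
    (True, True):   "error: gift wrap unavailable on express orders",
}

# Generate all combinations so no row of the table can be missed:
for combo in itertools.product([False, True], repeat=2):
    assert checkout_outcome(*combo) == TABLE[combo]
```

Generating the combinations with `itertools.product` means adding a third choice only changes the table, not the test loop.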
Decision testing of your code is the most detailed of these simple techniques. Have you verified that every decision point in your code has been exercised and that combinations of decisions come out correctly – or at least as you expected? That is always a good set of tests to run, and it combines well with use case and experience-based testing.
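As a small illustration – with an invented `classify` function as the code under test – four tests drive each of its two decision points both true and false, and cover every combination of the two:

```python
def classify(order_total, is_member):
    """Illustrative code under test with two decision points."""
    discount = 0
    if order_total > 100:   # decision 1
        discount = 10
    if is_member:           # decision 2
        discount += 5
    return discount

# Each decision is exercised as both true and false, and all four
# combinations of outcomes are checked:
assert classify(150, True) == 15    # 1: true,  2: true
assert classify(150, False) == 10   # 1: true,  2: false
assert classify(50, True) == 5      # 1: false, 2: true
assert classify(50, False) == 0     # 1: false, 2: false
```

The comments double as a coverage record: if a row is missing, a branch has gone untested.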
Of course, this level of testing does not ensure that the software works correctly, but it would provide a testing team with a good base to work from or give you confidence that there are no nasty surprises in store for users.
All of these techniques are covered by the ISTQB Foundation level syllabus (available free here) and explained in detail in Software Testing: An ISTQB-BCS certified tester foundation guide – fourth edition.