Erik van Veenendaal, managing director and senior consultant at Improve Quality Services and author and lecturer on testing, said estimation had traditionally been very approximate, often simply based on a percentage of the development effort estimate - typically 30%.
Van Veenendaal identified three cornerstones for estimating testing effort.
The first was the system size, measured in terms of features, lines of code, function points, the number of screen displays or other factors. System size on its own is a traditional basis for estimating, but it fails to take account of his other two cornerstones.
The second is the test strategy, which might include the relative risk of different parts of the system.
The third cornerstone is productivity issues. These can include the quality of documentation, such as the requirements specification; the testing team's knowledge and skills; and the availability of test plans, testing tools and test specifications from a previous project, which can be re-used: all these save effort.
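The three cornerstones can be combined into a simple adjusted-size calculation. The sketch below is an illustrative assumption, not a formula from van Veenendaal: the base rate and the factor values are invented, and size is measured here in function points purely as an example.

```python
def estimate_test_effort(size_fp, strategy_factor, productivity_factor,
                         hours_per_fp=2.0):
    """Estimate testing effort in person-hours (illustrative only).

    size_fp             -- system size in function points (cornerstone 1)
    strategy_factor     -- >1.0 for high-risk parts needing deeper
                           testing (cornerstone 2)
    productivity_factor -- >1.0 when documentation is poor or the team
                           inexperienced; <1.0 when test plans, tools and
                           specifications can be re-used (cornerstone 3)
    """
    return size_fp * hours_per_fp * strategy_factor * productivity_factor

# High-risk subsystem, but with good re-usable test assets:
effort = estimate_test_effort(300, strategy_factor=1.3,
                              productivity_factor=0.8)
print(round(effort, 1))  # 624.0
```

A size-only estimate would fix the last two factors at 1.0, which is exactly the weakness van Veenendaal points out.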
Once system size has been explored, related methods can factor in the strategy and productivity cornerstones to produce a more accurate testing estimate, van Veenendaal said.
Work breakdown could be used to determine the factors likely to be involved in the testing.
"Sit down with people from various system disciplines to produce a list of deliverables," said van Veenendaal.
"Look at the project deliverables: what do they tell us about the testing? You know there will be certain system features that will need test plans. You probably know about many of the drivers and interface specifications to be tested. Look at the IEEE 829 standard, which shows test deliverables, so that you don't forget anything.
"Look at overhead tasks such as test management and meetings. No one works 40 hours a week on a project: the very best is 70%, and it is usually 60%."
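The work-breakdown step above can be sketched as a short calculation. The task names and hours below are invented for illustration; the availability figures (usually 60%, at best 70%) are the ones van Veenendaal cites.

```python
# Illustrative work-breakdown: raw effort per deliverable, in person-hours.
TASKS = {
    "test plans for system features": 80,
    "driver and interface tests": 60,
    "test management and meetings": 40,  # overhead tasks count too
}

def calendar_hours(tasks, availability=0.6):
    """Convert raw effort to calendar hours.

    No one works 40 hours a week on a project: `availability` is the
    fraction of the working week actually spent on it (usually 0.6,
    at the very best 0.7).
    """
    raw = sum(tasks.values())
    return raw / availability

print(round(calendar_hours(TASKS), 1))  # 300.0
```

Dividing by availability, rather than multiplying the estimate down, keeps the raw per-deliverable figures comparable across projects.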
The results of the size estimates and work breakdown, plus data from previous projects and team members' experience, can contribute to Wide Band Delphi, an estimation technique already known in other fields.
"We decide on a maximum deviation for the estimates," van Veenendaal said. "If all our estimates are within, say, 20% we average them to get the final estimate.
"If some estimates are outside the margin, we ask the people who gave the highest and lowest estimates to explain their thinking. They might have spotted issues no one else thought of, or know of a testbed we can reuse to save time.
"The individuals then typically go away and estimate again. This usually leads to a consensus."
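The convergence rule van Veenendaal describes can be sketched in a few lines. This is a minimal reading of the quoted procedure, not a full Wide Band Delphi implementation: it checks each estimate against a maximum deviation from the mean, averages if all agree, and otherwise returns the outliers whose authors should explain their thinking.

```python
def delphi_round(estimates, max_deviation=0.20):
    """One Wide Band Delphi round.

    estimates     -- individual effort estimates (e.g. person-days)
    max_deviation -- agreed maximum deviation from the mean (here 20%)

    Returns (final_estimate, outliers). If any estimate falls outside
    the margin, final_estimate is None and another round is needed.
    """
    mean = sum(estimates) / len(estimates)
    outliers = [e for e in estimates
                if abs(e - mean) > max_deviation * mean]
    if outliers:
        return None, outliers  # ask high/low estimators to explain
    return mean, []

# All estimates within 20% of the mean -> average them:
print(delphi_round([100, 110, 95, 105]))  # (102.5, [])
# One estimate far outside the margin -> discuss and re-estimate:
print(delphi_round([100, 110, 200]))
```

In practice the outlier discussion, followed by a fresh round of estimates, is what usually drives the group to consensus.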
Van Veenendaal said he had seen Wide Band Delphi produce testing estimates accurate to within 10%-20%, which was "pretty good".