Comparative testing of products that detect malware matters to most of us. However, some tests are better than others, and some dramatically jangle the nerves of the anti-malware research community: not just the suppliers whose products get a raw deal, but also the independent researchers who want testing to benefit users, not mislead them, writes security researcher David Harley, administrator of the Anti-Virus Information Exchange Network.
Established testing organisations such as Virus Bulletin and ICSA Labs have earned the trust of the industry by demonstrating their own knowledge of the field, and by impartially following strict, safe methodologies.
That is important, because it is impossible to do some kinds of testing without the co-operation of the industry: who else has access to the range of samples needed to perform a meaningful detection test?
Does this mean, then, that you have to be an anti-malware researcher to test detection? Certainly testing is a highly specialised area within research, and the few individuals and organisations whose work has gained respect and acceptance among the research community do tend to belong to that group.
Two major problems
The principles of testing are fundamentally similar for anti-malware as for other software. However, there are at least two major problems:
● For some reason anti-malware testing attracts many people who are not well-versed in testing methodologies in general.
● Even worse, it also attracts people who have a somewhat distorted idea of what this type of software is and how it works. (I will not dispute that the research community has, to some extent, brought that upon itself by cultivating a secretive, ultra-paternalist culture.)
At a technical level, the same gaps in understanding may apply to, say, a spreadsheet program, too. However, when people review a spreadsheet or word processing program, they take a lot for granted: when did you last see a review of a spreadsheet program that checked its mathematical or statistical functions?
If you did see such a review claiming that Excel was incapable of performing basic mathematical functions (rather than reporting the minor bug that was recently patched), you might well want to verify that the testing was sound.
Yet apparently anyone with half a dozen files that might be viruses in their mailbox is qualified to test malware detection tools.
A major problem with testing is validating samples: testing with objects that may not be viruses or malicious at all is not only useless, but also misleading.
Casual testers usually "validate" with an AV scanner, but that is rather like "proving" a bug in Excel because Symphony returns a different result for the same calculation, instead of establishing the correct answer by independent means and testing both packages against it. Sound validation, however, takes serious time and resources.
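The arithmetic behind why unvalidated samples mislead is worth spelling out. The following sketch uses invented figures (a hypothetical 100-file test set, of which 20 files are not actually malware) purely to illustrate how junk samples distort an apparent detection rate:

```python
# Hypothetical illustration of how an unvalidated test set misleads.
# All figures are invented for the sketch.

def apparent_detection_rate(detected: int, test_set_size: int) -> float:
    """Detection rate as a naive tester would compute it."""
    return detected / test_set_size

# A test set of 100 files, of which only 80 are genuinely malicious.
genuine_malware = 80
junk_files = 20  # corrupted or harmless files that are not malware at all

# A scanner that detects every real sample and (correctly) ignores the junk:
detected = genuine_malware

# Against the unvalidated set, the scanner appears to miss 20% of "malware"...
print(apparent_detection_rate(detected, genuine_malware + junk_files))  # 0.8

# ...yet against a properly validated set it scores 100%.
print(apparent_detection_rate(detected, genuine_malware))  # 1.0
```

The scanner behaving most correctly is thus penalised hardest by the unvalidated set, which is exactly the sense in which such tests are not merely useless but misleading.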
There has been much industry discussion of testing. The real difficulty, however, will be in persuading casual testers both to follow improved methodological models and to accept that doing so benefits the consumer, not just the industry.
Security Zone is a bi-weekly series in Computer Weekly covering all aspects of IT security management. Each article is written by a member of the International Information Systems Security Certification Consortium (ISC)².