There's an excellent and humorous article on OWASP on how to write insecure code. It's essential reading for all developers...
To ensure an application is forever insecure, you have to think about how security vulnerabilities are identified and remediated. Many software teams believe that automated tools can solve their security problems. So if you want to ensure vulnerabilities, simply make them difficult for automated tools to find. This is a lot easier than it sounds. All you have to do is make sure your vulnerabilities don't match anything in the tool's database of signatures. Your code can be as complex as you want, so it's pretty easy to avoid getting found. In fact, most inadvertent vulnerabilities can't be found by tools anyway.
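To make that concrete, here's a hypothetical illustration (mine, not from the OWASP article) of the kind of flaw a signature-based scanner tends to miss. Early scanners often looked for tell-tale patterns such as user input concatenated into a SQL string on a single line; route the tainted value through an innocent-looking helper and the pattern never matches, while the injection flaw remains fully intact. All function names here are invented for the example.

```python
# Hypothetical example: a SQL injection that a naive signature-based
# scanner would likely miss. The concatenation is split across helpers,
# so no single line pairs "user input" with "SQL string".

def normalise(value: str) -> str:
    """Looks like sanitisation, but only trims whitespace."""
    return value.strip()

def build_clause(field: str, value: str) -> str:
    # The dangerous concatenation happens here, far from the request
    # handler, so a line-level signature never sees the two together.
    return f"{field} = '{value}'"

def find_user(username: str) -> str:
    # Still classic string-built SQL, just obscured by indirection.
    return "SELECT * FROM users WHERE " + build_clause("name", normalise(username))

# A textbook payload sails straight through:
print(find_user("x' OR '1'='1"))
# SELECT * FROM users WHERE name = 'x' OR '1'='1'
```

The point is not that this code is clever; it's that trivial indirection defeats pattern matching, which is why tools that grep for signatures give such patchy coverage.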
The point about automated tools is an important one. They have undoubtedly become more sophisticated since I started messing around with the likes of Sanctum AppScan and Kavado ScanDo about five years ago. At the time, I recall being overwhelmed by false positives in the reports, and feeling that the twenty grand spent on the license would have been better spent on my end-of-year bonus.
These days, SPI Dynamics claim, amongst other things, that the new version of WebInspect (v7) "is the first and only solution to address the complexity of Web 2.0 and identify vulnerabilities that are undetectable by traditional scanners." Acunetix state their product "has a state of the art vulnerability detection engine which quickly finds vulnerabilities with a low number of false positives. It also locates CRLF injection, Code execution, Directory Traversal, File inclusion and Authentication vulnerabilities."
If the claims are true then we're all saved and we'll never see another vulnerable web product. The problem is in the words "automated scanner." Unfortunately, that is not how a hacker behaves. When I point my scanner at a URL and click scan, the machine will work as programmed: if you're lucky, it will identify some security flaws that you can fix, or report that your application is clean so you can go running to the boss with the good news. Point a hacker at the same URL and I can guarantee that he (or she, because I'm sure there are lady hackers too) will not behave like a programmed machine. First, there will be the requirement for coffee and pizza. Input both, and the output, more often than not, is a mess that no scanner is likely to find. If you're lucky, the cleanup will follow a test that you solicited. If you're unlucky, it will come after you discover that the hacker has compromised your database.
If, after reading this, you still want to rely on the output of an automated scanner, then don't sign the purchase order before reading this report from Virtual Forge. The report, rather disappointingly, does not name the scanners under test, but the results are conclusive: the best scanner found 13 out of 85 vulnerabilities, the worst only 4.

Mark Curphey drives the point home in his own blog: "While these tools may seem attractive ('press the shiny red button and away it goes') anyone who understands...the nature of bugs and flaws will also understand they typically find a small percentage (10%-25%) of the average issues in a web site..."
Do I think that application scanners have any place at all? The answer is yes, IF you have some skilled in-house resources with the ability to use the generated reports as a guide and not as a statement of fact. The danger if you don't is that you'll produce, for example, a PCI report that states you're compliant when you're not, or, just as bad, a vulnerability report that has you running around screaming that the sky is falling when it isn't.
Before I close down for the night, I promised my good friend Hassan that I would mention it was with his encouragement that I recommenced writing this blog. So, thank you Hassan. And do you know what? I'm glad I did...