A good article at the moment on SecurityFocus on “Vulnerability Scanning Web 2.0 Client-Side Components.” While Web 2.0 applications offer the ability to create very rich client interfaces, it’s a bit of a step backwards for security because so much more of the processing is moving onto the client side. This needn’t be a problem so long as the old maxim of “never trust anything submitted from the client to the server” still holds, but from what I’ve seen so far, all of the old errors are still very much in evidence and making a strong comeback!
I did spend a while trying to find some evidence of real-world Web 2.0 attacks that have actually resulted in some loss to the organisations concerned. There’s plenty of anecdotal evidence, which doesn’t really convince me. However, read the attack vector list: code injection, cross-site scripting, malicious code execution. Spot anything new yet?
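To make the point concrete, here’s a minimal sketch of the “never trust the client” maxim applied server-side against two of those vectors. The function names and the toy SQLite table are my own illustration, not anything from the SecurityFocus article: parameterised queries handle injection, and escaping handles reflected XSS, regardless of how shiny the AJAX front end is.

```python
import html
import sqlite3

def safe_lookup(conn, username):
    # Parameterised query: the driver quotes the value itself,
    # so a hostile username can't rewrite the SQL (code injection).
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def render_greeting(username):
    # Escape before echoing client input back into HTML,
    # so script tags render as text (cross-site scripting).
    return "<p>Hello, %s</p>" % html.escape(username)

# Toy data store for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

hostile = "<script>alert(1)</script>' OR '1'='1"
print(safe_lookup(conn, hostile))    # no row, no injection
print(render_greeting(hostile))      # script tag comes back escaped
```

Nothing Web 2.0 about it, which is rather the point: the defences haven’t changed any more than the attacks have.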
So, that’s the problem; what’s the solution? As always, it comes down to process, in particular training and awareness: not just amongst the development teams but across each of the different stakeholder groups. You might also want to take a look at the available OWASP resources, particularly the AJAX Security Project. I make no apologies for repeatedly plugging OWASP; it’s the best resource by far.
Have a good weekend.