Last week we all got a front-row view of a real cyber war. First the WikiLeaks website was brought down by massive denial of service (DoS) attacks, then an army of anonymous hackers launched a botnet-driven distributed denial of service (DDoS) attack on PayPal, Visa and MasterCard in protest against financial organisations that had blocked transactions for WikiLeaks. So has this gentle brush with World War Three taught us anything…?
This is a guest post by Rutul Dave of Coverity, a company that builds tools and technology to equip developers with resources, techniques and practices to help maximise the integrity of software.
The basic idea behind typical denial of service attacks against websites is fairly simple. Rogue software programs running on multiple systems flood the target web server with so many requests that it becomes overwhelmed and unavailable to visitors. Some DoS attacks however are even more basic and don’t require more than a single attacker. The attacker figures out how to crash the web server, cause it to loop endlessly, or send repeated requests to it until it runs out of resources. Such types of denial of service vulnerabilities are caused by errors in the code more commonly known as program crashes, hangs and resource leaks.
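The "loop endlessly" failure mode can be as small as a single parsing loop that forgets to make progress. A minimal, hypothetical C sketch (not taken from any real server) shows how one malformed request can hang a service, and how little the fix costs:

```c
#include <string.h>

/* Hypothetical hang: a whitespace-skipping helper that loops forever
 * because it never advances its pointer. One malformed request that
 * reaches this code ties up the worker for good. */
const char *skip_spaces_buggy(const char *p) {
    while (*p == ' ' || *p == '\t')
        ;                       /* BUG: p is never advanced -> infinite loop */
    return p;
}

/* Fixed version: the loop makes progress on every iteration. */
const char *skip_spaces(const char *p) {
    while (*p == ' ' || *p == '\t')
        p++;
    return p;
}
```

A single stuck worker like this, multiplied across many connections, exhausts the server's capacity just as surely as a flood of traffic.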
An example is the well-known NT IIS FTP DoS vulnerability. A denial of service of the FTP server, caused by a program crash, could be triggered by any anonymous user connecting to it: the user simply had to issue an ‘ls’ command with a 316-character argument, immediately crashing the service and leaving it unavailable until it was restarted or the server rebooted.
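The coding error behind crash bugs of this kind is typically an unbounded copy into a fixed-size buffer. The sketch below is purely illustrative, not the actual IIS code, but it shows the vulnerable pattern and a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the classic pattern behind crash-style DoS
 * bugs: a fixed-size stack buffer plus an unbounded copy. */

/* Vulnerable version: a 316-character argument overruns the 256-byte
 * buffer, corrupts the stack and crashes the service. */
void handle_ls_unsafe(const char *arg) {
    char buf[256];
    strcpy(buf, arg);               /* no bounds check on attacker input */
    printf("listing %s\n", buf);
}

/* Hardened version: the copy is bounded and null-terminated, so
 * oversized input is truncated instead of overwriting memory.
 * Returns the length actually stored. */
size_t handle_ls_safe(const char *arg, char *buf, size_t buflen) {
    strncpy(buf, arg, buflen - 1);
    buf[buflen - 1] = '\0';
    return strlen(buf);
}
```

The difference between the two is one bounds check, which is exactly why these defects are so common and so cheap to fix when found early.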
Basic as these attacks may be, their business impact is potentially very large. For a company like PayPal, the direct financial impact comes from transactions lost while the website is down; the larger indirect impact of lost customer confidence is harder to measure. Ultimately, though, it is the end users, the customers who need to access these websites and services, who suffer most.
As an organisation, can you protect against this? Can you expect your web server software and web applications to hold up if attackers are looking for exploits to launch against you? Perhaps more importantly, for the software that your organisation creates, how can you ensure that it avoids such errors?
Identifying errors early in development with automated code testing technology, such as static analysis, can be a strong weapon in a developer’s arsenal in this battle. Here we are talking about errors such as program crashes, hangs and resource leaks that stem from defects in code: pointer problems, memory management and access errors, incorrect arithmetic operations and the use of uninitialised variables.
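To make those defect classes concrete, here are two hypothetical C fragments of the kind a static analysis engine flags, each paired with its fix:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical examples of defect classes a static analyser reports. */

/* Pointer problem: dereferencing s without a null check crashes the
 * process whenever a caller passes NULL. */
size_t name_length_buggy(const char *s) {
    return strlen(s);               /* flagged: possible NULL dereference */
}

/* Defensive version: the NULL path is handled explicitly. */
size_t name_length(const char *s) {
    return (s != NULL) ? strlen(s) : 0;
}

/* Uninitialised variable: total is read before it is ever written,
 * so the result is unpredictable garbage. */
int sum_buggy(const int *vals, int n) {
    int total;                      /* flagged: used uninitialised */
    for (int i = 0; i < n; i++)
        total += vals[i];
    return total;
}

/* Fixed version: total is initialised before use. */
int sum(const int *vals, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += vals[i];
    return total;
}
```

Each buggy variant compiles cleanly, which is precisely the point: these defects pass the compiler and surface only at run time, unless analysis catches them first.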
Static analysis engines test code as it compiles to identify these types of programming defects. Good analysis not only points to the defect but also identifies the events that trigger it, guiding the developer in verifying that the defect is real so the code can be changed to fix it. Automation, ease of use and the ability to catch defects that could result in unintended system behaviour, system crashes or security vulnerabilities as soon as they are introduced make static analysis a potent weapon in the hands of developers creating software applications.
At the end of the day, these types of security problems underline the importance of software integrity. In other words, the software that operates the products and services we use in our daily lives, from planes to online banking, needs to meet the highest levels of product quality, safety and security. This means delivering products to market that operate as intended, products that are reliable, products that are safe to use and products that secure our privacy.
Achieving software integrity begins in development, as the code is being written. Vulnerabilities in code can be reduced in part through automated code testing during development. By unifying quality and security analysis in the development workflow, development teams can address security vulnerabilities early in the process, in the same way they address quality-related defects today.
Ensuring the integrity of the software that your organisation develops is no longer optional: it is a business imperative. It might just be one of the strongest defences you have in a cyber war. Fortunately, there are technologies on your side in this battle.
About the author: Rutul Dave received his Master’s in Computer Science, with a focus on networking and communications systems, from the University of Southern California. Within nine months of starting graduate school, while learning about building high-performance networking and distributed systems, he found his passion for creating bleeding-edge technology at Silicon Valley startups such as Procket Networks and Topspin Communications, before moving to Cisco Systems. He has years of software development experience in embedded and real-time systems.