Developers should be encouraged to write security into the code for new applications

Securing applications is as important as securing systems but is often overlooked

IT security has been a top priority for the past few years, with many companies investing in technologies such as firewalls and intrusion detection systems. But as they have been busy securing their networks, firms have largely ignored application security.

This has left applications vulnerable, especially as, according to analyst firm Gartner, 70% of successful attacks occur through the application, rather than through the network or operating system.

Cyber intruders are increasingly succeeding in breaking into enterprise applications, which can contain sensitive information such as customer credit card details or financial data. This leaves large parts of the corporate IT infrastructure exposed to massive potential damage, both reputational and financial.

Many organisations do not seem to realise that their web applications, in particular, are easy targets. Malicious users do not need a lot of time or knowledge to vandalise websites or use applications as a gateway into the corporate infrastructure.

Legislation has raised the stakes for enterprises that fail to secure their web applications. Under the Data Protection Act, for example, companies could face legal action over application security flaws if, unknown to them, they are exposing confidential data.

Organisations must not make the mistake of thinking that this is about phishing or identity fraud: it is about the way web applications are developed. Far too often they are developed without security being considered or given due prominence at the outset.

Building security into the design of applications is not as hard as it sounds. There are documented application architectures that promote security best practices. The real challenge is when the code comes to be written. Application security is not a high priority in application development - functionality and delivery come first.

This cannot continue. Security must be built into the code from the beginning and, by definition, this means that testing for security loopholes becomes a priority also. The emphasis must change from building functionality to building secure functionality.

There are coding techniques that produce functional results, yet offer intruders a way to use the application or the underlying operating system to gain unauthorised access.

One example is that most operating systems offer multiple ways to designate the same logical location in the file system. A developer may write some code that closes a path to that specific location, but this will only block hackers' access if they try to get in using exactly that path. There will be many other paths that they could use. The only way to protect against this type of weakness is to identify all possible routes or paths and protect each of them individually.
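The canonicalisation weakness described above can be sketched briefly. The usual defence is the reverse of blocking paths one by one: resolve whatever the user supplies to its one canonical form first, then check that single form against the allowed location. A minimal sketch in Python, assuming a hypothetical upload directory `/var/www/uploads`:

```python
import os.path

BASE_DIR = "/var/www/uploads"  # hypothetical allowed directory


def safe_resolve(user_path: str) -> str:
    """Resolve a user-supplied path to its canonical form before
    checking it, so aliases such as '..', '.', redundant separators
    or symbolic links cannot be used to designate the same location
    by a different route."""
    base = os.path.realpath(BASE_DIR)
    # realpath collapses '.' and '..' and resolves symlinks,
    # so every alias for a location maps to one canonical string
    full = os.path.realpath(os.path.join(base, user_path))
    # Compare canonical paths, not the raw string the user sent
    if os.path.commonpath([full, base]) != base:
        raise PermissionError("path escapes the allowed directory")
    return full
```

With this approach there is only one check to get right, instead of one check per possible path: `safe_resolve("notes.txt")` returns the canonical location, while `safe_resolve("../../etc/passwd")` raises `PermissionError`.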

Insecure coding practices often creep in when developers are trying to work around persistent errors in applications. For example, the ValidateRequest attribute on pages at the front end of a web application (an ASP.Net setting that screens incoming requests for potentially dangerous input) is often set to "false" by a developer, to get around the problem of the page throwing an error when ValidateRequest is set to "true".

Although this does not change the functionality of the application, it introduces a security vulnerability: intruders can then use data entry fields to send commands to the application, database or operating system that could give them access to private corporate data.
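The injection risk behind this example does not depend on any one platform: whenever input from a data entry field is pasted directly into a command, the attacker controls part of that command. A minimal sketch of the standard defence, using Python's built-in sqlite3 driver and a hypothetical `users` table with a made-up card number:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
# Hypothetical sample row for illustration only
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")


def lookup_card(name: str):
    # UNSAFE (shown for contrast): concatenating the field value into
    # the query would let input such as "' OR '1'='1" rewrite the SQL
    # and return every customer's card details:
    #   conn.execute("SELECT card FROM users WHERE name = '" + name + "'")
    #
    # SAFE: a parameterised query binds the value separately, so the
    # input is treated as data, never as part of the command
    cur = conn.execute("SELECT card FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

With the parameterised version, `lookup_card("alice")` returns the one matching row, while the classic probe `lookup_card("' OR '1'='1")` matches nothing, because the whole string is compared as a literal name.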

These are just a few examples of how coding practices are resulting in insecure web-based applications. There are hundreds of different practices for dozens of different application scenarios that are constantly changing and evolving as hackers become more sophisticated and new vulnerabilities are found. It is impractical to think that developers can keep abreast of all of these, and even if they could, most organisations could not afford to retrain all their developers to write secure code.

However, developers can be armed with tools and processes that enable them to produce better quality code. They should be given tools that check their code against known security errors and that analyse internal calls and data transfers within the application, as well as the operating system and software environment in which it operates.

Improving the quality of code from a security perspective will undoubtedly make your applications more secure. However, you cannot stop there: you also need to test against security loopholes.

It is not unheard of for organisations to bring in professional hackers to test an application by trying to break into it. But such penetration testing is expensive, and it is rarely repeated over the lifetime of an application. Security vulnerabilities can arise at any point in an application's lifecycle, so organisations need to find ways to test for security loopholes continually.

One way to do this is with automatic attack simulation, where software pretends to attack the application through known paths of intrusion. This software can be updated and re-run on the application across its lifecycle to ensure you are constantly testing your application against known attacks.
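The idea can be sketched in a few lines. The harness below is hypothetical, not any particular product: it replays a small, updatable list of known intrusion payloads through an application entry point and reports any that the application fails to reject, so the same run can be repeated after every change:

```python
# Hypothetical attack-simulation harness. KNOWN_PAYLOADS stands in for
# the vendor-updated library of known intrusion patterns the article
# describes; real tools ship far larger, regularly refreshed lists.
KNOWN_PAYLOADS = [
    "' OR '1'='1",                 # SQL injection probe
    "<script>alert(1)</script>",   # cross-site scripting probe
    "../../etc/passwd",            # path traversal probe
]


def simulate_attacks(handler):
    """Feed every known payload to `handler`, the application entry
    point under test, and return the payloads it wrongly accepted.
    The handler is expected to raise ValueError on malicious input."""
    accepted = []
    for payload in KNOWN_PAYLOADS:
        try:
            handler(payload)
            accepted.append(payload)   # not rejected: a finding
        except ValueError:
            pass                       # rejected, as expected
    return accepted
```

An empty result means the application rejected every simulated attack in the current list; because the list can be updated and the run repeated, the check keeps pace with newly discovered intrusion paths across the application's lifecycle.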

Although it is virtually impossible to create a 100% secure application, by changing development and testing practices organisations can significantly reduce the number of security vulnerabilities that exist in many of the applications developed today.

Organisations must sit up and take note, otherwise they will soon find that security holes in their applications affect their ability to do business and could land them in legal hot water.

Sarah Saltzman is technology support manager at software and services company Compuware
