Making software secure from first principles

Steve Lipner is no stranger to the challenge of building software without security bugs. The director of security engineering strategy at Microsoft started trying to write secure code in the seventies. "My idea at the time was that we'd build a full mathematical model of security," he says, recalling a plan to write a set of specifications that would guarantee a secure piece of software. "We'd build our systems to implement the specifications. We'd prove that the mathematical model was consistent, and that the specifications corresponded to the model, and that the code would conform to the specifications. Then we'd all go home and work on something else."

It was a nice idea, even if it didn't pan out. Today Lipner is one of the people responsible for Microsoft's Security Development Lifecycle (SDL), a software development model introduced following a major turnaround at the company in 2002. Bill Gates recognised that Microsoft's software portfolio faced significant security challenges, and embarked on a major cultural change in the organisation: he wanted people to develop secure software. Better late than never. The company froze development of all products for several months and focused on training its staff in secure coding. "That encompasses things like threat modeling, code inspection, security testing, and so on," Lipner says.

Microsoft wasn't the only company that needed to teach its staff to develop code properly. "Big companies use coders straight out of university, and they're trying to make the functionality simply work, rather than make it work securely," warns Tony Fogarty, principal security consultant at DNV IT Global Services, which specialises in reducing technology risk for its clients.

One of the biggest technology risks is code filled with holes, but such code often passes quality assurance, says Bruce Potter, founder of security consulting firm Ponte Technologies. "People don't often say, 'oh, and we don't want this software to be subject to an SQL injection attack'," he says. "It's not seen as a feature." Clearly, the first step to solving the problem is to outline proper security specifications when gathering requirements. The second step is to teach developers that the value in code lies not simply in its functionality, but in how secure it is.
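
To make that point concrete, here is a minimal sketch of the difference between pasting user input into a query and binding it as a parameter. It uses SQLite's C API purely for illustration, and the table and column names ("users", "name") are invented for the example; the point applies to any database interface.

```cpp
#include <sqlite3.h>
#include <string>

// Vulnerable: the user-supplied value becomes part of the SQL text itself,
// so input such as  ' OR '1'='1  changes the meaning of the query.
std::string build_query_unsafe(const std::string &name) {
    return "SELECT COUNT(*) FROM users WHERE name = '" + name + "'";
}

// Safer: the SQL text is fixed and the value is bound as a parameter,
// so whatever the user types can never be interpreted as SQL.
int count_users_safe(sqlite3 *db, const std::string &name) {
    sqlite3_stmt *stmt = nullptr;
    const char *sql = "SELECT COUNT(*) FROM users WHERE name = ?";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return -1;                                   // preparation failed
    sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
    int count = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        count = sqlite3_column_int(stmt, 0);         // read COUNT(*)
    sqlite3_finalize(stmt);
    return count;
}
```

The particular API matters less than the requirement: unless "resists SQL injection" is written down and tested for, nothing in quality assurance forces the second version over the first.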

 

Categories of software security flaws

To do that, we have to understand the crucial security mistakes that can be made when writing software, explains Gary McGraw, CTO at Cigital, a company that specialises in advising on secure code development. That involves collecting common exploits together and looking at the common mistakes that make them possible, he says: "We identified the top seven categories of software security flaw, which we called the seven pernicious kingdoms."

These broad categories of rules can be broken down into more specific ones that can then be checked for in software. Such rules often involve not using functions that have been deprecated, for example. Microsoft put a moratorium on several C and C++ library functions that leave code vulnerable to exploits such as buffer overflow attacks. Putting safer replacement functions in the library, and checking that programmers actually use them, is relatively easy to do and can solve a lot of security problems in code development up front.
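
As a rough sketch of the kind of call that ends up on such a banned list, consider the classic unbounded string copy below. The article doesn't name the banned functions, so strcpy is used here as a representative example; the buffer size and input are invented, and strcpy_s is mentioned only as the Safe CRT counterpart Microsoft offers as a replacement.

```cpp
#include <cstdio>
#include <cstring>

// Unsafe: strcpy() copies until it hits a NUL terminator, so any input longer
// than 15 characters writes past the end of 'buf' -- a classic stack buffer
// overflow. Calls like this are what a banned-function list is meant to catch.
void greet_unsafe(const char *name) {
    char buf[16];
    std::strcpy(buf, name);                       // no bounds check
    std::printf("hello %s\n", buf);
}

// Bounded: snprintf() never writes more than sizeof(buf) bytes and always
// NUL-terminates, so oversized input is truncated rather than corrupting the
// stack. (Microsoft's Safe CRT offers strcpy_s in a similar role.)
void greet_bounded(const char *name) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%s", name);
    std::printf("hello %s\n", buf);
}

int main() {
    const char *attacker_controlled = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    greet_bounded(attacker_controlled);           // safe: prints a truncated name
    // greet_unsafe(attacker_controlled);         // undefined behaviour: overflow
    return 0;
}
```

Finding the first form and substituting the second is exactly the sort of mechanical check that can be enforced up front, which is why it lends itself to automation.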

Automated tools can reduce the workload when checking for compliance with such rules. Static analysis tools look at source code (and sometimes binaries) to find security flaws and other bugs. Such tools exist for languages including C++ and Java, but can also be used for scripting languages such as PHP and JavaScript. Microsoft offers a free static analysis tool for .NET assemblies called FxCop, for example. Other tools incorporate some form of dynamic analysis, which watches an application as it runs. One example is Compuware's DevPartner SecurityChecker, which specifically looks for security bugs in Web applications and can analyse code at compile time as well as when it is actually running in production.

 

Methodology framework

Such rules will be of limited use on their own, however; they must sit within a framework to be formalised properly. McGraw points to three: Microsoft's SDL (which the company has published in book form); his own methodology, based around security 'touchpoints'; and the Comprehensive, Lightweight Application Security Process (Clasp), produced by the Open Web Application Security Project, a community of application developers trying to make Web-based software more secure.

Clasp tries to put security considerations into the software development process at an early stage. It uses a series of perspectives to examine the software development process at a managerial level. These are broken down into activities, which in turn map to specific process components. For example, one view is based on roles, enabling a separation of duties among the people involved in software development.

"If you stand all the way back, the methodologies look very similar," Says McGraw, although he argues that his own and the SDL are more mature and singles out Touchpoint as more generic, because it is not written by a vendor of operating systems.

Regardless of which methodology a company chooses to implement, there is a big gap between understanding how mistakes find their way into code (and how they can be avoided) and actually executing on that knowledge. Gordon Alexander, technology manager at Compuware, highlights a fundamental problem here: developers' mistakes rarely affect them directly. "Defects manifest themselves in operation, and the cost of that will be borne out of the operational budget, and the development budget doesn't see that cost," he says. "That makes it difficult for developers to invest in the process to fix these security problems."

 

Management approach

How can managers bridge that gap? Software security is effectively a governance issue: badly coded software puts your systems, and your customers' data, at risk. Making a cultural change that will spread throughout the company requires engagement and training at all levels, says McGraw, from low-level coders all the way up to senior management. It can also mean involving people you might not normally expect to see in the process. For example, he says, one way to make sure developers actually code properly is by training them. However, many developers won't be interested in coming to the training; something will always come up. So he suggests enlisting the human resources department to ensure that developers actually attend and learn. "If you'd like to get a raise, you're going to have to go on the training course. So go do it," he quips.

Rolling together coding practices with other business functions can help ensure developers follow guidelines at an operational level, too, McGraw says. Static analysis tools can be used to find mistakes, but he suggests attributing mistakes to individual coders and linking the issue to bonuses. If a developer continually contravenes programming security guidelines and has to be repeatedly corrected during the quality assurance phase, it means they are not following company policy. They are not doing their job properly. Why shouldn't they be penalised in the same way as a salesperson who fails to meet targets, or a line manager who doesn't hit key performance indicators, asks McGraw.

 

Quality control

One common best practice is to encourage regular feedback between developers and quality assurance teams. Dan Bodart, technical lead at global IT consultancy Thoughtworks, says this is central to his firm's process and connected to its use of agile programming techniques. Agile development encourages the coding of applications in relatively small chunks, enabling development teams to test code and interact with users on a more frequent, regular basis. Extreme programming, a methodology originally conceived by Kent Beck, advocates pair programming to help bolster code quality. This involves two people coding a particular function together, building an additional layer of quality control into the process.

Ultimately, secure coding comes down to a mixture of well-trained, engaged developers and management prepared to instigate robust processes and oversee them. This will require a considerable investment from companies that have hitherto paid only lip service to software security. But as with many risk management exercises, the return on investment lies largely in prevention. If you don't do it, and your applications suffer an embarrassing breach as a result, how much will that end up costing you?



This was last published in March 2008
