Creating a good Web application is never easy, and making sure it’s secure is even more difficult. To build a fundamentally sound application, developers need to know what they are protecting and what they’re protecting it from. Threat modelling is a design analysis technique that should be used at the software design stage of any project to discuss, define and document its security requirements.
Threat modelling is a team exercise involving project managers, developers and users. It’s important to engage all stakeholders at this stage, even though they may not be security experts, as it gives them a sense of ownership and an understanding of the security model. It also pays to look at the application from as many different viewpoints as possible.
Depending on the size of your organisation, threat modelling may take the form of a brainstorming session using a flipchart, but you’re better off using a specialised tool, such as Microsoft’s free SDL Threat Modeling Tool. It can be integrated with any issue tracking system, and its reports can be used during testing in the verification phase of the project. Software-centric threat modelling involves four major steps: diagramming, threat enumeration, mitigation and verification. The SDL Threat Modeling Tool provides guidance with drawing a model and analysing threats and mitigations.
Whatever approach you take, standard data flow diagram (DFD) elements are typically used to graphically represent an application. They are easy to understand and bring issues to life, allowing non-security experts to participate and provide input. As DFDs are data-centric, they keep attention focused on what really matters, as most software attacks involve data in some way.
Getting this diagram right is key, and it’s important to make sure all elements of your application, data stores, data flows, processes and interactors are represented. (Interactors are the users, services and servers that aren’t part of your application, but are clearly related to it.) For example, make sure there is always a process that reads and writes data for each data store and user. Also, use a dotted line to mark trust boundaries, where elements on each side of the boundary operate at different privilege levels. Here careful thought needs to be given to how data crosses trust boundaries.
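The idea of DFD elements and trust boundaries can be sketched in a few lines of code. This is an illustrative model only: the element names, kinds and numeric trust levels below are hypothetical, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str         # "process", "data_store" or "interactor"
    trust_level: int  # e.g. 0 = anonymous internet, 1 = authenticated, 2 = internal

@dataclass
class DataFlow:
    source: Element
    sink: Element

def crosses_trust_boundary(flow: DataFlow) -> bool:
    """A flow between elements at different privilege levels crosses a
    trust boundary and deserves extra scrutiny (validation, authentication,
    encryption in transit)."""
    return flow.source.trust_level != flow.sink.trust_level

# A minimal three-element diagram: browser -> web app -> database
browser = Element("user browser", "interactor", trust_level=0)
web_app = Element("web app", "process", trust_level=1)
db = Element("orders DB", "data_store", trust_level=2)

for flow in (DataFlow(browser, web_app), DataFlow(web_app, db)):
    if crosses_trust_boundary(flow):
        print(f"{flow.source.name} -> {flow.sink.name}: crosses a trust boundary")
```

Both flows here cross a boundary, so both would be flagged for closer analysis; a flow between two internal processes at the same level would not.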
I suggest you use the STRIDE security threat model to identify various types of threats. STRIDE splits security threats into six major categories, and stands for the following threats:
- Spoofing of user identity
- Tampering with data
- Repudiation
- Information disclosure
- Denial of service
- Elevation of privilege
Data stores and data flows are susceptible to tampering, information disclosure and denial of service, and interactors are susceptible to spoofing and repudiation. Processes are susceptible to all six threats. STRIDE and the DFD provide a framework for investigating how an application might fail. Once the vulnerabilities in the application have been identified, measures can be taken to mitigate them. One interactor that can lead to serious security issues if it’s not modelled sufficiently is the user. Failure to authenticate the application to users and, in turn, correctly authenticate them and their email addresses, opens the door to phishing and fraud.
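The per-element applicability described above can be captured as a simple lookup table, which is essentially what tool-assisted threat enumeration automates. This is a sketch of that mapping only; the function and dictionary names are illustrative.

```python
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which STRIDE categories apply to each DFD element type
APPLICABLE = {
    "process":    set("STRIDE"),  # processes are susceptible to all six
    "data_store": set("TID"),     # tampering, info disclosure, denial of service
    "data_flow":  set("TID"),
    "interactor": set("SR"),      # spoofing, repudiation
}

def enumerate_threats(kind: str) -> list[str]:
    """List the STRIDE threat categories to investigate for an element type."""
    return [STRIDE[c] for c in "STRIDE" if c in APPLICABLE[kind]]

print(enumerate_threats("interactor"))  # ['Spoofing', 'Repudiation']
```

Walking every element in the DFD through `enumerate_threats` produces the raw list of threats to investigate, which is then pruned and prioritised by the team.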
Once the threats have been identified, they obviously need to be addressed. The four approaches to mitigation are: redesign (e.g., removing a feature deemed risky), standard mitigations such as access control lists (ACLs), unique mitigations such as a proxy server in front of the application, or risk acceptance in accordance with the company’s risk appetite. Each identified threat and how it has been mitigated must be recorded. This ensures developers can be held accountable for implementing the security controls. The security controls required to counter the STRIDE threats listed above include:
- Authentication to counter spoofing
- Integrity controls to counter tampering
- Non-repudiation measures, such as audit logging, to counter repudiation
- Confidentiality controls, such as encryption, to counter information disclosure
- Availability measures to counter denial of service
- Authorisation to counter elevation of privilege
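Recording each threat alongside its chosen mitigation approach and an accountable owner might look like the following sketch. The record fields and class names are hypothetical, chosen to mirror the four mitigation approaches described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Mitigation(Enum):
    REDESIGN = "redesign"         # e.g. remove the risky feature
    STANDARD = "standard"         # e.g. ACLs
    UNIQUE = "unique"             # e.g. a fronting proxy server
    ACCEPT_RISK = "accept risk"   # within the company's risk appetite

@dataclass
class ThreatRecord:
    element: str
    category: str                 # one of the six STRIDE categories
    description: str
    mitigation: Optional[Mitigation] = None
    owner: Optional[str] = None   # developer accountable for the control

    def is_addressed(self) -> bool:
        """A threat is addressed only once it has both a mitigation and an owner."""
        return self.mitigation is not None and self.owner is not None

threat = ThreatRecord("login form", "Spoofing",
                      "attacker replays a stolen session cookie")
threat.mitigation = Mitigation.STANDARD
threat.owner = "auth team"
print(threat.is_addressed())  # True
```

Filtering a list of such records for `not r.is_addressed()` gives the outstanding work, which feeds naturally into whatever issue tracking system the team already uses.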
During the development phase it’s common for an application to deviate, sometimes quite significantly, from the functional and design specifications created during the requirements and design phases. Regular checks should be made as development gets underway to ensure the threat model has all the same trust boundaries and entry points as the code, and the two haven’t drifted apart. Once the application’s code has been completed, the threat model should be revisited yet again to ensure any new attack vectors created as a result of design or implementation changes are recorded and mitigated.
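One simple way to spot that kind of drift is to compare the entry points recorded in the threat model against those actually present in the code. The sketch below uses hard-coded route lists for illustration; in practice the second set would be extracted from the application's routing configuration.

```python
# Entry points documented in the threat model
model_entry_points = {"/login", "/orders", "/admin"}

# Entry points discovered in the code (a new route was added during development)
code_entry_points = {"/login", "/orders", "/admin", "/export"}

undocumented = code_entry_points - model_entry_points  # in code, not in model
stale = model_entry_points - code_entry_points         # in model, removed from code

if undocumented:
    print("Entry points missing from threat model:", sorted(undocumented))
if stale:
    print("Entry points in model but no longer in code:", sorted(stale))
```

Here `/export` would be flagged as an undocumented entry point, prompting the team to revisit the model before the verification phase.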
There is no single best or correct way to perform threat modelling. Costs such as setup and training need to be taken into account and may dictate how complex your model is, and how it is used throughout the life of the application. The goal of modelling is to improve security, so do the best you can within the constraints of time and money of your particular environment.
Remember, too, that threat modelling can be used to analyse not just applications but networks and systems. The security of your application is no better than its weakest link. Flaws embedded in a network infrastructure or configuration may not be encountered during ordinary use but may well appear during unexpected use, such as during an attack. Finding and resolving such weak links is how threat modelling and analysis can help improve the security of your applications and the systems on which they run.
About the author:
Michael Cobb, CISSP-ISSAP, CLAS is a renowned security author with more than 15 years of experience in the IT industry. He is the founder and managing director of Cobweb Applications, a consultancy that provides data security services delivering ISO 27001 solutions. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications. Cobb serves as SearchSecurity.com’s contributing expert for application and platform security topics, and has been a featured guest instructor for several of SearchSecurity.com’s Security School lessons.
This was first published in November 2011