Threat modeling is a process that analyzes the design of a system by examining every interaction and plausible threat between its components and adjacent systems. The analysis is performed by taking on the mindset of an adversary attempting to compromise all valuable assets in the system. Based on the results, threat models include specific recommendations and guidelines to verify that the system, in its current state, effectively defends against threats.
Without verification, threat models are exactly that: a model, i.e., a collection of diagrams, paragraphs, and bullet points conjecturing the security maturity of a given system. To get the most value out of a threat model, it must accurately reflect reality.
Even when threat modeling is performed early, while the design of a system is still fluid and easy to modify to address risky situations, its value cannot be realized without continually verifying how well the model represents the deployed system and how well that system holds up against each threat scenario. This means that threat models should be referenced, updated, and utilized throughout the entire lifecycle of a given system – not just before or after deployment.
To make your threat models fit reality and get the most use out of them, keep the following steps in mind:
- List and evaluate assets
- Use threats to derive design requirements
- Create independent tests for each threat scenario
- Implement detection based on threats and tests
While each step can be performed independently, the true impact lies in their collective implementation. This integration will transform your threat models into the primary documentation for intended system functionality, serving as a point of reference and augmentation throughout the system's lifecycle, including future updates.
List and evaluate assets
To better quantify the risks posed by a specific system, document every asset the system touches. This list forms a “bill of materials” for the system; evaluate each asset on it for how sensitive it is to the business. Later in the threat modeling process, you can verify that the system can only access what’s on the list.
If your organization has already defined a data classification policy, that’s a good framework to use for asset evaluation. Before jumping in, look at existing classification policies to ensure they will be a good match for the types of assets in scope for the threat model. For example:
- Can the policy classify data as well as physical assets?
- Are the classification levels granular enough for the range of assets present in this system, from customer or employee information to business intellectual property?
- Are there existing guidelines on how to handle sensitive assets?
If you don’t have an existing classification policy, focus on evaluating your assets with broad strokes. Three or four classification levels should be sufficient for most scenarios. One advantage of starting with a four-level policy is that there’s no “middle ground”: every asset is pushed clearly toward one end of the scale, reducing the time spent on each classification decision. As you create more models, you can add granularity and set specific handling guidelines and security requirements for each sensitivity level.
Once you’ve enumerated and classified all assets that the system handles, take a second look and identify adjacent assets that the system might be able to access due to its placement or deployment, e.g., a specific site or network that may contain shared assets, even if those assets aren’t directly related to the operation of the system. Then, classify those assets since a system compromise could impact them.
After classifying every asset, you can now determine whether to remove them from the system or add specific security controls to protect them according to their classification.
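The inventory-and-classify workflow above can be sketched as a small, machine-readable asset list. This is a minimal illustration, not a prescribed format; the asset names, classification levels, and the `adjacent` flag are hypothetical stand-ins for whatever your organization's policy defines.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical four-level classification; substitute your organization's policy.
class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass(frozen=True)
class Asset:
    name: str
    sensitivity: Sensitivity
    adjacent: bool = False  # reachable only via placement/deployment, not core to the system

ASSETS = [
    Asset("customer_email_db", Sensitivity.RESTRICTED),
    Asset("public_marketing_site", Sensitivity.PUBLIC),
    Asset("shared_build_server", Sensitivity.CONFIDENTIAL, adjacent=True),
]

def requires_extra_controls(asset: Asset) -> bool:
    """Flag assets that need dedicated security controls (or removal from scope)."""
    return asset.sensitivity in (Sensitivity.CONFIDENTIAL, Sensitivity.RESTRICTED)

flagged = [a.name for a in ASSETS if requires_extra_controls(a)]
```

Keeping the inventory in a structured form like this makes the later verification step (checking that the system only touches what's on the list) scriptable rather than manual.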
Use threats to derive requirements
As you brainstorm threat scenarios, prioritize the risks by using the classification level of the impacted assets. For example, a threat that could disclose confidential information would generally be considered severe. Without asset classification, risk predictions won’t be as clear or as justifiable. But even without the added focus of classification, threat scenarios can still inform a system’s design and implementation requirements.
You know how the system should work; now reverse the thought process and consider how the system should not work: unexpected operations it must respond to safely, states it should never be in, and interactions that its users should never be able to carry out. Come up with as many cases as possible outside the expected, well-behaved user path.
Each threat scenario can shed light on how to design the system in a way that successfully defends against each scenario or multiple related scenarios. This is the greatest value of threat modeling: finding the best possible design for a given system.
As the system is created and deployed, adhere to the design decisions and requirements developed in the threat modeling process. Ignoring these guidelines and requirements can add discrepancies between the model and reality, potentially resulting in technical debt.
In addition to design requirements, considering how the system is implemented and deployed can generate additional guidance. In software systems, for example, secure coding guidelines and specific security controls (ideally already present in your chosen frameworks) can become product-wide requirements depending on the risks they address. Implementation and deployment requirements are easier to audit since they’re more concrete than design ones. They can easily lend themselves to codified procedures and playbooks that will only get richer as you create additional models and assess more threats.
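One way to codify the threat-to-requirement mapping described above is a simple structure that can be audited automatically. This is a sketch under assumed names; the threat descriptions, risk labels, and requirements are invented examples, not from the original model.

```python
# Hypothetical mapping from threat scenarios to the requirements they derived.
THREATS = {
    "SQL injection via search endpoint": {
        "risk": "high",
        "requirements": [
            "All database access goes through parameterized queries",
            "Server-side input validation rejects unexpected characters",
        ],
    },
    "Session token theft over plaintext HTTP": {
        "risk": "high",
        "requirements": ["TLS required on all endpoints; HSTS enabled"],
    },
    "Verbose error pages leak stack traces": {
        "risk": "low",
        "requirements": [],
    },
}

def uncovered_high_risk(threats: dict) -> list:
    """Audit check: every high-risk threat must map to at least one requirement."""
    return [name for name, t in threats.items()
            if t["risk"] == "high" and not t["requirements"]]

gaps = uncovered_high_risk(THREATS)
```

A check like `uncovered_high_risk` can run in CI, so a newly added high-risk threat without a derived requirement fails the build instead of silently widening the gap between model and reality.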
Create independent tests
Threat scenarios can provide test cases that should be continuously executed against the targeted system. Ideally, each test should be implemented against an environment that resembles the live system as closely as possible. Additionally, each test should run independently against each security control or security layer put in place to prevent the attack from succeeding. For example, if a system relies on authentication, authorization, input validation, and infrastructure configuration to prevent injection attacks, run injection tests against each layer separately. In short, test each threat scenario against each layer of defense.
This is generally easier to achieve for software systems, where integration pipelines facilitate standing up temporary services with specific configurations for each test. Where this practice is more difficult due to the lack of proper infrastructure or for systems that encompass assets beyond software and data, group the threat scenarios by the set of assets they can affect. This reduces the number of test systems that need to be deployed and configured – especially if the same system can be reset to a common initial state or different initial states for each test.
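The per-layer testing idea can be sketched with stand-in defenses. The two "layers" below are deliberately toy implementations (real tests would run against a deployed test environment, and the payload and role names are hypothetical); the point is that each test exercises exactly one control, so a regression in one layer is visible even when another layer would still have stopped the attack.

```python
# Stand-in defensive layers; real tests target a live-like test environment.
def validate_input(payload: str) -> bool:
    """Input-validation layer: reject obvious injection metacharacters."""
    return not any(ch in payload for ch in ("'", ";", "--"))

def authorize(user_role: str, action: str) -> bool:
    """Authorization layer: only admins may run raw queries."""
    return not (action == "raw_query" and user_role != "admin")

INJECTION_PAYLOAD = "name'; DROP TABLE users; --"

def test_input_validation_blocks_injection():
    # Exercises only the validation layer.
    assert not validate_input(INJECTION_PAYLOAD)

def test_authorization_blocks_raw_query():
    # Exercises only the authorization layer, with validation out of the picture.
    assert not authorize("guest", "raw_query")

test_input_validation_blocks_injection()
test_authorization_blocks_raw_query()
```

In a real project these would be separate test cases in your test runner, each standing up (or resetting) the environment so that only the layer under test can stop the simulated attack.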
The assets targeted by your tests should be innocuous: use fake data or facsimiles of real assets that carry no risk. However, ensure each fake asset is uniquely identifiable so that, when a test detects a compromise, you can tell exactly which assets were affected. This will not only accelerate your response to each failure but can also reveal unexpected results, e.g., a test or attack that compromises more assets than initially predicted. The same practices apply when setting up live monitoring and detection on top of the real system.
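A minimal sketch of uniquely identifiable fake assets: each record embeds a random canary token, so any appearance of that token in leaked or exfiltrated data pinpoints which asset was compromised. The record fields and labels here are hypothetical.

```python
import uuid

def make_canary_record(label: str) -> dict:
    """Create a fake, uniquely identifiable asset for use in security tests."""
    token = f"CANARY-{label}-{uuid.uuid4().hex}"
    return {
        "name": f"Fake Customer ({label})",
        "email": f"{token}@example.invalid",  # .invalid TLD: can never be delivered
        "token": token,
    }

# One canary per location the threat model says an attacker could reach.
records = {r["token"]: r for r in map(make_canary_record, ("db", "cache", "backup"))}

def identify_compromise(leaked_text: str, records: dict) -> list:
    """Return the canary tokens found in leaked data, pinpointing the breach."""
    return [tok for tok in records if tok in leaked_text]
```

Because every token is unique per asset and per location, a single match tells you not just that something leaked, but which copy leaked from where.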
Implement specific detection
Assigning a high level of risk to a threat scenario means that the organization should keep watch and prepare for indicators of that scenario being carried out. Each threat outlined in a model can be used to create specific monitoring rules and alerts that detect potential attacks.
Even threats against business logic or order of operations can be identified with some behavioral analysis or by verifying whether all prerequisites for a particular action have been met. Using threat scenarios as guidance, detection platforms can go beyond the basics of authentication and authorization failures to include rich contextual information that can flag anomalous behavior for quicker responses.
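Prerequisite-based detection of business-logic abuse can be sketched as follows. The event names and prerequisite map are invented for illustration; a real detection platform would apply the same idea to its own event stream.

```python
# Hypothetical business-logic rules: each action and the events that must
# precede it within the same session.
PREREQUISITES = {
    "checkout": {"add_to_cart"},
    "download_invoice": {"checkout"},
}

def detect_anomalies(session_events: list) -> list:
    """Return events that fired before all of their prerequisites were seen."""
    seen, anomalies = set(), []
    for event in session_events:
        missing = PREREQUISITES.get(event, set()) - seen
        if missing:
            anomalies.append(event)  # order-of-operations violation: flag it
        seen.add(event)
    return anomalies

# A checkout with no prior add_to_cart is flagged as anomalous.
alerts = detect_anomalies(["login", "checkout", "add_to_cart"])
```

A rule like this never fires on the well-behaved user path, so its alerts carry far more signal than generic authentication-failure counters.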
Furthermore, each test case can give you a comprehensive list of inputs and error states to prioritize over legitimate human error. Detection tailored to these test cases (which, in turn, are tailored to the threat model) has a high ratio of true positive indicators when properly implemented. This kind of detection can help verify that your design decisions have been applied effectively and that your security controls are working as intended.
As your threat models grow in number and quality, continue creating test cases and detection rules that are closely linked. This practice will help your systems become more resilient against real attacks before they even happen, and will help ensure that past mistakes won’t be reintroduced back into the system.
To further close the window of opportunity for attackers, continue to monitor for general error states and any input that deviates from normal usage. As your detection platform alerts you to newly discovered threats or attacks (whether or not they succeeded), add these new scenarios to your threat model and to your tests. If needed, tune your monitoring engine to detect these new attacks more reliably.
Threat model artifacts (diagrams, threats, mitigations, etc.) can be used at every stage of a system’s lifecycle. The more stages in which you put those artifacts to use, the closer your threat models will come to representing the actual state and risk of your systems.
This is how you can go from dealing with unknown or unlikely risks and ill-prepared systems to clear scenarios that you’ve practiced and prepared for. Use and update your threat models as much as possible at every stage to effectively design a strong defense in your systems.