DREAD is a threat modeling framework developed at Microsoft and first published in 2002 in the second edition of Writing Secure Code by Michael Howard and David LeBlanc. DREAD is an acronym for the following 5 categories:
- Damage potential: How great can the damage be?
- Reproducibility: How easy is it to get a potential attack to work?
- Exploitability: How much effort and expertise is required to mount an attack?
- Affected users: If the threat were exploited and became an attack, how many users would be affected?
- Discoverability: How likely is the threat to be discovered? Per the original publication, this is always assumed to be the maximum score.
By examining threats across these 5 categories and assigning each a value, you can begin to analyze threats quantitatively across the organization, which provides a sense of relative priority among otherwise difficult-to-compare threats and security vulnerabilities.
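The original scheme is simple enough to sketch in a few lines of code. The following is a minimal illustration, assuming the scoring rules described in Writing Secure Code: each category is rated 1–10, discoverability defaults to the maximum, and the overall score is the average of the five. The function name and structure are illustrative, not part of any official tooling.

```python
def dread_score(damage, reproducibility, exploitability, affected_users,
                discoverability=10):
    """Return the average of the five DREAD ratings (each rated 1-10).

    Per the original publication, discoverability defaults to the
    maximum score of 10.
    """
    ratings = (damage, reproducibility, exploitability, affected_users,
               discoverability)
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / 5

# Example: a damaging, easily exploited threat that affects few users.
print(dread_score(damage=8, reproducibility=9, exploitability=7,
                  affected_users=3))  # 7.4
```

Because every threat is reduced to a single number on the same scale, scores like this can be sorted to compare threats that would otherwise be hard to rank against each other.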
Damage potential classifies threats across two areas of concern: the type of data being protected and the amount of access a threat actor would gain. Damage scores are rated high when the data being protected is especially sensitive, such as financial, health, classified, or other forms of protected data. An existing data classification policy in your organization can help guide this rating. The other aspect damage measures is the level of access and elevation of privilege associated with the risk; a high score here would be a threat that allows limited users to become administrators. When evaluating damage, look across both avenues to assign a proper rating.
Reproducibility focuses on the relative effort and ease of exploiting the threat repeatedly. Assigning a reproducibility value takes a number of different pieces of information into account. For example, if an attacker has full knowledge of the threat but cannot reliably exploit it, the value would be very low. On the opposite end of the spectrum are exploits that can be performed repeatedly and reliably with little or no effort. Features or configurations that are insecure by default tend to be the most common highly rated threats.
Exploitability is similar to reproducibility but focuses only on the total effort required to take a threat to exploit. For example, a threat exploitable by remote unauthenticated attackers using tools already developed by others, or one so well known that it can be automated and is actively exploited, would be rated the highest. On the other hand, an attacker who must develop a zero-day exploit that affects only a locally privileged user in a segmented network would be rated the lowest.
Affected users quantifies the total number of users affected, the importance of those users, or both, depending on the depth of threat modeling you are doing. In a simple analysis, you can estimate the number of users affected compared to the total number of users. In a more in-depth analysis, you could assign relative importance to the type of user or users that may be affected. Similar to other aspects of DREAD, you can take both into consideration for a fuller picture of the appropriate value to assign.
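The simple estimate described above can be made concrete. The linear mapping below is one illustrative approach of my own, not something DREAD prescribes: scale the fraction of affected users onto the 1–10 rating range.

```python
import math

def affected_users_rating(affected, total_users):
    """Map an affected-user fraction onto a 1-10 rating (linear scale).

    This linear bucketing is an illustrative assumption, not part of
    any published DREAD variant.
    """
    if total_users <= 0 or not 0 <= affected <= total_users:
        raise ValueError("need 0 <= affected <= total_users, total_users > 0")
    fraction = affected / total_users
    # Round up so any nonzero impact registers; floor at 1.
    return max(1, math.ceil(fraction * 10))

# Example: 250 of 1,000 users affected -> rating of 3.
print(affected_users_rating(250, 1000))  # 3
```

A weighted variant could multiply each affected user by a relative-importance factor before computing the fraction, capturing the more in-depth analysis mentioned above.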
Discoverability relates to the amount of effort a threat actor needs to find the threat. In many implementations of DREAD, the convention is simply to assign the maximum value, since many security professionals believe that all vulnerabilities are discoverable given enough effort. Rating discoverability lower amounts to relying on security through obscurity; at best, that effort is better expressed through reproducibility and exploitability.
DREAD as a framework for threat modeling has had multiple published interpretations, from the original in Writing Secure Code to Improving Web Application Security to the implementation by the OpenStack Security Group. In the original, all scores were rated from 1–10, with discoverability always rated a 10; the sum of all 5 scores was then divided by 5, providing a relative score used to “build a severity matrix that will help you prioritize how to deal with issues you uncover”. In Improving Web Application Security, the 1–10 scoring methodology was converted to a 1–3 scale representing low, medium, and high, and discoverability was no longer assumed to be the maximum. That methodology also included a “Threat Rating Table” to help categorize each aspect; the scores were totaled and further classified depending on the total, though the end result is still to “focus on the most potent threats”. Finally, the OpenStack Security Group (OSSG) further tweaked the original to “score the potential impact of vulnerabilities on OpenStack Deployments”. The two major changes in their implementation were allowing a score of 0 and providing specific guidance for each rating; Damage, for example, is the most built out, with definitions for 0, 3, 5, 7, 8, 9 and 10. The most important aspect of using this or any other threat model is that the definitions are consistent and clear across every report. Threat models allow you to quantify and communicate risk to multiple stakeholders, so it’s best that everyone is on the same page when moving forward with a specific methodology.
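To contrast with the original averaging scheme, here is a sketch of the 1–3 variant from Improving Web Application Security: each category is rated 1 (low), 2 (medium), or 3 (high), the five ratings are summed, and the total is bucketed into an overall risk level. The bucket boundaries used below (5–7 low, 8–11 medium, 12–15 high) follow that publication as I understand it; the function name and return shape are illustrative.

```python
def dread_risk_level(damage, reproducibility, exploitability,
                     affected_users, discoverability):
    """Sum five 1-3 DREAD ratings and bucket the total into a risk level."""
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    if not all(r in (1, 2, 3) for r in ratings):
        raise ValueError("each rating must be 1 (low), 2 (medium), or 3 (high)")
    total = sum(ratings)          # ranges from 5 to 15
    if total >= 12:
        level = "High"
    elif total >= 8:
        level = "Medium"
    else:
        level = "Low"
    return total, level

# Example: a mostly high-rated threat lands in the High bucket.
print(dread_risk_level(3, 3, 2, 3, 2))  # (13, 'High')
```

Whichever variant you adopt, the bucketing step is what turns raw numbers into the shared vocabulary (high/medium/low) that stakeholders ultimately act on, which is why consistent definitions matter more than the particular scale.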
At Cyral, we are constantly working to provide actionable priorities to secure the data layer. With Cyral, you get not only in-depth monitoring of your most critical data repositories but also actionable items to help lower your overall risk. Gain instant insights that will allow you to prioritize and communicate your highest risks with Cyral.