Threat modeling is the structured process of identifying threats to a system, understanding their likelihood and impact, and determining how to address them. It answers the fundamental question: 'What can go wrong with this system, and what should we do about it?'
Without threat modeling, security efforts become reactive—fixing vulnerabilities after they're exploited rather than designing them out. With threat modeling, organizations can make informed decisions about where to invest security resources, what risks to accept, and how to architect systems that resist attack from the start.
By the end of this page, you will understand the threat modeling process, major methodologies (STRIDE, PASTA, Attack Trees, DREAD), how to create and maintain threat models, and how to integrate threat modeling into development and operations processes.
Threat modeling exists because we cannot secure everything equally. Resources are finite, and not all threats are equally likely or impactful. Without structured analysis, security efforts become either scattered (trying to address everything) or misallocated (focusing on theoretical risks while ignoring practical ones).
The core value propositions: focused use of finite security resources, informed architecture decisions made early, and explicit, documented choices about which risks to accept.
When to threat model: during initial design, whenever the architecture or attack surface changes significantly, and periodically over the life of critical systems.
The cost of not modeling: organizations that skip threat modeling typically end up with reactive, after-the-fact security work, expensive fixes late in the lifecycle, and effort spent on theoretical risks while practical ones go unaddressed.
Security economics strongly favor early intervention. Studies consistently show that fixing security flaws in design costs 10-100x less than fixing them in production. Threat modeling is the primary mechanism for 'shifting left'—moving security consideration to the earliest possible development phase where corrections are cheapest.
While specific methodologies vary, all threat modeling follows a common high-level process. Adam Shostack (Microsoft, author of Threat Modeling: Designing for Security) distills it to four questions: What are we working on? What can go wrong? What are we going to do about it? Did we do a good job?
The table below expands those questions into five phases; let's explore each in detail:
| Phase | Key Question | Activities | Outputs |
|---|---|---|---|
| 1. Decomposition | What are we working on? | Create data flow diagrams, identify trust boundaries, list assets, document entry points | System model, DFD, asset inventory |
| 2. Threat Identification | What can go wrong? | Apply methodology (STRIDE, etc.), enumerate threats per element, consider attacker motivation | Threat list with descriptions |
| 3. Risk Assessment | How bad is this? | Rate likelihood and impact, calculate risk scores, prioritize threats | Prioritized threat ranking |
| 4. Mitigation Planning | What do we do about it? | Design countermeasures, accept/mitigate/transfer decisions, document residual risk | Mitigation plan, accepted risks |
| 5. Validation | Did we do a good job? | Review completeness, test mitigations, update model | Validated threat model, testing results |
Phase 1: System Decomposition
Before identifying threats, you must understand the system. Common techniques:
Data Flow Diagrams (DFDs): map how data moves through the system between external entities, processes, and data stores; standard notation is summarized in the note below.
Trust Boundary Identification: trust boundaries mark where security assumptions change. Most threats occur at trust boundaries, where data crosses from one trust zone to another.
Asset Inventory: list what is worth protecting (data, credentials, service availability) so that identified threats can be tied to concrete assets.
Standard DFD notation: External entities (rectangles, represent users/external systems), Processes (circles, represent code/logic), Data stores (parallel lines, represent databases/files), Data flows (arrows, show data movement), Trust boundaries (dotted lines, show security domain changes).
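To make decomposition concrete, here is a minimal Python sketch of a DFD represented as data, with a helper that lists the flows crossing trust boundaries (where, as noted above, most threats occur). The Element and DataFlow classes, field names, and example zones are illustrative assumptions, not any particular tool's API.

```python
# A minimal sketch of representing a DFD in code so that trust-boundary
# crossings can be enumerated automatically. Class and field names are
# illustrative, not a standard library or tool API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    kind: str   # "external", "process", or "datastore"
    zone: str   # trust zone, e.g. "internet", "dmz", "internal"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    target: Element
    data: str

def boundary_crossings(flows: list[DataFlow]) -> list[DataFlow]:
    """Return flows whose endpoints sit in different trust zones."""
    return [f for f in flows if f.source.zone != f.target.zone]

# Example: a browser talking to a web app backed by a database.
browser = Element("Browser", "external", "internet")
webapp = Element("Web App", "process", "dmz")
db = Element("User DB", "datastore", "internal")

flows = [
    DataFlow(browser, webapp, "login credentials"),
    DataFlow(webapp, db, "SQL queries"),
    DataFlow(db, webapp, "user records"),
]

for f in boundary_crossings(flows):
    print(f"Trust boundary crossed: {f.source.name} -> {f.target.name} ({f.data})")
```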
STRIDE is Microsoft's mnemonic for systematically identifying threats. Each letter represents a threat category, and the methodology involves examining each system component for vulnerability to each STRIDE category.
The STRIDE framework in depth:
| Category | Violated Property | Threat Question | Typical Countermeasures |
|---|---|---|---|
| Spoofing | Authentication | Can an attacker pretend to be something/someone else? | Authentication (passwords, MFA, certificates), mutual TLS, token validation |
| Tampering | Integrity | Can data/code be modified without detection? | Hashing, digital signatures, input validation, access controls, integrity monitoring |
| Repudiation | Non-repudiation | Can actions be denied/hidden? | Audit logging, tamper-evident logs, digital signatures, witness records |
| Information Disclosure | Confidentiality | Can sensitive information be exposed? | Encryption (transit/rest), access controls, output filtering, error handling |
| Denial of Service | Availability | Can legitimate access be prevented? | Rate limiting, resource quotas, redundancy, input validation, caching |
| Elevation of Privilege | Authorization | Can attackers gain unauthorized capabilities? | Least privilege, authorization checks, security boundaries, sandboxing |
Applying STRIDE Per Element:
For each DFD element, systematically ask STRIDE questions:
External Entities: Can they be spoofed? Can they repudiate actions they performed?
Processes: All six categories apply. Can the process be impersonated, have its code or inputs tampered with, leak information, be overwhelmed, or be abused to gain extra privileges?
Data Stores: Can stored data be tampered with or read by unauthorized parties? Can the store be made unavailable?
Data Flows: Can data in transit be modified or intercepted? Can the channel be flooded or cut off?
Trust Boundaries: What assumptions change at this boundary, and what happens if data crossing it is hostile?
Not all STRIDE categories apply equally to all DFD elements. Processes are vulnerable to all six categories. Data stores primarily face Tampering, Information Disclosure, and DoS. Data flows primarily face Tampering, Information Disclosure, and DoS. External entities primarily face Spoofing and Repudiation. This mapping helps focus analysis efficiently.
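That mapping can be captured as data and turned into a checklist generator. The sketch below mirrors the mapping described above; the APPLICABLE table, element kinds, and the candidate_threats helper are assumed names for illustration only.

```python
# A small sketch of STRIDE-per-element as a checklist generator.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

# Which STRIDE categories typically apply to which DFD element type.
APPLICABLE = {
    "process":   ["S", "T", "R", "I", "D", "E"],  # processes face all six
    "datastore": ["T", "I", "D"],
    "dataflow":  ["T", "I", "D"],
    "external":  ["S", "R"],
}

def candidate_threats(element_name: str, element_kind: str) -> list[str]:
    """List the STRIDE questions worth asking for a given DFD element."""
    return [f"{element_name}: check {STRIDE[c]}" for c in APPLICABLE[element_kind]]

for line in candidate_threats("User DB", "datastore"):
    print(line)
# User DB: check Tampering
# User DB: check Information Disclosure
# User DB: check Denial of Service
```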
PASTA (Process for Attack Simulation and Threat Analysis) is a risk-centric threat modeling methodology that emphasizes business impact and attacker perspective. Unlike STRIDE's focus on threat categories, PASTA builds from business objectives down to technical vulnerabilities.
The seven stages of PASTA:
| Stage | Focus | Key Activities | Outputs |
|---|---|---|---|
| 1. Define Objectives | Business context | Identify business objectives, security requirements, compliance needs | Business risk profile, security requirements document |
| 2. Define Technical Scope | System understanding | Document architecture, data flows, technologies, dependencies | Technical architecture model, dependency maps |
| 3. Application Decomposition | Deep technical analysis | Identify components, trust levels, entry points, data assets | DFDs, asset inventory, trust boundary map |
| 4. Threat Analysis | Threat landscape | Research relevant threats, attacker profiles, threat intelligence | Threat library, attacker profiles, attack scenarios |
| 5. Vulnerability Analysis | Weakness identification | Identify existing vulnerabilities, weaknesses, attack surface gaps | Vulnerability map, weakness report |
| 6. Attack Modeling | Attacker simulation | Create attack trees, map threat-vulnerability intersections, model attack paths | Attack trees, probability analysis, exploitability assessment |
| 7. Risk/Impact Analysis | Business impact | Calculate risk scores, business impact, prioritize mitigations | Prioritized risk catalog, mitigation recommendations |
PASTA vs. STRIDE:
| Dimension | STRIDE | PASTA |
|---|---|---|
| Focus | Threat categorization | Risk-based, business-aligned |
| Approach | Per-element analysis | Holistic, seven-stage process |
| Effort | Lighter-weight | More comprehensive |
| Output | Threat list | Prioritized risk catalog |
| Best For | Quick design review | Detailed security analysis, compliance |
When to use PASTA: when business impact must drive prioritization, when compliance or executive reporting requires risk-based outputs, or when a high-value, complex application warrants in-depth analysis.
PASTA's comprehensiveness makes it more time-consuming than STRIDE, but produces more actionable risk-prioritized outputs.
No single methodology is best for all situations. STRIDE is excellent for design reviews and quick analysis. PASTA provides comprehensive risk assessment. Attack trees excel at modeling specific attack scenarios. Many organizations use multiple methodologies for different contexts—lighter methods for ongoing reviews, deeper methods for critical systems.
Attack trees are hierarchical diagrams that represent how attackers might achieve specific malicious goals. Developed by Bruce Schneier, attack trees model attacker objectives as root nodes, with child nodes representing different ways to achieve parent goals.
Attack tree structure: the root node is the attacker's ultimate goal; each child node is a way of achieving its parent (alternative paths are OR branches, jointly required steps are AND branches); leaf nodes are concrete attack actions.
Example Attack Tree: Steal Customer Data
Steal Customer Data (ROOT)
+-- SQL Injection (leaf: exploit input validation flaw)
+-- Compromise Admin Account
|   +-- Credential Stuffing (leaf: reuse leaked credentials)
|   +-- Exploit Vulnerability (leaf: 0-day or unpatched flaw)
+-- Social Engineering
|   +-- Phish Support Staff
|       +-- Get Credentials (leaf)
|       +-- Trick Victim (leaf)
+-- Insider Threat (leaf: malicious employee)
Annotating Attack Trees:
Attack trees become more useful when annotated with:
| Attack Path | Cost | Skill | Detectability | Probability |
|---|---|---|---|---|
| SQL Injection → Data Extraction | Low ($) | Medium | Low (logged but ignored) | High (if no input validation) |
| Phishing → Admin Creds → Data Access | Medium ($$) | Medium | Medium (some email filtering) | Medium (MFA may block) |
| Insider Exfiltration | Low ($) | Low | Medium (DLP may detect) | Variable (depends on controls) |
| Zero-day Exploitation | Very High ($$$$$) | Very High | Low (no signatures) | Low (expensive, rare capability) |
Attack trees excel at comparing attack paths. By annotating cost, skill, and probability, you can identify which paths attackers will prefer (low cost, low skill, high success) and prioritize defenses accordingly. The 'obvious' technical exploit may be less likely than a simple social engineering attack if the human path is cheaper and more reliable.
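The comparison can even be automated once the tree is data. Below is a rough Python sketch, using assumed node names and dollar costs, that computes the cheapest path to the root goal, treating OR nodes as "any child suffices" and AND nodes as "all children required".

```python
# A rough sketch of annotating an attack tree with costs and computing the
# cheapest route to the root goal. Node names and costs are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "OR"               # "OR": any child suffices; "AND": all children needed
    cost: float = 0.0              # cost of a leaf attack step (ignored for inner nodes)
    children: list["Node"] = field(default_factory=list)

def cheapest(node: Node) -> float:
    """Minimum attacker cost to achieve this node's goal."""
    if not node.children:
        return node.cost
    child_costs = [cheapest(c) for c in node.children]
    return min(child_costs) if node.gate == "OR" else sum(child_costs)

root = Node("Steal Customer Data", children=[
    Node("SQL Injection", cost=1_000),
    Node("Compromise Admin", children=[
        Node("Credential Stuffing", cost=500),
        Node("Exploit 0-day", cost=100_000),
    ]),
    Node("Phish Support Staff", cost=2_000),
])

print(f"Cheapest attack path costs ~${cheapest(root):,.0f}")  # credential stuffing wins
```

The same annotation style extends to skill or detectability by swapping the cost field for whichever attribute you want to minimize or maximize.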
Once threats are identified, they must be prioritized. DREAD is Microsoft's risk ranking methodology that scores threats across five dimensions to produce a prioritized threat list.
The DREAD categories:
| Category | Question | High Score (3) | Medium Score (2) | Low Score (1) |
|---|---|---|---|---|
| Damage | How bad if exploited? | Complete system compromise, data breach | Individual data exposure | Minor data corruption |
| Reproducibility | How easy to repeat? | Always works, no special conditions | Works under specific conditions | Difficult to reproduce |
| Exploitability | How much skill/effort? | Script kiddie, automated tools | Skilled attacker, some effort | Expert attacker, significant effort |
| Affected Users | How many impacted? | All users, entire organization | Subset of users, one department | Individual user, edge case |
| Discoverability | How easy to find? | Publicly known, obvious | Requires effort to discover | Very obscure, unlikely to find |
Calculating DREAD Scores:
Each dimension is rated 1-3 (or 0-10 in some variants). The final score is the sum or average of all dimensions.
Example DREAD Calculation:
Threat: SQL Injection in Login Form
| Category | Rating | Justification |
|---|---|---|
| Damage | 3 | Could expose entire user database |
| Reproducibility | 3 | Works reliably once injection found |
| Exploitability | 2 | Requires some SQL knowledge, automated tools exist |
| Affected Users | 3 | All users have credentials stored |
| Discoverability | 2 | Not advertised, but standard attack surface |
DREAD Score: (3 + 3 + 2 + 3 + 2) / 5 = 2.6 out of a maximum of 3 = HIGH PRIORITY
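The arithmetic is simple enough to script. The sketch below reproduces the calculation above as an average over five 1-3 ratings; the priority thresholds (2.5 for high, 1.5 for medium) are illustrative assumptions, not part of DREAD itself.

```python
# A minimal sketch of the DREAD arithmetic: average of five 1-3 ratings,
# bucketed into a priority label. Thresholds are illustrative.
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    ratings = [damage, reproducibility, exploitability, affected_users, discoverability]
    return sum(ratings) / len(ratings)

def priority(score: float) -> str:
    if score >= 2.5:
        return "HIGH"
    if score >= 1.5:
        return "MEDIUM"
    return "LOW"

score = dread_score(damage=3, reproducibility=3, exploitability=2,
                    affected_users=3, discoverability=2)
print(f"DREAD score: {score:.1f} of 3 -> {priority(score)} priority")  # 2.6 of 3 -> HIGH
```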
Criticisms of DREAD:
DREAD has been criticized, and eventually dropped by Microsoft, largely because its ratings are subjective and different assessors produce inconsistent scores for the same threat.
Many organizations now use simpler likelihood × impact matrices or adopt CVSS (Common Vulnerability Scoring System) for standardization.
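For comparison, a likelihood × impact matrix can be as small as the sketch below; the level names, weights, and rating buckets are illustrative assumptions rather than any standard.

```python
# A sketch of a simple likelihood x impact rating: qualitative levels
# mapped to numbers, multiplied, then bucketed. Thresholds are illustrative.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "critical"
    if score >= 3:
        return "moderate"
    return "minor"

print(risk_rating("high", "high"))    # critical (score 9)
print(risk_rating("low", "medium"))   # minor (score 2)
```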
Many security practitioners argue that Discoverability should always be rated high (assume attackers will find it) or removed entirely. Rating something as 'hard to discover' encourages security through obscurity, which is not a reliable defense. If a vulnerability exists, assume motivated attackers will discover it.
After identifying and prioritizing threats, you must decide how to address them. There are four fundamental strategies for responding to identified risks:
The risk response options:
| Strategy | Description | When to Use | Example |
|---|---|---|---|
| Mitigate | Implement controls to reduce likelihood or impact | When controls are cost-effective relative to risk | Add input validation to prevent SQL injection |
| Accept | Acknowledge risk and take no action | When cost exceeds benefit, very low probability | Accept minor DOS risk during non-critical hours |
| Transfer | Shift risk to another party | When insurance/outsourcing is cost-effective | Cyber insurance for breach costs, WAF service |
| Avoid | Eliminate the risky element entirely | When risk is unacceptable and can be eliminated | Remove feature that creates unjustifiable risk |
STRIDE-Based Mitigation Patterns:
Each STRIDE category has established mitigation patterns:
Spoofing Mitigations: strong authentication (passwords with MFA, certificates), mutual TLS, token validation.
Tampering Mitigations: hashing and digital signatures, input validation, access controls, integrity monitoring.
Repudiation Mitigations: audit logging, tamper-evident logs, digital signatures on critical actions.
Information Disclosure Mitigations: encryption in transit and at rest, access controls, output filtering, careful error handling.
Denial of Service Mitigations: rate limiting, resource quotas, redundancy, input validation, caching.
Elevation of Privilege Mitigations: least privilege, explicit authorization checks, security boundaries, sandboxing.
Never rely on a single mitigation. For critical threats, implement multiple overlapping controls (defense in depth). If SQL injection prevention fails at input validation, prepared statements provide a second layer. If authentication is bypassed, authorization checks are backup. Layered defenses ensure single control failures don't result in compromise.
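As a concrete illustration of that layering, the sketch below pairs input validation with a parameterized query so that either control alone still blocks SQL injection. The schema, validation rule, and function name are assumptions made for the example.

```python
# A brief sketch of layered SQL-injection defenses: input validation as the
# first layer, a parameterized query as the second.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def lookup_email(username: str) -> str | None:
    # Layer 1: input validation; reject anything that isn't a plausible username.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,32}", username):
        raise ValueError("invalid username")
    # Layer 2: parameterized query; user input never becomes SQL syntax,
    # so even if validation were bypassed the injection still fails.
    row = conn.execute(
        "SELECT email FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

print(lookup_email("alice"))                      # alice@example.com
# lookup_email("alice' OR '1'='1")  -> raises ValueError at layer 1
```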
A threat model is only valuable if it's documented, accessible, and maintained. Poor documentation means the analysis is lost when team members leave, cannot be audited for compliance, and won't be updated as systems change.
Essential threat model documentation: the system model (DFDs and trust boundaries), the threat list with risk ratings, mitigation decisions including explicitly accepted risks, and the assumptions the analysis depends on.
Threat Tracking Integration:
Threats should integrate with existing workflow systems: track identified threats and their mitigations in the same backlog and issue trackers the team already uses, so they are scheduled, assigned, and closed like any other work.
Living Document Concept:
Threat models are living documents, not one-time exercises: revisit them when the architecture changes, when new threats or incidents emerge, and on a scheduled review cadence.
Without maintenance, threat models become security theater—impressive documents that don't reflect reality.
Several tools assist threat modeling: Microsoft Threat Modeling Tool (free, STRIDE-based), OWASP Threat Dragon (open-source), IriusRisk (commercial, automated), draw.io / Lucidchart (for DFD creation). Tools help with diagram creation, threat enumeration, and report generation, but don't replace security expertise—they're aids to human analysis.
Traditional threat modeling—comprehensive, document-heavy, conducted at design time—conflicts with agile development's iterative, fast-moving nature. Agile threat modeling adapts threat modeling to fit sprint-based development without sacrificing security rigor.
Challenges with traditional approaches in agile include heavyweight documentation, analysis that lags fast-changing designs, and reviews gated at phase boundaries that no longer exist. Agile adaptations address these with the strategies below:
| Strategy | Description | When to Apply |
|---|---|---|
| Story-level threat analysis | Include threat thinking in story definition; 'as an attacker...' stories | Every story with security implications |
| Incremental modeling | Update threat model incrementally as features are added | Each sprint that changes attack surface |
| Sprint security reviews | Brief security review of sprint deliverables | End of each sprint |
| Continuous threat model | Maintain living threat model updated with code | Ongoing throughout product lifecycle |
| Architecture runway | Do deeper analysis for significant architectural decisions | When architecture changes |
| Threat modeling cards | Quick threat identification using card-based prompts | Story grooming, design discussions |
Embedding Threat Thinking in Development:
During Backlog Grooming: flag stories with security implications and add 'as an attacker...' abuse stories alongside user stories.
During Sprint Planning: budget time for threat analysis of stories that change the attack surface.
During Development: update the threat model incrementally as designs firm up; use threat modeling cards or STRIDE prompts in design discussions.
During Review: include a brief security review of sprint deliverables and verify that planned mitigations were actually implemented.
Threat Modeling Manifesto:
The threat modeling community has articulated its core values in the manifesto: a culture of finding and fixing design issues over checkbox compliance; people and collaboration over processes, methodologies, and tools; a journey of understanding over a security or privacy snapshot; doing threat modeling over talking about it; and continuous refinement over a single delivery.
The best threat modeling happens when developers do it themselves with security team guidance, rather than security teams analyzing systems they don't deeply understand. Train developers in basic threat modeling; they know their systems best. Security teams provide frameworks, review results, and handle complex analysis—but developers should drive day-to-day threat thinking.
We've explored threat modeling—the structured approach to identifying, assessing, and addressing security threats. This discipline transforms security from reactive firefighting to proactive architecture, enabling organizations to build security in rather than bolt it on.
What's Next:
With threat modeling understood, we'll conclude this module with Risk Assessment—the quantitative and qualitative methods for evaluating risk to inform business decisions about security investments. Risk assessment takes threat model outputs and translates them into business terms.
You now understand how to systematically identify and analyze threats to systems. Threat modeling is a skill that improves with practice—start with STRIDE for quick analysis, use PASTA for comprehensive assessment, and integrate threat thinking into your development process. The goal isn't perfect threat models; it's better security through structured thinking.