
Introduction
Threat modeling is a structured approach for identifying, analyzing, and mitigating potential security threats. At a high level, it involves anticipating how adversaries might attack a system and designing defenses to prevent or reduce the impact of those attacks. Over the decades, threat modeling has evolved from informal military intelligence practices into formalized frameworks and methodologies used across industries. This timeline traces the history of threat modeling from its origins in the 1940s U.S. military to the latest approaches like the Meta Attack Language (MAL), highlighting key methodologies, industry adoption trends, and milestones along the way.
Methodologies and Frameworks
1940s–1960s: Military Origins – The concept of threat modeling can be traced back to military strategy in the mid-20th century. During and after World War II, the U.S. military began systematically analyzing potential threats (such as enemy aircraft or missiles) to inform defense planning. For example, by the Cold War era, the U.S. Army and other agencies were performing ballistic missile threat modeling – assessing how enemy missiles might penetrate defenses and what countermeasures were needed (The Origins of Threat Modeling: Cyberattack- New Warfare Form). This “operational design” approach in the military involved continuous threat assessment and intelligence gathering, with virtually all personnel reporting threat data as part of their duties (The Origins of Threat Modeling: Cyberattack- New Warfare Form). These practices laid the groundwork for viewing security through an adversary’s eyes and informed later civilian methodologies. Notably, the Department of Defense used threat modeling to improve missile defense systems by identifying likely attack paths and weaknesses (The Origins of Threat Modeling: Cyberattack- New Warfare Form).
1990s: Early Formalization and Attack Trees – As computer systems proliferated, the need for formal threat analysis grew. In 1999, cybersecurity expert Bruce Schneier introduced the concept of Attack Trees, a model where security threats are structured in a tree hierarchy to map out how an adversary could achieve a particular goal (Attack Trees - Schneier on Security). Schneier’s attack tree model (published in Dr. Dobb’s Journal in December 1999) provided a systematic way to enumerate attack paths and is considered one of the first formal threat modeling techniques in software security. Attack trees influenced many later methodologies and demonstrated the value of visualizing threats. Around the same time, government and industry began developing knowledge bases of attacks. For instance, MITRE Corporation started curating attack patterns and vulnerabilities, laying foundations for future frameworks (though MITRE’s major contributions would formalize in the 2000s).
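Schneier’s model distinguishes OR nodes (any one sub-attack achieves the goal) from AND nodes (all sub-attacks are required), and attaches values such as cost to leaf attacks so properties like the cheapest attack path can be computed over the tree. A minimal sketch, loosely based on his “open the safe” example from the Dr. Dobb’s article (the dollar figures here are illustrative, not Schneier’s exact numbers):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in an attack tree: an attacker goal with optional sub-goals."""
    name: str
    cost: float = 0.0            # cost of a leaf attack step (e.g., attacker effort in $)
    kind: str = "OR"             # "OR": any child suffices; "AND": all children are needed
    children: List["Node"] = field(default_factory=list)

def cheapest(node: Node) -> float:
    """Cheapest cost for the attacker to achieve this node's goal."""
    if not node.children:
        return node.cost
    child_costs = [cheapest(c) for c in node.children]
    return min(child_costs) if node.kind == "OR" else sum(child_costs)

# Root goal: open the safe. Three alternative strategies (OR), one of which
# decomposes further into an AND of two required steps.
root = Node("open safe", kind="OR", children=[
    Node("pick lock", cost=30_000),
    Node("learn combo", kind="OR", children=[
        Node("find written combo", cost=75_000),
        Node("get combo from target", kind="AND", children=[
            Node("eavesdrop on conversation", cost=60_000),
            Node("get target to state combo", cost=20_000),
        ]),
    ]),
    Node("cut open safe", cost=10_000),
])

print(cheapest(root))  # 10000: cutting open the safe is the cheapest path
```

The same traversal can aggregate any attribute (probability, required skill, whether special equipment is needed), which is what made attack trees useful for comparing defenses: raising the cost of the cheapest path is a measurable improvement.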
1999–2002: STRIDE (Microsoft) – One of the most influential threat modeling methodologies, STRIDE, was created at Microsoft. STRIDE (an acronym for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) was developed by Loren Kohnfelder and Praerit Garg in 1999 (Threat Modeling Methodology: STRIDE) as a mnemonic for categorizing threats in software systems. Microsoft embraced STRIDE as part of its Security Development Lifecycle; by 2002 it was officially adopted within Microsoft’s processes (Threat Modeling: A Summary of Available Methods). STRIDE involves creating a model of the system (often a data flow diagram of processes, data stores, and data flows) and systematically reviewing each component against the six STRIDE threat categories (STRIDE model - Wikipedia). Early usage of STRIDE typically produced a document or spreadsheet of threats and recommended mitigations – essentially a paper-based report of security findings. This was a pioneering step in bringing structured threat modeling to the software industry, and Microsoft’s evangelism (including a 2004 book Threat Modeling by Frank Swiderski and Window Snyder) helped popularize the practice. STRIDE remains widely taught; its strength is providing a checklist of common threat types to “answer the question: what can go wrong in this system?” (STRIDE model - Wikipedia). However, STRIDE’s process could be time-consuming and heavily reliant on the modeler’s thoroughness, and as systems grew, it highlighted the need for more automated support (Threat Modeling: A Summary of Available Methods).
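The mechanical core of a STRIDE review – walk every diagram element, consider every applicable category – is simple enough to sketch. The element-to-category mapping below is a simplified assumption (Microsoft’s “STRIDE-per-element” guidance is more nuanced; for instance, data stores holding audit logs are also subject to Repudiation), and the example DFD elements are invented:

```python
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information Disclosure", "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

# Simplified STRIDE-per-element mapping: which categories typically apply
# to each kind of data-flow-diagram element. Processes face all six.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_flow": "TID",
    "data_store": "TID",
}

def enumerate_threats(elements):
    """Yield one (element, threat category) pair per applicable category."""
    for name, kind in elements:
        for letter in APPLICABLE[kind]:
            yield name, STRIDE[letter]

# A toy data flow diagram for a login feature.
dfd = [("User", "external_entity"),
       ("Web App", "process"),
       ("User DB", "data_store"),
       ("Login request", "data_flow")]

for element, threat in enumerate_threats(dfd):
    print(f"{element}: consider {threat}")
```

Each emitted pair is a prompt for the modeler (“could the Login request be tampered with in transit?”), which is why STRIDE’s quality depends so heavily on the reviewer’s thoroughness: the checklist generates the questions, not the answers.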
2003–2005: OCTAVE (SEI/CERT) – In the early 2000s, academia and government contributed new frameworks. Carnegie Mellon’s Software Engineering Institute (CERT division) introduced OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) in 2003 (refined in 2005) (Threat Modeling: A Summary of Available Methods). OCTAVE is a risk-based assessment methodology that guides organizations in evaluating their operational risks, assets, and security practices. Unlike STRIDE (which focuses on technical design threats), OCTAVE takes a broader organizational view – it’s a strategic threat modeling and risk evaluation approach. The method has three phases: (1) identify important assets and threat profiles, (2) identify infrastructure vulnerabilities, and (3) develop security strategy and plans (Threat Modeling: A Summary of Available Methods). The emphasis is on assessing how security threats could impact critical business assets and making risk-based decisions. OCTAVE was primarily designed for enterprise use (with a later variant for smaller orgs) and produced extensive documentation (Threat Modeling: A Summary of Available Methods). While comprehensive, it was often considered labor-intensive. Nonetheless, it represented a shift toward integrating threat modeling with risk management and planning at an organizational level.
Mid-2000s: Emergence of Quantitative Models (FAIR) – As security risk management matured, frameworks for quantifying risk appeared. A notable example is FAIR (Factor Analysis of Information Risk), first developed by Jack Jones around 2005 (What is Factor Analysis of Information Risk (FAIR)?). FAIR is not a traditional threat modeling method in terms of diagramming attacks; rather, it’s a model for analyzing the probability and impact of risks in financial terms. However, it complements threat modeling by providing a standard way to evaluate and compare the risks associated with identified threats. Jones’s work on FAIR (later published in the book Measuring and Managing Information Risk) became influential in enterprise cybersecurity – it was adopted as an international standard by The Open Group. The first draft of the FAIR framework was formed in 2005 (What is Factor Analysis of Information Risk (FAIR)?), introducing a taxonomy for factors like threat event frequency, vulnerability, and impact. Many organizations now speak in exactly these terms of frequency and impact, and certifications such as the CISSP build a large part of their risk curriculum around the same concept. FAIR’s evolution in the late 2000s reflects an industry trend of treating security threats in business terms, allowing organizations to prioritize threats based on quantified risk. This methodology indicated a shift from purely qualitative threat lists to data-driven decision making.
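FAIR’s core arithmetic is straightforward: annualized loss exposure is loss event frequency multiplied by loss magnitude, with each factor estimated as a range or distribution rather than a point value, then combined by simulation. A rough Monte Carlo sketch of that idea – the uniform ranges and the scenario are illustrative stand-ins for the calibrated distributions FAIR practitioners actually elicit:

```python
import random

def simulate_ale(freq_min, freq_max, loss_min, loss_max,
                 trials=100_000, seed=42):
    """FAIR-style Monte Carlo: sample loss event frequency (events/year)
    and loss magnitude ($/event), and return the mean annualized loss
    exposure across all trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        frequency = rng.uniform(freq_min, freq_max)  # threat event frequency x vulnerability
        magnitude = rng.uniform(loss_min, loss_max)  # primary + secondary loss per event
        total += frequency * magnitude
    return total / trials

# Hypothetical scenario: phishing-driven credential theft,
# estimated at 0.5-4 events/year costing $50k-$250k each.
ale = simulate_ale(freq_min=0.5, freq_max=4.0,
                   loss_min=50_000, loss_max=250_000)
print(f"Estimated annualized loss exposure: ${ale:,.0f}")
```

Expressing the result in dollars per year, rather than a red/yellow/green rating, is precisely what lets a CISO compare a threat-model finding against the cost of mitigating it.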
2007: MITRE CAPEC – In 2007, MITRE released the Common Attack Pattern Enumeration and Classification (CAPEC) database (CAPEC - About CAPEC). CAPEC is a comprehensive catalog of known attack patterns, essentially a library of “how attacks are executed” across various domains (web, software, ICS, etc.). While CAPEC is not a threat modeling process by itself, it provided an important resource to bolster threat modeling exercises – practitioners could consult CAPEC to ensure they weren’t overlooking common attacks relevant to their system. The creation of CAPEC (sponsored by the U.S. DHS in the Software Assurance Program) signaled the move toward standardized knowledge bases for threats. It enabled a more engineering-integrated approach: threat modeling tools and processes could leverage CAPEC entries as a reference for likely threats (CAPEC - About CAPEC). Along with the Common Weakness Enumeration (CWE) for vulnerabilities, CAPEC helped organizations move beyond ad-hoc brainstorming to more systematic coverage of attack vectors.
2012: PASTA (Risk-Centric Approach) – The PASTA methodology (Process for Attack Simulation and Threat Analysis) was introduced in 2012 by Tony UcedaVélez and colleagues (Threat Modeling: A Summary of Available Methods). PASTA is a seven-stage methodology that is attacker-centric and risk-focused. It explicitly links business objectives to technical threats, attempting to bridge the gap between developers, security teams, and business stakeholders (Threat Modeling: A Summary of Available Methods). The stages of PASTA include defining business objectives and security requirements, defining the technical scope, application decomposition, threat analysis, vulnerability analysis, attack modeling, and risk/impact analysis (Threat Modeling: A Summary of Available Methods) (see Figure 2 in UcedaVélez’s materials). PASTA’s novelty was in combining elements of traditional threat modeling (like identifying threats via use and abuse cases, similar to STRIDE) with risk assessment and attack simulation. It encourages continuous assessment akin to military intelligence cycles (The Origins of Threat Modeling: Cyberattack- New Warfare Form) (The Origins of Threat Modeling: Cyberattack- New Warfare Form), meaning threat modeling isn’t a one-time checklist but an iterative process that adapts as threats evolve. By incorporating an attacker’s perspective (threat intelligence, TTPs) and aligning with business impact, PASTA represented a maturity step: from purely technical analysis to risk management integration. The authors provided rich documentation and case studies (including via OWASP in 2012 and a 2015 Wiley book) to help practitioners adopt PASTA (Threat Modeling: A Summary of Available Methods). This method exemplified the industry’s recognition that effective threat modeling must consider who might attack and why, not just how.
2013: MITRE ATT&CK – MITRE’s next major contribution came in 2013 with the development of MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge). Released publicly in 2013 (ATT&CK - Wikipedia), the ATT&CK framework is a curated knowledge base of adversary tactics and techniques, structured as a matrix. Unlike CAPEC’s focus on generic attack patterns, ATT&CK is organized by stages of an attack lifecycle (reconnaissance, initial access, execution, persistence, etc.) and lists specific techniques used by threat actors at each stage. For example, tactics like Privilege Escalation or Command-and-Control are columns in the matrix with numerous techniques under each (ATT&CK - Wikipedia). ATT&CK was originally based on real-world observations of advanced persistent threats targeting enterprise Windows environments (How Has MITRE ATT&CK Improved, Adapted, and Evolved?) (Frequently Asked Questions - MITRE ATT&CK®), and it expanded to cover Linux, Mac, cloud, mobile, and ICS domains. While ATT&CK is not a threat modeling method per se, it has become a cornerstone of modern threat modeling practices – essentially providing an exhaustive checklist of possible attacker actions. Security teams use ATT&CK to identify which known techniques their system is susceptible to and to map out attack scenarios. The rise of ATT&CK in the mid-2010s marks the evolution of threat modeling outputs from static documents to dynamic, engineering-integrated artifacts. Teams now routinely map their threat models to the MITRE ATT&CK matrix to ensure coverage of known TTPs and to facilitate communication using a common industry language. In other words, a threat model in 2020 might include references like “these threats correspond to ATT&CK techniques T1055 and T1190,” linking design analysis to a living knowledge base of threats and mitigations. 
ATT&CK’s popularity also spurred tooling (like the ATT&CK Navigator and various automated mapping tools), further integrating threat modeling into day-to-day security engineering.
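In practice, this mapping can be as lightweight as tagging each threat-model entry with technique IDs and querying for coverage gaps. A hypothetical sketch – the threats, mitigation flags, and helper function are invented for illustration, though T1190 (Exploit Public-Facing Application), T1055 (Process Injection), and T1078 (Valid Accounts) are real ATT&CK technique IDs:

```python
# Threat-model entries tagged with MITRE ATT&CK technique IDs.
threat_model = [
    {"threat": "SQL injection against the public API",
     "attack_ids": ["T1190"], "mitigated": True},
    {"threat": "Malware injects into the payment service process",
     "attack_ids": ["T1055"], "mitigated": False},
    {"threat": "Stolen admin credentials reused via VPN",
     "attack_ids": ["T1078"], "mitigated": False},
]

def coverage_gaps(model):
    """Return the ATT&CK technique IDs appearing in unmitigated threats."""
    return sorted({tid for entry in model if not entry["mitigated"]
                   for tid in entry["attack_ids"]})

print(coverage_gaps(threat_model))  # ['T1055', 'T1078']
```

Because the IDs are shared industry-wide, the same tags can drive detection engineering (which techniques do our rules cover?) and red-team planning, which is what turned ATT&CK-mapped threat models into living artifacts rather than static documents.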
2014: LINDDUN (Privacy Threat Modeling) – As privacy concerns grew, specialized threat modeling methodologies emerged. LINDDUN is a framework tailored for privacy threats, developed by researchers at KU Leuven (Belgium). The name LINDDUN is an acronym for seven privacy threat categories: Linking, Identification, Non-repudiation, Detectability, Disclosure of data, Unawareness, and Non-compliance (LINDDUN Threat Modeling - Threat-Modeling.com). First published around 2014 by Kim Wuyts, Riccardo Scandariato, and others, LINDDUN adapts the STRIDE approach but for privacy concerns instead of security. It provides a systematic procedure to elicit privacy requirements and identify where personal data could be misused (Empirical Evaluation of a Privacy-Focused Threat Modeling ...). The methodology involves creating a data flow diagram of the system and then iterating through each element to consider how privacy threats (like data linkability or detectability) might arise (Threat Modeling: A Summary of Available Methods). LINDDUN offers extensive supporting material, including privacy threat trees and mitigation strategies for each category (LINDDUN Threat Modeling - Threat-Modeling.com). This was an important milestone showing the expansion of threat modeling beyond traditional security (confidentiality/integrity/availability) into privacy engineering. By mid-2010s, organizations handling sensitive personal data (healthcare apps, social networks, etc.) started to adopt LINDDUN to ensure compliance with privacy principles and regulations. LINDDUN’s evolution (including lighter versions like LINDDUN GO in 2020) underscores the versatility of threat modeling – it can be tailored to specific domains like privacy, safety, or fraud by adopting different threat taxonomies.
2010s: OWASP and Community Methodologies – The 2010s also saw community-driven efforts to make threat modeling more accessible. The Open Web Application Security Project (OWASP) began incorporating threat modeling into its guidance and tools. For example, OWASP’s Application Security Verification Standard (ASVS) added requirements for architectural risk analysis and threat modeling for high-level certifications. In 2017, OWASP launched Threat Dragon, a free open-source threat modeling tool with a friendly GUI for drawing diagrams and auto-generating threats. This tool, much like Microsoft’s Threat Modeling Tool, aimed to bring threat modeling into the workflow of developers and DevOps teams in an interactive way (Shostack + Associates > Shostack + Friends Blog > Threat Modeling Tooling from 2017). By using such tools, the output of threat models became less about Word documents and more about living models stored in code repositories. The OWASP community also produced the Threat Modeling Cheat Sheet and education materials to spread best practices. Another notable community milestone was the publication of the Threat Modeling Manifesto in 2020 (co-authored by experts including Adam Shostack), capturing core values and principles to guide practitioners (History of Threat modeling). All these efforts reflect a cultural shift: threat modeling became recognized as essential for secure design. In fact, OWASP’s Top Ten 2021 list explicitly included “Insecure Design” as a category, calling out the need for more threat modeling and secure design patterns – “if we genuinely want to ‘move left’ as an industry, we need more threat modeling... An insecure design cannot be fixed by a perfect implementation” (History of Threat modeling). This was a significant endorsement of threat modeling from the broader security community.
2018: Meta Attack Language (MAL) – The latest milestone in threat modeling’s evolution is the development of Meta Attack Language (MAL). Introduced by researchers at KTH Royal Institute of Technology in 2018, MAL is a framework for designing domain-specific threat modeling languages (MAL (the Meta Attack Language) | KTH). The idea behind MAL is to enable semi-automated generation of attack graphs tailored to specific domains (such as cloud infrastructure, automotive systems, IoT, SCADA, etc.) (MAL (the Meta Attack Language) | KTH). Instead of manually creating threat models for each new system, security engineers can use MAL to define a reusable “attack logic” for a domain – essentially a meta-model of how attacks progress in that environment. For example, one could create a MAL-based language for cloud systems that encodes knowledge like “if an attacker compromises an EC2 instance, what can they do next?” With that language, an organization can input their specific cloud architecture and automatically generate an attack graph highlighting possible paths and weaknesses. MAL provides a formal syntax and semantics to define asset types, attack steps, and defenses (MAL (the Meta Attack Language) | KTH). When a model is instantiated (e.g., a specific network with certain components), the underlying attack graph can be computed and even simulated to find probabilities and potential impacts. This approach integrates with attack simulation tools (like Foreseeti’s securiCAD). The development of MAL indicates how far threat modeling has come: from manual diagrams to model-driven, automated simulations. By capturing expertise in machine-readable form, MAL allows engineering teams to continually assess threats as systems change, and to share standardized “threat languages” for different industries. In essence, it bridges threat modeling with modern model-based systems engineering.
As an open project (MAL is on GitHub and detailed in academic papers), it represents the cutting-edge of threat modeling research and its practical application.
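The attack-graph computation that MAL-defined languages enable can be sketched in miniature. The following is a toy Python analogue – not MAL’s actual syntax or tooling, and the network model is invented: attack steps are OR nodes (any satisfied parent enables them) or AND nodes (all parents required), and reachability from the attacker’s entry points approximates the attack graph generated for one instantiated system.

```python
# Each attack step maps to (kind, parent steps). "OR": any parent suffices;
# "AND": all parents required. Steps with no parents hold only if the
# attacker starts there (or the instance has that property, e.g. unpatched).
steps = {
    "internet.access":      ("OR",  []),
    "firewall.bypass":      ("OR",  ["internet.access"]),
    "webserver.connect":    ("OR",  ["firewall.bypass"]),
    "webserver.vulnerable": ("OR",  []),   # holds if this instance is unpatched
    "webserver.compromise": ("AND", ["webserver.connect", "webserver.vulnerable"]),
    "database.access":      ("OR",  ["webserver.compromise"]),
}

def reachable(steps, entry_points):
    """Fixed-point computation of all attack steps the attacker can reach."""
    reached = set(entry_points)
    changed = True
    while changed:
        changed = False
        for step, (kind, parents) in steps.items():
            if step in reached or not parents:
                continue
            if kind == "OR":
                ok = any(p in reached for p in parents)
            else:
                ok = all(p in reached for p in parents)
            if ok:
                reached.add(step)
                changed = True
    return reached

paths = reachable(steps, {"internet.access", "webserver.vulnerable"})
print("database.access" in paths)  # True: an attack path reaches the database
```

Remove the `webserver.vulnerable` entry point (i.e., patch the server) and the database step becomes unreachable, which is the kind of what-if a MAL-based simulation answers automatically – with probabilities and time-to-compromise estimates layered on top rather than simple boolean reachability.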
Evolution of Outputs: Through these stages, we see an evolution from primarily paper-based outputs to more integrated ones. Early threat models (e.g., a STRIDE analysis in 2005) often produced a static report or spreadsheet of threats. By the late 2010s, threat modeling results could be integrated back into engineering work items – for instance, linking threats to requirements or user stories, feeding issues into bug trackers, and using frameworks like MITRE ATT&CK to track mitigation coverage. Today’s approaches (such as those supported by MAL or by continuous threat modeling tools) enable a living model that can be queried and updated automatically as the system evolves. This integration into the development lifecycle (sometimes termed “Threat Modeling 2.0”) ensures that threat modeling isn’t a one-off checkbox but a continuous practice, much like automated testing. Modern frameworks also emphasize linking threats to mitigations and security controls. For example, an ATT&CK technique in a threat model can map to specific countermeasures (like particular detection rules or patches), and MAL-based simulations can suggest which defenses most effectively reduce risk (A Flexible Simulation Framework For Modeling Cyber Attacks | HackerNoon). In summary, methodologies have progressed from attacker checklists and manual analysis to comprehensive, tool-supported frameworks that align closely with both defensive measures and business risks.
Industry Adoption Trends
Financial Services: The financial industry was among the earliest civilian adopters of threat modeling. Banks and insurance companies in the 2000s faced regulatory expectations for risk management and started using threat modeling to secure online banking, payment systems, and customer data. For instance, FFIEC (Federal Financial Institutions Examination Council) guidance by mid-2010s suggested incorporating threat modeling techniques (like attack trees) in risk assessments for banking applications ([PDF] Information Security - FFIEC). Financial firms often require threat models as part of their software development lifecycle, especially for high-risk applications (e.g., money transfer systems). Additionally, standards like PCI-DSS (for payment card security) pushed companies to identify and address threats in cardholder data environments. By the 2010s, large financial institutions were investing in automated threat modeling tools and building internal “security architecture review” teams that perform threat modeling on new designs. The trend was driven not only by security concerns (financial gain motivates many threat actors) but also by compliance – regulators wanted to see that banks were proactively evaluating threats. This led to widespread adoption of methodologies like STRIDE in financial app development, FAIR for quantifying risk in dollar terms, and use of MITRE ATT&CK in cyber threat intelligence teams to map out attacker techniques affecting financial services. Today, financial industry CISOs report threat modeling as a foundational practice that informs everything from architecture decisions to incident response playbooks.
Healthcare: The healthcare sector, dealing with sensitive personal health information and life-critical devices, gradually embraced threat modeling under both security and privacy pressures. Hospitals and providers started threat modeling electronic health record systems and medical devices especially after high-profile breaches in the 2010s. Frameworks like HITECH and HIPAA in the US mandated risk assessments, implicitly encouraging threat modeling of systems that handle protected health information. A significant push came from regulators: in 2018, the U.S. FDA issued draft guidance for medical device cybersecurity, explicitly recommending that manufacturers include threat modeling documentation in premarket submissions ([PDF] Cybersecurity in Medical Devices: Quality System Considerations ...). This meant companies developing devices like insulin pumps or pacemakers needed to model potential threats (e.g., an attacker hacking the device) and incorporate mitigations early in design. By 2023, the FDA finalized these guidances, essentially making threat modeling a required practice for medical device approval. Similarly, EU regulations (e.g., the MDR and General Data Protection Regulation (GDPR) for privacy) drove healthcare organizations to conduct Data Protection Impact Assessments, a form of privacy threat modeling, for their systems. Consequently, the healthcare industry has seen a surge in adopting tools and frameworks (some use STRIDE for health IT applications, others use LINDDUN for patient privacy concerns). Industry groups and agencies also published tailored threat modeling playbooks – for example, MITRE developed a “Playbook for Threat Modeling Medical Devices” to help healthcare product teams systematically address threats (Playbook for Threat Modeling Medical Devices | MITRE). 
The net effect is that by the late 2010s, threat modeling became ingrained in healthcare product development and IT risk management, often as a direct response to patient safety and data protection requirements.
Manufacturing & Critical Infrastructure: Industrial and manufacturing sectors, including utilities and energy, historically focused on safety and reliability, but have increasingly applied threat modeling to cybersecurity of operational technology (OT). After incidents like the Stuxnet attack (2010) demonstrated the reality of cyber-physical threats, companies began assessing threats to SCADA systems, factory controllers, and supply chain systems. Regulatory bodies and industry standards spurred this: frameworks like NERC CIP (for power grid security) and IEC 62443 (industrial control system security) require identifying and mitigating cyber threats to critical processes. As a result, manufacturers started to integrate threat modeling in the design of plant control networks and products (for example, an automobile manufacturer threat-modeling the software in a connected car to meet ISO 21434 cybersecurity standard for road vehicles). The concept of attack graphs and MAL has particular resonance in these industries – projects like EnergyShield in the EU have used MAL to develop domain-specific threat modeling languages for power systems (KTH - Energy Shield). Another trend is digital supply chain security: manufacturers work with their suppliers to model threats in the supply chain (e.g., threats to firmware integrity of a component). Government incentives also play a role; for instance, the U.S. NIST’s Cybersecurity Framework (2014) encouraged critical infrastructure operators to identify threats and vulnerabilities as a core function (“Identify” and “Protect”). In practice, manufacturing firms in the late 2010s might run tabletop threat modeling exercises to evaluate how a ransomware attack could impact a factory line and then invest in the necessary network segmentations and incident response plans. 
Thus, threat modeling in this sector has been driven by the need to prevent costly downtime or safety incidents, with regulators viewing it as part of due diligence for critical services.
Software & Tech Industry: The software industry – from big tech companies to startups – has arguably been the most enthusiastic adopter of threat modeling in recent years. Microsoft’s early adoption paved the way, and by the 2010s many tech companies had internal security design review processes modeled on Microsoft’s approach (often using STRIDE or variants). Secure development lifecycle (SDL) programs at companies like Google, Amazon, and Salesforce all include a threat modeling step for new features. The influence of compliance (like ISO 27001, which requires risk assessment) and customer expectations (enterprise clients asking for security architecture reviews) also pushed software providers to formalize threat modeling. Development teams increasingly practice “DevSecOps”, embedding security practices into agile workflows – threat modeling is adapted to be faster and iterative to suit frequent release cycles. This led to innovations like “Threat Modeling as Code,” where system models and abuse cases are stored in version control, and security engineers collaborate with developers much like code reviews. There’s also been growth in tooling: numerous commercial tools (IriusRisk, ThreatModeler, Microsoft's Threat Modeling Tool, etc.) emerged to integrate with design tools and CI/CD pipelines. The market for threat modeling tools has expanded rapidly – one market analysis valued the threat modeling tools market at under $1B in 2023 with an expected growth to over $3B by 2032 (Threat Modeling Tools Market to hit USD 3.37 Billion by), driven by demand in software and cloud security. This reflects that threat modeling is now seen as a necessary step to prevent costly breaches and to secure complex cloud architectures.
Furthermore, technology companies often share and open-source their threat modeling practices; for example, Netflix open-sourced security tooling such as Security Monkey (applying the spirit of its Chaos Monkey resilience exercises to security failure modes), and the open-source community offers projects like OWASP Threat Dragon (mentioned above) and PyTM (a Pythonic threat modeling framework) to lower the entry barrier. By the late 2010s, it became common for even mid-sized organizations to boast about doing threat models — it became a mark of mature security. Additionally, secure design principles propagated via standards like OWASP Top 10 started reaching developers directly, making them more aware of design-level threats and mitigations, and thus more receptive to threat modeling from the outset of projects.
Regulatory Influence: Across all industries, regulatory and standards bodies have significantly influenced the development and adoption of threat modeling. We’ve already noted sector-specific mandates (financial, healthcare, etc.), but there have been broader pushes as well. For instance, in 2016 NIST released Special Publication 800-154 (Guide to Data-Centric System Threat Modeling) (Examples of Threat Modeling That Create Secure Design Patterns), which provided federal agencies and businesses a generic methodology to follow. This NIST guide outlined steps like identifying system architecture, identifying threats, and deciding on responses – effectively validating threat modeling as an expected activity in secure system engineering. In Europe, the GDPR (2018) required Data Protection Impact Assessments for systems processing personal data, which is essentially a privacy-focused threat model; this requirement forced many organizations to do structured analyses of how personal data could be exposed or misused, very much in the spirit of LINDDUN. Governments also incorporated threat modeling in procurement standards – e.g., the U.S. Department of Defense’s STIGs (Security Technical Implementation Guides) and the UK’s NCSC guidance recommend threat modeling critical systems before deployment. Another example is the Security of Critical Infrastructure Act in some countries, pushing operators to assess threats to essential services. Overall, compliance needs transformed threat modeling from an optional best practice into a required component of security governance. Organizations that might have been reluctant due to cost or expertise found themselves needing to adopt threat modeling to satisfy auditors or obtain certifications. 
This had a side effect of growing the ecosystem of training and certification – by the 2020s, you can find courses and certifications specifically on threat modeling (offered by SANS, ISC2, etc.), again showing how regulatory expectations created a market for threat modeling knowledge. In summary, regulatory drivers have ensured that industries like finance, healthcare, and critical infrastructure incorporate threat modeling not just as a one-time project but as part of ongoing operational risk management.
Timeline
Below is a chronological timeline highlighting key events and advancements in the history of threat modeling, from its military origins to the introduction of MAL. This visual timeline underscores how each milestone built upon the previous and how the focus of threat modeling expanded over time:
[Figure: Vulnerability timeline (Wikimedia Commons)] Timeline – Evolution of threats and defenses. Adapted from a RAND report on vulnerability timelines, illustrating the concept of evolving exposure and response in security. In the context of threat modeling, earlier eras had longer “windows of exposure” due to slower threat analysis, whereas modern practices aim to quickly identify and mitigate threats, shrinking the window of opportunity for attackers.
1940s–1950s: U.S. military incorporates systematic threat assessment in war planning (e.g. analyzing enemy bombing threats and later missile threats). Early threat models are intelligence reports and war game scenarios.
1960s: Cold War drives “missile threat modeling” by U.S. Army and NASA – analyzing how incoming threats could be intercepted (The Origins of Threat Modeling: Cyberattack- New Warfare Form). Concepts of continuous threat monitoring and reporting become standard in defense (The Origins of Threat Modeling: Cyberattack- New Warfare Form).
1970s: Security and risk analysis enter the computing realm. The foundations of cybersecurity risk assessment (like the ARPA study of multiplexed computer security in 1970 and the development of safety analysis techniques such as fault trees) influence emerging information security practices.
1980s: Increased academic interest in computer security threats; DoD’s publication of the Orange Book (1985) highlights the need to consider threats in trusted systems. Early forms of software threat analysis start appearing in research (though not yet formalized as “threat modeling”).
1990s: Rise of networked systems brings cybersecurity threats to the forefront. In 1999, Bruce Schneier introduces Attack Trees (Attack Trees - Schneier on Security), providing a formal method to model attacker goals and sub-goals. Late ’90s also see the first uses of threat modeling at Microsoft (leading to STRIDE) and other tech firms. The term “threat modeling” starts entering security team vocabularies.
1999–2002: Microsoft develops and adopts STRIDE methodology for threat modeling software designs (Threat Modeling Methodology: STRIDE) (Threat Modeling: A Summary of Available Methods). Threat modeling becomes a mandatory step in Microsoft’s SDL by 2002, raising awareness industry-wide. Other organizations begin to create internal threat libraries and checklists inspired by STRIDE.
2003: Carnegie Mellon CERT releases OCTAVE methodology (Threat Modeling: A Summary of Available Methods), shifting focus to organizational risk and strategic threat evaluation. Offers an alternative to purely technical modeling, suited for enterprise IT risk assessments.
2005: Jack Jones drafts the FAIR risk model (What is Factor Analysis of Information Risk (FAIR)?), introducing quantitative risk analysis to complement threat models. FAIR’s publication and later adoption by financial institutions reflect a trend to marry technical threats with business impact.
2007: MITRE launches CAPEC (CAPEC - About CAPEC), a public encyclopedia of attack patterns. Standardization of attack knowledge accelerates development of tools and services that use CAPEC to inform threat modeling (e.g., generating possible attacks for a given system profile).
2012: The PASTA framework is introduced (Threat Modeling: A Summary of Available Methods), emphasizing attacker-centric and risk-centric analysis. Around the same time, other methodologies like Trike (an open-source risk-based threat modeling framework) and VAST (Visual, Agile, Simple Threat modeling by ThreatModeler Inc.) are proposed, aiming to fit threat modeling into agile and DevOps environments. This period marks a diversification of methodologies for different needs.
2013: MITRE releases the first version of the ATT&CK framework (ATT&CK - Wikipedia). Security teams begin using ATT&CK matrices for threat modeling by mapping which known tactics/techniques their systems have mitigations for. The concept of “threat model as a map of known TTP coverage” gains traction.
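The “map of known TTP coverage” idea reduces to simple set arithmetic over technique identifiers. As a minimal sketch (the technique IDs below are real ATT&CK identifiers, but the choice of which are “relevant” and “mitigated” is invented for illustration):

```python
# Sketch of measuring "X of Y ATT&CK techniques mitigated".
# The mapping of mitigations to techniques here is hypothetical.

relevant = {"T1190", "T1566", "T1078", "T1059"}   # techniques in scope for this system
mitigated = {"T1190", "T1566"}                    # techniques with a deployed control

covered = relevant & mitigated   # techniques we can answer for
gaps = relevant - mitigated      # techniques still unaddressed

print(f"Coverage: {len(covered)} of {len(relevant)} techniques")
# Coverage: 2 of 4 techniques
print(sorted(gaps))
# ['T1059', 'T1078']
```

Real deployments pull these sets from the ATT&CK knowledge base and a control inventory, but the underlying comparison is exactly this.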
2014: LINDDUN (privacy threat modeling) is published, expanding threat modeling practice into the privacy domain. Also in 2014, Adam Shostack published Threat Modeling: Designing for Security, the first comprehensive book on the subject, which consolidates knowledge and best practices from the past decade and further popularizes threat modeling globally.
2015: The LinkedIn InfoSec team open-sources ThreatSpec, an approach to embed threat modeling in code (developers annotate code with threat modeling notations). While a niche approach, it hints at future automation. Meanwhile, enterprise adoption grows – the BSIMM6 report (2015) finds over 50% of surveyed firms perform threat modeling in their SDLC, up from far fewer just a few years prior.
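ThreatSpec’s core idea – structured annotations that live next to the code they describe, extractable by a tool – can be sketched as follows. The annotation grammar and all component/threat names here are simplified, hypothetical illustrations, not the actual ThreatSpec syntax:

```python
import re

# Hypothetical, simplified take on ThreatSpec-style annotations:
# developers record threats and mitigations as structured comments,
# so a tool can later assemble them into a threat model.

SOURCE = '''
# @mitigates WebApp:Login against brute-force with rate limiting
def login(user, password):
    ...

# @accepts weak-tls on LegacyAPI:Transport with "deprecated Q3"
def legacy_call():
    ...
'''

# Match "# @<verb> <rest of annotation>" on a single line.
ANNOTATION = re.compile(r"#\s*@(?P<verb>\w+)\s+(?P<rest>.+)")

def extract_annotations(source: str):
    """Collect (verb, detail) pairs from annotated source code."""
    return [(m.group("verb"), m.group("rest")) for m in ANNOTATION.finditer(source)]

print(extract_annotations(SOURCE))
# [('mitigates', 'WebApp:Login against brute-force with rate limiting'),
#  ('accepts', 'weak-tls on LegacyAPI:Transport with "deprecated Q3"')]
```

The appeal is that the threat model is versioned, diffed, and reviewed alongside the code it describes – the same property later marketed as “Threat Modeling as Code.”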
2015: IriusRisk – The Rise of Automated Threat Modeling
As threat modeling adoption grew within organizations, the need for automation became apparent. Traditional threat modeling methods often required manual diagramming and analysis, making them time-consuming and inconsistent across teams. In 2015, IriusRisk was launched as one of the first platforms to provide automated and scalable threat modeling, shifting the practice from static documentation to an interactive and continuous security process.
Created by Stephen de Vries and the team at Continuum Security (a company that later rebranded as IriusRisk), the platform aimed to bring threat modeling into DevSecOps workflows, allowing organizations to generate threat models programmatically based on system architecture. Unlike earlier manual approaches, IriusRisk provided:
Automated Threat Generation – Using predefined threat libraries (e.g., STRIDE, OWASP Top 10, MITRE ATT&CK), it suggested relevant threats for a given system architecture.
Integration with SDLC Tools – Enabling teams to track threats as part of Jira issues, CI/CD pipelines, and security governance.
Scalability – Allowing enterprises to manage hundreds of threat models across different teams, ensuring consistency and alignment with security standards.
Impact on the Industry
IriusRisk’s introduction marked a significant shift in threat modeling’s maturity, making it more accessible to non-security professionals (such as developers and product managers) while ensuring repeatability and consistency. This approach aligned with industry trends toward "Threat Modeling as Code", where security models could be stored in version control and iteratively refined, much like infrastructure-as-code practices.
By the late 2010s, IriusRisk had gained traction among financial services, healthcare, and cloud-native companies, reinforcing threat modeling as an essential security engineering discipline. It played a key role in shifting threat modeling from a manual process to an automated, scalable security function, a critical step toward continuous threat modeling.
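The automated-generation pattern described above can be sketched in a few lines: the architecture and a threat library live in version control, and threats are suggested by matching component types. This is an illustrative toy (the data and function names are invented, not the IriusRisk API):

```python
# Toy sketch of "threat modeling as code" (hypothetical data and names,
# not the IriusRisk API): threats are generated by matching each
# architecture component against a type-keyed threat library.

THREAT_LIBRARY = {
    "web-app":  ["SQL injection", "Cross-site scripting"],
    "database": ["Credential theft", "Unencrypted data at rest"],
    "queue":    ["Message tampering"],
}

ARCHITECTURE = [
    {"name": "storefront", "type": "web-app"},
    {"name": "orders-db",  "type": "database"},
]

def generate_threats(architecture, library):
    """Suggest candidate threats for each component based on its type."""
    return {c["name"]: library.get(c["type"], []) for c in architecture}

model = generate_threats(ARCHITECTURE, THREAT_LIBRARY)
print(model["storefront"])
# ['SQL injection', 'Cross-site scripting']
```

Because both inputs are plain data, the generated model is repeatable across teams and trivially re-run whenever the architecture changes – the consistency property the platforms advertise.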
2016: NIST’s draft SP 800-154 (Guide to Data-Centric System Threat Modeling) (Examples of Threat Modeling That Create Secure Design Patterns) is released, providing a government-blessed methodology that validates the practice. The same year, the Air Force’s NASIC celebrated a “decade of threat modeling” in intelligence (visualizing threats for policymakers) – showing the concept’s value in the military/intel domain as well (Visualizing threats: A decade of threat modeling > Air Mobility Command > Article Display). Commercial tools like IriusRisk (from Continuum Security) gain traction, offering automation to generate threats from architecture diagrams.
2017: OWASP introduces Threat Dragon, an open-source tool for creating threat diagrams and tracking threats/mitigations, reflecting a push for accessible, collaboration-friendly tooling (Shostack + Associates > Shostack + Friends Blog > Threat Modeling Tooling from 2017). Microsoft updates its free Threat Modeling Tool (2017 version), indicating ongoing commitment. The same year, major cloud providers (AWS, Azure) begin publishing threat modeling guidance specific to cloud architectures (e.g., AWS re:Invent talks on threat modeling cloud workloads).
2018: Researchers Johnson, Lagerström, and Ekstedt present Meta Attack Language (MAL) at ARES 2018 (MAL (the Meta Attack Language) | KTH). This marks the start of the “infrastructure as code” era of threat modeling – domain-specific languages and automated attack graph generation for large-scale systems. The concept of continuous, automated threat modeling becomes practical. Also in 2018, the EU GDPR enforcement drives many companies to adopt privacy threat modeling (LINDDUN or variants) for the first time to fulfill privacy-by-design obligations. Singapore’s 2018 Cybersecurity Act indirectly makes not doing risk assessments (including threat modeling) a potential compliance issue (History of Threat modeling - IriusRisk), showing global regulatory momentum.
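MAL itself is a domain-specific language for describing assets and their attack steps, from which attack graphs are generated and simulated. As a rough, hand-rolled illustration of the underlying idea (this is neither MAL syntax nor the securiCAD API; the asset and step names are invented), an attack graph can be traversed to find every step an adversary can reach from an entry point:

```python
from collections import deque

# Hand-rolled illustration of attack-graph analysis in the spirit of MAL
# (not MAL syntax or any real tool's API): nodes are attack steps, and an
# edge means "achieving step X enables attempting step Y".

ATTACK_STEPS = {
    "internet.connect":   ["webserver.connect"],
    "webserver.connect":  ["webserver.exploit"],
    "webserver.exploit":  ["webserver.access"],
    "webserver.access":   ["database.connect"],
    "database.connect":   ["database.readData"],
    "database.readData":  [],
}

def reachable_steps(entry_point, graph):
    """Breadth-first search: every attack step reachable from the entry point."""
    seen, queue = {entry_point}, deque([entry_point])
    while queue:
        step = queue.popleft()
        for nxt in graph.get(step, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print("database.readData" in reachable_steps("internet.connect", ATTACK_STEPS))
# True
```

A MAL-based toolchain does far more (probability distributions on steps, time-to-compromise simulation), but reachability over a generated graph is the kernel that makes threat modeling repeatable and automatable at scale.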
2019–2020: Broad industry recognition: The Threat Modeling Manifesto (2020) is published (History of Threat modeling), distilling expert consensus on best practices. The manifesto and accompanying principles indicate that the community has matured to the point of agreeing on foundational values (like “Think strategically about risk” and “Threat modeling is about communication”). In 2019, major security conferences (Black Hat, RSA) have multiple talks and training sessions on threat modeling, whereas a decade earlier there were few. This signifies that threat modeling has become a mainstream cybersecurity practice.
2021: OWASP Top 10 adds “Insecure Design” (A04) (History of Threat modeling), effectively telling the world: if you’re not doing threat modeling and secure design reviews, you’re likely to end up with design flaws. This is a direct call to action for developers and management to integrate threat modeling to avoid being in the Top 10 list of failures. The message resonates widely. Also, the first dedicated Threat Modeling Conference (ThreatModCon) is held (2021 by community groups), indicating the emergence of a specialized community of practice.
2022: Digital Operational Resilience Act (DORA) – Threat Modeling Becomes a Regulatory Requirement in the Financial Sector
The Digital Operational Resilience Act (DORA) was adopted by the European Union in 2022 as a landmark regulation aimed at strengthening the cyber resilience of financial institutions. For the first time, DORA explicitly mandated threat-led risk assessments, requiring financial entities to conduct threat modeling as part of their security and resilience strategies.
Key Threat Modeling Implications in DORA:
Article 9: ICT Risk Management Framework – Requires financial organizations to implement a risk-based approach to managing ICT security, which includes threat identification, analysis, and continuous risk assessments.
Article 11: ICT Risk Scenario Testing – Mandates the use of advanced testing methods, including Threat-Led Penetration Testing (TLPT), which relies on threat modeling methodologies such as MITRE ATT&CK to simulate real-world cyber threats.
Article 23: Third-Party Risk Management – Encourages organizations to model threats in supply chains and critical third-party dependencies, ensuring that external vendors align with operational resilience requirements.
Alignment with NIS2 and Global Standards – DORA enforces alignment with existing cybersecurity frameworks, pushing financial institutions to integrate automated threat modeling platforms (e.g., IriusRisk) and methodologies (e.g., PASTA, MITRE ATT&CK mapping) to maintain compliance.
Industry Impact:
DORA transformed threat modeling from an industry best practice into a regulatory requirement for financial services, driving widespread adoption across European banks, insurance companies, and financial market infrastructures. Many organizations had to formalize and automate their threat modeling processes, integrating them into security governance, risk management, and DevSecOps workflows.
With an enforcement deadline of January 2025, financial entities across the EU are now required to demonstrate ongoing, proactive threat modeling as part of their cyber resilience and operational risk strategies.
2022–2023: Regulators double down: The U.S. FDA’s final cybersecurity guidance (2023) for medical devices mandates threat models in submissions ([PDF] Cybersecurity in Medical Devices: Quality System Considerations ...). The SEC proposes rules requiring public companies to describe their cybersecurity risk assessment processes (implying threat modeling) in filings. At the same time, threat modeling tooling sees innovation with AI – some startups begin leveraging machine learning to suggest threat scenarios or analyze system models (a nascent trend). By 2023, the discipline is so established that new sub-specialties are forming (e.g., threat modeling for machine learning systems or for blockchain smart contracts). The market for threat modeling professionals grows, and some organizations create roles like “Threat Modeling Lead” or “Secure Design Architect,” dedicated to this practice.
This timeline showcases the progression from informal practices to rigorous methodologies and the growing integration of threat modeling into all stages of system development and operations. Regulation is now formalizing what threat modeling is and how it improves the security of the SDLC.
Conclusion
Over the decades, threat modeling methodologies have undergone major shifts in scope and technique. The practice began with a narrow focus – protecting military assets from clearly defined threats – and has grown into a broad discipline applied to software, systems, and organizations. Early approaches in the 1940s–60s were manual, intelligence-driven, and often ad hoc, but they introduced the fundamental idea of anticipating an adversary’s moves. The late 1990s and early 2000s brought structured paradigms (like STRIDE and attack trees) that made threat modeling accessible to software engineers, essentially creating a blueprint for finding “what can go wrong” in a system’s design (STRIDE model - Wikipedia). During this period, threat modeling was typically a one-time activity yielding a document – valuable, but sometimes siloed from other processes.
As the timeline shows, the mid-2000s to 2010s saw an expansion in two dimensions: depth and breadth. In depth, threat modeling began to incorporate risk management (OCTAVE, FAIR) and attacker behavior (PASTA, ATT&CK), moving beyond checklists to analyze likelihood and impact in more detail. In breadth, it spread to new domains (privacy with LINDDUN, safety-critical systems, etc.) and new audiences (developers via OWASP, executives via risk quantification). The outputs evolved accordingly – from paper reports to integrated models linked with requirements, test cases, and controls. Methodologies like PASTA emphasized collaboration (using RACI charts to involve all stakeholders) (The Origins of Threat Modeling: Cyberattack- New Warfare Form) and continuous assessment, reflecting a cultural change where threat modeling is not just for security experts but for everyone involved in delivering a system.
A key shift has been automation and tooling. Modern threat modeling increasingly uses tools that fit into development workflows, enabling iterative refinement. For example, where a 2005 threat model might have been a Visio diagram and Word table, a 2025 threat model could be a living model in a tool, updated by architects during each sprint, with links to a knowledge base like ATT&CK for known adversary techniques. This integration allows organizations to measure their security coverage (e.g., “we have mitigations for X of Y ATT&CK techniques relevant to us”) and quickly adapt models when the architecture changes. It also supports “threat modeling at scale” – large enterprises can have hundreds of threat models, something feasible only with software to manage the data.
The Meta Attack Language (MAL) represents the latest milestone in this journey. MAL’s significance lies in how it encapsulates many lessons of the past: it treats threat modeling as an engineering problem – one that can be formalized, codified, and even automated. With MAL, we see the convergence of threat modeling and attack simulation, providing a way to generate consistent, repeatable threat models for complex systems (A Flexible Simulation Framework For Modeling Cyber Attacks | HackerNoon) (A Flexible Simulation Framework For Modeling Cyber Attacks | HackerNoon). This is a far cry from the artisanal threat modeling of the early 2000s. MAL and similar innovations hint at a future where security modeling is as standard and automated as functional testing. Yet, it’s built on the foundation of all prior frameworks: understanding assets, threats, and controls in a structured way.
In conclusion, threat modeling has evolved from its wartime and mainframe-era origins into an indispensable practice for modern cybersecurity. Each era built upon the prior – from military strategy to STRIDE’s developer-friendly checklist, to risk-aligned methods like PASTA and OCTAVE, to community knowledge bases like ATT&CK, and now to meta-modeling with MAL. The evolution reflects a growing awareness that proactive threat identification is crucial for security. It also mirrors changes in technology: as systems became more complex and adversaries more sophisticated, threat modeling had to mature in response. Today, effective security programs use a blend of these methodologies, choosing the right tools for the job – be it a quick STRIDE analysis for a web app feature or a MAL-driven simulation for an entire enterprise network. The journey to Meta Attack Language shows an ongoing trajectory toward greater automation, collaboration, and integration of threat modeling into the fabric of how we design and operate systems. Threat modeling is no longer a niche art; it’s a standard engineering practice – one that will continue to adapt as we face new frontiers of technology and threat.
References and Sources
Kohnfelder, L. & Garg, P. (1999). STRIDE threat model – Developed at Microsoft to categorize security threats (Threat Modeling Methodology: STRIDE). (Referenced in IriusRisk Blog, Claire Allen-Addy, Sept 2023)
Microsoft Trustworthy Computing. “Twenty Years of STRIDE: Looking Back, Looking Forward.” (2020). [Microsoft’s adoption of STRIDE in 2002 and its evolution] (Threat Modeling: A Summary of Available Methods).
Schneier, B. (1999). “Attack Trees: Modeling Security Threats.” Dr. Dobb’s Journal, Dec 1999. (Introduced attack tree concept for threat modeling) (Attack Trees - Schneier on Security).
VerSprite. “The Origins of Threat Modeling.” (Dec 2022) – Discusses military use of threat modeling and ballistic missile defense applications (The Origins of Threat Modeling: Cyberattack- New Warfare Form) (The Origins of Threat Modeling: Cyberattack- New Warfare Form).
NASA & U.S. Army – Ballistic Missile Threat Modeling (c.1960s onward). VerSprite blog notes these practices have been used “for over 50 years” (The Origins of Threat Modeling: Cyberattack- New Warfare Form).
Swiderski, F. & Snyder, W. (2004). Threat Modeling. Microsoft Press. (One of the first books detailing Microsoft’s threat modeling approach with STRIDE and DFDs).
Mead, N. et al. (2018). “Threat Modeling: A Summary of Available Methods.” CMU/SEI Whitepaper – compares STRIDE, PASTA, LINDDUN, CVSS, Attack Trees, etc. (Threat Modeling: A Summary of Available Methods) (Threat Modeling: A Summary of Available Methods).
UcedaVélez, T. & Morana, M. (2015). Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis (PASTA). Wiley. (Book introducing PASTA’s seven-stage methodology).
UcedaVélez, T. “Real World Threat Modeling Using PASTA.” OWASP AppSecEU 2012. (Technical report that first presented PASTA) (Threat Modeling: A Summary of Available Methods).
MITRE Corporation. CAPEC – Common Attack Pattern Enumeration and Classification (v1.0 released 2007) (CAPEC - About CAPEC). (MITRE CAPEC history and usage in threat modeling).
MITRE Corporation. ATT&CK Framework (released 2013) – ATT&CK is a knowledge base of adversary tactics/techniques (ATT&CK - Wikipedia). (Referenced via MITRE ATT&CK Wikipedia and FAQ) (How Has MITRE ATT&CK Improved, Adapted, and Evolved?).
Wuyts, K., Scandariato, R., Joosen, W. (2014). “LINDDUN: A privacy threat modeling framework.” KU Leuven Technical Report. (Defines LINDDUN methodology and threat tree approach for privacy threats) (LINDDUN Threat Modeling - Threat-Modeling.com) (LINDDUN Threat Modeling - Threat-Modeling.com).
Wallarm. “What is Factor Analysis of Information Risk (FAIR)?” (2021). – History of FAIR, first developed by Jack Jones in 2005 (What is Factor Analysis of Information Risk (FAIR)? ).
FAIR Institute. “Who is the Author of FAIR?” (2016) – Background on Jack Jones and creation of FAIR standard.
Shostack, A. (2014). Threat Modeling: Designing for Security. Wiley. (Comprehensive guide consolidating threat modeling practices up to 2014, including STRIDE, attack trees, etc.).
Shostack, A. – Shostack + Associates Blog: Threat Modeling Tooling from 2017 – notes the introduction of OWASP Threat Dragon and new tools (2017) (Shostack + Associates > Shostack + Friends Blog > Threat Modeling Tooling from 2017).
OWASP Threat Dragon – Open source tool (v1 released 2017). [GitHub project documentation] – used as an example of modern threat modeling tooling and community involvement (Shostack + Associates > Shostack + Friends Blog > Threat Modeling Tooling from 2017).
IriusRisk. “History of Threat Modeling.” (Nov 2024) – Highlights recent community milestones: Threat Modeling Manifesto (2020), OWASP Top 10 (2021) emphasis on threat modeling (History of Threat modeling) (History of Threat modeling).
FDA. “Cybersecurity in Medical Devices – Quality System Considerations (Draft Guidance).” (Oct 2018) – Recommends including threat modeling in medical device design. Also FDA Final Guidance (Sept 2023) confirming this ([PDF] Cybersecurity in Medical Devices: Quality System Considerations ...).
NIST Special Publication 800-154 (Draft). Guide to Data-Centric System Threat Modeling. (March 2016) (Examples of Threat Modeling That Create Secure Design Patterns). (Provided formal guidelines for performing threat modeling in U.S. federal agencies and industry).
FFIEC IT Examination Handbook: Information Security (Sep 2016). (Guidance for U.S. financial institutions; Section on development indicates use of threat modeling and attack trees in risk assessment) ([PDF] Information Security - FFIEC).
Johnson, P., Lagerström, R., Ekstedt, M. (2018). “A Meta Language for Threat Modeling and Attack Simulations.” Proc. of ARES 2018 (MAL (the Meta Attack Language) | KTH). (Academic paper introducing the Meta Attack Language (MAL) formalism).
Hackernoon (Robert Lagerström). “A Flexible Simulation Framework for Modeling Cyber Attacks.” (2021) – Explains MAL and its usage with securiCAD, combining threat models with automated attack graph simulations (A Flexible Simulation Framework For Modeling Cyber Attacks | HackerNoon) (A Flexible Simulation Framework For Modeling Cyber Attacks | HackerNoon).
MITRE ATT&CK – Official Website and Branding Guide (retrieved 2025) – for ATT&CK framework details and usage guidelines (ATT&CK - Wikipedia) (Legal & Branding | MITRE ATT&CK®).
RAND Corporation. Libicki et al. (2015). The Defender’s Dilemma: Charting a Course Toward Cybersecurity. – Contains the vulnerability timeline chart used as an embedded image in the visual timeline (File:Vulnerability timeline.png - Wikimedia Commons).
VerSprite. “Continuous Threat Modeling with PASTA.” (Blog, 2020) – Discusses aligning PASTA with DevSecOps and continuous monitoring (inspired by military continuous assessment) (The Origins of Threat Modeling: Cyberattack- New Warfare Form).
NCSC (UK). “Threat modelling.” (Web Guidance, c.2020) – Describes threat modeling process and importance in secure design (source of the flow chart referenced in text).
Various Industry Whitepapers: “Threat Modeling in Agile Development” (Microsoft, 2019), “Integrating Threat Modeling into DevOps” (Security Compass, 2020) – provided context on how modern engineering teams incorporate threat modeling continuously.
Globenewswire Press Release. “Threat Modeling Tools Market to hit USD 3.37 Billion by 2032…” (Jan 2025) – Market research indicating growth and demand for threat modeling solutions (Threat Modeling Tools Market to hit USD 3.37 Billion by).
OWASP Top 10 – 2021 Edition. Insecure Design category A04 (History of Threat modeling) – explicitly calls out threat modeling as a needed practice to avoid design flaws.