Evaluating a threat model for structural validity and descriptive completeness. A threat modeling application provides a progress factor or other overall score associated with the structural validity and descriptive completeness of the threat model being evaluated. The structural validity is evaluated based on a data flow diagram associated with the threat model. The descriptive completeness is evaluated by reviewing descriptions of threat types in the threat model. The progress factor encourages modelers to provide effective models to a model reviewer, thus saving time for the model reviewer.
7. A method comprising:
receiving portions of a data flow diagram for an information system, said portions of the data flow diagram comprising a plurality of elements arranged to describe a flow of data through the information system;
accessing a threat model corresponding to the data flow diagram, said threat model having one or more threat types corresponding to each of the elements;
generating a completeness factor for each of the one or more threat types by evaluating a description in the accessed threat model of each of the one or more threat types for each of the corresponding elements, wherein evaluating the description includes at least identifying one or more of the following associated with one of the elements and having a threat type: empty threats, undescribed threats, and unmitigated threats;
evaluating, by one or more processors, connections between the elements in the portions of the data flow diagram as the portions are received to generate a validity factor for each of the one or more threat types, wherein evaluating includes identifying structural defects in the data flow diagram; and
providing the generated completeness factor and the generated validity factor for analysis of the threat model.
16. A system comprising:
a memory area for storing a data flow diagram for an information system, said data flow diagram comprising a plurality of elements arranged to describe a flow of data through the information system, said memory area further storing a threat model corresponding to the data flow diagram, said threat model having one or more threat types corresponding to each of said plurality of elements; and
a processor programmed to:
generate a completeness factor for each of the one or more threat types by evaluating a description in the threat model of each of the one or more threat types for each of the corresponding elements, wherein evaluating the description includes at least identifying one or more of the following associated with one of the elements and having a threat type: empty threats, undescribed threats, and unmitigated threats;
generate a validity factor for each of the one or more threat types by evaluating connections between the elements in the received data flow diagram;
determine a progress factor for each of the one or more threat types based on the generated completeness factor and the validity factor; and
provide the determined progress factor for each of the threat types to a user as an indication of effectiveness of the threat model.
1. One or more computer storage devices storing computer-executable components for evaluating a threat model for an information system, said components comprising:
an interface component that when executed causes at least one processor to access a data flow diagram for an information system, said data flow diagram comprising a plurality of elements arranged to describe a flow of data through the information system, said interface component further accessing a threat model corresponding to the data flow diagram, said threat model having one or more threat types corresponding to each of said plurality of elements;
a model component that when executed causes at least one processor to evaluate a description in the threat model of each of the threat types for each of the corresponding elements to generate a completeness factor for the threat type, wherein the model component evaluates the description in the threat model at least by identifying one or more of the following associated with one of the elements and having a threat type: empty threats, undescribed threats, and unmitigated threats;
a structural component that when executed causes at least one processor to evaluate connections between the elements in the data flow diagram accessed by the interface component to generate a validity factor for each of the threat types; and
a report component that when executed causes at least one processor to determine a progress factor for each of the threat types based on the completeness factor generated by the model component and the validity factor generated by the structural component, wherein the interface component provides the progress factor determined by the report component for each of the threat types to a user as an indication of effectiveness of the threat model.
2. The computer storage devices of
3. The computer storage devices of
4. The computer storage devices of
assigns a first weight to the completeness factor based on the evaluated description;
assigns a second weight to the validity factor based on the evaluated connections; and
combines the completeness factor and the validity factor based on the first weight and the second weight to determine the progress factor.
5. The computer storage devices of
6. The computer storage devices of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
17. The system of
18. The system of
19. The system of
20. The system of
A threat model is a conceptual tool for identifying security risks in software and other information systems. Threat modeling often includes an analysis of a data flow diagram. Data flow diagrams describe the movement of information in an information system such as a software system: the sources of information, what processes occur on the information, where the information is stored, and where the information eventually flows. The effectiveness of a threat model is dependent upon, for example, the structural validity and completeness of the threat model. Existing systems fail to evaluate the effectiveness of the threat model prior to the threat model being reviewed by a model reviewer such as a security expert.
Embodiments of the invention evaluate a threat model for effectiveness. Portions of a data flow diagram associated with the threat model are received. The threat model has one or more threat types corresponding to each of the elements in the data flow diagram. Connections between the elements are evaluated as the portions are received to generate a validity factor for each of the threat types. The generated validity factor is provided to a user for analysis of the threat model. In some embodiments, a description of each of the threat types for each of the elements is evaluated to generate a completeness factor for the threat type. The validity factor and the completeness factor are provided to the user as a progress factor for the threat model.
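The summary above describes combining a validity factor and a completeness factor into a progress factor but does not fix a formula. A minimal sketch, assuming a simple weighted average; the function and parameter names are illustrative and not taken from the patent:

```python
def progress_factor(validity: float, completeness: float,
                    validity_weight: float = 0.5,
                    completeness_weight: float = 0.5) -> float:
    """Combine a validity factor and a completeness factor into a single
    progress factor. The weighted-average scheme is a hypothetical
    reading of the weighting described in claim 4."""
    total = validity_weight + completeness_weight
    return (validity * validity_weight
            + completeness * completeness_weight) / total

# Example: a structurally sound model whose threats are half described.
print(progress_factor(1.0, 0.5))  # 0.75
```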
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Corresponding reference characters indicate corresponding parts throughout the drawings.
Referring to the figures, embodiments of the invention enable the analysis of a threat model 112. In some embodiments, a threat modeling application 108 executes on a computing device operated by a user or other entity and analyzes a data flow diagram 110 and the threat model 112 stored in a data store associated with the computing device. The data flow diagram 110 includes a plurality of elements 118 arranged to describe a flow of data through an information system (see
In other embodiments, the user (e.g., at a second computing device 106) sends portions of the data flow diagram 110 and a corresponding threat model such as threat model 112 to a first computing device 102 via a network 104. The first computing device 102 includes the threat modeling application 108 that provides an analysis of the threat model 112 being evaluated. The evaluation of the threat model 112 is based on, for example, structural validity and descriptive completeness. In embodiments, the threat modeling application 108 provides structural feedback, completeness feedback, and an overall score or progress indicator regarding the threat model 112. Exemplary scores regarding a type of error encountered for a particular threat are illustrated in Appendix A. In embodiments, the structural feedback identifies structural defects of the threat model 112 in real-time or near real-time (e.g., as the data flow diagram 110 is being created or read). The completeness feedback identifies elements (e.g., elements 118 of the threat model 112) that, for example, do not have possible threats, have threats that are not described, have unmitigated threats, or have threats not associated with an issue tracking identifier (e.g., bug number).
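The completeness feedback categories above (elements with no possible threats, undescribed threats, unmitigated threats, and threats lacking an issue tracking identifier) could be detected with a check along these lines. The Threat structure and its field names are assumptions made for illustration, not the patent's data model:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """A threat entry of a given type attached to an element
    (hypothetical structure; field names are illustrative)."""
    threat_type: str
    description: str = ""
    mitigation: str = ""
    bug_number: str = ""

def completeness_issues(threats: list[Threat]) -> list[str]:
    """Return the completeness problems found for one element's threats."""
    issues = []
    if not threats:
        issues.append("empty: element has no threats of any type")
    for t in threats:
        if not t.description.strip():
            issues.append(f"undescribed: {t.threat_type}")
        if not t.mitigation.strip():
            issues.append(f"unmitigated: {t.threat_type}")
        if not t.bug_number.strip():
            issues.append(f"untracked: {t.threat_type} has no bug number")
    return issues

print(completeness_issues([Threat("Tampering",
                                  description="Attacker alters codec input")]))
# ['unmitigated: Tampering', 'untracked: Tampering has no bug number']
```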
While aspects of the invention are described with reference to threat modeling for application programs, aspects of the invention are operable generally with information systems including software systems having one or more application programs, processes, and/or data stores. Further, while the user provides the data flow diagram 110 and the threat model 112 in some embodiments, other embodiments contemplate the threat modeling application 108 accessing either or both of the data flow diagram 110 and the threat model 112 independent of the user.
Referring next to
With continued reference to
The interface component 210 accesses the data flow diagram 110 for an information system. In some embodiments, the interface component 210 displays the accessed data flow diagram 110 to a user as a two-dimensional model (see
In embodiments, the model component 212 evaluates a description in the threat model 112 of each of the threat types for each of the corresponding elements 118 to generate a completeness factor for the threat type. Exemplary descriptions of threat types may be found in
The structural component 214 evaluates connections between the elements 118 in the data flow diagram 110 accessed by the interface component 210 to generate a validity factor for each of the threat types. In embodiments, evaluating the connections between each of the elements 118 includes evaluating logical connections between the elements 118 and evaluating spatial connections between the elements 118. For example, as shown in
In some embodiments, logical evaluation includes the examination of only the abstract layout, or graph, of the data flow diagram 110 in the threat model 112. In such an embodiment, the threat modeling application 108 iterates through a list of the elements 118 in the data flow diagram 110 and compares adjacent elements. In some cases, the threat modeling application 108 follows chains of connected elements 118 until particular types of elements 118 are found or not found. In contrast, spatial evaluation examines the two-dimensional layout of the threat model 112 for errors, such as the spatial relationships of trust boundaries (see
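The logical evaluation described above, which iterates through the element list and compares adjacent elements in the abstract graph, might look like the following sketch. The adjacency-list representation, element names, and defect labels are hypothetical:

```python
# Hypothetical adjacency-list representation of the data flow diagram's
# abstract graph; element names and types are illustrative only.
elements = {
    "User":         {"type": "interactor", "neighbors": ["Login flow"]},
    "Login flow":   {"type": "dataflow",   "neighbors": ["User", "Auth process"]},
    "Auth process": {"type": "process",    "neighbors": ["Login flow"]},
    "Orphan store": {"type": "datastore",  "neighbors": []},
}

def logical_defects(graph: dict) -> list[str]:
    """Iterate the element list and flag isolated elements, a graph-only
    (logical) check that ignores the two-dimensional layout."""
    defects = []
    for name, info in graph.items():
        if not info["neighbors"]:
            defects.append(f"{info['type'].capitalize()}Isolated: {name}")
    return defects

print(logical_defects(elements))  # ['DatastoreIsolated: Orphan store']
```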
Referring back to
Referring next to
At 304, the threat model 112 corresponding to the received portions of the data flow diagram 110 is accessed. While the embodiment of
At 306, connections between the elements 118 in the portions of the data flow diagram 110 are evaluated as the portions are received to generate a validity factor for each of the threat types. In embodiments, the connections are evaluated in real-time as the portions are received by the threat modeling application 108 in the first computing device 102. The evaluation identifies structural defects. For example, the portions are sent from the second computing device 106 to the first computing device 102 as the user creates the data flow diagram 110. In further embodiments, evaluating the connections includes one or more of the following: identifying intersections between the connections, identifying one or more of the elements 118 lacking a connection to another of the elements 118, identifying a data store element lacking a connection to a process element via a data flow element, and identifying a trust boundary element lacking a data flow element crossing over the trust boundary element.
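One of the checks listed above, identifying a trust boundary element lacking a data flow crossing over it, can be sketched by testing whether any data flow has exactly one endpoint inside the boundary. Representing a boundary as the set of element names it encloses is an assumption, not the patent's representation:

```python
def trust_boundary_isolated(boundary_members: set[str],
                            data_flows: list[tuple[str, str]]) -> bool:
    """A trust boundary is isolated when no data flow crosses it, i.e.
    no flow has exactly one endpoint inside the boundary."""
    return not any((a in boundary_members) != (b in boundary_members)
                   for a, b in data_flows)

# The boundary around the auth process is crossed by User -> Auth process.
print(trust_boundary_isolated({"Auth process"}, [("User", "Auth process")]))  # False
# A boundary enclosing only an unreferenced logger is never crossed.
print(trust_boundary_isolated({"Logger"}, [("User", "Auth process")]))        # True
```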
At 308, a generated validity factor is provided to the user for analysis of the threat model 112. In embodiments, providing the generated validity factor includes providing an incremental progress bar (e.g., a completion progress bar as shown in
Referring next to
In some embodiments,
Referring next to
Referring next to
Referring next to
The exemplary flow chart 702 illustrates certifications. Certifications correspond to elements 118 and threat types for which the user or other modeler has certified that there are no threats of a given type at all. The flow chart 702 also illustrates a decision box for omitting evaluation of informational elements, or elements 118 included in the threat model for context reasons but not threat-related reasons.
Referring next to
Referring next to
In some embodiments (not shown), the user interface 902 displays fuzzing targets identified, selected, and recommended by the threat modeling application 108. The fuzzing targets represent opportunities to fuzz, or automatically generate random input for testing, in the software system corresponding to the data flow diagram 110.
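As a toy illustration of "automatically generate random input" for a fuzzing target, a harness might produce byte strings like the following. The function is hypothetical and far simpler than a real fuzzer:

```python
import random

def fuzz_input(max_len=64, seed=None):
    """Generate one random byte string of length 0..max_len to feed a
    fuzzing target (illustrative; real fuzzers are coverage-guided)."""
    rng = random.Random(seed)
    length = rng.randrange(max_len + 1)
    return bytes(rng.randrange(256) for _ in range(length))

sample = fuzz_input(seed=42)
print(len(sample) <= 64)  # True
```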
Referring next to
Appendix B includes an exemplary threat model report detailing the descriptive completeness of the data flow diagram 110, but omitting the progress bars.
Referring next to
Exemplary Operating Environment
A computer or computing device such as described herein has one or more processors or processing units, system memory, and some form of computer readable media. By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
The computer may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer. Although described in connection with an exemplary computing system environment, embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. The computing system environment is not intended to suggest any limitation as to the scope of use or functionality of any aspect of the invention. Moreover, the computing system environment should not be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments of the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. Aspects of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments illustrated and described herein as well as embodiments not specifically described herein but within the scope of aspects of the invention constitute exemplary means for determining the descriptive completeness and structural validity of the received data flow diagram 110 and the corresponding threat model 112, and exemplary means for generating a value indicating an evaluation of the threat model 112.
The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention.
When introducing elements of aspects of the invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
Table A1 below lists some sample structural validations evaluated by the threat modeling application 108.
TABLE A1
Exemplary Structural Validations.
Structural Validation: Explanation

DataFlowIsolated: A data flow element is not connected to any element.
DataFlowTooFewConnections: A data flow element is not connected at both ends.
InteractorIsolated: An external interactor element is not connected to any element.
InteractorTooFewConnections: An external interactor has only one connection.
ProcessIsolated: A process is not connected to any element.
ProcessTooFewConnections: A process has only one connection.
MultiProcessIsolated: A multiprocess element is not connected to any element.
MultiProcessTooFewConnections: A multiprocess has only one connection.
DataStoreIsolated: A data store element is not connected to any element.
DataStoreTooFewConnections: A data store has only one connection.
DataStoreConnectedToNonProcess: A data store must be connected only to a data flow element that connects to a process. This is not the case.
DataStoreWithNoInputs: A data store must receive data from somewhere, or the model is incomplete. The data store does not receive data as it has no inbound data flows.
DataStoreWithNoOutputs: A data store must emit data to somewhere, or the model is incomplete. (There is no point emitting data that will never be consumed.) This data store is not connected to an outbound data flow.
DataStoreNotConnectedToInteractor: A data store is not connected, through any number of inbound data flows and intermediate elements, with an external interactor. This would mean that the data has no source, so the model is (very likely) incomplete.
DataStoreNotConnectedFromInteractor: A data store is not connected, through any number of outbound data flows and intermediate elements, with an external interactor. This would mean that the data has no eventual use outside of the system, so the model is (very likely) incomplete.
TrustBoundaryIsolated: A trust boundary has no data flows crossing over it. It is either superfluous or misplaced.
TrustBoundaryNone: There are no trust boundaries, so the model is (very likely) not broad enough to be useful.
NoNonInformationalProcess: There are no processes or multiprocesses which are not marked as "informational" (present for context but not explicitly modeled). As a threat model reflects the flow and processing of data, it is (very likely) incomplete.
An exemplary list of scores per type of problem determined in a threat model evaluation is shown below in Table A2. The structural validations correspond to those in Table A1 above.
TABLE A2
Exemplary Scores for Structural Validations.
Score: Structural Validation

−200: DataFlowIsolated
−1000: DataFlowTooFewConnections
−200: InteractorIsolated
−1000: InteractorTooFewConnections
−200: ProcessIsolated
−1000: ProcessTooFewConnections
−200: MultiProcessIsolated
−1000: MultiProcessTooFewConnections
−200: DataStoreIsolated
−1000: DataStoreTooFewConnections
−200: DataStoreConnectedToNonProcess
−200: DataStoreWithNoInputs
−200: DataStoreWithNoOutputs
−200: DataStoreNotConnectedToInteractor
−200: DataStoreNotConnectedFromInteractor
−200: TrustBoundaryIsolated
−10000: TrustBoundaryNone
−10000: NoNonInformationalProcess
−1000000: Unknown
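Table A2 does not specify how the per-defect penalties feed the overall score. One plausible reading, summing the penalty for each defect found and treating any unlisted defect as Unknown, is sketched below; the summing rule and the Unknown fallback are assumptions:

```python
# Penalty scores copied from Table A2 (subset shown for brevity).
PENALTIES = {
    "DataFlowIsolated": -200,
    "DataFlowTooFewConnections": -1000,
    "TrustBoundaryIsolated": -200,
    "TrustBoundaryNone": -10000,
    "NoNonInformationalProcess": -10000,
    "Unknown": -1000000,
}

def structural_score(defects: list[str]) -> int:
    """Sum the per-defect penalties; defects not in the table fall back
    to the 'Unknown' penalty (an assumption, not stated in the table)."""
    return sum(PENALTIES.get(d, PENALTIES["Unknown"]) for d in defects)

print(structural_score(["DataFlowIsolated", "TrustBoundaryNone"]))  # -10200
```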
Listed below are portions of an exemplary threat model report corresponding to the data flow diagram illustrated in
CODEC
Threats:
Tampering
Information Disclosure
Information Disclosure
Elevation of Privilege
Inventors: Medvedev, Ivan; Shostack, Adam; Osterman, Lawrence William
Application filed Jun 26, 2008 and assigned to Microsoft Corporation by Ivan Medvedev and Adam Shostack (Sep 3, 2008) and Lawrence William Osterman (Sep 4, 2008); assigned by Microsoft Corporation to Microsoft Technology Licensing, LLC on Oct 14, 2014.