Systems and methods measure the effectiveness of network-centric environments. A network-centric environment is described by a computerized model that includes a plurality of parameters describing the various sensors and effectors available within the scenario. As the model is executed, numerical representations of the level of interoperability (e.g. a speed of command parameter, a situational awareness parameter, an asset utilization parameter and/or the like) are computed. A measure of effectiveness is then determined for the scenario as a function of the numerical representations. The measure of effectiveness can then be reported to a user, administrator or other user, who may use the data to compare and contrast different resources or scenarios, to design improved network centric environments, and/or for other purposes.
1. A method of assessing measurements of effectiveness in a network-centric environment, the method comprising the steps of:
executing a scenario described by a computerized model of the network centric environment, the model comprising a plurality of parameters describing sensors and effectors available within the scenario;
computing numerical representations of the level of interoperability provided by the sensors and effectors operating within the computerized model, wherein the numerical representations are computed based upon a speed of command;
determining a measure of effectiveness for the scenario as a function of the numerical representations; and
providing the measure of effectiveness as an output.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
wherein Tlocal is a local execution time, Texternal is an external execution time, L is a level of interoperability factor, and Tmax is the total time allowed for the scenario.
10. The method of
11. The method of
12. The method of
wherein PD is the probability of successful detection, ns is the number of sensors used in the scenario, Ns is the total number of sensors available, and Ws is a weighting factor of a sensor.
13. The method of
14. The method of
15. The method of
wherein Pi is the probability of successful detection, ne is the number of effectors used in the scenario, Ne is the total number of effectors available, and We is a weighting factor of an effector.
16. A computer-readable medium having software instructions stored thereon, the software instructions configured to execute the method of
17. A system for assessing measurements of effectiveness in a network-centric environment, the system comprising:
a computerized model of a scenario executing within a network centric environment, the computerized model comprising a plurality of parameters describing sensors and effectors available within the scenario;
an evaluation module configured to determine numerical representations of the level of interoperability provided by the sensors and effectors operating within the computerized model, wherein the numerical representations are computed based upon a speed of command; and
an output module configured to provide a measure of effectiveness for the scenario as a function of the numerical representations.
18. The system of
19. A system for assessing measurements of effectiveness in a network-centric environment, the system comprising:
means for executing a scenario described by a computerized model of the network centric environment, the model comprising a plurality of parameters describing sensors and effectors available within the scenario;
means for determining a speed of command factor, a situational awareness factor, and an asset utilization factor as a function of the sensors and effectors operating within the computerized model; and
means for outputting a measure of effectiveness for the scenario as a function of the speed of command, situational awareness and asset utilization factors.
20. The system of
The present invention generally relates to network-centric systems, and more particularly relates to systems and methods for assessing measurements of effectiveness (MOE) in network-centric environments.
As computing devices become increasingly ubiquitous in personal, industrial, corporate and governmental settings, the interoperability between the various computers and other data processing devices becomes increasingly important. In aerospace, homeland security and military settings, for example, more and more emphasis is being placed upon “network centric operations” that leverage information sharing between aircraft, vehicles, command centers, robotic devices and/or human-operated computing devices to accomplish desired tasks such as identifying a target, gathering intelligence data, engaging an enemy and/or the like. As a result, future defense and aerospace technologies will be increasingly reliant upon information sharing and interoperability between widely differing computing systems. Similar emphasis on interoperability between disjoint systems is occurring in industrial and other settings, as well as in the commercial marketplace.
While increased interoperability is generally thought to improve the flexibility, effectiveness and overall value of data processing systems present on the battlefield and elsewhere, quantifying this increased value can be difficult in practice. Individual devices can be readily benchmarked in terms of processing power, cost, mobility and other indicia of value. Most of these conventional metrics, however, fail to adequately account for the “intangible” benefits that result from aggregation and interoperability, including improved situation awareness, decreased response times, improved asset utilization, and/or the like. Because most current metrics evaluate systems on an individual component basis, no tools or methodologies presently exist for adequately quantifying the specific advantages of interoperability. As a result, the value provided by network-centric environments is frequently understated and/or misunderstood.
It is therefore desirable to create systems and techniques for assessing the effectiveness of network-centric systems in quantifiable terms. In addition, it is desirable to create benchmarking systems and techniques that can be used in comparing different systems, scenarios, assets, and/or other factors relating to network-centric environments. These and other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background section.
According to various exemplary embodiments, systems and methods are provided to measure the effectiveness of network-centric environments. A network-centric environment is described by a computerized model that includes a plurality of parameters describing the various sensors and effectors available within the scenario. As the model is executed, numerical representations of the level of interoperability (e.g. a speed of command factor, a situational awareness factor, an asset utilization factor and/or the like) are computed. A measure of effectiveness is then determined for the scenario as a function of the numerical factors. The measure of effectiveness can then be reported to a user, administrator or other user, who may then use the data to compare and contrast different resources or scenarios, to design improved network centric environments, and/or for other purposes.
Exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and
The following detailed description of the invention is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
Various embodiments of the present invention present systems and methods for assessing measures of effectiveness based on artifacts and manifestations of interoperability in network-centric environments. Many of the techniques described below consider the capabilities, attributes, offerings and other aspects of one or more particular operational scenarios. The scenario is modeled, and a level of interoperability for the scenario is numerically quantified in any appropriate manner. The numerical representations generated by the model can be used by program managers, systems designers, analysts, customers and others to facilitate competitive analysis, trade studies, optimization of key performance parameters, system capability assessment and/or the like.
Much of the discussion contained within this detailed description and the associated drawing figures focuses on a particular exemplary embodiment that models actions and interactions involving four branches of the armed services. This example is solely to ease understanding of the broader concepts contained herein, however, which may be applied to any network-centric environment on or off of the battlefield. Indeed, the concepts proposed herein may be applied to any network-centric environment having any number of participants, resources available to the participant(s), and/or any other attributes. In particular, the exemplary data values contained herein (e.g. the data values shown in
Turning now to the drawing figures and with initial reference to
A computerized model of scenario 100 suitably contains numerical parameters describing the various participants 102, 104, 106, 108 as well as available resources 110, 112, 114, 116, 118, 120, 122, 124. In operation of scenario 100, one or more participants 102, 104, 106, 108 attempt to meet objective 101 (e.g. intercept a missile or other object) using resources available locally to that participant and/or externally via other participants. As decisions to use resources and/or take other actions are made by one or more participants 102, 104, 106, 108, the model suitably computes any number of appropriate numerical representations to assess the interoperability of the various components. The actions taken by any participant may be determined in real time (or in any other manner) by a human user (e.g. in a game-like simulation setting), and/or may be determined automatically by a computer. In the latter case, multiple scenarios and decisions may be rapidly evaluated to ascertain an optimal (or more desired) network configuration, set of resources, course of action, and/or the like.
In various scenarios 100 that emphasize network-centricity, participants 102, 104, 106, 108 are encouraged to interoperate through the sharing of sensors, effectors and/or other resources to meet objective 101 in a more effective manner. This effectiveness (MOE) 125 can be quantified based upon numerical factors that are computed with consideration to the artifacts and manifestations of network-centric interoperability. While the particular numerical representations used in assessing MOE of scenario 100 may vary significantly from embodiment to embodiment,
MOE 125 may be computed through any combination or function of the various numerical representations to arrive at a single, numerical benchmark suitable for comparison with MOEs identified by other scenarios. In various embodiments, the SC, SA and AU factors are each formulated such that MOE 125 can be computed as a simple product of the three parameters. Alternatively, one or more parameters may be scaled to place more or less weight upon those factors, as appropriate and desired within the particular scenario 100.
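The simple-product computation described above can be sketched in Python. The function name and the exponent-style weighting scheme are illustrative assumptions; the text states only that MOE can be a simple product of the three factors and that one or more parameters "may be scaled."

```python
def measure_of_effectiveness(sc, sa, au, w_sc=1.0, w_sa=1.0, w_au=1.0):
    # Combine the speed-of-command (SC), situational-awareness (SA) and
    # asset-utilization (AU) factors into a single numerical benchmark.
    # Exponent-style weights are a hypothetical way to place more or less
    # emphasis on individual factors; unit weights yield the plain product.
    return (sc ** w_sc) * (sa ** w_sa) * (au ** w_au)

# With unit weights, MOE reduces to the simple product of the three factors.
moe = measure_of_effectiveness(0.8, 0.9, 0.75)
```

Because each factor is formulated as a value between 0 and 1, scaling a factor with an exponent greater than one penalizes weak performance on that factor more heavily.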
Each participant 102, 104, 106, 108 logically represents an entity within scenario 100 that has command and control (C2) capability, and is therefore capable of making decisions within the network-centric environment. The decisions that may be available include the decision to use one or more sensors, effectors or other resources, as well as decisions relating to interoperability with other participants. In the example set forth in
With primary reference now to
To assess a MOE of the scenario, any number of numerical factors can be considered. Speed of command, for example, can be computed for participant 200 by simply determining the total time elapsed to complete objective 101. In a simple stovepipe scenario wherein participants only have access to local resources, the time to complete objective 101 is simply the sum of the time (Ts) to gather data 208 from sensors 204 and the time (Te) to take actions using effectors 202A-D. The time Ts to gather data may be expressed as the sum of the time Tcs to issue acquire commands 214A-D to one or more sensors 204A-D (respectively) and the report time Tsc to receive data 208 from the selected sensor(s) 204A-D. Similarly, the time Te to take actions may be expressed as the sum of the time Tce to issue action commands 212A-D to one or more effectors 202A-D (respectively) and the time Tec to receive confirmation 206 from the selected effectors 202A-D that the action has taken place. Other embodiments may additionally incorporate additional times to simulate processing delays by effectors 202A-D, sensors 204A-D and/or C2 module 201, and/or to consider other factors. Moreover, the total time may be normalized to a maximum or other reference time (Tmax) and/or otherwise massaged to provide a numerical value that can be combined with other factors to arrive at MOE 125. Speed of command, for example, may be represented for each participant in various embodiments as:
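The formula itself appears only as an image in the original source. One plausible reconstruction, consistent with the normalization to Tmax described above and the variable definitions that follow, is:

```latex
SC = 1 - \frac{T_e + T_s}{T_{max}}
```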
wherein Te and Ts represent the total times elapsed for participant 200 to make use of all selected effectors 202A-D and sensors 204A-D, respectively. In more complex inter-networked scenarios, additional times for gaining access to remote resources may be considered, as well as additional times for executing such access, as described more fully below. Further, the SC factors for two or more participants 200 may be summed or otherwise inter-combined to provide a more accurate representation of system-level behavior.
Other interoperability factors such as situational awareness factors and/or asset utilization factors can also be computed based upon the actions of participant 200. Situational awareness, for example, can be computed as any function of the number of sensors 204A-D used by participant 200 and the probability of success from each sensor 204A-D. In various embodiments, the probability of success (Pd) correlates to the probability of successfully detecting an object (e.g. target 101 in
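The situational awareness formula appears only as an image in the original source. One plausible reconstruction, consistent with the variable definitions that follow and with the wherein clause of claim 12 (PD, ns, Ns, Ws), is shown below; the exact placement of the weighting factor is an assumption:

```latex
SA = \frac{1}{N_s} \sum_{s=1}^{n_s} W_s \, P_{D,s}
```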
wherein ns represents the number of sensors used and Ns represents the total number of sensors 204A-D available to participant 200. The particular SA values for each participant 200 operating within a scenario 100 may be summed together or otherwise inter-combined to represent system level behavior.
Using similar concepts, an asset utilization factor (AU) can be readily determined as a function of the number of effectors (ne), the probability of success (Pi) for each effector, and an optional effector weighting factor (We) that reflects the relative cost or preference associated with the use of one or more effectors 202A-D. As a result, the asset utilization (AU) factor may be defined in various embodiments as:
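The asset utilization formula appears only as an image in the original source. One plausible reconstruction, mirroring the situational awareness factor and consistent with the wherein clause of claim 15 (Pi, ne, Ne, We), is:

```latex
AU = \frac{1}{N_e} \sum_{e=1}^{n_e} W_e \, P_{i,e}
```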
wherein ne represents the number of effectors used by participant 200 and Ne represents the total number of effectors 202A-D available to participant 200. Again, the AU factors of various participants 200 may be summed or otherwise aggregated to represent system level behaviors in various embodiments.
Accordingly, participant 200 selects any number of available sensors 204A-D and effectors 202A-D as well as any available external resources 215 to accomplish objective 101. As can be seen from the above equations, using additional sensors 204A-D or effectors 202A-D can improve situational awareness or asset utilization (respectively), but at the expense of decreased speed of command. By computing MOE 125 as a function of multiple factors, then, both the increase in SA or AU and the decrease in SC can be captured and evaluated.
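The tradeoff described in this paragraph can be illustrated with a small numeric sketch. All formulas, timing values and probabilities below are hypothetical stand-ins chosen for illustration, not the patent's own (unreproduced) equations:

```python
def speed_of_command(n_sensors, t_per_sensor=2.0, t_effectors=4.0, t_max=60.0):
    # Normalized speed of command: each additional sensor query consumes
    # more of the allowed time budget, lowering the SC factor.
    total_time = n_sensors * t_per_sensor + t_effectors
    return 1.0 - total_time / t_max

def situational_awareness(n_sensors, n_available=8, p_detect=0.9, weight=1.0):
    # Weighted detection probability summed over the sensors actually used,
    # normalized by the total number of sensors available.
    return sum(weight * p_detect for _ in range(n_sensors)) / n_available

results = [(ns, speed_of_command(ns), situational_awareness(ns))
           for ns in (1, 2, 4, 8)]

# Using more sensors raises SA but lowers SC, as the text describes.
for (_, sc_a, sa_a), (_, sc_b, sa_b) in zip(results, results[1:]):
    assert sc_b < sc_a and sa_b > sa_a
```

A composite MOE computed from both factors therefore captures the point at which adding sensors stops paying off.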
Model 300 also includes several tables 308 and 310 that define parameters used in inter-participant behaviors. In contrast to “stovepipe” scenarios wherein participants are limited to locally-available resources, interaction between participants 102, 104, 106, 108 can provide improved MOE 125. To simulate such behavior, model 300 suitably includes one or more parameters 308 defining external delay times for communications between participants 102, 104, 106, 108. These delay times may be modified in further embodiments by multiplying the delay by an “interoperability level” 310 that can be assigned by model 300 and/or selected and applied as desired by each participant. This interoperability level reflects the speed of communication existing between participants, and may reflect geographic, bandwidth, reliability and/or other considerations.
External communications and interoperability may be implemented and modeled in any manner. In a limited interoperability scenario, for example, participants may be allowed to request resources from C2 modules 201 associated with remote participants 200. In such cases, the total command time used to compute speed of command (SC) factors would incorporate at least one inter-team delay 308, any delay associated with the resource itself (e.g. delays found in table 314), as well as any C2 delay that may be present within model 300. By adjusting the local and external delay times to correspond to real-world conditions, the SC factor can accurately reflect behaviors in the network-centric environment. An exemplary formula for computing the speed of command (SC) factor is shown below:
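The referenced formula appears only as an image in the original source. One plausible reconstruction, consistent with the variable definitions that follow and with the wherein clause of claim 9, is shown below; exactly how the interoperability level L enters the expression is an assumption based on the statement that delays may be modified by multiplying them by an interoperability level:

```latex
SC = 1 - \frac{T_{local} + L \cdot T_{external}}{T_{max}}
```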
wherein Tlocal represents total local delays associated with each action, Texternal represents total inter-participant delays associated with obtaining access to external resources, and L represents the optional interoperability level 310 present in some embodiments. In equivalent embodiments, interoperability level 310 is assumed to be zero, effectively having little or no effect upon the SC factor.
With final reference now to
With particular reference to
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of alternate but equivalent variations exist. Although the systems and techniques described herein are frequently described with respect to computer spreadsheet implementations, for example, similar concepts could be readily applied with any other software or scripting languages, formats, environments, protocols and the like. Similarly, the invention is not limited to embodiments in the fields of aerospace or defense, but rather may be broadly implemented across a range of personal, corporate, industrial, governmental or other settings. While the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention, it should be appreciated that the embodiments described above are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. To the contrary, various changes may be made in the function and arrangement of elements described without departing from the scope of the invention as set forth in the appended claims and their legal equivalents.
Jha, Uma S., Worcester, Michael S.
Patent | Priority | Assignee | Title |
8248933, | Mar 07 2008 | The Boeing Company | Methods and systems for capability-based system collaboration |
Patent | Priority | Assignee | Title |
6687634, | Jun 08 2001 | HEWLETT-PACKARD DEVELOPMENT COMPANY L P | Quality monitoring and maintenance for products employing end user serviceable components |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Aug 30 2005 | JHA, UMA S | Boeing Company, the | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 016954 | /0792 | |
Aug 30 2005 | WORCESTER, MICHAEL S | Boeing Company, the | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 016954 | /0792 | |
Aug 31 2005 | The Boeing Company | (assignment on the face of the patent) | / |
Mar 02 2020 | EXP: Patent Expired for Failure to Pay Maintenance Fees.