In an example, a computer-implemented method includes generating test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes. The method also includes injecting the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points, and determining, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points. The method also includes outputting, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest.
1. A computer-implemented method comprising:
generating test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes, wherein generating the test data comprises generating data that indicates an initialization time of the test data;
injecting the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points;
determining, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points;
receiving, from the one or more visibility points, data that indicates an observation time at which the one or more visibility points identify the test data; and
outputting, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest and data that indicates a comparison of the initialization time of the test data to the observation time at which the test data is identified.
2. The method of
determining, for respective visibility points of the plurality of visibility points, whether the test data is identified at the respective visibility points as data of interest.
3. The method of
4. The method of
6. The method of
wherein determining whether the test data is identified at the one or more visibility points comprises receiving data from the one or more visibility points indicating whether the test data is identified,
wherein one or more network routes of the plurality of network routes include a proxy having a proxy log that indicates whether the test data has been processed by the proxy, and
wherein the method further comprises outputting data representative of a comparison of the data that indicates whether the test data is identified at the one or more visibility points to the proxy log that indicates whether the test data has been processed by the proxy.
7. The method of
8. The method of
9. A computer-implemented method comprising:
generating test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes, wherein generating the test data comprises generating a type of test data;
injecting the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points;
determining, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points;
receiving, from the one or more visibility points, data that indicates an observation data type comprising a type of data identified by the one or more visibility points; and
outputting, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest and data that indicates whether the type of test data matches the observation data type received from the one or more visibility points.
10. The method of
11. The method of
12. The method of
13. The method of
wherein determining whether the test data is identified at the one or more visibility points comprises receiving data from the one or more visibility points indicating whether the test data is identified,
wherein one or more network routes of the plurality of network routes include a proxy having a proxy log that indicates whether the test data has been processed by the proxy, and
wherein the method further comprises outputting data representative of a comparison of the data that indicates whether the test data is identified at the one or more visibility points to the proxy log that indicates whether the test data has been processed by the proxy.
14. A computing system comprising:
a memory configured to store test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes; and
one or more processors in communication with the memory and configured to:
generate the test data, wherein to generate the test data, the one or more processors are configured to generate data that indicates an initialization time of the test data;
inject the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points;
determine, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points;
receive, from the one or more visibility points, data that indicates an observation time at which the one or more visibility points identify the test data; and
output, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest and data that indicates a comparison of the initialization time of the test data to the observation time at which the test data is identified.
15. The computing system of
determine, for respective visibility points of the plurality of visibility points, whether the test data is identified at the respective visibility points as data of interest.
16. The computing system of
17. The computing system of
18. The computing system of
wherein, to determine whether the test data is identified at the one or more visibility points, the one or more processors are configured to receive data from the one or more visibility points indicating whether the test data is identified,
wherein one or more network routes of the plurality of network routes include a proxy having a proxy log that indicates whether the test data has been processed by the proxy, and
wherein the one or more processors are further configured to output data representative of a comparison of the data that indicates whether the test data is identified at the one or more visibility points to the proxy log that indicates whether the test data has been processed by the proxy.
19. A computing system comprising:
a memory configured to store test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes; and
one or more processors in communication with the memory and configured to:
generate the test data, wherein to generate the test data, the one or more processors are configured to generate a type of test data;
inject the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points;
determine, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points;
receive, from the one or more visibility points, data that indicates an observation data type comprising a type of data identified by the one or more visibility points; and
output, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest and data that indicates whether the type of test data matches the observation data type received from the one or more visibility points.
20. The computing system of
wherein, to determine whether the test data is identified at the one or more visibility points, the one or more processors are configured to receive data from the one or more visibility points indicating whether the test data is identified,
wherein one or more network routes of the plurality of network routes include a proxy having a proxy log that indicates whether the test data has been processed by the proxy, and
wherein the one or more processors are further configured to output data representative of a comparison of the data that indicates whether the test data is identified at the one or more visibility points to the proxy log that indicates whether the test data has been processed by the proxy.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/440,974, filed Dec. 30, 2016, the entire content of which is incorporated herein by reference.
The invention relates to data security in a networked computing environment.
A networked computing environment may include a number of interdependent computing devices, servers, and processes that transmit and receive data via a network. In some instances, one or more network security tools may be deployed to monitor network data and identify data that may be malicious. The network security tools may be deployed at a number of locations throughout the network and may be tasked with processing a relatively large amount of data. For example, an enterprise computing environment may include hundreds or thousands of computing devices, servers, and processes that transmit or receive network data via a large number of network routes. In certain instances, it may be difficult to determine whether the security tools are identifying and/or processing the data.
In general, the techniques of this disclosure relate to measuring system effectiveness. For example, a networked computing environment may have a plurality of network routes and one or more visibility points that monitor and/or process data traffic on those routes, such as one or more data security appliances. According to aspects of this disclosure, a computing device may generate test data that is configured to be identified as data of interest at the visibility points of the network. The computing device may inject the test data into each network route at a location upstream from the one or more visibility points, and verify, for each network route through which the test data travels, that the test data is identified at the one or more visibility points. In examples of networks having more than one visibility point, the computing device may also verify that each of the visibility points identifies the data, e.g., by comparing data identified at the visibility points. The computing device may repeat the testing at regular intervals. In this way, the techniques may provide a comprehensive measurement of system effectiveness by testing each route and each visibility point.
In an example, a computer-implemented method includes generating test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes. The method also includes injecting the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points, and determining, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points. The method also includes outputting, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest.
In another example, a computing device comprises a memory configured to store test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes, and one or more processors. The one or more processors are in communication with the memory and configured to generate the test data, inject the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points, determine, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points, and output, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest.
In another example, a non-transitory computer-readable storage medium has instructions stored thereon that, when executed, cause one or more processors to generate test data that is configured to be identified as data of interest at one or more visibility points in a network having a plurality of network routes, inject the test data into each network route of the plurality of network routes at a location upstream from the one or more visibility points, determine, for each network route through which the test data travels, whether the test data is identified at the one or more visibility points, and output, for each network route through which the test data travels, data that indicates whether the test data is identified at the one or more visibility points as data of interest.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Traditional attempts to understand the health of certain computing systems may be limited to outside-perspective monitoring concepts. For example, outside-perspective monitoring concepts may include determining whether system components are powered and whether system components are operating. However, such monitoring concepts may not provide a reliable measure of actual system effectiveness, e.g., how well a particular system is doing its job. With respect to a data security appliance in a network system, as an example, monitoring whether the data security appliance is powered and operating does not provide a reliable indication of how well the data security appliance is monitoring network traffic and identifying unauthorized or malicious data.
According to aspects of this disclosure, a computing device may create innocuous events of interest and inject the events of interest into each possible network route of a networked environment. The computing device may then verify that the events of interest are recognized and processed by visibility points in the network, which may include upstream security tools. The computing device may repeat the process of creating, injecting, and verifying test data in order to provide a complete assessment of how well system components are operating over a given time period (e.g., over an entire day).
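The create-inject-verify cycle described above can be sketched as follows. This is a minimal illustration only: the `inject` and `query` callables stand in for the network operations and reporting interfaces, and their names and signatures are assumptions, not part of this disclosure.

```python
import time

def run_test_cycle(routes, visibility_points, inject, query):
    """Create innocuous test data, inject it into every route, and record,
    per (route, visibility point) pair, whether the data was identified.

    `inject` and `query` are caller-supplied callables standing in for the
    network transmission and result-gathering steps; both are illustrative.
    """
    results = {}
    for route in routes:
        # Innocuous event of interest, stamped with an initialization time.
        test_data = {"type": "innocuous-test", "route": route,
                     "injected_at": time.time()}
        inject(route, test_data)
        for vp in visibility_points:
            # Verify that each visibility point recognized the event.
            results[(route, vp)] = query(vp, test_data)
    return results
```

Repeating `run_test_cycle` at a fixed interval (e.g., hourly) would yield the periodic assessment described above.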
The architecture of system 10 illustrated in
Computing devices 12 may provide processing resources or store data to support a wide variety of applications. The functionality of computing devices 12 may be implemented in hardware or in a combination of software and hardware, where requisite hardware may be provided to store and execute software instructions. For example, computing devices 12 may include any of a wide range of user devices, including laptop or desktop computers, tablet computers, so-called “smart” phones, “smart” pads, or other personal digital appliances. Computing devices 12 may additionally include any combination of application servers, web servers, computing servers, database servers, file servers, media servers, communications servers, or any other computing device capable of sharing processing resources and/or data with other computing devices.
In some instances, computing devices 12 may form all or a portion of an enterprise computing environment that is configured to support a wide variety of services associated with a business. In an example for purposes of illustration, computing devices 12 may support services associated with a financial institution that offers different banking products, such as checking accounts, savings accounts, and credit accounts; and different lending products, such as home loans, car loans, business loans, student loans, and the like.
Network access unit 14 may provide computing devices 12 access to network 16 via network routes 17. For example, network access unit 14 may include a variety of routers, switches, or other components for routing data from computing devices 12 to an external network such as the Internet. Network access unit 14 may include one or more firewalls, server infrastructure, or other components and may be positioned on a perimeter of network 16 acting as a gateway between computing devices 12 and network 16. As described in greater detail with respect to the example of
Network 16 may comprise a private network that includes, for example, a private network associated with a financial institution, or may comprise a public network, such as the Internet. Although illustrated as a single entity, network 16 may comprise a combination of networks. Computing devices 12 may be connected to network 16 via wired and/or wireless links. The links may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The links may include intermediate routers, switches, base stations, or any other equipment that may be useful to facilitate communication between devices connected to network 16.
Visibility points 18 may include a wide variety of devices and/or services that have access to data of system 10. That is, as described herein, a visibility point may generally refer to any device or service that has access to or interacts with data being transmitted or received within a system or network. In the context of network security, visibility points 18 may comprise a variety of data security appliances that are configured to inspect data for cyber threats such as malware or other unauthorized or malicious data. In other examples, visibility points 18 may include other devices incorporated in a system for carrying out a particular function, such as computing devices included in a high frequency trading platform.
Test device 20 may include a variety of devices for processing and/or manipulating data. For example, in general, the functionality of test device 20 may be implemented in a device that includes one or more processing units, such as one or more microprocessors. The functionality of test device 20 may be implemented in hardware or in a combination of software and hardware, where requisite hardware may be provided to store and execute software instructions. While shown as a single computing device in the example of
Test data 22 may include any data that is identifiable as data of interest by visibility points 18. Test data 22 may be referred to as “innocuous data,” in that test data 22 is not intended to carry out any function other than testing how effectively data is identified and/or processed by components of system 10. With respect to network security, as an example, test device 20 may generate test data 22 to be identified by visibility points 18 as being unauthorized or malicious without actually containing data that is malicious to components of system 10. For example, visibility points 18 may be configured with one or more rules that define which data to process, and test data 22 may be based on such rules.
According to aspects of this disclosure, test device 20 may generate test data 22 to measure the effectiveness of visibility points 18, e.g., to determine whether visibility points 18 are able to identify and process test data 22. The ability to identify and process test data 22 may indicate that visibility points 18 are also able to effectively identify and process other system data, such as data being received from and/or transmitted to network 16 by computing devices 12. That is, the ability to identify and process the innocuous test data 22 may indicate that visibility points 18 are also able to effectively identify and process potentially unauthorized or malicious data.
The specific configuration of test data 22 may depend on the particular system effectiveness being tested. For example, test device 20 may generate test data 22 to include a particular type of data that is meaningful to the test. In an example for purposes of illustration, test device 20 may generate test data 22 to include an indication of a time at which test data 22 is generated or injected into network routes 17. In other examples, test device 20 may generate test data 22 to include other types of data such as data that indicates a test target, data that includes strings of symbols, data that includes a hash or encryption pattern, emails and email attachments, or the like.
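One way to assemble a payload carrying the kinds of fields just mentioned (an initialization time, a test target, and a recognizable hash) is sketched below. The field names and the JSON/SHA-256 layout are assumptions chosen for illustration; the disclosure does not prescribe a particular encoding.

```python
import hashlib
import json
import time

def make_test_data(test_target, test_type="simple"):
    """Build an innocuous test payload containing an initialization time,
    a test target, and a hash that visibility points can be configured to
    recognize. All field names here are illustrative assumptions."""
    payload = {
        "test_target": test_target,     # e.g., which route or proxy is under test
        "test_type": test_type,
        "initialized_at": time.time(),  # later compared to the observation time
    }
    # Sign the payload so a visibility point rule can match a known pattern.
    body = json.dumps(payload, sort_keys=True)
    payload["signature"] = hashlib.sha256(body.encode()).hexdigest()
    return payload
```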
Visibility points 18 may be configured to identify the particular type of test data 22 generated by test device 20. For example, a system administrator may configure visibility points 18 with one or more rules for identifying test data 22. Visibility points 18 may apply the rules to all data being transmitted on network routes 17.
After visibility points 18 have been configured to identify test data 22, test device 20 may transmit test data 22 to pass through each of network routes 17 (which may be referred to as “injecting” test data 22) at a location upstream from visibility points 18. In the example shown in
According to aspects of this disclosure, test device 20 may determine, for each of network routes 17 through which test data 22 travels, whether visibility points 18 have identified and/or processed test data 22. For example, test device 20 (or another management or reporting tool) may obtain, from each of visibility points 18, results data 24 indicating that visibility points 18 have identified test data 22. In some examples, test device 20 may also obtain other data from visibility points 18, such as data indicating a time at which test data 22 is processed, the manner in which visibility points 18 identified test data 22 (e.g., how visibility points 18 categorized test data 22), and/or the manner in which visibility points 18 process test data 22 (e.g., what processes visibility points 18 applied to test data 22). Test device 20 may obtain the data from visibility points 18 and determine whether visibility points 18 have identified and/or processed test data 22 based on the received data.
Test device 20 may also output data indicating whether visibility points 18 have identified and/or processed test data 22. For example, test device 20 may output data indicating that one or more visibility points 18 did not identify test data 22 or improperly categorized or processed test data 22. Again, while illustrated as a single component in the example of
According to aspects of this disclosure, test device 20 may repeat the testing (e.g., generating test data 22, injecting test data 22, and determining whether visibility points 18 have identified and/or processed test data 22) at regular intervals. In an example for purposes of illustration, test device 20 may repeat testing every hour on the half hour. In this example, visibility points 18 have a full hour block in which to identify each instance of test data 22, which allows some latitude for late processing while still testing at regular enough intervals to identify deficiencies. In other examples, test device 20 may use other intervals between tests, such as every 10 minutes, every half hour, or the like. In this way, the techniques may provide a comprehensive indication of whether and how well visibility points 18 are identifying and/or processing test data 22 over a given time period.
In examples in which system 10 includes multiple visibility points 18, test device 20 may additionally or alternatively compare test results between visibility points 18. For example, test device 20 may obtain results data from visibility point 18A indicating that visibility point 18A processed test data 22 at a particular time. Test device 20 may obtain results data from visibility point 18B indicating that visibility point 18B did not identify or process test data 22 at the particular time. In this example, test device 20 may determine that visibility point 18B failed to properly identify test data 22 at the particular time.
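The comparison between visibility points can be expressed compactly: any test-data identifier reported by at least one visibility point should have been reported by all of them. The input structure below (a mapping from visibility point to the set of identifiers it reported) is an assumption for illustration.

```python
def cross_check(results_by_vp):
    """Given, per visibility point, the set of test-data identifiers it
    reported, return the identifiers each point missed that some other
    point observed. Input/output shapes are illustrative assumptions."""
    seen_anywhere = set().union(*results_by_vp.values()) if results_by_vp else set()
    return {vp: sorted(seen_anywhere - seen)
            for vp, seen in results_by_vp.items()}
```

In the scenario above, visibility point 18B's missed identification would surface as a nonempty miss list for 18B.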
In this way, the techniques may be used to measure system effectiveness in a variety of ways. For example, test device 20 may determine that visibility points 18 are actually processing data for all network routes 17, as well as the manner in which visibility points 18 are processing the data.
The example of
Firewall 34 may include devices and/or services that monitor and control incoming and outgoing network traffic based on predetermined security rules. Firewall 34 may be located at the perimeter of a network and be responsible for processing all data transmitted to computing devices 28 from network 42. Internet proxy 38 may include one or more proxy servers for managing connections to network 42. In some examples, internet proxy 38 may include individual Internet Protocol (IP) proxy routes to network 42. In other examples, internet proxy 38 may include load-balanced Virtual IP (VIP) proxy routes to network 42. Access logs 40 may provide an indication of data traffic managed by internet proxy 38.
Security tools 44 may be responsible for identifying and processing network data, such as data received by computing devices 28 from network 42, such as the Internet. Example security tools 44 include FireEye Central Management System and Dynamic Threat Intelligence products developed by FireEye Incorporated, TippingPoint Intrusion Prevention System developed by Trend Micro Corporation, Security Analytics developed by RSA Corporation, or a wide variety of other proprietary or publicly available devices or services for providing cybersecurity.
Security management unit 46 may perform security analytics or intelligence as a part of a security information and event management (SIEM) platform. Security management unit 46 may receive data from security tools 44, access logs 40 from internet proxy 38, or other sources. As an example, security management unit 46 may include ArcSight Enterprise Security Manager software developed by Hewlett-Packard company.
According to aspects of this disclosure, test device 30 may generate test data 32 and inject test data 32 into the system shown in
Custom user agents added to these events include simple signatures that allow a variety of security tools 44 to detect the same test traffic. For example, according to aspects of this disclosure, test data 32 may be designed to include one of two user agents. A first user agent may be associated with a proxy IP, while a second user agent may be associated with a load-balanced Virtual IP (VIP) of a data center, as illustrated by the examples below:
In the example above, test device 30 may generate test data 32 to include the custom user agent string that is identifiable by security tools 44. The user agent includes a test target (e.g., single_proxy or VIP_proxy) and a test type (e.g., simple). In other examples, the user agent may include additional or different data. For example, the user agent may include one or more time values, symbol strings, hashes, encrypted patterns or the like.
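The actual user agent strings were not reproduced above, so the sketch below assumes a hypothetical `CompanyTestAgent (<target>; <type>)` layout carrying the two fields the text describes, with the `single_proxy` and `VIP_proxy` targets mentioned earlier. The string format itself is an assumption, not the format used in the disclosure.

```python
def build_user_agent(test_target, test_type="simple"):
    """Compose a custom user agent carrying a test target and a test type.
    The layout is a hypothetical illustration."""
    assert test_target in ("single_proxy", "VIP_proxy")
    return "CompanyTestAgent ({0}; {1})".format(test_target, test_type)

def parse_user_agent(user_agent):
    """Recover (test_target, test_type) from a matching user agent, or
    return None for ordinary (non-test) traffic."""
    prefix = "CompanyTestAgent ("
    if not (user_agent.startswith(prefix) and user_agent.endswith(")")):
        return None
    target, _, ttype = user_agent[len(prefix):-1].partition("; ")
    return target, ttype
```

A tool matching on the fixed prefix could detect the test traffic, while the parsed fields differentiate the two test types.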
In examples in which one of security tools 44 includes the Security Analytics (also known as NetWitness) or a similar software package, security tool 44 may use an application rule (or “App Rule”) to flag test events, so that the test events may be identified and forwarded to security management unit 46. An example application rule is as follows:
The above rule may be tagged in a meta key called “company.risk” and may be forwarded along to security management unit 46 via a Reporter rule on the Security Analytics system. In this example, a Reporter rule that drives a system log (syslog) alert to security management unit 46 simply identifies company.risk=‘FireEye User Agent Checking via Proxies’ and the forwarded data includes the specific user agent involved. In this manner, security management unit 46 and event visualization tool 48 may use the data to identify and differentiate the two test types. In examples in which one of security tools 44 includes TippingPoint or a similar software package that analyzes web traffic, security tool 44 may use a basic HTTP custom filter that matches the user agent string.
The examples above are intended for purposes of illustration only. That is, test device 30 may generate test data 32 to include any contrived data that is designed not only for security tools 44 to recognize, but also for security tools 44 to understand and process. For example, test data 32 may include multiple time values. In this example, test data 32 may include a time at which test data 32 is injected, a time at which security tools 44 receive test data 32 (e.g., an ingestion time), and/or a time at which test data 32 is realized by security tools 44.
In instances in which security tools 44 have special or advanced features for analyzing data, test device 30 may generate test data 32 to be identified by such special or advanced features. For example, security tools 44 may be configured to recognize symbol strings, hashes, or other patterns. In such an example, test device 30 may generate test data 32 to include symbol strings, hashes, or other patterns to be identified by security tools 44. In still other examples, security tools 44 may be configured to identify target destinations for data transmitted over network 42. In such examples, test device 30 may generate test data 32 to include a particular target destination.
Security management unit 46 receives data from security tools 44, access logs 40 from internet proxy 38, or other sources, and determines whether security tools 44 properly identify and/or process test data 32. For example, security management unit 46 may include a variety of tools or software for verifying that respective security tools 44 identified test data 32 injected by test device 30. Security management unit 46 may also be configured to determine other factors based on the particular data included in test data 32. For example, security management unit 46 may determine a time at which test data 32 was received by respective security tools 44, a time at which test data 32 was processed by respective security tools 44, and whether respective security tools 44 properly identified the content of test data 32 (e.g., such as symbol strings, hashes, encrypted patterns or the like).
In an example for purposes of illustration, test device 30 may be configured to inject test data 32 into the network one time per hour. Security management unit 46 may verify that respective security tools 44 identify test data 32 one time per hour. Security management unit 46 may identify a processing delay if a particular one of security tools 44 does not identify test data 32 during one hour, but identifies two instances of test data 32 another hour. In another example, security management unit 46 may identify a processing delay or malfunction if one of security tools 44 identifies test data 32 but another of security tools 44 fails to identify test data 32 or identifies test data 32 at a delayed time relative to one or more other security tools 44.
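The one-per-hour cadence check just described amounts to bucketing a tool's observations by hour and flagging hours whose counts deviate from the expected value. A minimal sketch, assuming observation timestamps in epoch seconds (the bucketing scheme is an assumption):

```python
from collections import Counter

def find_count_anomalies(observations, expected_per_hour=1):
    """Bucket one security tool's test-data observations by hour and
    report each hour whose count deviates from the expected cadence.
    `observations` is an iterable of epoch-second timestamps."""
    per_hour = Counter(int(ts // 3600) for ts in observations)
    hours = range(min(per_hour), max(per_hour) + 1) if per_hour else []
    return {h: per_hour.get(h, 0) for h in hours
            if per_hour.get(h, 0) != expected_per_hour}
```

A zero count for an hour followed by a double count the next hour, as in the illustration above, would show up as two adjacent anomalous buckets.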
According to aspects of this disclosure, in some examples, security management unit 46 may also compare data from security tools 44 to one or more other sources of data to verify that test data 32 is properly identified and processed. For example, security management unit 46 may receive access logs 40 from internet proxy 38. The access logs 40 may indicate each instance of service such that each instance of test data 32 is captured in an entry of access logs 40. Security management unit 46 may compare instances in which security tools 44 fail to identify or process test data 32 to access logs 40 in order to verify that security tools 44 were responsible for the failure.
In an example for purposes of illustration, security management unit 46 may determine that security tool 44A did not identify and/or process test data 32 at a particular instance in time. Security management unit 46 may cross check access logs 40 to verify that test data 32 was serviced by internet proxy 38 at the particular instance in time. If test data 32 is included in access logs 40, security management unit 46 may determine that security tool 44A did not identify and/or process test data 32. If, however, test data 32 is not included in access logs 40, security management unit 46 may determine that internet proxy 38 may be down and not operational.
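The cross-check just described can be sketched as a small decision function. The function name, the log-entry shape, and the result strings are illustrative assumptions:

```python
def attribute_failure(tool_detected, proxy_log_entries, test_id):
    """Attribute a missed detection to the security tool or to the proxy
    by cross-checking the proxy access logs for the test-data entry."""
    serviced = any(entry.get("test_id") == test_id for entry in proxy_log_entries)
    if tool_detected:
        return "ok"
    if serviced:
        # The proxy serviced the test data, so the tool missed it.
        return "tool failure"
    # The test data never reached the proxy; the proxy may be down.
    return "proxy possibly down"
```

For example, a missed detection with a matching proxy log entry points at the tool, while a missed detection with no proxy entry points at the proxy itself.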
In some examples, security management unit 46 may take a hierarchical approach to determining the effectiveness of security tools 44. For example, security management unit 46 may initially determine whether security tools 44 identify an instance of test data 32. Security management unit 46 may then determine whether security tools 44 identified and/or processed test data 32 in a timely manner, e.g., within an expected time period accounting for inherent network latencies. Security management unit 46 may then determine whether security tools 44 processed test data 32 in an expected manner. For example, security management unit 46 may determine whether security tools 44 processed test data 32 based on the type of data included in test data (e.g., a sandboxing feature of one of security tools 44 was activated based on identification of a particular type of test data 32).
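The hierarchical approach above (identified first, then timely, then processed correctly) can be sketched as a tiered check. The function name, the 300-second default, and the result strings are illustrative assumptions:

```python
def hierarchical_assessment(identified, latency_seconds, processed_as_expected,
                            max_latency_seconds=300):
    """Tiered effectiveness check: return the first tier that failed, or 'pass'.

    Tiers mirror the disclosure: identification, then timeliness within an
    expected window accounting for inherent latencies, then correct processing.
    """
    if not identified:
        return "failed: not identified"
    if latency_seconds > max_latency_seconds:
        return "failed: identified late"
    if not processed_as_expected:
        return "failed: processed incorrectly"
    return "pass"
```

Because the tiers are ordered, a tool that never identifies the test data is reported at the first tier without masking that result behind a latency or processing verdict.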
Event visualization tool 48 may generate a report that indicates whether security tools 44 properly identify and/or process test data 32. For example, event visualization tool 48 may generate a variety of tables, graphs, charts, or the like that indicate whether security tools 44 properly identify and/or process test data 32. An example of event visualization tool 48 is Splunk Enterprise developed by Splunk Inc.
The example of
According to aspects of this disclosure, test device 60 may generate test email 62 that includes data of interest, e.g., that security tools 72 are configured to identify as unauthorized and/or malicious data. As described above with respect to the example of
In an example for purposes of illustration, test device 60 may be configured to generate test email 62 to include a PDF document that is designed as data of interest to be identified by security tools 72. As another example, test device 60 may be configured to generate test email 62 to include content that is identifiable based on a YARA rule enforced by one or more of security tools 72 that specifies a particular file type and/or language (e.g., a Russian Word document).
According to aspects of this disclosure, test device 60 may generate test email 62 for an email test from a command-line driven script called from a scheduled (cron) job. Test device 60 may construct a subject line that includes the testing source and the Mail Transfer Agent (MTA) target, and attach a test file expected to be detected by one or more of security tools 72. An example script, written in PERL, is as follows:
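The PERL script itself is not reproduced in this excerpt. As an illustrative stand-in (not the disclosed script), a minimal Python sketch of the described behavior, encoding the testing source and MTA target in the subject line and attaching a detectable test file, might look like this; the subject format, hostnames, and addresses are assumptions:

```python
import smtplib
from email.message import EmailMessage

def build_test_email(source, mta_target, attachment_bytes, filename,
                     sender, recipient):
    """Construct a test email whose subject encodes the testing source and
    the MTA target, with a test file attached for detection."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    # Subject line carries the testing source and the MTA target.
    msg["Subject"] = f"SECTEST source={source} mta={mta_target}"
    msg.set_content("Automated security-visibility test message.")
    # Attach the test file expected to trigger the security tools.
    msg.add_attachment(attachment_bytes, maintype="application",
                       subtype="pdf", filename=filename)
    return msg

def send_test_email(msg, mta_host):
    """Inject the test email by handing it to the target MTA."""
    with smtplib.SMTP(mta_host) as smtp:
        smtp.send_message(msg)
```

Run from a scheduler, such a script injects one labeled test email per interval toward each MTA under test.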
Test device 60 injects test email 62 into the system by transmitting test email 62 to each of email servers 66 of data centers 64. Email servers 66 respectively process test email 62 and route test email 62 to a particular target destination. Test email 62 is routed through network access units 68 to network 70. Security tools 72 are configured as visibility points for all traffic passing through network access units 68, including test email 62. Hence, security tools 72 are configured to identify test email 62, e.g., as including unauthorized or malicious data.
Security management unit 74 receives data from security tools 72 and determines whether security tools 72 properly identify and/or process test email 62. For example, security management unit 74 may include a variety of tools or software for verifying that respective security tools 72 identified test email 62 injected by test device 60. Security management unit 74 may also be configured to determine other factors based on the particular data included in test email 62. For example, security management unit 74 may determine a time at which test email 62 was received by respective security tools 72 (e.g., an ingestion time), a time at which test email 62 was processed by respective security tools 72, and whether respective security tools 72 properly identified the content of test email 62 (e.g., data included in the sender field, recipient field, or subject field, symbol strings, hashes, encrypted patterns, or the like).
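Comparing what a security tool reports against what was injected, including whether the reported data type matches the injected test-data type, can be sketched as follows. The dictionary keys and field names are illustrative assumptions:

```python
def compare_observation(test_record, observation):
    """Compare a tool's observation against the injected test data:
    whether it was identified, the ingestion delay, and whether the
    reported data type matches the injected type."""
    report = {
        "identified": observation is not None,
        "type_match": None,
        "ingestion_delay_s": None,
    }
    if observation is not None:
        report["type_match"] = (observation["data_type"]
                                == test_record["data_type"])
        report["ingestion_delay_s"] = (observation["ingestion_time"]
                                       - test_record["injection_time"])
    return report
```

A mismatched type with a successful identification would indicate the tool saw the test data but categorized it incorrectly, which is reported separately from a missed detection.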
As described above with respect to
In some examples, security management unit 74 may take a hierarchical approach to determining the effectiveness of security tools 72. For example, security management unit 74 may initially determine whether security tools 72 identify an instance of test email 62. Security management unit 74 may then determine whether security tools 72 identified and/or processed test email 62 in a timely manner, e.g., within an expected time period accounting for inherent network and/or processing latencies. Security management unit 74 may then determine whether security tools 72 processed test email 62 in an expected manner. For example, security management unit 74 may determine whether security tools 72 processed test email 62 based on the type of data included in the email (e.g., a sandboxing feature of one of security tools 72 was activated based on a YARA rule established at the security tool 72).
Event visualization tool 76 may generate a report that indicates whether security tools 72 properly identify and/or process test email 62. For example, event visualization tool 76 may generate a variety of tables, graphs, charts, or the like that indicate whether security tools 72 properly identify and/or process test email 62.
For example, while the example of
Processors 82, in one example, are configured to implement functionality and/or process instructions for execution within test device 20. For example, processors 82 may be capable of processing instructions stored by storage units 86. Processors 82 may include, for example, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or equivalent discrete or integrated logic circuitry.
Test device 20 may utilize interfaces 84 to communicate with external devices via one or more wired or wireless connections. Interfaces 84 may be network interface cards, universal serial bus (USB) interfaces, optical interfaces, or any other type of interface capable of sending and receiving information via TCP/IP. Examples of such network interfaces may include Ethernet, Wi-Fi, or Bluetooth radios.
Storage units 86 may store an operating system (not shown) that controls the operation of components of test device 20. For example, the operating system may facilitate the communication of web testing unit 88, email testing unit 90, and reporting unit 92 with processors 82 and interfaces 84. In some examples, storage units 86 are used to store program instructions for execution by processors 82, such as web testing unit 88, email testing unit 90, and reporting unit 92. Storage units 86 may also be configured to store information within test device 20 during operation. Storage units 86 may be used by software or applications (e.g., web testing unit 88, email testing unit 90, and reporting unit 92) executed by processors 82 of test device 20 to temporarily store information during program execution.
Storage units 86 may include a computer-readable storage medium or computer-readable storage device. In some examples, storage units 86 include one or more of a short-term memory or a long-term memory. Storage units 86 may include, for example, random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), magnetic hard discs, optical discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).
According to aspects of this disclosure, web testing unit 88, email testing unit 90, and reporting unit 92 may be configured to perform the techniques described herein. For example, web testing unit 88 may be configured to generate test data 32 described with respect to the example system of
For example,
Hence,
In the illustrated example, a computing device generates test data configured to be identified as data of interest at visibility points in a network (120). For example, the test data may include any data that is identifiable as data of interest by the visibility points of a computing system. The test data may be referred to as “innocuous data,” in that the test data is not intended to carry out any function other than testing how effectively data is identified and/or processed by components of the system. The specific configuration of the test data may depend on the particular system effectiveness being tested. That is, the test data may be meaningful in that the test data may measure whether particular functions of the visibility points are operational. The computing device then injects the test data into each network route of a plurality of network routes at locations upstream from the visibility points in the network (122).
The computing device determines, for each network route, whether the test data has been identified at the visibility points (124). For example, the computing device (or another management or reporting tool) may obtain, from each of the visibility points, results data that indicates whether respective visibility points have identified the test data. In some examples, the computing device may also obtain other data from the visibility points, such as data indicating a time at which the test data is processed, the manner in which the visibility points identified the test data (e.g., how the visibility points categorized the test data), and/or the manner in which the visibility points process the test data (e.g., what processes the visibility points apply to the test data).
The computing device outputs, for each network route, data that indicates whether the test data has been identified at the visibility points as data of interest (126). For example, the computing device may output data indicating that one or more of the visibility points did not identify the test data or improperly categorized or processed the test data.
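The four steps above, generate (120), inject (122), determine (124), and output (126), can be sketched end to end. The callable names and signatures are illustrative assumptions supplied by the caller, not part of the disclosure:

```python
def run_visibility_test(routes, visibility_points, make_test_data, inject, collect):
    """Sketch of steps (120)-(126): generate test data, inject it upstream on
    every route, determine identification at each visibility point, and
    return per-route results for output."""
    test_data = make_test_data()                  # (120) generate test data
    results = {}
    for route in routes:
        inject(route, test_data)                  # (122) inject upstream
        identified = {vp: collect(vp, test_data)  # (124) identified at each point?
                      for vp in visibility_points}
        results[route] = identified               # (126) per-route output
    return results
```

A reporting tool can then render `results` per route, flagging any visibility point that returned a miss.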
It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code, and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry, as well as any combination of such components. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a microprocessor, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
Assignments: Wells Fargo Bank, N.A. (assignment on the face of the patent, Aug 08 2017); David Cason to Wells Fargo Bank, N.A. (assignment of assignors interest, executed Feb 20 2018, Reel/Frame 045124/0791).