A computer worm defense system comprises multiple containment systems tied together by a management system. Each containment system is deployed on a separate communication network and contains a worm sensor and a blocking system. In various embodiments, the computer worm may be transported from a production network, where the computer worm is not readily identifiable, to an alternate network in the worm sensor where the computer worm may be readily identifiable. Computer worm identifiers generated by a worm sensor of one containment system can be provided not only to the blocking system of the same containment system, but can also be distributed by the management system to blocking systems of other containment systems.
24. A method comprising:
monitoring communications traffic from a communication network;
filtering the communications traffic from the communication network, the filtered communications traffic comprising one or more suspicious characteristics of malicious traffic, wherein the one or more suspicious characteristics indicate that the filtered communications traffic should be analyzed to determine whether or not the filtered communications traffic comprises malicious traffic;
determining whether the filtered communications traffic comprises malicious traffic by analyzing the filtered communications traffic, the analyzing comprising monitoring a processing of the filtered communications traffic within an analysis environment of an unauthorized activity detection system; and
when the filtered communications traffic is determined to comprise malicious traffic, generating an identifier for the malicious traffic based on anomalous behavior caused within the analysis environment during the processing of the filtered communications traffic.
1. An unauthorized activity defense system comprising:
one or more unauthorized activity detection systems, each unauthorized activity detection system comprising
a malicious traffic sensor implemented in a computing device and configured to generate an identifier for malicious traffic propagating within a communication network, the malicious traffic sensor comprising
an analysis environment to analyze communications traffic filtered from the communication network, the filtered communications traffic comprising one or more suspicious characteristics associated with malicious traffic, and
a controller configured to monitor the analysis environment, and to determine whether the filtered communications traffic comprises malicious traffic, the controller to
monitor a processing of the filtered communications traffic within the analysis environment, and
when the filtered communications traffic is determined to comprise malicious traffic, generate the identifier for the malicious traffic based on anomalous behavior caused within the analysis environment during the processing of the filtered communications traffic.
41. A non-transitory machine readable medium having embodied thereon executable code, the executable code being executable by a processor to perform an unauthorized activity defense method comprising:
monitoring communications traffic from a communication network;
filtering the communications traffic from the communication network, the filtered communications traffic comprising one or more suspicious characteristics of malicious traffic, wherein the one or more suspicious characteristics indicate that the filtered communications traffic should be analyzed to determine whether or not the filtered communications traffic comprises malicious traffic;
determining whether the filtered communications traffic comprises malicious traffic by analyzing the filtered communications traffic, the analyzing comprising monitoring a processing of the filtered communications traffic within an analysis environment of an unauthorized activity detection system; and
when the filtered communications traffic is determined to comprise malicious traffic, generating an identifier for the malicious traffic based on anomalous behavior caused within the analysis environment during the processing of the filtered communications traffic.
2. The unauthorized activity defense system of
3. The unauthorized activity defense system of
4. The unauthorized activity defense system of
execute a virtual machine,
transmit the portion of filtered communications traffic to the virtual destination device, and
identify anomalous behavior by analysis of a response of the virtual destination device to the portion of filtered communications traffic.
5. The unauthorized activity defense system of
6. The unauthorized activity defense system of
a malicious traffic blocking system in communication with the malicious traffic sensor over the communication network and configured to receive the identifier from the malicious traffic sensor to block the propagation of the malicious traffic within the communication network.
7. The unauthorized activity defense system of
a management system in communication with the one or more unauthorized activity detection systems and configured to obtain the identifier from a malicious traffic sensor of a first unauthorized activity detection system of the one or more unauthorized activity detection systems and distribute the identifier to a malicious traffic blocking system of a second unauthorized activity detection system of the one or more unauthorized activity detection systems.
8. The unauthorized activity defense system of
9. The unauthorized activity defense system of
10. The unauthorized activity defense system of
11. The unauthorized activity defense system of
12. The unauthorized activity defense system of
13. The unauthorized activity defense system of
14. The unauthorized activity defense system of
15. The unauthorized activity defense system of
16. The unauthorized activity defense system of
17. The unauthorized activity defense system of
18. The unauthorized activity defense system of
19. The unauthorized activity defense system of
20. The unauthorized activity defense system of
21. The unauthorized activity defense system of
22. The unauthorized activity defense system of
23. The unauthorized activity defense system of
25. The method of
26. The method of
copying at least a portion of filtered communications traffic from the communication network; and
analyzing processing of the portion of filtered communications traffic directed to a virtual destination device in the alternate computer network.
27. The method of
executing a virtual machine;
transmitting the portion of filtered communications to the virtual destination device; and
identifying anomalous behavior by analyzing a response of the virtual destination device to the portion of filtered communications.
28. The method of
29. The method of
30. The method of
31. The method of
32. The method of
33. The method of
generating a sequence of network activities within an alternate computer network of the analysis environment based on a pattern of network activities; and
determining the identifier by comparing observed behavior in the alternate computer network with behavior expected from the pattern of network activities.
34. The method of
35. The method of
36. The method of
37. The method of
detecting the malicious traffic within the communication network; and
blocking the propagation of the malicious traffic within the communication network.
38. The method of
39. The method of
40. The method of
42. The non-transitory machine readable medium of
43. The non-transitory machine readable medium of
copying at least a portion of filtered communications traffic from the communication network; and
analyzing processing of the portion of filtered communications traffic directed to a virtual destination device in the alternate computer network.
44. The non-transitory machine readable medium of
executing a virtual machine;
transmitting the portion of filtered communications to the virtual destination device; and
identifying anomalous behavior by analyzing a response of the virtual destination device to the portion of filtered communications.
45. The non-transitory machine readable medium of
46. The non-transitory machine readable medium of
47. The non-transitory machine readable medium of
48. The non-transitory machine readable medium of
49. The non-transitory machine readable medium of
50. The non-transitory machine readable medium of
generating a sequence of network activities within an alternate computer network of the analysis environment based on a pattern of network activities; and
determining the identifier by comparing observed behavior in the alternate computer network with behavior expected from the pattern of network activities.
51. The non-transitory machine readable medium of
52. The non-transitory machine readable medium of
53. The non-transitory machine readable medium of
54. The non-transitory machine readable medium of
detecting the malicious traffic within the communication network; and
blocking the propagation of the malicious traffic within the communication network.
55. The non-transitory machine readable medium of
56. The non-transitory machine readable medium of
57. The non-transitory machine readable medium of
This application is a continuation of U.S. application Ser. No. 13/651,331, filed on Oct. 12, 2012, now U.S. Pat. No. 8,516,593, issued on Aug. 20, 2013, which is a continuation of U.S. application Ser. No. 13/423,057, filed on Mar. 16, 2012, now U.S. Pat. No. 8,291,499, issued on Oct. 16, 2012, which is a continuation of U.S. application Ser. No. 11/409,355, filed on Apr. 20, 2006, now U.S. Pat. No. 8,171,553, issued on May 1, 2012, which is a continuation-in-part of U.S. application Ser. No. 11/152,286, filed on Jun. 13, 2005, now U.S. Pat. No. 8,006,305, issued on Aug. 23, 2011, which claims the priority benefit of U.S. Provisional Application No. 60/579,910, filed on Jun. 14, 2004. U.S. application Ser. No. 11/409,355, filed on Apr. 20, 2006, now U.S. Pat. No. 8,171,553, issued on May 1, 2012, is also a continuation-in-part of U.S. application Ser. No. 11/096,287, filed on Mar. 31, 2005, which claims the priority benefit of U.S. Provisional Application No. 60/559,198, filed on Apr. 1, 2004. U.S. application Ser. No. 11/409,355, filed on Apr. 20, 2006, now U.S. Pat. No. 8,171,553, issued on May 1, 2012, is also a continuation-in-part of U.S. application Ser. No. 11/151,812, filed on Jun. 13, 2005, which claims the priority benefit of U.S. Provisional Application No. 60/579,953, filed on Jun. 14, 2004. Each of the above-identified applications is incorporated by reference herein.
1. Field of the Invention
The present invention relates generally to computer networks, and more particularly to preventing the spread of malware.
2. Background Art
Detecting and distinguishing computer worms from ordinary communications traffic within a computer network is a challenging problem. Moreover, modern computer worms operate at an ever increasing level of sophistication and complexity. Consequently, it has become increasingly difficult to detect computer worms.
A computer worm can propagate through a computer network by using active propagation techniques. One such active propagation technique is to select target systems to infect by scanning network address space (i.e., a scan-directed computer worm). Another active propagation technique is to use topological information from an infected system to actively propagate the computer worm in the system (i.e., a topologically directed computer worm). Still another active propagation technique is to select target systems to infect based on some combination of previously generated lists of target systems (e.g., a hit-list directed computer worm).
In addition to the active propagation techniques, a computer worm may propagate through a computer network by using passive propagation techniques. One passive propagation technique is for the worm to attach itself to a normal network communication not initiated by the computer worm itself (i.e., a stealthy or passive contagion computer worm). The computer worm then propagates through the computer network in the context of normal communication patterns not directed by the computer worm.
It is anticipated that next-generation computer worms will have multiple transport vectors, use multiple target selection techniques, have no previously known signatures, and will target previously unknown vulnerabilities. It is also anticipated that next-generation computer worms will use a combination of active and passive propagation techniques and may emit chaff traffic (i.e., spurious traffic generated by the computer worm) to cloak the communication traffic that carries the actual exploit sequences of the computer worms. This chaff traffic will be emitted in order to confuse computer worm detection systems and to potentially trigger a broad denial-of-service by an automated response system.
Approaches for detecting computer worms in a computer system include misuse detection and anomaly detection. In misuse detection, known attack patterns of computer worms are used to detect the presence of the computer worm. Misuse detection works reliably for known attack patterns but is not particularly useful for detecting novel attacks. In contrast to misuse detection, anomaly detection has the ability to detect novel attacks. In anomaly detection, a baseline of normal behavior in a computer network is created so that deviations from this behavior can be flagged as anomalous. The difficulty inherent in this approach is that universal definitions of normal behavior are difficult to obtain. Given this limitation, anomaly detection approaches strive to minimize false positive rates of computer worm detection.
In one suggested computer worm containment system, detection devices are deployed in a computer network to monitor outbound network traffic and detect active scan-directed computer worms within the computer network. To achieve effective containment of these active computer worms, as measured by the total infection rate over the entire population of systems, the detection devices are widely deployed in the computer network in an attempt to detect computer worm traffic close to a source of the computer worm traffic. Once detected, these computer worms are contained by using an address blacklisting technique. This computer worm containment system, however, does not have a mechanism for repair and recovery of infected computer networks.
In another suggested computer worm containment system, the protocols (e.g., network protocols) of network packets are checked for standards compliance under an assumption that a computer worm will violate the protocol standards (e.g., exploit the protocol standards) in order to successfully infect a computer network. While this approach may be successful in some circumstances, this approach is limited in other circumstances. Firstly, it is possible for a network packet to be fully compatible with published protocol standard specifications and still trigger a buffer overflow type of software error due to the presence of a software bug. Secondly, not all protocols of interest can be checked for standards compliance because proprietary or undocumented protocols may be used in a computer network. Moreover, evolutions of existing protocols and the introduction of new protocols may lead to high false positive rates of computer worm detection when “good” behavior cannot be properly and completely distinguished from “bad” behavior. Encrypted communications channels further complicate protocol checking because protocol compliance cannot be easily validated at the network level for encrypted traffic.
In another approach to computer worm containment, “honey farms” have been proposed. A honey farm includes “honeypots” that are sensitive to probe attempts in a computer network. One problem with this approach is that probe attempts do not necessarily indicate the presence of a computer worm because there may be legitimate reasons for probing a computer network. For example, a computer network can be legitimately probed by scanning an Internet Protocol (IP) address range to identify poorly configured or rogue devices in the computer network. Another problem with this approach is that a conventional honey farm does not detect passive computer worms and does not extract signatures or transport vectors in the face of chaff emitting computer worms.
Another approach to computer worm containment assumes that computer worm probes are identifiable at a given worm sensor in a computer network because the computer worm probes will target well-known vulnerabilities and thus have well-known signatures which can be detected using a signature-based intrusion detection system. Although this approach may work for well-known computer worms that periodically recur, such as the CodeRed computer worm, this approach does not work for novel computer worm attacks exploiting a zero-day vulnerability (e.g., a vulnerability that is not widely known).
One suggested computer worm containment system attempts to detect computer worms by observing communication patterns between computer systems in a computer network. In this system, connection histories between computer systems are analyzed to discover patterns that may represent a propagation trail of the computer worm. In addition to problems related to false positives, the computer worm containment system does not distinguish between the actual transport vector of a computer worm and a transport vector including a spuriously emitted chaff trail. As a result, simply examining malicious traffic to determine the transport vector can lead to a broad denial-of-service (DoS) attack on the computer network. Further, the computer worm containment system does not determine a signature of the computer worm that can be used to implement content filtering of the computer worm. In addition, the computer worm containment system does not have the ability to detect stealthy passive computer worms, which by their very nature cause no anomalous communication patterns.
In light of the above, there exists a need for an effective system and method of containing computer worms.
An exemplary unauthorized activity capture system, according to some embodiments of the invention, comprises a tap configured to copy network data from a communication network, and a controller coupled to the tap. The controller is configured to receive the copy of the network data from the tap, analyze the copy of the network data with a heuristic to flag the network data as suspicious, and simulate the transmission of the network data to a destination device.
In some embodiments, the heuristic can be configured to detect unknown source devices. Further, the heuristic can also be configured to detect the network data sent to a dark internet protocol address. The controller may further comprise a policy engine configured to flag network data as suspicious based on comparing the network data to policies.
An unauthorized activity capture system can comprise a tap configured to copy network data from a communication network and a controller. The controller can be configured to receive the copy of the network data from the tap, analyze the copy of the network data with a heuristic, retrieve a virtual machine, configure a replayer to replicate the network data to the virtual machine, and identify unauthorized activity by analyzing a response from the virtual machine to the network data.
An unauthorized activity capture method can comprise copying network data from a communication network, analyzing the copied network data with a heuristic to flag the network data as suspicious, and orchestrating transmission of the network data to a destination device to identify unauthorized activity. Orchestrating transmission of the network data can comprise retrieving a virtual machine configured to receive the network data, configuring a replayer to transmit the network data to the virtual machine, and simulating the transmission of the network data to the virtual machine.
A computer readable medium can comprise computer readable code configured to direct a processor to copy network data from a communication network, analyze the copied network data with a heuristic to flag the network data as suspicious, and orchestrate transmission of the network data to a destination device to identify unauthorized activity. Orchestrating transmission of the network data can comprise directing the processor to retrieve a virtual machine configured to receive the network data, configure a replayer to transmit the network data to the virtual machine, and simulate the transmission of the network data to the virtual machine.
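By way of illustration, the following Python sketch outlines the capture-flag-replay flow summarized above: copied network data is screened by a simple heuristic (an unknown source device, or a destination in a dark address space), and flagged data is replayed to a virtual destination device whose response is checked for unauthorized activity. This is a minimal sketch under stated assumptions; all names, addresses, and the toy virtual machine are hypothetical and not part of the disclosure.

    # Illustrative constants; a real deployment would learn these.
    DARK_IP_SPACE = {"203.0.113.7", "203.0.113.8"}    # unused ("dark") addresses
    KNOWN_SOURCES = {"198.51.100.1", "198.51.100.2"}  # previously seen devices

    def is_suspicious(packet):
        """Heuristic: flag data from unknown sources or to dark addresses."""
        return (packet["src"] not in KNOWN_SOURCES
                or packet["dst"] in DARK_IP_SPACE)

    class VirtualMachine:
        """Stand-in for a virtual destination device."""
        def process(self, packet):
            # A real VM would execute the traffic; here we pretend that
            # traffic aimed at a dark address provokes anomalous behavior.
            return {"anomalous": packet["dst"] in DARK_IP_SPACE}

    class Replayer:
        """Replicates copied network data to the virtual machine."""
        def __init__(self, vm):
            self.vm = vm
        def replay(self, packet):
            return self.vm.process(packet)

    def capture(tap, vm):
        replayer = Replayer(vm)
        for packet in tap:                    # copies of network data
            if is_suspicious(packet):         # heuristic flagging
                response = replayer.replay(packet)
                if response["anomalous"]:
                    print("unauthorized activity:", packet)

    capture([{"src": "198.51.100.9", "dst": "203.0.113.7"}], VirtualMachine())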
An unauthorized activity containment system in accordance with one embodiment of the present invention detects suspicious computer activity, models the suspicious activity to identify unauthorized activity, and blocks the unauthorized activity. The unauthorized activity containment system can flag suspicious activity and then model the effects of the suspicious activity to identify malware and/or unauthorized activity associated with a computer user. The threshold for detecting the suspicious activity may be set low, whereby a single command may be flagged as suspicious. In other embodiments, the threshold may be set higher, so that only a combination of commands or repetitive commands is flagged as suspicious.
Unauthorized activity can include any unauthorized and/or illegal computer activity. Unauthorized activity can also include activity associated with malware or illegitimate computer use. Malware is software created and distributed for malicious purposes and can take the form of viruses, worms, or Trojan horses, for example. A virus is an intrusive program that infects a computer file by inserting a copy of itself in the file. The copy is usually executed when the file is loaded into memory, allowing the virus to infect still other files. A worm is a program that propagates itself across computers, usually by creating copies of itself in each computer's memory. A worm might duplicate itself in one computer so often that it causes the computer to crash. A Trojan horse is a destructive program disguised as a game, utility, or application. When run, a Trojan horse can harm the computer system while appearing to do something useful.
Illegitimate computer use can comprise intentional or unintentional unauthorized access to data. A hacker, or computer cracker, is an individual who seeks unauthorized access to data and may intentionally seek to damage a computer system. One example of a common attack is a denial-of-service attack, where the hacker configures one or more computers to constantly request access to a target computer. The target computer may become overwhelmed by the requests and either crash or become too busy to conduct normal operations. While some hackers seek to intentionally damage computer systems, other computer users may seek to gain rights or privileges of a computer system in order to copy data or access other computers on a network. Such computer use can unintentionally damage computer systems or corrupt data.
Detection of worms can be accomplished through the use of a computer worm detection system that employs a decoy computer network having orchestrated network activities. The computer worm detection system is configured to permit computer worms to infect the decoy computer network. Alternatively, rather than infecting the decoy network, communications that are characteristic of a computer worm can be filtered from communication traffic and replayed in the decoy network. Detection is then based on the monitored behavior of the decoy computer network. Once a computer worm has been detected, an identifier of the computer worm is determined and provided to a computer worm blocking system that is configured to protect one or more computer systems of a real computer network. In some embodiments, the computer worm detection system can generate a recovery script to disable the computer worm and repair damage caused to the one or more computer systems, and in some instances, the computer worm blocking system initiates the repair and recovery of the infected systems.
Optionally, the computer worm sensor 105 may be in communication with (as illustrated in
The computing systems 120 are computing devices typically found in a computer network. For example, the computing systems 120 can include computing clients or servers. As a further example, the computing systems 120 can include gateways and subnets in the computer network 110. Each of the computing systems 120 and the gateway 125 can have different hardware or software profiles.
The gateway 125 allows computer worms to pass from the communication network 130 to the computer network 110. The computer worm sensor 105 can include multiple gateways 125 in communication with multiple communication networks 130. These communication networks 130 may also be in communication with each other. For example, the communication network 130 can be part of the Internet or in communication with the Internet. In one embodiment, each of the gateways 125 can be in communication with multiple communication networks 130.
The controller 115 controls the operation of the computing systems 120 and the gateway 125 to orchestrate network activities in the computer worm sensor 105. In one embodiment, the orchestrated network activities are a predetermined sequence of network activities in the computer network 110, which represents an orchestrated behavior of the computer network 110. In this embodiment, the controller 115 monitors the computer network 110 to determine a monitored behavior of the computer network 110 in response to the orchestrated network activities. The controller 115 then compares the monitored behavior of the computer network 110 with a predetermined orchestrated behavior to identify an anomalous behavior.
Anomalous behavior may include a communication anomaly, like an unexpected network communication, or an execution anomaly, for example, an unexpected execution of computer program code. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a computer worm. In this way, the controller 115 can detect the presence of a computer worm in the computer network 110 based on an anomalous behavior of the computer worm in the computer network 110. The controller 115 then creates an identifier (i.e., a “definition” of the anomalous behavior), which can be used for detecting the computer worm in another computer network, such as the communication network 130.
The identifier determined by the controller 115 for a computer worm in the computer network 110 can be a signature that characterizes the anomalous behavior of the computer worm. The signature can then be used to detect the computer worm in another computer network. In one embodiment, the signature indicates a sequence of ports in the computer network 110 along with data used to exploit each of the ports. For instance, the signature can be a set of tuples {(p1, c1), (p2, c2), ...}, where pn represents a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) port number, and cn is signature data contained in a TCP or UDP packet used to exploit a port associated with the port number. For example, the signature data can be 16-32 bytes of data in a data portion of a data packet.
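For illustration only, a tuple-based signature of this kind could be represented and matched as in the following Python sketch; the port numbers and byte sequences are hypothetical, and the patent does not prescribe any particular data layout.

    # Each tuple pairs a TCP/UDP port number (pn) with signature bytes (cn)
    # taken from the data portion of a packet used to exploit that port.
    signature = {
        (135, b"\x90\x90\x90\x90\xeb\x10"),   # hypothetical exploit bytes
        (445, b"\xeb\x10\x5a\x4a\x33\xc9"),
    }

    def matches(signature, port, payload):
        """True if a packet hits a signature port carrying its bytes."""
        return any(p == port and c in payload for p, c in signature)

    # Example: a packet to port 445 containing the second byte sequence matches.
    assert matches(signature, 445, b"junk" + b"\xeb\x10\x5a\x4a\x33\xc9")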
The controller 115 can determine a signature of a computer worm based on a uniform resource locator (URL), and can generate the signature by using a URL filtering device, which represents a specific case of content filtering. For example, the controller 115 can identify a uniform resource locator (URL) in data packets of Hyper Text Transfer Protocol (HTTP) traffic and can extract a signature from the URL. Further, the controller 115 can create a regular expression for the URL and include the regular expression in the signature such that each tuple of the signature includes a destination port and the regular expression. In this way, a URL filtering device can use the signature to filter out network traffic associated with the URL. The controller 115, in some embodiments, can also filter data packet traffic for a sequence of tokens and dynamically produce a signature having a regular expression that includes the token sequence.
Alternatively, the identifier may be a vector (e.g., a propagation vector, an attack vector, or a payload vector) that characterizes an anomalous behavior of the computer worm in the computer network 110. For example, the vector can be a propagation vector (i.e., a transport vector) that characterizes a sequence of paths traveled by the computer worm in the computer network 110. The propagation vector may include a set {p1, p2, p3, ...}, where pn represents a port number (e.g., a TCP or UDP port number) in the computer network 110 and identifies a transport protocol (e.g., TCP or UDP) used by the computer worm to access the port. Further, the identifier may be a multi-vector that characterizes multiple propagation vectors for the computer worm. In this way, the vector can characterize a computer worm that uses a variety of techniques to propagate in the computer network 110. These techniques may include dynamic assignment of probe addresses to the computing systems 120, network address translation (NAT) of probe addresses to the computing systems 120, obtaining topological service information from the computer network 110, or propagating through multiple gateways 125 of the computer worm sensor 105.
The controller 115 can be configured to orchestrate network activities (e.g., network communications or computing services) in the computer network 110 based on one or more orchestration patterns. In one embodiment, the controller 115 generates a series of network communications based on an orchestration pattern to exercise one or more computing services (e.g., Telnet, FTP, or SMTP) in the computer network 110. In this embodiment, the orchestration pattern produces an orchestrated behavior (e.g., an expected behavior) of the computer network 110 in the absence of computer worm infection. The controller 115 then monitors network activities in the computer network 110 (e.g., the network communications and computing services accessed by the network communications) to determine a monitored behavior of the computer network 110, and compares the monitored behavior with the orchestrated behavior. If the monitored behavior does not match the orchestrated behavior, the computer network 110 is deemed to be infected with a computer worm. The controller 115 then identifies an anomalous behavior in the monitored behavior (e.g., a network activity in the monitored behavior that does not match the orchestration pattern) and determines an identifier for the computer worm based on the anomalous behavior. In other embodiments, the controller 115 is configured to detect unexpected network activities in the computer network 110.
In another embodiment, an orchestration pattern is associated with a type of network communication. In this embodiment, the gateway 125 identifies the type of a network communication received by the gateway 125 from the communication network 130 before propagating the network communication to the computer network 110. The controller 115 then selects an orchestration pattern based on the type of network communication identified by the gateway 125 and orchestrates network activities in the computer network 110 based on the selected orchestration pattern. In the computer network 110, the network communication accesses one or more computing systems 120 via one or more ports to access one or more computing services (e.g., network services) provided by the computing systems 120.
For example, the network communication may access an FTP server on one of the computing systems 120 via a well-known or registered FTP port number using an appropriate network protocol (e.g., TCP or UDP). In this example, the orchestration pattern includes the identity of the computing system 120, the FTP port number, and the appropriate network protocol for the FTP server. If the monitored behavior of the computer network 110 does not match the orchestrated behavior expected from the orchestration pattern, the network communication is deemed to be infected with a computer worm. The controller 115 then determines an identifier for the computer worm based on the monitored behavior, as is described in more detail herein.
The controller 115 orchestrates network activities in the computer network 110 such that the detection of anomalous behavior in the computer network 110 is simple and highly reliable. All behavior (e.g., network activities) of the computer network 110 that is not part of an orchestrated behavior represents an anomalous behavior. In alternative embodiments, the monitored behavior of the computer network 110 that is not part of the orchestrated behavior is analyzed to determine whether any of the monitored behavior is an anomalous behavior.
In another embodiment, the controller 115 periodically orchestrates network activities in the computer network 110 to access various computing services (e.g., web servers or file servers) in the communication network 130. In this way, a computer worm that has infected one of these computing services may propagate from the communication network 130 to the computer network 110 via the orchestrated network activities. The controller 115 then orchestrates network activities to access the same computing services in the computer network 110 and monitors a behavior of the computer network 110 in response to the orchestrated network activities. If the computer worm has infected the computer network 110, the controller 115 detects the computer worm based on an anomalous behavior of the computer worm in the monitored behavior, as is described more fully herein.
In one embodiment, a single orchestration pattern exercises all available computing services in the computer network 110. In other embodiments, each orchestration pattern exercises selected computing services in the computer network 110, or the orchestration patterns for the computer network 110 are dynamic (e.g., vary over time). For example, a user of the computer worm sensor 105 may add, delete, or modify the orchestration patterns to change the orchestrated behavior of the computer network 110.
In one embodiment, the controller 115 orchestrates network activities in the computer network 110 to prevent a computer worm in the communication network 130 from recognizing the computer network 110 as a decoy. For example, a computer worm may identify and avoid inactive computer networks, as such networks may be decoy computer networks deployed for detecting the computer worm (e.g., the computer network 110). In this embodiment, therefore, the controller 115 orchestrates network activities in the computer network 110 to prevent the computer worm from avoiding the computer network 110.
In another embodiment, the controller 115 analyzes both the packet header and the data portion of data packets in network communications in the computer network 110 to detect anomalous behavior in the computer network 110. For example, the controller 115 can compare the packet header and the data portion of the data packets with those of data packets propagated pursuant to an orchestration pattern to determine whether the network communications data packets constitute anomalous behavior in the computer network 110. Because the network communication propagated pursuant to the orchestration pattern is an orchestrated behavior of the computer network 110, the controller 115 avoids false positive detection of anomalous behavior in the computer network 110, which can occur in anomaly detection systems operating on unconstrained computer networks. In this way, the controller 115 reliably detects computer worms in the computer network 110 based on the anomalous behavior.
To further illustrate reliable detection of anomalous behavior, consider an orchestration pattern that is expected to cause emission of a sequence of data packets (a, b, c, d) in the computer network 110. The controller 115 orchestrates network activities in the computer network 110 based on the orchestration pattern and monitors the behavior (e.g., measures the network traffic) of the computer network 110. If the monitored behavior of the computer network 110 includes a sequence of data packets (a, b, c, d, e, f), then the extra data packets (e, f) represent an anomalous behavior (e.g., anomalous traffic). This anomalous behavior may be caused by an active computer worm propagating inside the computer network 110.
As another example, if an orchestration pattern is expected to cause emission of a sequence of data packets (a, b, c, d) in the computer network 110, but the monitored behavior includes a sequence of data packets (a, b′, c′, d), the modified data packets (b′, c′) represent an anomalous behavior in the computer network 110. This anomalous behavior may be caused by a passive computer worm propagating inside the computer network 110.
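The two examples above amount to a set comparison between the orchestrated packet sequence and the monitored one, as in this minimal Python sketch (the packet labels are the placeholders used above; a real sensor would compare packet headers and data portions rather than labels):

    def anomalous_packets(expected, observed):
        """Return monitored packets the orchestration pattern did not
        predict; per the text, all such traffic is anomalous."""
        predicted = set(expected)
        return [pkt for pkt in observed if pkt not in predicted]

    # Active worm: extra packets (e, f) appear.
    assert anomalous_packets(["a", "b", "c", "d"],
                             ["a", "b", "c", "d", "e", "f"]) == ["e", "f"]

    # Passive worm: packets are modified in transit (b', c').
    assert anomalous_packets(["a", "b", "c", "d"],
                             ["a", "b'", "c'", "d"]) == ["b'", "c'"]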
In various further embodiments, the controller 115 generates a recovery script for the computer worm, as is described more fully herein. The controller 115 can then execute the recovery script to disable (e.g., destroy) the computer worm in the computer worm sensor 105 (e.g., remove the computer worm from the computing systems 120 and the gateway 125). Moreover, the controller 115 can output the recovery script for use in disabling the computer worm in other infected computer networks and systems.
In another embodiment, the controller 115 identifies the source of a computer worm based on a network communication containing the computer worm. For example, the controller 115 may identify an infected host (e.g., a computing system) in the communication network 130 that generated the network communication containing the computer worm. In this example, the controller 115 transmits the recovery script via the gateway 125 to the host in the communication network 130. In turn, the host executes the recovery script to disable the computer worm in the host. In various further embodiments, the recovery script is also capable of repairing damage to the host caused by the computer worm.
The computer worm sensor 105 can export the recovery script, in some embodiments, to a bootable compact disc (CD) or floppy disk that can be loaded into infected hosts to repair the infected hosts. For example, the recovery script can include an operating system for the infected host and repair scripts that are invoked as part of the booting process of the operating system to repair an infected host. Alternatively, the computer worm sensor 105 may provide the recovery script to an infected computer network (e.g., the communication network 130) so that the infected computer network can direct its infected hosts to reboot and load the operating system in the recovery script.
In another embodiment, the computer worm sensor 105 uses a per-host detection and recovery mechanism to recover hosts (e.g., computing systems) in a computer network (e.g., the communication network 130). The computer worm sensor 105 generates a recovery script including a detection process for detecting the computer worm and a recovery process for disabling the computer worm and repairing damage caused by the computer worm. The computer worm sensor 105 provides the recovery script to hosts in a computer network and each host executes the detection process. If the host detects the computer worm, the host then executes the recovery process. In this way, a computer worm that performs random corruptive acts on the different hosts (e.g., computing systems) in the computer network can be disabled in the computer network and damage to the computer network caused by the computer worm can be repaired.
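A minimal Python sketch of this per-host detect-then-recover flow follows. The hash-comparison detection and the restore step are hypothetical stand-ins for whatever detection and recovery processes a recovery script actually carries.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class HostFile:
        path: str
        current_hash: str
        original_hash: str

        def restore_original(self):
            self.current_hash = self.original_hash   # stand-in for real repair

    @dataclass
    class Host:
        name: str
        files: List[HostFile] = field(default_factory=list)

    def detect_worm(host):
        """Detection process: compare each file with its known-good hash."""
        return any(f.current_hash != f.original_hash for f in host.files)

    def recover(host):
        """Recovery process: disable the worm and repair the damage."""
        for f in host.files:
            if f.current_hash != f.original_hash:
                f.restore_original()

    def run_recovery_script(hosts):
        for host in hosts:            # every host executes detection
            if detect_worm(host):     # only infected hosts run recovery
                recover(host)

    # Example: one clean host and one infected host.
    clean = Host("h1", [HostFile("/bin/ls", "aaa", "aaa")])
    infected = Host("h2", [HostFile("/bin/ls", "bbb", "aaa")])
    run_recovery_script([clean, infected])
    assert infected.files[0].current_hash == "aaa"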
The computer worm sensor 105 can be a single integrated system, such as a network device or a network appliance, which is deployed in the communication network 130 (e.g., a commercial or military computer network). Alternatively, the computer worm sensor 105 may include integrated software for controlling operation of the computer worm sensor 105, such that per-host software (e.g., individual software for each computing system 120 and gateway 125) is not required.
The computer worm sensor 105 can also be a hardware module, such as a combinational logic circuit, a sequential logic circuit, a programmable logic device, or a computing device, among others. Alternatively, the computer worm sensor 105 may include one or more software modules containing computer program code, such as a computer program, a software routine, binary code, or firmware, among others. The software code can be contained in a permanent memory storage device such as a compact disc read-only memory (CD-ROM), a hard disk, or other memory storage device. In various embodiments, the computer worm sensor 105 includes both hardware and software modules.
In some embodiments, the computer worm sensor 105 is substantially transparent to the communication network 130 and does not substantially affect the performance or availability of the communication network 130. In another embodiment, the software in the computer worm sensor 105 may be hidden such that a computer worm cannot detect the computer worm sensor 105 by checking for the existence of files (e.g., software programs) in the computer worm sensor 105 or by performing a simple signature check of the files. In one example, the software configuration of the computer worm sensor 105 is hidden by employing one or more well-known polymorphic techniques used by viruses to evade signature-based detection.
In another embodiment, the gateway 125 facilitates propagation of computer worms from the communication network 130 to the computer network 110, with the controller 115 orchestrating network activities in the computer network 110 to actively propagate the computer worms from the communication network 130 to the computer network 110. For example, the controller 115 can originate one or more network communications between the computer network 110 and the communication network 130. In this way, a passive computer worm in the communication network 130 can attach to one of the network communications and propagate along with the network communication from the communication network 130 to the computer network 110. Once the computer worm is in the computer network 110, the controller 115 can detect the computer worm based on an anomalous behavior of the computer worm, as is described more fully herein.
In another embodiment, the gateway 125 selectively prevents normal network traffic (e.g., network traffic not generated by a computer worm) from propagating from the communication network 130 to the computer network 110 to prevent various anomalies or perturbations in the computer network 110. In this way, the orchestrated behavior of the computer network 110 can be simplified to increase the reliability of the computer worm sensor 105.
For example, the gateway 125 can prevent Internet Protocol (IP) data packets from being routed from the communication network 130 to the computer network 110. Alternatively, the gateway 125 can prevent broadcast and multicast network communications from being transmitted from the communication network 130 to the computer network 110, prevent communications generated by remote shell applications (e.g., Telnet) in the communication network 130 from propagating to the computer network 110, or exclude various application-level gateways including proxy services that are typically present in a computer network for application programs in the computer network. Such application programs can include a Web browser, an FTP server and a mail server, and the proxy services can include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), or the Simple Mail Transfer Protocol (SMTP).
In another embodiment, the computing systems 120 and the gateway 125 are virtual computing systems. For example, the computing systems 120 may be implemented as virtual systems using machine virtualization technologies such as VMware™ sold by VMware, Inc. In another example, the virtual systems can be based on instrumental virtual CPU technology (e.g., Bochs, Qemu, and Valgrind). In another embodiment, the virtual systems include virtual machine (VM) software profiles, and the controller 115 automatically updates the VM software profiles to be representative of the communication network 130. The gateway 125 and the computer network 110 may also be implemented as a combination of virtual and real systems.
In another embodiment, the computer network 110 is a virtual computer network. The computer network 110 includes network device drivers (e.g., special purpose network device drivers) that do not access a physical network, but instead use software message passing between the different virtual computing systems 120 in the computer network 110. The network device drivers may log data packets of network communications in the computer network 110, which represent the monitored behavior of the computer network 110.
In various embodiments, the computer worm sensor 105 establishes a software environment of the computer network 110 (e.g., computer programs in the computing systems 120) to reflect a software environment of a selected computer network (e.g., the communication network 130). For example, the computer worm sensor 105 can select a software environment of a computer network typically attacked by computer worms (e.g., a software environment of a commercial communication network) and can configure the computer network 110 to reflect that software environment. In a further embodiment, the computer worm sensor 105 updates the software environment of the computer network 110 to reflect changes in the software environment of the selected computer network. In this way, the computer worm sensor 105 can effectively detect a computer worm that targets a recently deployed software program or software profile in the software environment (e.g., a widely deployed software profile).
The computer worm sensor 105 can also monitor the software environment of the selected computer network and automatically update the software environment of the computer network 110 to reflect the software environment of the selected computer network. For example, the computer worm sensor 105 can modify the software environment of the computer network 110 in response to receiving an update for a software program (e.g., a widely used software program) in the software environment of the selected computer network.
In another embodiment, the computer worm sensor 105 has a probe mechanism to automatically check the version, the release number, and the patch-level of major operating systems and application software components installed in the communication network 130. Additionally, the computer worm sensor 105 has access to a central repository of up-to-date versions of the system and application software components. In this embodiment, the computer worm sensor 105 detects a widely used software component (e.g., software program) operating in the communication network 130, downloads the software component from the central repository, and automatically deploys the software component in the computer network 110 (e.g., installs the software component in the computing systems 120). The computer worm sensor 105 may coordinate with other computer worm sensors 105 to deploy the software component in the computer networks 110 of the computer worm sensors 105. In this way, the software environment of each computer worm sensor 105 is modified to contain the software component.
In another embodiment, the computer worm sensors 105 are automatically updated from a central computing system (e.g., a computing server) by using a push model. In this embodiment, the central computing system obtains updated software components and sends the updated software components to the computer worm sensors 105. Moreover, the software environments of the computer worm sensors 105 can represent widely deployed software that computer worms are likely to target. Examples of available commercial technologies that can aid in the automated update of software and software patches in a networked environment include N1 products sold by SUN Microsystems, Inc.™ and Adaptive Infrastructure products sold by the Hewlett Packard Company™. In some embodiments, the computer worm sensors 105 are automatically updated by connecting to an independent software vendor (ISV) supplied update mechanism (e.g., the Microsoft Windows™ update service).
The computer worm sensor 105, in some embodiments, can maintain an original image of the computer network 110 (e.g., a copy of the original file system for each computing system 120) in a virtual machine that is isolated from both the computer network 110 and the communication network 130 (e.g., not connected to the computer network 110 or the communication network 130). The computer worm sensor 105 obtains a current image of an infected computing system 120 (e.g., a copy of the current file system of the computing system 120) and compares the current image with the original image of the computer network 110 to identify any discrepancies between these images, which represent an anomalous behavior of a computer worm in the infected computing system 120.
The computer worm sensor 105 generates a recovery script based on the discrepancies between the current image and the original image of the computing system 120. The recovery script can be used to disable the computer worm in the infected computing system 120 and repair damage to the infected computing system 120 caused by the computer worm. For example, the recovery script may include computer program code for identifying infected software programs or memory locations based on the discrepancies, and for removing the discrepancies from the infected software programs or memory locations. The infected computing system 120 can then execute the recovery script to disable (e.g., destroy) the computer worm and repair any damage to the infected computing system 120 caused by the computer worm.
The recovery script may include computer program code for replacing the current file system of the computing system 120 with the original file system of the computing system 120 in the original image of the computer network 110. Alternatively, the recovery script may include computer program code for replacing infected files with the corresponding original files of the computing system 120 in the original image of the computer network 110. In still another embodiment, the computer worm sensor 105 includes a file integrity checking mechanism (e.g., a tripwire) for identifying infected files in the current file system of the computing system 120. The recovery script can also include computer program code for identifying and restoring files that a computer worm has modified so as to reactivate itself during reboot of the computing system 120 (e.g., to reactivate after the computer worm has been disabled).
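As a rough illustration of image comparison and file-level repair, the following Python sketch walks a current file-system image, reports discrepancies against the original image, and restores or removes the divergent files. It assumes both images are available as ordinary directory trees, which the patent does not require.

    import filecmp
    import shutil
    from pathlib import Path

    def discrepancies(original_root, current_root):
        """Yield (current, original) pairs for files that differ from, or
        are absent in, the original image -- the worm's footprint."""
        for current in Path(current_root).rglob("*"):
            if current.is_file():
                original = Path(original_root) / current.relative_to(current_root)
                if not original.exists() or not filecmp.cmp(
                        original, current, shallow=False):
                    yield current, original

    def repair(original_root, current_root):
        """Recovery action: replace infected files with the originals and
        remove files the worm added."""
        for current, original in list(discrepancies(original_root, current_root)):
            if original.exists():
                shutil.copy2(original, current)   # restore the original file
            else:
                current.unlink()                  # delete a worm-created file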
In one embodiment, the computer worm sensor 105 occupies a predetermined address space (e.g., an unused address space) in the communication network 130. The communication network 130 redirects those network communications directed to the predetermined address space to the computer worm sensor 105. For example, the communication network 130 can redirect network communications to the computer worm sensor 105 by using various IP layer redirection techniques. In this way, an active computer worm using a random IP address scanning technique (e.g., a scan-directed computer worm) can randomly select an address in the predetermined address space and can infect the computer worm sensor 105 based on the selected address (e.g., transmitting a network communication containing the computer worm to the selected address).
An active computer worm can select an address in the predetermined address space based on a previously generated list of target addresses (e.g., a hit-list directed computer worm) and can infect a computing system 120 located at the selected address. Alternatively, an active computer worm can identify a target computing system 120 located at the selected address in the predetermined address space based on a previously generated list of target systems, and then infect the target computing system 120 based on the selected address.
In various embodiments, the computer worm sensor 105 identifies data packets directed to the predetermined address space and redirects the data packets to the computer worm sensor 105 by performing network address translation (NAT) on the data packets. For example, the computer network 110 may perform dynamic NAT on the data packets based on one or more NAT tables to redirect data packets to one or more computing systems 120 in the computer network 110. In the case of a hit-list directed computer worm having a hit-list that does not have a network address of a computing system 120 in the computer network 110, the computer network 110 can perform NAT to redirect the hit-list directed computer worm to one of the computing systems 120. Further, if the computer worm sensor 105 initiates a network communication that is not defined by the orchestrated behavior of the computer network 110, the computer network 110 can dynamically redirect the data packets of the network communication to a computing system 120 in the computer network 110.
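One way to picture the dynamic NAT redirection is the following Python sketch, in which a NAT table lazily binds each probed address in the monitored space to one of the decoy computing systems 120. The address ranges are hypothetical, and the sketch models only the table logic, not packet rewriting itself.

    import ipaddress
    import itertools

    MONITORED = ipaddress.ip_network("203.0.113.0/24")   # occupied address space
    DECOYS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]     # computing systems 120
    _next_decoy = itertools.cycle(DECOYS)
    nat_table = {}                                       # probed IP -> decoy IP

    def redirect(dst_ip):
        """Rewrite a destination in the monitored space to a decoy system."""
        if ipaddress.ip_address(dst_ip) in MONITORED:
            if dst_ip not in nat_table:                  # dynamic NAT binding
                nat_table[dst_ip] = next(_next_decoy)
            return nat_table[dst_ip]
        return dst_ip                                    # normal traffic untouched

    assert redirect("203.0.113.77") in DECOYS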
In another embodiment, the computer worm sensor 105 operates in conjunction with dynamic host configuration protocol (DHCP) servers in the communication network 130 to occupy an address space in the communication network 130. In this embodiment, the computer worm sensor 105 communicates with each DHCP server to determine which IP addresses are unassigned to a particular subnet associated with the DHCP server in the communication network 130. The computer worm sensor 105 then dynamically responds to network communications directed to those unassigned IP addresses. For example, the computer worm sensor 105 can dynamically generate an address resolution protocol (ARP) response to an ARP request.
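A simplified Python sketch of this DHCP-coordinated address occupation follows: the sensor derives the unassigned addresses of a subnet from the DHCP leases and answers ARP requests for them. The lease data and the sensor MAC address are hypothetical, and the actual DHCP and ARP plumbing is elided.

    import ipaddress

    SENSOR_MAC = "02:00:00:00:00:01"           # hypothetical sensor MAC address

    def unassigned_addresses(subnet, leases):
        """Addresses in the subnet that hold no active DHCP lease."""
        every = {str(ip) for ip in ipaddress.ip_network(subnet).hosts()}
        return every - set(leases)

    def handle_arp_request(requested_ip, occupied):
        """Claim an unassigned address by answering its ARP request."""
        if requested_ip in occupied:
            return {"op": "reply", "ip": requested_ip, "mac": SENSOR_MAC}
        return None                            # let the real owner answer

    # Example: leases reported by the DHCP server for a small subnet.
    occupied = unassigned_addresses("192.0.2.0/29", ["192.0.2.1", "192.0.2.2"])
    print(handle_arp_request("192.0.2.5", occupied))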
In another embodiment, a traffic analysis device 135 analyzes communication traffic in the communication network 130 to identify a sequence of network communications characteristic of a computer worm. The traffic analysis device 135 may use one or more well-known worm traffic analysis techniques to identify a sequence of network communications in the communication network 130 characteristic of a computer worm. For example, the traffic analysis device 135 may identify a repeating pattern of network communications based on the destination ports of data packets in the communication network 130. The traffic analysis device 135 duplicates one or more network communications in the sequence of network communications and provides the duplicated network communications to the controller 115, which emulates the duplicated network communications in the computer network 110.
The traffic analysis device 135 may identify a sequence of network communications in the communication network 130 characteristic of a computer worm by using heuristic analysis techniques (i.e., heuristics) known to those skilled in the art. For example, the traffic analysis device 135 may detect a number of IP address scans, or a number of network communications to an invalid IP address, occurring within a predetermined period. The traffic analysis device 135 determines whether the sequence of network communications is characteristic of a computer worm by comparing the number of IP address scans or the number of network communications in the sequence to a heuristics threshold (e.g., one thousand IP address scans per second).
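Such a rate heuristic reduces to counting probe events in a sliding time window, as in this minimal Python sketch (the threshold of one thousand events per second mirrors the example above; the event model is hypothetical):

    from collections import deque

    class ScanRateHeuristic:
        """Flags traffic when scan probes exceed a threshold rate."""

        def __init__(self, threshold=1000, window_seconds=1.0):
            self.threshold = threshold          # e.g., 1000 scans per second
            self.window = window_seconds
            self.events = deque()               # timestamps of scan probes

        def observe(self, timestamp):
            """Record one probe; return True if the rate is exceeded."""
            self.events.append(timestamp)
            while self.events and timestamp - self.events[0] > self.window:
                self.events.popleft()
            return len(self.events) > self.threshold

    # Example: 1500 probes within one second trip the heuristic.
    h = ScanRateHeuristic()
    assert any(h.observe(i / 1500.0) for i in range(1500))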
The traffic analysis device 135 may lower typical heuristics thresholds of these heuristic techniques to increase the rate of computer worm detection, which can also increase the rate of false positive computer worm detection by the traffic analysis device 135. Because the computer worm sensor 105 emulates the duplicated network communications in the computer network 110 to determine whether the network communications include an anomalous behavior of a computer worm, the computer worm sensor 105 may increase the rate of computer worm detection without increasing the rate of false positive worm detection.
In another embodiment, the traffic analysis device 135 filters network communications characteristic of a computer worm in the communication network 130 before providing duplicate network communications to the controller 115. For example, a host A in the communication network 130 can send a network communication including an unusual data byte sequence (e.g., worm code) to a TCP/UDP port of a host B in the communication network 130. In turn, the host B can send a network communication including a similar unusual data byte sequence to the same TCP/UDP port of a host C in the communication network 130. In this example, the network communications from host A to host B and from host B to host C represent a repeating pattern of network communication. The unusual data byte sequences may be identical data byte sequences or highly correlated data byte sequences. The traffic analysis device 135 filters the repeating pattern of network communications by using a correlation threshold to determine whether to duplicate the network communication and provide the duplicated network communication to the controller 115.
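As an illustration of correlation-threshold filtering, the following Python sketch scores the similarity of two payloads and treats them as a repeating worm pattern when the score crosses a threshold. The 0.9 threshold and the byte sequences are hypothetical, and difflib stands in for whatever correlation measure a deployment would actually use.

    from difflib import SequenceMatcher

    CORRELATION_THRESHOLD = 0.9     # hypothetical tuning value

    def correlated(payload_a, payload_b):
        """True when two payloads are identical or highly correlated."""
        ratio = SequenceMatcher(None, payload_a, payload_b).ratio()
        return ratio >= CORRELATION_THRESHOLD

    # The same unusual byte sequence relayed host A -> B and host B -> C:
    assert correlated(b"\x90" * 20 + b"worm", b"\x90" * 19 + b"worm!")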
The traffic analysis device 135 may analyze communication traffic in the communication network 130 for a predetermined period. For example, the predetermined period can be a number of seconds, minutes, hours, or days. In this way, the traffic analysis device 135 can detect slow propagating computer worms as well as fast propagating computer worms in the communication network 130.
The computer worm sensor 105 may contain a computer worm (e.g., a scanning computer worm) within the computer network 110 by performing dynamic NAT on an unexpected network communication originating in the computer network 110 (e.g., an unexpected communication generated by a computing system 120). For example, the computer worm sensor 105 can perform dynamic NAT on data packets of an IP address range scan originating in the computer network 110 to redirect the data packets to a computing system 120 in the computer network 110. In this way, the network communication is contained in the computer network 110.
In another embodiment, the computer worm sensor 105 is topologically knit into the communication network 130 to facilitate detection of a topologically directed computer worm. The controller 115 may use various network services in the communication network 130 to topologically knit the computer worm sensor 105 into the communication network 130. For example, the controller 115 may generate a gratuitous ARP response including the IP address of a computing system 120 to the communication network 130 such that a host in the communication network 130 stores the IP address in an ARP cache. In this way, the controller 115 plants the IP address of the computing system 120 into the communication network 130 to topologically knit the computing system 120 into the communication network 130.
The ARP response generated by the computer worm sensor 105 may include a media access control (MAC) address and a corresponding IP address for one or more of the computing systems 120. A host (e.g., a computing system) in the communication network 130 can then store the MAC and IP addresses in one or more local ARP caches. A topologically directed computer worm can then access the MAC and IP addresses in the ARP caches and can target the computing systems 120 based on the MAC or IP addresses.
In various embodiments, the computer worm sensor 105 can accelerate network activities in the computer network 110. In this way, the computer worm sensor 105 can reduce the time for detecting a time-delayed computer worm (e.g., the CodeRed-II computer worm) in the computer network 110. Further, accelerating the network activities in the computer network 110 may allow the computer worm sensor 105 to detect the time-delayed computer worm before the time-delayed computer worm causes damage in the communication network 130. The computer worm sensor 105 can then generate a recovery script for the computer worm and provide the recovery script to the communication network 130 for disabling the computer worm in the communication network 130.
The computing system 120 in the computer network 110 can accelerate network activities by intercepting time-sensitive system calls (e.g., “time-of-day” or “sleep” system calls) generated by a software program executing in the computing system 120, or responses to such system calls, and then modifying the system calls or responses to accelerate execution of the software program. For example, the computing system 120 can modify a parameter of a “sleep” system call to reduce the execution time of this system call, or modify the time or date in a response to a “time-of-day” system call to a future time or date. Alternatively, the computing system 120 can identify a time consuming program loop (e.g., a long, central processing unit intensive while loop) executing in the computing system 120 and can increase the priority of the software program containing the program loop to accelerate execution of the program loop.
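As a rough in-process analogue of this interception, consider the following Python sketch. Real embodiments intercept at the system call level; here the acceleration factor is an assumed value and only calls made through the patched module are affected.

    import time

    ACCELERATION = 60.0  # assumed factor: run the program's notion of time 60x faster

    _real_sleep, _real_time = time.sleep, time.time
    _start = _real_time()

    def fast_sleep(seconds):
        # Shrink requested delays so time-delayed payloads trigger sooner.
        _real_sleep(seconds / ACCELERATION)

    def fast_time():
        # Report an accelerated "time of day" to the program under test.
        return _start + (_real_time() - _start) * ACCELERATION

    # Code that calls time.sleep / time.time after this point observes
    # accelerated time.
    time.sleep, time.time = fast_sleep, fast_time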
In various embodiments, the computer worm sensor 105 includes one or more computer programs for identifying execution anomalies in the computing systems 120 (e.g., anomalous behavior in the computer network 110) and distinguishing a propagation vector of a computer worm from spurious traffic (e.g., chaff traffic) generated by the computer worm. In one embodiment, the computing systems 120 execute the computer programs to identify execution anomalies occurring in the computer network 110. The computer worm sensor 105 correlates these execution anomalies with the monitored behavior of the computer worm to distinguish computing processes (e.g., network services) that the computer worm exploits for propagation purposes from computing processes that only receive benign network traffic from the computer worm. The computer worm sensor 105 then determines a propagation vector of the computer worm based on the computing processes that the computer worm exploits for propagation purposes. In a further embodiment, each computing system 120 executes a function of one of the computer programs as an intrusion detection system (IDS) by generating a computer worm intrusion indicator in response to detecting an execution anomaly.
In one embodiment, the computer worm sensor 105 tracks system call sequences to identify an execution anomaly in the computing system 120. For example, the computer worm sensor 105 can use finite state automata techniques to identify an execution anomaly. Additionally, the computer worm sensor 105 may identify an execution anomaly based on call-stack information for system calls executed in a computing system 120. For example, a call-stack execution anomaly may occur when a computer worm executes system calls from the stack or the heap of the computing system 120. The computer worm sensor 105 may also identify an execution anomaly based on virtual path identifiers in the call-stack information.
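One simple stand-in for such sequence tracking is an n-gram model over system call traces learned from clean, orchestrated runs; the Python sketch below illustrates the idea, whereas a production embodiment would use the finite state automata techniques noted above.

    class SyscallSequenceMonitor:
        """Flags system call n-grams never observed during clean runs."""

        def __init__(self, n=3):
            self.n = n
            self.known = set()

        def train(self, trace):
            """Learn n-grams from a trace gathered under orchestration."""
            for i in range(len(trace) - self.n + 1):
                self.known.add(tuple(trace[i:i + self.n]))

        def anomalies(self, trace):
            """Return the unfamiliar n-grams in a monitored trace."""
            return [tuple(trace[i:i + self.n])
                    for i in range(len(trace) - self.n + 1)
                    if tuple(trace[i:i + self.n]) not in self.known]

    monitor = SyscallSequenceMonitor()
    monitor.train(["open", "read", "write", "close"])
    # Both trigrams of the monitored trace are unfamiliar, so both are reported.
    print(monitor.anomalies(["open", "read", "socket", "connect"]))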
The computer worm sensor 105 may monitor transport level ports of a computing system 120. For example, the computer worm sensor 105 can monitor system calls (e.g., “bind” or “recvfrom” system calls) associated with one or more transport level ports of a computing process in the computing system 120 to identify an execution anomaly. If the computer worm sensor 105 identifies an execution anomaly for one of the transport level ports, the computer worm sensor 105 includes the transport level port in the identifier (e.g., a signature or a vector) of the computer worm, as is described more fully herein.
In another embodiment, the computer worm sensor 105 analyzes binary code (e.g., object code) of a computing process in the computing system 120 to identify an execution anomaly. The computer worm sensor 105 may also analyze the call stack and the execution stack of the computing system 120 to identify the execution anomaly. For example, the computer worm sensor 105 may perform a static analysis on the binary code of the computing process to identify possible call stacks and virtual path identifiers for the computing process. The computer worm sensor 105 then compares an actual call stack with the identified call stacks to identify a call stack execution anomaly in the computing system 120. In this way, the computer worm sensor 105 can reduce the number of false positive computer worm detections and false negative computer worm detections. Moreover, if the computer worm sensor 105 can identify all possible call-stacks and virtual path identifiers for the computing process, the computer worm sensor 105 can have a zero false positive rate of computer worm detection.
In another embodiment, the computer worm sensor 105 identifies one or more anomalous program counters in the call stack. For example, an anomalous program counter can be the program counter of a system call generated by worm code of a computer worm. The computer worm sensor 105 tracks the anomalous program counters and determines an identifier for detecting the computer worm based on the anomalous program counters. Additionally, the computer worm sensor 105 can determine whether a memory location (e.g., a memory address or a memory page) referenced by the program counter is a writable memory location. The computer worm sensor 105 then determines whether the computer worm has exploited the memory location. For example, a computer worm can store worm code into a memory location by exploiting a vulnerability of the computing system 120 (e.g., a buffer overflow mechanism).
The computer worm sensor 105 may take a snapshot of data in the memory around the memory location referenced by the anomalous program counter. The computer worm sensor 105 then searches the snapshot for data in recent data packets received by the computing process (e.g., computing thread) associated with the anomalous program counter. The computer worm sensor 105 searches the snapshot by using a searching algorithm to compare data in the recent data packets with a sliding window of data (e.g., 16 bytes of data) in the snapshot. If the computer worm sensor 105 finds a match between the data in a recent data packet and the data in the sliding window, the matching data is deemed to be a signature candidate for the computer worm.
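A sketch of this search follows; the 16-byte window matches the example above, and payloads are treated as byte strings.

    WINDOW = 16  # bytes of data in the sliding window

    def signature_candidates(snapshot: bytes, recent_packets):
        """Slide a window over the memory snapshot and report any window of
        data that also appears in a recently received data packet."""
        candidates = []
        for i in range(len(snapshot) - WINDOW + 1):
            window = snapshot[i:i + WINDOW]
            if any(window in packet for packet in recent_packets):
                candidates.append(window)
        return candidates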
In another embodiment, the computing system 120 tracks the integrity of computing code executing in the computing system 120 to identify an execution anomaly in the computing system 120. The computing system 120 associates an integrity value with data stored in the computing system 120 to identify the source of the data. If the data is from a known source (e.g., a computing program) in the computing system 120, the integrity value is set to one; otherwise, the integrity value is set to zero. For example, data received by the computing system 120 in a network communication is associated with an integrity value of zero. The computing system 120 stores the integrity value along with the data in the computing system 120, and monitors a program counter in the computing system 120 to identify an execution anomaly based on the integrity value. A program counter having an integrity value of zero indicates that the program counter was loaded with data received in a network communication, which represents an execution anomaly in the computing system 120.
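Conceptually, this integrity tagging can be pictured as follows; actual embodiments tag memory words in hardware or in an instrumented virtual machine, so the Python classes here are purely illustrative.

    class TaggedWord:
        """A stored value tagged with an integrity bit: one if the value came
        from a known source in the computing system, zero if it arrived in a
        network communication."""

        def __init__(self, value, integrity):
            self.value = value
            self.integrity = integrity

    def check_program_counter(pc: TaggedWord) -> None:
        # A program counter loaded from integrity-zero data means control
        # flow was derived from network data: an execution anomaly.
        if pc.integrity == 0:
            raise RuntimeError("execution anomaly: program counter derived "
                               "from network data")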
The computing system 120 may use the signature extraction algorithm to identify a decryption routine in the worm code of a polymorphic worm, such that the decryption routine is deemed to be a signature candidate of the computer worm. Additionally, the computer worm sensor 105 may compare signature candidates identified by the computing systems 120 in the computer worm sensor 105 to determine an identifier for detecting the computer worm. For example, the computer worm sensor 105 can identify common code portions in the signature candidates to determine an identifier for detecting the computer worm. In this way, the computer worm sensor 105 can determine an identifier of a polymorphic worm containing a mutating decryption routine (e.g., polymorphic code).
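For example, the common code portion of a set of candidates can be approximated by repeatedly taking the longest common byte substring, as in the following sketch; this is one simple way to realize the comparison, not the only one contemplated.

    from difflib import SequenceMatcher
    from functools import reduce

    def common_portion(a: bytes, b: bytes) -> bytes:
        """Longest common byte substring of two signature candidates."""
        m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
        return a[m.a:m.a + m.size]

    def worm_identifier(candidates):
        """Reduce the per-system candidates to their shared portion, e.g.,
        the invariant decryption routine of a polymorphic computer worm."""
        return reduce(common_portion, candidates)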
In another embodiment, the computer worm sensor 105 monitors network traffic in the computer network 110 and compares the monitored network traffic with typical network traffic patterns occurring in a computer network to identify anomalous network traffic in the computer network 110. The computer worm sensor 105 determines signature candidates based on data packets of the anomalous network traffic (e.g., extracts signature candidates from the data packets) and determines identifiers for detecting computer worms based on the signature candidates.
In another embodiment, the computer worm sensor 105 evaluates characteristics of a signature candidate to determine the quality of the signature candidate, which indicates an expected level of false positive computer worm detection in a computer network (e.g., the communication network 130). For example, a signature candidate having a high quality is not contained in data packets of typical network traffic occurring in the computer network. Characteristics of a signature candidate include a minimum length of the signature candidate (e.g., 16 bytes of data) and an unusual data byte sequence. In one embodiment, the computer worm sensor 105 performs statistical analysis on the signature candidate to determine whether the signature candidate includes an unusual byte sequence. For example, the computer worm sensor 105 can determine a correlation between the signature candidate and data contained in typical network traffic. In this example, a low correlation (e.g., zero correlation) indicates a high quality signature candidate.
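One illustrative scoring of candidate quality is sketched below; the particular scoring function is an assumption made for illustration, not a function prescribed by the embodiments above.

    MIN_LENGTH = 16  # bytes, per the characteristic noted above

    def candidate_quality(candidate: bytes, benign_traffic) -> float:
        """Score a signature candidate between 0 and 1: longer candidates
        that never occur in typical network traffic score highest."""
        if len(candidate) < MIN_LENGTH:
            return 0.0
        hits = sum(1 for packet in benign_traffic if candidate in packet)
        # Zero occurrences (zero correlation) yields the maximum score.
        return 1.0 / (1.0 + hits)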
In another embodiment, the computer worm sensor 105 identifies execution anomalies by detecting unexpected computing processes in the computer network 110 (i.e., computing processes that are not part of the orchestrated behavior of the computer network 110). The operating systems in the computing systems 120 may be configured to detect computing processes that are not in a predetermined collection of computing processes. In another embodiment, a computing system 120 is configured as a network server that permits a host in the communication network 130 to remotely execute commands on the computing system 120. For example, the original Morris computer worm exploited a debug mode of sendmail that allowed remote command execution in a mail server.
In some cases, the intrusion detection system of the computer worm sensor 105 detects an active computer worm based on anomalous network traffic in the computer network 110, but the computer worm sensor 105 does not detect an execution anomaly caused by a computing process in the computer network 110. In these cases, the computer worm sensor 105 determines whether the computer worm has multiple possible transport vectors based on the ports being accessed by the anomalous network traffic in the computer network 110. If the anomalous network traffic accesses only a small number of ports (e.g., one or two), the computer worm sensor 105 can use these ports to determine a vector for the computer worm. Conversely, if the anomalous network traffic accesses many ports (e.g., three or more ports), the computer worm sensor 105 partitions the computing services in the computer network 110 at appropriate control points to determine those ports exploited by the computer worm.
The computer worm sensor 105 may randomly block ports of the computing systems 120 to suppress traffic to these blocked ports. Consequently, a computer worm having a transport vector that requires one or more of the blocked ports will not be able to infect a computing system 120 in which those ports are blocked. The computer worm sensor 105 then correlates the anomalous behavior of the computer worm across the computing systems 120 to determine which ports the computer worm has used for diversionary purposes (e.g., emitting chaff) and which ports the computer worm has used for exploitive purposes. The computer worm sensor 105 then determines a transport vector of the computer worm based on the ports that the computer worm has used for exploitive purposes.
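The port-blocking correlation might be sketched as follows; the observation format (one blocked-port set and infection outcome per computing system) is an assumption made for illustration.

    import random

    def block_mask(ports, fraction=0.5):
        """Randomly choose a subset of ports to block on one computing system."""
        ports = sorted(ports)
        return set(random.sample(ports, int(len(ports) * fraction)))

    def exploitive_ports(port_universe, observations):
        """observations: (blocked_ports, infected) pairs, one per computing
        system. A port that was blocked on a system the worm nevertheless
        infected cannot be required for transport, so it was used only for
        diversion (chaff); the remaining ports are transport vector candidates."""
        chaff = set()
        for blocked, infected in observations:
            if infected:
                chaff |= blocked
        return set(port_universe) - chaff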
The wormhole system 1125, via self-infection, acts as a computer worm gateway from the production network 1130 to the hidden network 1110. The hidden network 1110 represents an alternate world where normal behavior of network communications may be known by virtue of orchestration. The wormhole system 1125 transports the computer worm from a production network 1130 (e.g., the real world) where the computer worm is not easily identifiable, to the hidden network 1110 (e.g., the alternate world) where the computer worm's behavior is readily identifiable. In one embodiment, the wormhole system 1125 is removed in both space and time from the production network 1130.
In one embodiment, a goal of the worm sensor 1105 is to get a computer worm traveling in the production network 1130 to infect one of the wormhole systems 1125. Once a wormhole system 1125 has been infected, the intent is to get the computer worm to travel (e.g., propagate) inside the hidden network 1110. The hidden network 1110 may be instrumented to observe the computer worm's behavior and to track the computer worm's path. The hidden systems 1120 also serve to reveal a transport vector (e.g., transport propagation vector) of a multi-vector computer worm. An active computer worm may traverse (e.g., propagate) the hidden network 1110 using a variety of techniques, including dynamic assignment or network address translation (NAT) of probe addresses to the hidden systems 1120 coupled to the hidden network 1110, obtaining topological service information from the hidden network 1110, and entering other wormhole systems 1125 in the worm sensor 1105. In one embodiment, orchestration patterns initiated by an orchestration engine in the worm sensor 1105 serve to propagate a passive computer worm from a production network 1130 to the hidden network 1110 and within the hidden network 1110.
In one embodiment, the wormhole systems 1125 prevent normal network activity from causing any perturbations or anomalies in the hidden systems 1120 or hidden network 1110. For example, the wormhole system 1125 does not allow Internet Protocol (IP) level routing, does not have remote shell capabilities (e.g., Telnet), does not run application level gateways, does not pass directed broadcasts into the hidden network 1110, and does not pass multicast traffic into the hidden network 1110. A computer worm that infects a wormhole system 1125 can, however, cause anomalies in the hidden network 1110.
The orchestration pattern on the hidden network 1110 and between certain wormhole systems 1125 and production networks 1130 (e.g., production systems) can vary over time. In one embodiment, the orchestration pattern exercises all available services in the hidden network 1110. In another embodiment, the purpose of orchestration is to allow a passive computer worm to propagate to the hidden network 1110 and within the hidden network 1110 because a passive computer worm can only propagate on traffic initiated by others (e.g., other network communications). In still another embodiment, the purpose of orchestration is to prevent the computer worm from detecting that the computer worm is on a decoy network. For example, a computer worm may detect an inactive computer network, which may indicate that the computer network has no real purpose and is intended to be used as a computer worm decoy (e.g., a decoy network).
In one embodiment, orchestration is also intended to make anomaly detection in the hidden network simple and highly reliable. In this embodiment, all behavior (e.g., network communications) that is not part of a predetermined orchestration pattern on the hidden network represents an anomaly.
In one embodiment, both packet header and data contents of network packets are inspected for matching purposes during orchestration. This avoids false positive issues that are typical for anomaly detection systems that operate on unconstrained production networks 1130. To illustrate what is meant by reliable network anomaly detection, assume, for example, a network orchestration pattern that is expected to cause emission of a sequence of network packets (a, b, c, d) on the hidden network 1110. If the measured traffic is (a, b, c, d, e, f) on a particular orchestrated run, then (e, f) represent anomalous traffic in the hidden network. Such might be the case for an active computer worm propagating inside the hidden network 1110 once a wormhole system 1125 has been infected. As another example, assume that the measured traffic is (a, b′, c′, d), in which (b′, c′) represents anomalous traffic. Such might be the case for a passive computer worm propagating inside the hidden network 1110.
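This comparison amounts to a sequence difference. The following Python sketch reproduces both examples; matching is exact here, whereas the embodiment above matches both packet headers and data contents.

    from difflib import SequenceMatcher

    def anomalous_traffic(expected, measured):
        """Return measured packets that are not part of the orchestration
        pattern."""
        matcher = SequenceMatcher(None, expected, measured)
        matched = set()
        for block in matcher.get_matching_blocks():
            matched.update(range(block.b, block.b + block.size))
        return [pkt for i, pkt in enumerate(measured) if i not in matched]

    pattern = ["a", "b", "c", "d"]
    print(anomalous_traffic(pattern, ["a", "b", "c", "d", "e", "f"]))  # ['e', 'f']
    print(anomalous_traffic(pattern, ["a", "b'", "c'", "d"]))          # ["b'", "c'"]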
In various embodiments, the orchestration engine 205 controls the state and operation of the computer worm sensor 105.
In one embodiment, the orchestration engine 205 sends orchestration requests (e.g., orchestration patterns) to various orchestration agents (e.g., computing processes) in the computing systems 120. The orchestration agent of a computing system 120 performs a periodic sweep of computing services (e.g., network services) in the computing system 120 that are potential targets of a computer worm attack. The computing services in the computing system 120 may include typical network services (e.g., web service, FTP service, mail service, instant messaging, or Kazaa) that are also in the communication network 130.
The orchestration engine 205 may generate a wide variety of orchestration sequences to exercise a variety of computing services in the computer network 110, or may select orchestration patterns to avoid loading the computer network 110 with orchestrated network traffic. Additionally, the orchestration engine 205 may select the orchestration patterns to vary the orchestration sequences. In this way, a computer worm is prevented from scanning the computer network 110 to predict the behavior of the computer network 110.
In various embodiments, the software configuration unit 215 dynamically creates or destroys virtual machines (VMs) or VM software profiles in the computer network 110, and may initialize or update the software state of the VMs or VM software profiles. In this way, the software configuration unit 215 configures the computer network 110 such that the controller 115 can orchestrate network activities in the computer network 110 based on one or more orchestration patterns. It is to be appreciated that the software configuration unit 215 is optional in various embodiments of the computer worm sensor 105.
In various embodiments, the extraction unit 200 determines an identifier for detecting the computer worm. In these embodiments, the extraction unit 200 can extract a signature or a vector of the computer worm based on network activities (e.g., an anomalous behavior) occurring in the computer network 110, for example from data (e.g., data packets) in a network communication.
The database 210 stores data for the computer worm sensor 105, which may include a configuration state of the computer worm sensor 105. For example, the configuration state may include orchestration patterns or “golden” software images of computer programs (i.e., original software images uncorrupted by a computer worm exploit). The data stored in the database 210 may also include identifiers or recovery scripts for computer worms, or identifiers for the sources of computer worms in the communication network 130. The identifier for the source of each computer worm may be associated with the identifier and the recovery script of the computer worm.
The protocol sequence replayer 220 receives a network communication from the traffic analysis device 135 and replays the network communication in the computer network 110.
In one embodiment, the protocol sequence replayer 220 includes a queue 225 for storing network communications. The queue 225 receives a network communication from the traffic analysis device 135 and temporarily stores the network communication until the protocol sequence replayer 220 is available to replay the network communication. In another embodiment, the protocol sequence replayer 220 is a computing system 120 in the computer network 110. For example, the protocol sequence replayer 220 may be a computer server including computer program code for replaying network communications in the computer network 110.
In another embodiment, the protocol sequence replayer 220 is in communication with a port (e.g., connected to a network port) of a network device in the communication network 130 and receives duplicated network communications occurring in the communication network 130 from the port. For example, the port can be a Switched Port Analyzer (SPAN) port of a network switch or a network router in the communication network 130, which duplicates network traffic in the communication network 130. In this way, various types of active and passive computer worms (e.g., hit-list directed, topologically-directed, server-directed, and scan-directed computer worms) may propagate from the communication network 130 to the computer network 110 via the duplicated network traffic.
The protocol sequence replayer 220 replays the data packets in the computer network 110 by sending the data packets to a computing system 120 having the same class (e.g., Linux or Windows platform) as the original target system of the data packets. In various embodiments, the protocol sequence replayer 220 synchronizes any return network traffic generated by the computing system 120 in response to the data packets. The protocol sequence replayer 220 may suppress (e.g., discard) the return network traffic such that the return network traffic is not transmitted to a host in the communication network 130. In one embodiment, the protocol sequence replayer 220 replays the data packets by sending the data packets to the computing system 120 via a TCP connection or UDP session. In this embodiment, the protocol sequence replayer 220 synchronizes return network traffic by terminating the TCP connection or UDP session.
The protocol sequence replayer 220 may modify destination IP addresses of data packets in the network communication to one or more IP addresses of the computing systems 120 and replay (i.e., generate) the modified data packets in the computer network 110. The controller 115 monitors the behavior of the computer network 110 in response to the modified data packets, and may detect an anomalous behavior in the monitored behavior, as is described more fully herein. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a computer worm and the controller 115 determines an identifier for the computer worm, as is described more fully herein.
The protocol sequence replayer 220 may analyze data packets in a sequence of network communications in the communication network 130 to identify a session identifier. The session identifier identifies a communication session for the sequence of network communications and can distinguish the network communications in the sequence from other network communications in the communication network 130. For example, each communication session in the communication network 130 can have a unique session identifier. The protocol sequence replayer 220 may identify the session identifier based on the communication protocol of the network communications in the sequence. For instance, the session identifier may be in a field of a data packet header as specified by the communication protocol. Alternatively, the protocol sequence replayer 220 may infer the session identifier from repeating network communications in the sequence. For example, the session identifier is typically one of the first fields in an application level communication between a client and a server (e.g., computing system 120) and is repeatedly used in subsequent communications between the client and the server.
The protocol sequence replayer 220 may modify the session identifier in the data packets of the sequence of network communications. The protocol sequence replayer 220 generates an initial network communication in the computer network 110 based on a selected network communication in the sequence, and the computer network 110 (e.g., a computing system 120) generates a response including a session identifier. The protocol sequence replayer 220 then substitutes the session identifier in the remaining data packets of the network communication with the session identifier of the response. In a further embodiment, the protocol sequence replayer 220 dynamically modifies session variables in the data packets, as is appropriate, to emulate the sequence of network communications in the computer network 110.
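In outline, the substitution step looks like the following sketch; packet payloads are treated as byte strings, and the identifiers are assumed to have been located as described above.

    def replay_with_live_session(packets, recorded_sid: bytes, live_sid: bytes):
        """Substitute the session identifier captured in the recorded packets
        with the identifier returned by the computer network's own response,
        so the remaining packets form a valid replayed session."""
        return [packet.replace(recorded_sid, live_sid) for packet in packets]

    # e.g., replay_with_live_session(rest_of_sequence, b"SID=1234", b"SID=9876")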
The protocol sequence replayer 220 may determine the software or hardware profile of a host (e.g., a computing system) in the communication network 130 to which the data packets of the network communication are directed. The protocol sequence replayer 220 then selects a computing system 120 in the computer network 110 that has the same software or hardware profile of the host and performs dynamic NAT on the data packets to redirect the data packets to the selected computing system 120. Alternatively, the protocol sequence replayer 220 randomly selects a computing system 120 and performs dynamic NAT on the data packets to redirect the data packets to the randomly selected computing system 120.
In one embodiment, the traffic analysis device 135 can identify a request (i.e., a network communication) from a web browser to a web server in the communication network 130, and a response (i.e., a network communication) from the web server to the web browser. In this case, the response may include a passive computer worm. The traffic analysis device 135 may inspect web traffic on a selected network link in the communication network 130 to identify the request and response. For example, the traffic analysis device 135 may select the network link or identify the request based on a policy. The protocol sequence replayer 220 orchestrates the request in the computer network 110 such that a web browser in a computing system 120 initiates a substantially similar request. In response to this request, the protocol sequence replayer 220 generates a response to the web browser in the computing system 120, which is substantially similar to the response generated by the web server in the communication network 130. The controller 115 then monitors the behavior of the web browser in the computing system 120 and may identify an anomalous behavior in the monitored behavior. If the controller 115 identifies an anomalous behavior, the computer network 110 is deemed to be infected with a passive computer worm.
In another embodiment, e.g., a shadow mode, the wormhole system 1125 operates on monitored copies of network traffic sent to systems on the production network 1130, rather than on network traffic directed to the wormhole system 1125 itself.
There are several embodiments in which computer worm traffic going to systems on the production network 1130 can be used to trigger an infection of the wormhole system 1125 by using the monitored traffic. One embodiment may utilize transport level redirection or synchronization. In this embodiment, the monitored packets are redirected via NAT towards a wormhole system 1125 that is the same class of machine as the target of the original packets (e.g., Linux or Windows). If the software profile of the targeted production network system 1140 is not accurately known, then a shadow connection can be simultaneously attempted on multiple wormhole system 1125 targets representing commonly used platforms on the production network 1130 by making multiple copies of the monitored traffic. The return traffic from the wormhole system 1125 is simply suppressed, since only one party can be communicating with the remote node. This works for connections that are not dependent on the state of the systems. If the transport level synchronizer observes that the TCP connection or UDP session cannot be maintained, the transport level synchronizer disconnects the TCP connection of the wormhole system 1125 operating as a shadow of the original.
A computer worm exploit can depend upon software and widely used configurations of that software without depending upon the local data of the system or service being attacked. If the computer worm exploit sequence does not depend on the local data of the system or service being attacked (a dependency a computer worm might introduce precisely in order to evade this kind of defense), then a computer worm exploit that can infect the system or service on a production network 1130 can also infect the wormhole system 1125 that may be shadowing the connection to the system or service.
For example, in the case of a file transfer protocol (FTP) server, the local data includes files on the FTP server. If the computer worm exploits a filename handling buffer overflow vulnerability in that FTP server, then an exploit sequence that simply connects to the FTP server and sends the exploit filename string can also infect the shadow of the FTP connection on the wormhole system 1125.
As another example, if the computer worm first gets a listing of files on the FTP server, retrieves some of the files (e.g., performs a get operation on some of the files), and then sends the exploit filename sequence, a transport level shadow synchronizer may not be able to keep the shadow wormhole system 1125 synchronized. This embodiment is an example of evasive computer worm behavior in which the computer worm may be attempting to circumvent a transport level shadow wormhole system synchronizer. Such an evasive computer worm can still be coerced into infecting a shadow wormhole system 1125 that uses an application level proxy (e.g., an application level proxy sitting on top of the monitored traffic on the SPAN port) for shadowing and synchronization purposes. For example, an FTP level proxy can intercept commands at the FTP level, send the FTP commands (e.g., list and get commands) to the shadow wormhole system, and discard the responses. Once the exploit sequence is sent to the shadow wormhole system 1125, the application level proxy will have succeeded in infecting the wormhole system 1125. Similar proxy synchronizers can be created for other network servers.
Various embodiments provide a useful mechanism to intercept a wide variety of computer worm infection strategies. The shadow wormhole system 1125 may serve to increase the probability of early detection of a computer worm, without requiring software or worm sensors 1105 on a per-host basis.
In one embodiment, each computer worm sensor 105 randomly blocks one or more ports of the computing systems 120. Accordingly, some of the computer worm sensors 105 may detect an anomalous behavior of a computer worm, as described more fully herein. The computer worm sensors 105 that detect an anomalous behavior communicate the anomalous behavior (e.g., a signature candidate) to the sensor manager 305. In turn, the sensor manager 305 correlates the anomalous behaviors and determines an identifier (e.g., a transport vector) for detecting the computer worm.
In some cases, a human intruder (e.g., a computer hacker) may attempt to exploit vulnerabilities that a computer worm would exploit in a computer worm sensor 105. The sensor manager 305 may distinguish an anomalous behavior of a human intruder from an anomalous behavior of a computer worm by tracking the number of computing systems 120 in the computer worm sensors 105 that detect a computer worm within a given period. If the number of computing systems 120 detecting a computer worm within the given period exceeds a predetermined threshold, the sensor manager 305 determines that a computer worm caused the anomalous behavior. Conversely, if the number of computing systems 120 detecting a computer worm within the given period is equal to or less than the predetermined threshold, the sensor manager 305 determines that a human intruder caused the anomalous behavior. In this way, false positive detections of the computer worm may be decreased.
In one embodiment, each computer worm sensor 105 maintains a list of infected hosts (e.g., computing systems infected by a computer worm) in the communication network 130 and communicates the list to the sensor manager 305. In this way, the computer worm detection system 300 maintains a list of infected hosts detected by the computer worm sensors 105.
In step 405, the controller 115 of the computer worm sensor 105 orchestrates a predetermined sequence of network activities in the computer network 110 and monitors the behavior of the computer network 110.
In step 410, the computer worm sensor 105 identifies an anomalous behavior in the monitored behavior to detect a computer worm. In one embodiment, the controller 115 identifies the anomalous behavior by comparing the predetermined sequence of network activities with network activities in the monitored behavior. For example, the orchestration engine 205 of the controller 115 can identify the anomalous behavior by comparing network activities in the monitored behavior with one or more orchestrated behaviors defining the predetermined sequence of network activities. The computer worm sensor 105 evaluates the anomalous behavior to determine whether the anomalous behavior is caused by a computer worm, as is described more fully herein.
In step 415, the computer worm sensor 105 determines an identifier for detecting the computer worm based on the anomalous behavior. The identifier may include a signature or a vector of the computer worm, or both. For example, the vector can be a transport vector, an attack vector, or a payload vector. In one embodiment, the extraction unit 200 of the computer worm sensor 105 determines the signature of the computer worm based on one or more signature candidates, as is described more fully herein. It is to be appreciated that step 415 is optional in accordance with various embodiments of the computer worm sensor 105.
In step 420, the computer worm sensor 105 generates a recovery script for the computer worm. An infected host (e.g., an infected computing system or network) can then execute the recovery script to disable (e.g., destroy) the computer worm in the infected host or repair damage to the host caused by the computer worm. The computer worm sensor 105 may also identify a host in the communication network 130 that is the source of the computer worm and provides the recovery script to the host such that the host can disable the computer worm and repair damage to the host caused by the computer worm.
In one embodiment, the controller 115 determines a current image of the file system in the computing system 120, and compares the current image with an original image of the file system in the computing system 120 to identify any discrepancies between the current image and the original image. The controller 115 then generates the recovery script based on these discrepancies. The recovery script includes computer program code for identifying infected software programs or memory locations based on the discrepancies, and removing the discrepancies from infected software programs or memory locations.
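A simplified sketch of this comparison and script generation follows; the paths and the shell commands emitted are illustrative assumptions, and this sketch repairs only file system state.

    import hashlib
    from pathlib import Path

    def image_hashes(root):
        """Map each file in a file system image to a hash of its contents."""
        root = Path(root)
        return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in root.rglob("*") if p.is_file()}

    def discrepancies(golden_root, current_root):
        """Compare the current image against the golden (uncorrupted) image."""
        golden, current = image_hashes(golden_root), image_hashes(current_root)
        added = sorted(set(current) - set(golden))
        modified = sorted(f for f in golden
                          if f in current and golden[f] != current[f])
        return added, modified

    def recovery_script(added, modified, golden_root="/golden"):
        """Emit shell commands that remove worm artifacts and restore the
        original files from the golden image."""
        lines = [f"rm -f '{f}'" for f in added]
        lines += [f"cp '{golden_root}/{f}' '{f}'" for f in modified]
        return "\n".join(lines)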
Additionally, the computer worm containment system 500 can comprise multiple blocking devices 510 in communication with one or more computer worm blocking managers (not shown) across the communication network 130, in analogous fashion to the computer worm detection system 300.
In one embodiment, the blocking device 510 loads a computer worm signature into a content filter operating at the network level to block the computer worm from entering the computer system 520 from the communication network 130. In another embodiment, the blocking device 510 blocks a computer worm transportation vector in the computer system 520 by using transport level access control lists (ACLs) in the computer system 520.
More specifically, the blocking device 510 can function as a network interface between the communication network 130 and the corresponding computer system 520. For example, a blocking device 510 can be an inline signature based Intrusion Detection and Prevention (IDP) system, as would be recognized by one skilled in the art. As another example, the blocking device 510 can be a firewall, network switch, or network router that includes content filtering or ACL management capabilities.
An effective computer worm quarantine may require a proper network architecture to ensure that blocking measures are effective in containing the computer worm. For example, if there are content filtering devices or transport level ACL devices protecting a set of subnets on the computer system 520, then there should not be another path from the computer system 520 on that subnet that does not pass through the filtering device.
Assuming that the communication network 130 is correctly partitioned, the function of the blocking device 510 is to receive a computer worm identifier, such as a signature list or transport vector, from the computer worm sensor 105 and configure the appropriate filtering devices. These filtering devices can be commercially available switches, routers, or firewalls obtainable from any of a number of network equipment vendors, or host-based solutions that provide similar functionality. In some embodiments, ACLs are used to perform universal blocking of those transport ports for the computer system 520 under protection. For example, traffic originating from a given source IP and intended for a given destination IP with the destination port matching a transport port in the transport vector can be blocked.
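For example, a received transport vector could be rendered into drop rules in roughly the following way; the iptables syntax shown is purely one possible target format, and real deployments would emit whatever format the vendor's filtering device accepts.

    def acl_rules(source_ip, destination_ip, transport_ports):
        """Render one drop rule per transport port in the worm identifier."""
        return [
            f"iptables -A FORWARD -p tcp -s {source_ip} -d {destination_ip} "
            f"--dport {port} -j DROP"
            for port in transport_ports
        ]

    # e.g., acl_rules("192.0.2.7", "10.1.0.0/16", [135, 445])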
Another class of filtering is content based filtering, in which the filtering devices inspect the contents of the data past the TCP or UDP header of a data packet to check for particular data sequences. Examples of content filtering devices are routers in the class of the Cisco™ routers that use Network Based Application Recognition (NBAR) to classify and apply a policy to packets (e.g., reduce the priority of the packets or discard the packets). These types of filtering devices can be useful to implement content filtering at appropriate network points.
In one embodiment, host-based software deployed on an enterprise scale performs the content filtering on the hosts themselves. In this embodiment, ACL specifications (e.g., vendor independent ACL specifications) and content filtering formats (e.g., eXtensible Markup Language or XML format) are communicated to the blocking devices 510, which in turn dynamically configure transport ACLs or content filters for network equipment and host software of different vendors.
Each computer worm containment system 500 is associated with a subscriber having a subscriber account that is maintained and managed by the management system 600. The management system 600 provides various computer worm defense services that allow the subscribers to obtain different levels of protection from computer worms, computer viruses, and other malicious code, based on levels of payment, for example.
The management system 600 interacts with the computer worm sensors 105 of the various computer worm containment systems 500 in several ways. For example, the management system 600 can activate and deactivate computer worm sensors 105 based on payment or the lack thereof by the associated subscriber. The management system 600 also obtains identifiers of computer worms and repair scripts from the various computer worm sensors 105 and distributes these identifiers to other computer worm containment systems 500. The management system 600 can also distribute system updates as needed to controllers 115 (not shown) of the computer worm sensors 105. It will be appreciated that the computer worm defense system of the invention benefits from having a distributed set of computer worm sensors 105 in a widely distributed set of environments, compared to a centralized detection system, because computer worms are more likely to be detected sooner by the distributed set of computer worm sensors 105. Accordingly, in some embodiments it is advantageous to not deactivate a computer worm sensor 105 upon non-payment by a subscriber.
The management system 600 also interacts with the computer worm blocking systems of the various computer worm containment systems. Primarily, the management system 600 distributes computer worm identifiers found by computer worm sensors 105 of other computer worm containment systems 500 to the remaining computer worm blocking systems. In some embodiments the distribution is performed automatically as soon as the identifiers become known to the management system 600. However, in other embodiments, perhaps based on lower subscription rates paid by subscribers, newly found computer worm identifiers are distributed on a periodic basis such as daily or weekly. Similarly, the distribution of repair scripts to the various computer worm containment systems can also be controlled by the management system 600. In some embodiments, identifiers and/or repair scripts are distributed to subscribers by CD-ROM or similar media rather than automatically over a network such as the Internet.
In one embodiment, payment for the computer worm defense service is based on a periodic (e.g., monthly or annual) subscription fee. Such a fee can be based on the size of the enterprise being protected by the subscriber's computer worm containment system 500, where the size can be measured, for example, by the number of computer systems 520 therein. In another embodiment, a subscriber pays a fee for each computer worm identifier that is distributed to a computer worm containment system associated with the subscriber. In still another embodiment, payment for the computer worm defense service is based on a combination of a periodic subscription fee and a fee for each computer worm identifier received from the computer worm defense service. In yet another embodiment, subscribers receive a credit for each computer worm identifier that originates from a computer worm sensor 105 of their computer worm containment system 500.
The source device 705 and the destination device 710 are digital devices. Some examples of digital devices include computers, servers, laptops, personal digital assistants, and cellular telephones. The source device 705 is configured to transmit network data over the communication network 720 to the destination device 710. The destination device 710 is configured to receive the network data from the source device 705.
The tap 715 is a digital data tap configured to monitor network data and provide a copy of the network data to the controller 725. Network data comprises signals and data that are transmitted over the communication network 720 including data flows from the source device 705 to the destination device 710. In one example, the tap 715 intercepts and copies the network data without an appreciable decline in performance of the source device 705, the destination device 710, or the communication network 720. The tap 715 can copy any portion of the network data. For example, the tap 715 can receive and copy any number of data packets from the network data.
In some embodiments, the network data can be organized into one or more data flows and provided to the controller 725. In various embodiments, the tap 715 can sample the network data based on a sampling scheme. Data flows can then be reconstructed based on the network data samples.
The tap 715 can also capture metadata from the network data. The metadata can be associated with the source device 705 and the destination device 710. The metadata can identify the source device 705 and/or the destination device 710. In some embodiments, the source device 705 transmits metadata which is captured by the tap 715. In other embodiments, the heuristic module 730 (described herein) can determine the source device 705 and the destination device 710 by analyzing data packets within the network data in order to generate the metadata.
The communication network 720 can be similar to the communication network 130.
The controller 725 can be any digital device or software that receives network data from the tap 715. In some embodiments, the controller 725 is contained within the computer worm sensor 105.
The heuristic module 730 receives the copy of the network data from the tap 715. The heuristic module 730 applies heuristics and/or probability analysis to determine if the network data might contain suspicious activity. In one example, the heuristic module 730 flags network data as suspicious. The network data can then be buffered and organized into a data flow. The data flow is then provided to the scheduler 735. In some embodiments, the network data is provided directly to the scheduler 735 without buffering or organizing the data flow.
The heuristic module 730 can perform any heuristic and/or probability analysis. In one example, the heuristic module 730 performs a dark internet protocol (IP) heuristic. A dark IP heuristic can flag network data coming from a source device 705 that has not previously been identified by the heuristic module 730. The dark IP heuristic can also flag network data going to a previously unused port address. In an example, an attacker scans random IP addresses of a network to identify an active server or workstation. The dark IP heuristic can flag network data directed to an unassigned IP address.
The heuristic module 730 can also perform a dark port heuristic. A dark port heuristic can flag network data transmitted to an unassigned or unusual port address. In one example, network data is transmitted to a previously unused (or unseen) port address. Such network data transmitted to an unusual port can be indicative of a port scan by a worm or hacker. Further, the heuristic module 730 can flag network data from the source device 705 that is significantly different from the traditional data traffic transmitted by the source device 705. For example, the heuristic module 730 can flag network data from a source device 705, such as a laptop, that begins to transmit network data that is common to a server.
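Both heuristics can be sketched together as a first-contact test; the learning behavior shown below is one possible design choice, not one mandated by the embodiments above.

    class DarkSpaceHeuristic:
        """Flags network data sent to an IP address or port never seen in
        prior, presumed-legitimate traffic (dark IP / dark port heuristics)."""

        def __init__(self, known_ips=(), known_ports=()):
            self.known_ips = set(known_ips)
            self.known_ports = set(known_ports)

        def suspicious(self, dst_ip, dst_port) -> bool:
            dark = dst_ip not in self.known_ips or dst_port not in self.known_ports
            # Learn as traffic is observed, so only first contacts are flagged.
            self.known_ips.add(dst_ip)
            self.known_ports.add(dst_port)
            return dark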
The heuristic module 730 can retain data packets belonging to a particular data flow previously copied by the tap 715. In one example, the heuristic module 730 receives data packets from the tap 715 and stores the data packets within a buffer or other memory. Once the heuristic module 730 receives a predetermined number of data packets from a particular data flow, the heuristic module 730 performs the heuristics and/or probability analysis.
In some embodiments, the heuristic module 730 performs heuristic and/or probability analysis on a set of data packets belonging to a data flow and then stores the data packets within a buffer or other memory. The heuristic module 730 can then continue to receive new data packets belonging to the same data flow. Once a predetermined number of new data packets belonging to the same data flow are received, the heuristic and/or probability analysis can be performed upon the combination of buffered and new data packets to determine a likelihood of suspicious activity.
In some embodiments, an optional buffer receives the flagged network data from the heuristic module 730. The buffer can buffer and organize the flagged network data into one or more data flows before providing the one or more data flows to the scheduler 735. In various embodiments, the buffer can buffer network data and stall before providing the network data to the scheduler 735. In one example, the buffer stalls the network data to allow other components of the controller 725 time to complete functions or otherwise clear data congestion.
The scheduler 735 identifies the destination device 710 and retrieves a virtual machine associated with the destination device 710. A virtual machine is software that is configured to mimic the performance of a device (e.g., the destination device 710). The virtual machine can be retrieved from the virtual machine pool 745.
In some embodiments, the heuristic module 730 transmits the metadata identifying the destination device 710 to the scheduler 735. In other embodiments, the scheduler 735 receives one or more data packets of the network data from the heuristic module 730 and analyzes the one or more data packets to identify the destination device 710. In yet other embodiments, the metadata can be received from the tap 715.
The scheduler 735 can retrieve and configure the virtual machine to mimic the pertinent performance characteristics of the destination device 710. In one example, the scheduler 735 configures the characteristics of the virtual machine to mimic only those features of the destination device 710 that are affected by the network data copied by the tap 715. The scheduler 735 can determine the features of the destination device 710 that are affected by the network data by receiving and analyzing the network data from the tap 715. Such features of the destination device 710 can include ports that are to receive the network data, select device drivers that are to respond to the network data and any other devices coupled to or contained within the destination device 710 that can respond to the network data. In other embodiments, the heuristic module 730 can determine the features of the destination device 710 that are affected by the network data by receiving and analyzing the network data from the tap 715. The heuristic module 730 can then transmit the features of the destination device to the scheduler 735.
The optional fingerprint module 740 is configured to determine the packet format of the network data to assist the scheduler 735 in the retrieval and/or configuration of the virtual machine. In one example, the fingerprint module 740 determines that the network data is based on a transmission control protocol/internet protocol (TCP/IP). Thereafter, the scheduler 735 will configure a virtual machine with the appropriate ports to receive TCP/IP packets. In another example, the fingerprint module 740 determines that the network data is based on a user datagram protocol/internet protocol (UDP/IP), and the scheduler 735 configures a virtual machine with the appropriate ports to receive UDP/IP packets. The fingerprint module 740 can determine any type of packet format of network data.
The virtual machine pool 745 is configured to store virtual machines. The virtual machine pool 745 can be any storage capable of storing software. In one example, the virtual machine pool 745 stores a single virtual machine that can be configured by the scheduler 735 to mimic the performance of any destination device 710 on the communication network 720. The virtual machine pool 745 can store any number of distinct virtual machines that can be configured to simulate the performance of any destination devices 710.
The analysis environment 750 simulates transmission of the network data between the source device 705 and the destination device 710 to analyze the effects of the network data upon the destination device 710. The analysis environment 750 can identify the effects of malware or illegitimate computer users (e.g., a hacker, computer cracker, or other computer user) by analyzing the simulation of the effects of the network data upon the destination device 710 that is carried out on the virtual machine. There can be multiple analysis environments 750 to simulate multiple network data. The analysis environment 750 is discussed more fully herein.
The optional policy engine 755 is coupled to the heuristic module 730 and can identify network data as suspicious based upon policies contained within the policy engine 755. In one example, a destination device 710 can be a computer designed to attract hackers and/or worms (e.g., a “honey pot”). The policy engine 755 can contain a policy to flag any network data directed to the “honey pot” as suspicious since the “honey pot” should not be receiving any legitimate network data. In another example, the policy engine 755 can contain a policy to flag network data directed to any destination device 710 that contains highly sensitive or “mission critical” information.
The policy engine 755 can also dynamically apply a rule to copy all network data related to network data already flagged by the heuristic module 730. In one example, the heuristic module 730 flags a single packet of network data as suspicious. The policy engine 755 then applies a rule to flag all data related to the single packet (e.g., data flows) as suspicious. In some embodiments, the policy engine 755 flags network data related to suspicious network data until the analysis environment 750 determines that the network data flagged as suspicious is related to unauthorized activity.
The virtual switch 810 is software that is capable of forwarding packets of flagged network data to the virtual machine 815. In one example, the replayer 805 simulates the transmission of the data flow by the source device 705. The virtual switch 810 simulates the communication network 720 and the virtual machine 815 simulates the destination device 710. The virtual switch 810 can route the data packets of the data flow to the correct ports of the virtual machine 815.
The virtual machine 815 is a representation of the destination device that can be provided to the analysis environment 750 by the scheduler 735. In one example, the scheduler 735 retrieves a virtual machine 815 from the virtual machine pool 745 and configures the virtual machine 815 to mimic a destination device 710. The configured virtual machine 815 is then provided to the analysis environment 750 where it can receive flagged network data from the virtual switch 810.
As the analysis environment 750 simulates the transmission of the network data, behavior of the virtual machine 815 can be closely monitored for unauthorized activity. If the virtual machine 815 crashes, performs illegal operations, performs abnormally, or allows access of data to an unauthorized computer user, the analysis environment 750 can react. In one example, the analysis environment 750 can transmit a command to the destination device 710 to stop accepting the network data or data flows from the source device 705.
In some embodiments, the analysis environment 750 monitors and analyzes the behavior of the virtual machine 815 in order to determine a specific type of malware or the presence of an illicit computer user. The analysis environment 750 can also generate computer code configured to eliminate new viruses, worms, or other malware. In various embodiments, the analysis environment 750 can generate computer code configured to repair damage caused by the malware or the illicit computer user. By simulating the transmission of suspicious network data and analyzing the response of the virtual machine, the analysis environment 750 can identify known and previously unidentified malware, as well as the activities of illicit computer users, before a computer system is damaged or compromised.
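As a non-limiting illustration of the monitor-and-react behavior described in the two preceding paragraphs, the following sketch watches a stream of events from the simulated machine and produces a reaction for each anomalous one. The event names and the reaction string are hypothetical; a real system might, as described above, instruct the destination device to stop accepting data from the source device.

    # A minimal sketch of monitoring a simulated machine for
    # anomalous behavior during replay (cf. analysis environment 750);
    # event names and reactions are hypothetical.
    ANOMALOUS_EVENTS = {"crash", "illegal_operation", "unauthorized_access"}

    def monitor(event_stream):
        """Yield a reaction for each anomalous event observed."""
        for event in event_stream:
            if event in ANOMALOUS_EVENTS:
                yield f"block source: observed {event}"

    for action in monitor(["syscall", "crash"]):
        print(action)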
In step 905, the network data is analyzed to determine whether the network data is suspicious. For example, a heuristic module, such as the heuristic module 730, can analyze the network data. The heuristic module can base the determination on heuristic and/or probabilistic analyses. In various embodiments, the heuristic module applies a very low threshold in determining whether the network data is suspicious. For example, a single command within the network data directed to an unusual port of the destination device can cause the network data to be flagged as suspicious.
Step 905 can alternatively include flagging network data as suspicious based on policies, such as policies keyed to the identity of the source device, the identity of the destination device, or the activity of the network data. In one example, even if the heuristic module does not flag the network data, the network data can be flagged as suspicious based on a policy if the network data was transmitted from a device that does not normally transmit network data. Similarly, based on another policy, if the destination device contains trade secrets or other critical data, any network data transmitted to the destination device can be flagged as suspicious. Likewise, if the network data is directed to a particularly important database, or attempts to gain rights or privileges within the communication network or the destination device, the network data can be flagged as suspicious. In various embodiments, the policy engine 755 flags network data based on these and/or other policies.
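As a non-limiting illustration of the deliberately low heuristic threshold described for step 905, the following sketch flags any packet whose destination port falls outside a set of usual ports. The port list and packet fields are hypothetical.

    # A minimal sketch of low-threshold heuristic flagging (step 905);
    # the port list and packet fields are hypothetical.
    USUAL_PORTS = {25, 80, 443}

    def heuristic_is_suspicious(packet):
        # A single command directed to an unusual port is enough to
        # flag the data: the threshold is deliberately very low,
        # because flagged data is merely analyzed, not blocked.
        return packet["dst_port"] not in USUAL_PORTS

    print(heuristic_is_suspicious({"dst_port": 31337}))  # True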
In step 910, the transmission of the network data is orchestrated to analyze unauthorized activity. In one example, the transmission of the network data over a network is simulated to analyze the resulting action of the destination device. The simulation can be monitored and analyzed to identify the effects of malware or illegitimate computer use.
In step 1005, a virtual machine 815 is retrieved and configured to mimic the destination device 710. The scheduler 735 identifies the destination device 710 and retrieves a virtual machine 815 from the virtual machine pool 745. In some embodiments, the scheduler 735 further configures the virtual machine 815 to mimic the performance characteristics of the destination device 710. The scheduler 735 then transmits the virtual machine 815 to the analysis environment 750.
In step 1010, the analysis environment 750 replays transmission of the network data between the configured replayer 805 and the virtual machine 815 to detect unauthorized activity. The replayer 805 is configured to simulate the source device 705 transmitting the network data, and the virtual machine 815 is configured to mimic the features of the destination device 710 that would be affected by the network data. The virtual switch 810 can simulate the communication network 720 in delivering the network data to the destination device 710.
As the transmission of the network data to the model of the destination device 710 is simulated, the results are monitored to determine whether the network data was generated by malware or by illegitimate computer use. In one example, if the network data attempts to replicate programs within the virtual machine 815, a virus can be identified. In another example, if the network data constantly attempts to access different ports of the virtual machine 815, a worm or hacker can be identified.
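As a non-limiting illustration of mapping those two observed behaviors to a verdict, the following sketch classifies a list of sandbox observations. The observation kinds and the port-probe threshold are hypothetical.

    # A minimal sketch mapping observed simulation behavior to a
    # verdict, following the two examples above; the threshold is
    # hypothetical.
    def classify(observations):
        """observations: list of (kind, detail) tuples from the sandbox."""
        # Attempted replication of programs suggests a virus.
        if any(kind == "self_replication" for kind, _ in observations):
            return "virus"
        # Constant probing of different ports suggests a worm or hacker.
        ports_probed = {d for kind, d in observations if kind == "port_probe"}
        if len(ports_probed) > 10:
            return "worm or hacker"
        return "benign"

    obs = [("port_probe", p) for p in range(20)]
    print(classify(obs))  # worm or hacker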
Since the effects of the network data transmission are simulated and the results analyzed, the controller 725 need not wait for repetitive behavior by malware or computer hackers before detecting their presence. In some examples of the prior art, new viruses and hackers are detected only after multiple events that cause similar damage. By contrast, in some embodiments, a single data flow can be flagged and identified as harmful within a simulation, thereby identifying malware, hackers, and unwitting computer users before damage is done.
In the foregoing specification, the invention is described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Various features and aspects of the above-described invention can be used individually or jointly. Further, the invention can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. It will be recognized that the terms “comprising,” “including,” and “having,” as used herein, are specifically intended to be read as open-ended terms of art.
Patent | Priority | Assignee | Title |
4292580, | Nov 30 1978 | Siemens Aktiengesellschaft | Circuit arrangement for attenuation of power oscillations in networks |
5175732, | Feb 15 1991 | Standard Microsystems Corp. | Method and apparatus for controlling data communication operations within stations of a local-area network |
5440723, | Jan 19 1993 | TREND MICRO INCORPORATED | Automatic immune system for computers and computer networks |
5490249, | Dec 23 1992 | Apple Inc | Automated testing system |
5657473, | Feb 21 1990 | ERAN & TORRA B V , LLC | Method and apparatus for controlling access to and corruption of information in computer systems |
5842002, | Jun 01 1994 | Quantum Leap Innovations, Inc. | Computer virus trap |
5978917, | Aug 14 1997 | NORTONLIFELOCK INC | Detection and elimination of macro viruses |
6088803, | Mar 27 1997 | Intel Corporation | System for virus-checking network data during download to a client device |
6094677, | May 30 1997 | GOOGLE LLC | Methods, systems and computer program products for providing insertions during delays in interactive systems |
6108799, | Mar 12 1998 | TREND MICRO INCORPORATED | Automated sample creation of polymorphic and non-polymorphic macro viruses
6269330, | Oct 07 1997 | Cisco Technology, Inc | Fault location and performance testing of communication networks |
6272641, | Sep 10 1997 | Trend Micro, Inc. | Computer network malicious code scanner method and apparatus |
6279113, | Mar 16 1998 | GEN DIGITAL INC | Dynamic signature inspection-based network intrusion detection |
6298445, | Apr 30 1998 | NORTONLIFELOCK INC | Computer security |
6357008, | Sep 23 1997 | Symantec Corporation | Dynamic heuristic method for detecting computer viruses using decryption exploration and evaluation phases |
6424627, | Feb 24 1997 | METROBILITY OPTICAL SYSTEMS, INC | Full-duplex medium tap apparatus and system |
6484315, | Feb 01 1999 | Cisco Technology, Inc. | Method and system for dynamically distributing updates in a network |
6487666, | Jan 15 1999 | Cisco Technology, Inc. | Intrusion detection signature analysis using regular expressions and logical operators |
6493756, | Oct 28 1999 | JPMORGAN CHASE BANK, N A ; MORGAN STANLEY SENIOR FUNDING, INC | System and method for dynamically sensing an asynchronous network event within a modular framework for network event processing |
6550012, | Dec 11 1998 | JPMORGAN CHASE BANK, N A ; MORGAN STANLEY SENIOR FUNDING, INC | Active firewall system and methodology |
6775657, | Dec 22 1999 | Cisco Technology, Inc.; Cisco Technology, Inc | Multilayered intrusion detection system and method |
6832367, | Mar 06 2000 | International Business Machines Corporation | Method and system for recording and replaying the execution of distributed java programs |
6895550, | Dec 05 2001 | JDA SOFTWARE GROUP, INC | Computer-implemented PDF document management |
6898632, | Mar 31 2003 | Viavi Solutions Inc | Network security tap for use with intrusion detection system |
6907396, | Jun 01 2000 | JPMORGAN CHASE BANK, N A ; MORGAN STANLEY SENIOR FUNDING, INC | Detecting computer viruses or malicious software by patching instructions into an emulator |
6981279, | Aug 17 2000 | TREND MICRO INCORPORATED | Method and apparatus for replicating and analyzing worm programs |
7007107, | Oct 22 2001 | AMETEK PROGRAMMABLE POWER, INC | Methods and apparatus for performing data acquisition and control |
7028179, | Jul 03 2001 | Intel Corporation | Apparatus and method for secure, automated response to distributed denial of service attacks |
7043757, | May 22 2001 | RAKUTEN GROUP, INC | System and method for malicious code detection |
7069316, | Feb 19 2002 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Automated Internet Relay Chat malware monitoring and interception |
7080408, | Nov 30 2001 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Delayed-delivery quarantining of network communications having suspicious contents |
7093002, | Dec 06 2001 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Handling of malware scanning of files stored within a file storage device of a computer network |
7093239, | Jul 17 2000 | PALO ALTO NETWORKS, INC | Computer immune system and method for detecting unwanted code in a computer system |
7100201, | Jan 24 2002 | Arxceo Corporation | Undetectable firewall |
7159149, | Oct 24 2002 | CA, INC | Heuristic detection and termination of fast spreading network worm attacks |
7231667, | May 29 2003 | Computer Associates Think, Inc | System and method for computer virus detection utilizing heuristic analysis |
7240364, | May 20 2000 | Ciena Corporation | Network device identity authentication |
7240368, | Apr 14 1999 | Raytheon BBN Technologies Corp | Intrusion and misuse deterrence system employing a virtual network |
7251215, | Aug 26 2002 | Juniper Networks, Inc. | Adaptive network router |
7287278, | Aug 29 2003 | TREND MICRO INCORPORATED; TREND MICRO, INC | Innoculation of computing devices against a selected computer virus |
7308716, | May 20 2003 | TREND MICRO INCORPORATED | Applying blocking measures progressively to malicious network traffic |
7328453, | May 09 2001 | SCA IPLA HOLDINGS INC | Systems and methods for the prevention of unauthorized use and manipulation of digital content |
7356736, | Sep 25 2001 | CA, INC | Simulated computer system for monitoring of software performance |
7386888, | Aug 29 2003 | TREND MICRO INCORPORATED; TREND MICRO, INC | Network isolation techniques suitable for virus protection |
7392542, | Aug 29 2003 | Seagate Technology LLC | Restoration of data corrupted by viruses using pre-infected copy of data |
7418729, | Jul 19 2002 | CA, INC | Heuristic detection of malicious computer code by page tracking |
7428300, | Dec 09 2002 | Verizon Patent and Licensing Inc | Diagnosing fault patterns in telecommunication networks |
7441272, | Jun 09 2004 | TAHOE RESEARCH, LTD | Techniques for self-isolation of networked devices |
7448084, | Jan 25 2002 | TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK, THE | System and methods for detecting intrusions in a computer system by monitoring operating system registry accesses |
7458098, | Mar 08 2002 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Systems and methods for enhancing electronic communication security |
7464404, | May 20 2003 | TREND MICRO INCORPORATED | Method of responding to a truncated secure session attack |
7464407, | Aug 20 2002 | NEC Corporation | Attack defending system and attack defending method |
7467408, | Sep 09 2002 | Cisco Technology, Inc. | Method and apparatus for capturing and filtering datagrams for network security monitoring |
7478428, | Oct 12 2004 | Microsoft Technology Licensing, LLC | Adapting input to find integer overflows |
7480773, | May 02 2005 | T-MOBILE INNOVATIONS LLC | Virtual machine use and optimization of hardware configurations |
7487543, | Jul 23 2002 | FINJAN BLUE, INC | Method and apparatus for the automatic determination of potentially worm-like behavior of a program |
7496960, | Oct 30 2000 | TREND MICRO INCORPORATED | Tracking and reporting of computer virus information |
7496961, | Oct 15 2003 | Intel Corporation | Methods and apparatus to provide network traffic support and physical security support |
7516488, | Feb 23 2005 | NORTONLIFELOCK INC | Preventing data from being submitted to a remote system in response to a malicious e-mail |
7519990, | Jul 19 2002 | Fortinet, INC | Managing network traffic flow |
7523493, | Aug 29 2003 | TREND MICRO INCORPORATED | Virus monitor and methods of use thereof |
7530104, | Feb 09 2004 | GEN DIGITAL INC | Threat analysis |
7540025, | Nov 18 2004 | Cisco Technology, Inc. | Mitigating network attacks using automatic signature generation |
7565550, | Aug 29 2003 | TREND MICRO INCORPORATED; TREND MICRO, INC | Automatic registration of a virus/worm monitor in a distributed network |
7568233, | Apr 01 2005 | CA, INC | Detecting malicious software through process dump scanning |
7603715, | Jul 21 2004 | Microsoft Technology Licensing, LLC | Containment of worms |
7607171, | Jan 17 2002 | TW SECURITY CORP ; TRUSTWAVE HOLDINGS, INC | Virus detection by executing e-mail code in a virtual machine |
7639714, | Nov 12 2003 | The Trustees of Columbia University in the City of New York | Apparatus method and medium for detecting payload anomaly using n-gram distribution of normal data |
7644441, | Sep 26 2003 | Synopsys, Inc | Methods for identifying malicious software |
7657419, | Jun 19 2001 | KYNDRYL, INC | Analytical virtual machine |
7676841, | Feb 01 2005 | FMR LLC | Network intrusion mitigation |
7698548, | Dec 08 2005 | Microsoft Technology Licensing, LLC | Communications traffic segregation for security purposes |
7707633, | May 20 2003 | International Business Machines Corporation | Applying blocking measures progressively to malicious network traffic |
7712136, | May 05 2005 | Ironport Systems, Inc. | Controlling a message quarantine |
7730011, | Oct 19 2005 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Attributes of captured objects in a capture system |
7739740, | Sep 22 2005 | CA, INC | Detecting polymorphic threats |
7779463, | May 11 2004 | TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK, THE | Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems |
7784097, | Nov 24 2004 | The Trustees of Columbia University in the City of New York | Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems |
7832008, | Oct 11 2006 | Cisco Technology, Inc.; Cisco Technology, Inc | Protection of computer resources |
7849506, | Oct 12 2004 | AVAYA LLC | Switching device, method, and computer program for efficient intrusion detection |
7854007, | May 05 2005 | IRONPORT SYSTEMS, INC | Identifying threats in electronic messages |
7869073, | Mar 22 2005 | FUJI XEROX CO , LTD | Image forming system, image forming method and information terminal device |
7877803, | Jun 27 2005 | VALTRUS INNOVATIONS LIMITED | Automated immune response for a computer |
7904959, | Apr 18 2005 | The Trustees of Columbia University in the City of New York | Systems and methods for detecting and inhibiting attacks using honeypots |
7908660, | Feb 06 2007 | Microsoft Technology Licensing, LLC | Dynamic risk management |
7930738, | Jun 02 2005 | Adobe Inc | Method and apparatus for secure execution of code |
7937761, | Dec 17 2004 | CA, INC | Differential threat detection processing |
7949849, | Aug 24 2004 | JPMORGAN CHASE BANK, N A ; MORGAN STANLEY SENIOR FUNDING, INC | File system for a capture system |
7996556, | Dec 06 2004 | Cisco Technology, Inc. | Method and apparatus for generating a network topology representation based on inspection of application messages at a network device |
7996836, | Dec 29 2006 | NORTONLIFELOCK INC | Using a hypervisor to provide computer security |
7996904, | Dec 19 2007 | NORTONLIFELOCK INC | Automated unpacking of executables packed by multiple layers of arbitrary packers |
7996905, | Jul 23 2002 | FINJAN BLUE, INC | Method and apparatus for the automatic determination of potentially worm-like behavior of a program |
8006305, | Jun 14 2004 | FireEye Security Holdings US LLC | Computer worm defense system and method |
8010667, | Dec 12 2007 | VMware, Inc. | On-access anti-virus mechanism for virtual machine architecture |
8020206, | Jul 10 2006 | FORCEPOINT FEDERAL HOLDINGS LLC; Forcepoint LLC | System and method of analyzing web content |
8028338, | Sep 30 2008 | CA, INC | Modeling goodware characteristics to reduce false positive malware signatures |
8045094, | Dec 26 2006 | Sharp Kabushiki Kaisha | Backlight device, display device, and television receiver |
8045458, | Nov 08 2007 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Prioritizing network traffic |
8069484, | Jan 25 2007 | FireEye Security Holdings US LLC | System and method for determining data entropy to identify malware |
8087086, | Jun 30 2008 | Symantec Corporation | Method for mitigating false positive generation in antivirus software |
8171553, | Apr 01 2004 | FireEye Security Holdings US LLC | Heuristic based capture with replay to virtual machine |
8176049, | Oct 19 2005 | JPMORGAN CHASE BANK, N A , AS ADMINISTRATIVE AGENT | Attributes of captured objects in a capture system |
8201246, | Feb 25 2008 | TREND MICRO INCORPORATED | Preventing malicious codes from performing malicious actions in a computer system |
8204984, | Apr 01 2004 | FireEye Security Holdings US LLC | Systems and methods for detecting encrypted bot command and control communication channels |
8220055, | Feb 06 2004 | CA, INC | Behavior blocking utilizing positive behavior system and method |
8225288, | Jan 29 2008 | INTUIT INC. | Model-based testing using branches, decisions, and options |
8225373, | Oct 11 2006 | Cisco Technology, Inc. | Protection of computer resources |
8233882, | Jun 26 2009 | VMWARE, INC | Providing security in mobile devices via a virtualization software layer |
8234640, | Oct 17 2006 | Red Hat, Inc | Compliance-based adaptations in managed virtual systems |
8234709, | Jun 20 2008 | NORTONLIFELOCK INC | Streaming malware definition updates |
8239944, | Mar 28 2008 | CA, INC | Reducing malware signature set size through server-side processing |
8266091, | Jul 21 2009 | NORTONLIFELOCK INC | Systems and methods for emulating the behavior of a user in a computer-human interaction environment |
8286251, | Dec 21 2006 | Telefonaktiebolaget L M Ericsson (publ) | Obfuscating computer program code |
8291499, | Apr 01 2004 | FireEye Security Holdings US LLC | Policy based capture with replay to virtual machine |
8307435, | Feb 18 2010 | CA, INC | Software object corruption detection |
8307443, | Sep 28 2007 | Microsoft Technology Licensing, LLC | Securing anti-virus software with virtualization |
8312545, | Apr 06 2006 | Pulse Secure, LLC | Non-signature malware detection system and method for mobile platforms |
8321936, | May 30 2007 | TRUSTWAVE HOLDINGS, INC | System and method for malicious software detection in multiple protocols |
8321941, | Apr 06 2006 | Pulse Secure, LLC | Malware modeling detection system and method for mobile platforms |
8332571, | Oct 07 2008 | QUEST SOFTWARE INC F K A DELL SOFTWARE INC ; Aventail LLC | Systems and methods for improving virtual machine performance |
8365286, | Jun 30 2006 | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | Method and system for classification of software using characteristics and combinations of such characteristics |
8370938, | Apr 25 2009 | DASIENT, INC | Mitigating malware |
8370939, | Jul 23 2010 | Kaspersky Lab, ZAO | Protection against malware on web resources |
8375444, | Apr 20 2006 | FireEye Security Holdings US LLC | Dynamic signature creation and enforcement |
8381299, | Feb 28 2006 | The Trustees of Columbia University in the City of New York | Systems, methods, and media for outputting a dataset based upon anomaly detection |
8402529, | May 30 2007 | TRUSTWAVE HOLDINGS, INC | Preventing propagation of malicious software during execution in a virtual machine |
8464340, | Sep 04 2007 | Samsung Electronics Co., Ltd. | System, apparatus and method of malware diagnosis mechanism based on immunization database |
8479174, | Apr 05 2006 | CARBONITE, LLC; OPEN TEXT INC | Method, computer program and computer for analyzing an executable computer file |
8479276, | Dec 29 2010 | EMC IP HOLDING COMPANY LLC | Malware detection using risk analysis based on file system and network activity |
8510827, | May 18 2006 | VMware, Inc. | Taint tracking mechanism for computer security |
8510828, | Dec 31 2007 | NORTONLIFELOCK INC | Enforcing the execution exception to prevent packers from evading the scanning of dynamically created code |
8510842, | Apr 13 2011 | FINJAN BLUE, INC | Pinpointing security vulnerabilities in computer software applications |
8516478, | Jun 12 2008 | Musarubra US LLC | Subsequent processing of scanning task utilizing subset of virtual machines predetermined to have scanner process and adjusting amount of subsequent VMs processing based on load
8516593, | Apr 01 2004 | FireEye Security Holdings US LLC | Systems and methods for computer worm defense |
8528086, | Apr 01 2004 | FireEye Security Holdings US LLC | System and method of detecting computer worms |
8533824, | Dec 04 2006 | GLASSWALL (IP) LIMITED | Resisting the spread of unwanted code and data |
8539582, | Apr 01 2004 | FireEye Security Holdings US LLC | Malware containment and security analysis on connection |
8549638, | Jun 14 2004 | FireEye Security Holdings US LLC | System and method of containing computer worms |
8561177, | Apr 01 2004 | FireEye Security Holdings US LLC | Systems and methods for detecting communication channels of bots |
8566946, | Apr 20 2006 | FireEye Security Holdings US LLC | Malware containment on connection |
8584094, | Jun 29 2007 | Microsoft Technology Licensing, LLC | Dynamically computing reputation scores for objects |
8584234, | Jul 07 2010 | CA, INC | Secure network cache content |
8584239, | Apr 01 2004 | FireEye Security Holdings US LLC | Virtual machine with dynamic data flow analysis |
8595834, | Feb 04 2008 | Samsung Electronics Co., Ltd; SAMSUNG ELECTRONICS CO , LTD | Detecting unauthorized use of computing devices based on behavioral patterns |
8627476, | Jul 05 2010 | CA, INC | Altering application behavior based on content provider reputation |
8635696, | Apr 01 2004 | FireEye Security Holdings US LLC | System and method of detecting time-delayed malicious traffic |
8713631, | Dec 25 2012 | AO Kaspersky Lab | System and method for detecting malicious code executed by virtual machine |
8713681, | Oct 27 2009 | GOOGLE LLC | System and method for detecting executable machine instructions in a data stream |
8782792, | Oct 27 2011 | CA, INC | Systems and methods for detecting malware on mobile platforms |
8789172, | Sep 18 2006 | The Trustees of Columbia University in the City of New York | Methods, media, and systems for detecting attack on a digital processing device |
8789178, | Aug 03 2009 | BARRACUDA NETWORKS, INC | Method for detecting malicious javascript |
8805947, | Feb 27 2008 | Virtuozzo International GmbH | Method and system for remote device access in virtual environment |
8806647, | Apr 27 2011 | TWITTER, INC | Behavioral scanning of mobile applications |
20010005889, | |||
20010047326, | |||
20020018903, | |||
20020038430, | |||
20020091819, | |||
20020144156, | |||
20020162015, | |||
20020166063, | |||
20020184528, | |||
20020188887, | |||
20020194490, | |||
20030074578, | |||
20030084318, | |||
20030115483, | |||
20030188190, | |||
20030200460, | |||
20030212902, | |||
20030229801, | |||
20030237000, | |||
20040003323, | |||
20040015712, | |||
20040019832, | |||
20040047356, | |||
20040083408, | |||
20040093513, | |||
20040111531, | |||
20040165588, | |||
20040236963, | |||
20040243349, | |||
20040249911, | |||
20040255161, | |||
20040268147, | |||
20050021740, | |||
20050033960, | |||
20050033989, | |||
20050050148, | |||
20050086523, | |||
20050091513, | |||
20050091533, | |||
20050108562, | |||
20050114663, | |||
20050125195, | |||
20050149726, | |||
20050157662, | |||
20050183143, | |||
20050201297, | |||
20050210533, | |||
20050238005, | |||
20050265331, | |||
20060010495, | |||
20060015715, | |||
20060021029, | |||
20060021054, | |||
20060031476, | |||
20060047665, | |||
20060070130, | |||
20060075496, | |||
20060095968, | |||
20060101516, | |||
20060101517, | |||
20060117385, | |||
20060123477, | |||
20060143709, | |||
20060150249, | |||
20060161983, | |||
20060161987, | |||
20060161989, | |||
20060164199, | |||
20060173992, | |||
20060179147, | |||
20060184632, | |||
20060191010, | |||
20060221956, | |||
20060236393, | |||
20060242709, | |||
20060251104, | |||
20060288417, | |||
20070006288, | |||
20070006313, | |||
20070011174, | |||
20070016951, | |||
20070033645, | |||
20070038943, | |||
20070064689, | |||
20070074169, | |||
20070094730, | |||
20070101435, | |||
20070143827, | |||
20070156895, | |||
20070157180, | |||
20070157306, | |||
20070171824, | |||
20070174915, | |||
20070192500, | |||
20070192858, | |||
20070198275, | |||
20070208822, | |||
20070220607, | |||
20070240218, | |||
20070240219, | |||
20070240220, | |||
20070240222, | |||
20070250930, | |||
20070271446, | |||
20080005782, | |||
20080040710, | |||
20080072326, | |||
20080077793, | |||
20080080518, | |||
20080098476, | |||
20080120722, | |||
20080134178, | |||
20080134334, | |||
20080141376, | |||
20080184373, | |||
20080189787, | |||
20080215742, | |||
20080222728, | |||
20080222729, | |||
20080263665, | |||
20080295172, | |||
20080301810, | |||
20080307524, | |||
20080320594, | |||
20090007100, | |||
20090013408, | |||
20090031423, | |||
20090036111, | |||
20090044024, | |||
20090044274, | |||
20090083369, | |||
20090083855, | |||
20090089879, | |||
20090094697, | |||
20090125976, | |||
20090126015, | |||
20090126016, | |||
20090133125, | |||
20090144823, | |||
20090158430, | |||
20090187992, | |||
20090193293, | |||
20090199296, | |||
20090228233, | |||
20090241187, | |||
20090241190, | |||
20090265692, | |||
20090271867, | |||
20090300761, | |||
20090328185, | |||
20090328221, | |||
20100017546, | |||
20100043073, | |||
20100054278, | |||
20100058474, | |||
20100064044, | |||
20100077481, | |||
20100083376, | |||
20100100718, | |||
20100115621, | |||
20100132038, | |||
20100154056, | |||
20100192223, | |||
20100235831, | |||
20100251104, | |||
20100281102, | |||
20100281541, | |||
20100281542, | |||
20100287260, | |||
20110004737, | |||
20110025504, | |||
20110041179, | |||
20110047594, | |||
20110047620, | |||
20110078794, | |||
20110093951, | |||
20110099633, | |||
20110113231, | |||
20110145918, | |||
20110145920, | |||
20110167493, | |||
20110167494, | |||
20110225655, | |||
20110247072, | |||
20110265182, | |||
20110307954, | |||
20110307955, | |||
20110307956, | |||
20110314546, | |||
20120079596, | |||
20120084859, | |||
20120117652, | |||
20120124426, | |||
20120174186, | |||
20120174218, | |||
20120198279, | |||
20120210423, | |||
20120222121, | |||
20120255015, | |||
20120266244, | |||
20120278886, | |||
20120297489, | |||
20120330801, | |||
20130014259, | |||
20130036472, | |||
20130047257, | |||
20130097706, | |||
20130117855, | |||
20130139264, | |||
20130160130, | |||
20130160131, | |||
20130174214, | |||
20130196649, | |||
20130227691, | |||
20130246370, | |||
20130263260, | |||
20130291109, | |||
20130298243, | |||
20140053260, | |||
20140053261, | |||
20140137180, | |||
GB2439806, | |||
GB2490431, | |||
WO206928, | |||
WO223805, | |||
WO2007022454, | |||
WO2007117636, | |||
WO2008041950, | |||
WO2008084259, | |||
WO2011084431, | |||
WO2012145066, |