Methods and systems for training a language processing model. The methods may involve receiving a first log record in a first format, wherein the first log record includes annotations describing items in the first log record, and then creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set. The second log record may then be used to train a language processing model so that a trained model can identify items in a third log record and the relationships therebetween.
1. A method for training a language processing model, the method comprising:
receiving at an interface a first log record in a first format, wherein the first log record includes annotations describing items in the first log record;
creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set;
providing the second log record to a processor executing instructions stored on a memory to cause the processor to perform:
training a language processing model using the second log record, resulting in a trained model configured to identify items and respective types of the items in previously unseen log records and relationships therebetween, wherein the conversion rule set is associated with the language processing model;
processing a third log record using the trained model to identify items and respective types of the items in the third log record and relationships therebetween; and
detecting network activity associated with the third log record based on processing the third log record using the trained model.
10. A system for training a language processing model, the system comprising:
an interface for receiving:
a first log record in a first format, wherein the first log record includes annotations describing items in the first log record;
a mapping module configured to execute a conversion rule set to create a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record; and
a processor executing instructions stored on a memory to:
receive the second log record;
train a language processing model using the second log record, resulting in a trained model configured to identify items and respective types of the items in previously unseen log records and relationships therebetween, wherein the conversion rule set is associated with the language processing model;
process a third log record using the trained model to identify items and respective types of the items in the third log record and relationships therebetween; and
detect network activities associated with the third log record based on processing the third log record using the trained model.
19. A non-transitory computer readable medium containing computer-executable instructions for performing a method for training a language processing model, the method comprising:
receiving at an interface a first log record in a first format, wherein the first log record includes annotations describing items in the first log record;
creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set; and
providing the second log record to a processor executing instructions stored on a memory to cause the processor to perform:
training a language processing model using the second log record, resulting in a trained model configured to identify items and respective types of the items in previously unseen log records and relationships therebetween, wherein the conversion rule set is associated with the language processing model;
processing a third log record using the trained model to identify items and respective types of the items in the third log record and relationships therebetween; and
detecting network activity associated with the third log record based on processing the third log record using the trained model.
2. The method of claim 1, wherein creating the second log record includes converting the first log record into a list of tuples required by the language processing model.
3. The method of claim 1, wherein the described items in the first log record include at least one of an IP address, a byte count, a port, and a user name.
4. The method of claim 1, wherein the language processing model is configured to output a probabilistic assessment regarding the items identified in the third log record and the relationships therebetween.
5. The method of claim 1, wherein the language processing model is implemented as a convolutional neural network.
6. The method of claim 1, wherein the language processing model is configured to semantically map the third log record into desired log values.
7. The method of claim 1, wherein the trained model executes at least two different natural language processing packages that are each defined by a different conversion rule set.
8. The method of claim 1, further comprising providing the identified items in the third log record and the relationships therebetween to a threat detection module for analysis to detect malicious activity.
9. The method of claim 1, further comprising providing the identified items in the third log record and the relationships therebetween to a log searching tool configured to conduct searches on log records.
11. The system of claim 10, wherein the second format includes a list of tuples required by the language processing model.
12. The system of claim 10, wherein the described items in the first log record include at least one of an IP address, a byte count, a port, and a user name.
13. The system of claim 10, wherein the language processing model is configured to output a probabilistic assessment regarding the items identified in the third log record and the relationships therebetween.
14. The system of claim 10, wherein the language processing model is implemented as a convolutional neural network.
15. The system of claim 10, wherein the language processing model is configured to semantically map the third log record into desired log values.
16. The system of claim 10, wherein the trained model is configured to execute at least two different natural language processing packages that are each defined by a different conversion rule set.
17. The system of claim 10, wherein the processor is further configured to provide the identified items in the third log record and the relationships therebetween to a threat detection module for analysis to detect malicious activity.
18. The system of claim 10, wherein the processor is further configured to provide the identified items in the third log record to a log searching tool configured to conduct searches on log records.
20. The non-transitory computer readable medium of claim 19, wherein the method further comprises:
providing the identified items and respective types of the items in the third log record and the relationships therebetween to a threat detection module for analysis to detect malicious activity.
Embodiments described herein generally relate to systems and methods for generating training data for machine learning models and, more specifically, to natural language processing (NLP) models.
Network devices generate log records as part of their routine operation. These log records may include data related to the devices' operation, such as timestamps of actions, interactions with other network devices, etc.
Log records generated by different sources may appear considerably different from one another. For example, they may be formatted differently or may include different types of data. Even log records generated by the same types of devices may be formatted differently due to the designing engineer's specifications or preferences.
Existing techniques for parsing or otherwise analyzing generated log records require non-trivial engineering efforts dedicated to each specific source or source type to address these differences in format and data content. Analyzing log records from various log sources typically requires building specifically tailored parsing solutions and often requires manual oversight. These existing techniques are time consuming and resource intensive.
A need exists, therefore, for more efficient systems and methods for parsing log records.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description section. This summary is not intended to identify or exclude key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one aspect, embodiments relate to a method for training a language processing model. The method includes receiving at an interface a first log record in a first format, wherein the first log record includes annotations describing items in the first log record, creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set, and providing the second log record to a processor executing instructions stored on a memory to provide a language processing model, resulting in a trained model configured to identify items in a third log record and relationships therebetween, wherein the conversion rule set is associated with the language processing model.
In some embodiments, creating the second log record includes converting the first log record into a list of tuples required by the language processing model.
In some embodiments, the described items in the first log record include at least one of an IP address, a byte count, a port, and a user name.
In some embodiments, the language processing model is configured to output a probabilistic assessment regarding the items identified in the third log record and the relationships therebetween.
In some embodiments, the language processing model is implemented as a convolutional neural network.
In some embodiments, the language processing model is configured to semantically map the third log record into desired log values.
In some embodiments, the trained model executes at least two different natural language processing packages that are each defined by a different conversion rule set.
In some embodiments, the method further includes providing the identified items in the previously unseen log records and the relationships therebetween to a threat detection module for analysis to detect malicious activity.
In some embodiments, the method further includes providing the identified items in the third log record and the relationships therebetween to a log searching tool configured to conduct searches on log records.
According to another aspect, embodiments relate to a system for training a language processing model. The system includes an interface for receiving a first log record in a first format, wherein the first log record includes annotations describing items in the first log record; a mapping module configured to execute a conversion rule set to create a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record; and a processor executing instructions stored on a memory to receive the second log record and provide a language processing model, resulting in a trained model configured to identify items in a third log record and relationships therebetween, wherein the conversion rule set is associated with the language processing model.
In some embodiments, the second format includes a list of tuples required by the language processing model.
In some embodiments, the described items in the first log record include at least one of an IP address, a byte count, a port, and a user name.
In some embodiments, the language processing model is configured to output a probabilistic assessment regarding the items identified in the third log record and the relationships therebetween.
In some embodiments, the language processing model is implemented as a convolutional neural network.
In some embodiments, the language processing model is configured to semantically map the third log record into desired log values.
In some embodiments, the trained model is configured to execute at least two different natural language processing packages that are each defined by a different conversion rule set.
In some embodiments, the processor is further configured to provide the identified items in the third log record and the relationships therebetween to a threat detection module for analysis to detect malicious activity.
In some embodiments, the processor is further configured to provide the identified items in the third log record to a log searching tool configured to conduct searches on log records.
According to yet another aspect, embodiments relate to a non-transitory computer readable medium containing computer-executable instructions for performing a method for training a language processing model. The medium includes computer-executable instructions for receiving at an interface a first log record in a first format, wherein the first log record includes annotations describing items in the first log record, computer-executable instructions for creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set, and computer-executable instructions for providing the second log record to a processor executing instructions stored on a memory to provide a language processing model, resulting in a trained model configured to identify items in a third log record and relationships therebetween, wherein the conversion rule set is associated with the language processing model.
Non-limiting and non-exhaustive embodiments of this disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Various embodiments are described more fully below with reference to the accompanying drawings, which form a part hereof, and which show specific exemplary embodiments. However, the concepts of the present disclosure may be implemented in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided as part of a thorough and complete disclosure, to fully convey the scope of the concepts, techniques and implementations of the present disclosure to those skilled in the art. Embodiments may be practiced as methods, systems or devices. Accordingly, embodiments may take the form of a hardware implementation, an entirely software implementation or an implementation combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one example implementation or technique in accordance with the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. The appearances of the phrase “in some embodiments” in various places in the specification are not necessarily all referring to the same embodiments.
Some portions of the description that follow are presented in terms of symbolic representations of operations on non-transient signals stored within a computer memory. These descriptions and representations are used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. Such operations typically require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.
However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Portions of the present disclosure include processes and instructions that may be embodied in software, firmware or hardware, and when embodied in software, may be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform one or more method steps. The structure for a variety of these systems is discussed in the description below. In addition, any particular programming language that is sufficient for achieving the techniques and implementations of the present disclosure may be used. A variety of programming languages may be used to implement the present disclosure as discussed herein.
In addition, the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present disclosure is intended to be illustrative, and not limiting, of the scope of the concepts discussed herein.
Most, if not all, computing devices generate log records as part of their routine operation. For example, a firewall generates records logging which devices connect or otherwise interact therewith. In this case, the generated log records may include data such as the IP address of a device that interacted with the firewall, as well as a timestamp of said interaction.
Different types of devices may of course generate different types of log records. That is, the logged data associated with devices may vary across different types of devices that perform different functions.
Web applications such as social media platforms similarly generate an extraordinary number of log records. For social media platforms, for example, this data may relate to profiles of users, dates of activity, a user's friends or followers, or the like.
Accordingly, the log records from these different devices, services, or platforms may look considerably different from each other.
Even log records associated with the same types of devices may look considerably different from each other if differently designed or configured. For example, the engineers configuring different firewalls may have their own preferences or specifications regarding what data is collected and how that data is logged and presented.
The number of log record sources will likely continue to increase in the future. This increase will likely be accompanied by an increase in the number of different types or formats of log records associated with the sources.
Existing log parsers are ill-suited to analyze log records with different formats. For example, existing log parsers are generally only configured to analyze log records associated with a particular source or log records in a particular format. Accordingly, they may be unable to analyze previously unreviewed log records to adequately identify items therein and the relationships between those items.
Applicant has applied NLP techniques to analyze log records. NLP generally requires a language model trained on a corpus of speech or text that is sufficiently similar to a target document to be ingested and parsed.
NLP has traditionally been applied to human-generated language. These applications rely on language-specific models trained on manually annotated and curated data. These datasets may be developed over years or even decades through the involvement of countless individuals.
For example, the human language sentence “the quick brown fox jumps over the lazy dog” may be annotated to indicate that “fox” is the subject, “quick” is an adjective that modifies the subject, “dog” is an object, etc.
With respect to log records, an annotation may involve identifying items such as IP addresses, ports, bytes, usernames, actions, timestamps, durations of activities, or the like. The type of items in a log record may of course vary and may depend on the source associated with the log record.
Many existing log analysis tools have mechanisms carefully crafted to ingest and understand logs, but only logs from particular sources and in particular formats. Although these methods and techniques are precise, they are by design only able to analyze particular log records. Accordingly, these existing methods and techniques are not well suited to handle log records from previously unseen sources.
The present application discloses novel systems and methods for training language processing models to parse log records. The features of the various embodiments herein apply a natural language processing approach that utilizes machine learning techniques to dynamically develop statistical models.
The systems and methods described herein convert existing normalized records from parsing products into a format suitable for the model training process. This enables the systems and methods to leverage existing data that has already undergone some degree of human curation to quickly develop a model that is tailored to a known body of log record data. This is in contrast to existing techniques, which generally require the manual annotation of data for model construction.
In accordance with various embodiments, a first annotated log record in a first format may be provided to a mapping module to execute a conversion rule set to convert the first log record into a second log record in a second format. The second log record may include all or some of the data from the first log record. The second log record may be provided to a processor executing instructions stored on a memory to provide a language processing model, resulting in a trained model configured to identify items in a third log record and relationships therebetween.
FIG. 1 illustrates a system 100 for training a language processing model in accordance with one embodiment. The user device 102 may be any hardware device capable of executing the user interface 104. The user device 102 may be configured as a laptop, PC, tablet, mobile device, or the like. The exact configuration of the user device 102 may vary as long as it can execute and present the user interface 104 to the user 106. The user interface 104 may allow the user 106 to supply parameters regarding which log records to analyze and other types of parameters.
The user device 102 may be in operable communication with one or more processors 108. The processors 108 may be any hardware device capable of executing instructions stored on memory 110 to accomplish the objectives of the various embodiments described herein. The processor(s) 108 may be implemented as software executing on a microprocessor, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another similar device whether available now or invented hereafter.
In some embodiments, such as those relying on one or more ASICs, the functionality described as being provided in part via software may instead be configured into the design of the ASICs and, as such, the associated software may be omitted. The processor(s) 108 may be configured as part of the user device 102 on which the user interface 104 executes, such as a laptop, or may be located on a different computing device, perhaps at some remote location.
The processor 108 may execute instructions stored on memory 110 to provide various modules to accomplish the objectives of the various embodiments described herein. Specifically, the processor 108 may execute or otherwise include a processor interface 112, an NLP package module 114, the resultant trained NLP model 116, and a summary generation module 118.
The memory 110 may be an L1, L2, or L3 cache or a RAM memory configuration. The memory 110 may include non-volatile memory such as flash memory, EPROM, EEPROM, ROM, and PROM, or volatile memory such as static or dynamic RAM. The exact configuration/type of memory 110 may of course vary as long as instructions for training a language processing model can be executed by the processor 108 to accomplish the objectives of various embodiments described herein.
The processor interface 112 may be in communication with a mapping module 120 configured to execute one or more conversion rule sets 122 to map an annotated log record into a target format. For example, the mapping module 120 may receive a first log record in a first format. The first log record may be received at an interface 124 that is on or otherwise able to access one or more network(s) 126. The first log record may be received from one or more databases 128, and may be associated with a first source 130 or a second source 132 on the network(s) 126.
The network(s) 126 may link the various devices with various types of network connections. The network(s) 126 may be comprised of, or may interface to, any one or more of the Internet, an intranet, a Personal Area Network (PAN), a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1, or E3 line, a Digital Data Service (DDS) connection, a Digital Subscriber Line (DSL) connection, an Ethernet connection, an Integrated Services Digital Network (ISDN) line, a dial-up port such as a V.90, a V.34, or a V.34bis analog modem connection, a cable modem, an Asynchronous Transfer Mode (ATM) connection, a Fiber Distributed Data Interface (FDDI) connection, a Copper Distributed Data Interface (CDDI) connection, or an optical/DWDM network.
The network(s) 126 may also comprise, include, or interface to any one or more of a Wireless Application Protocol (WAP) link, a Wi-Fi link, a microwave link, a General Packet Radio Service (GPRS) link, a Global System for Mobile Communication (GSM) link, a Code Division Multiple Access (CDMA) link, or a Time Division Multiple Access (TDMA) link such as a cellular phone channel, a Global Positioning System (GPS) link, a cellular digital packet data (CDPD) link, a Research in Motion, Limited (RIM) duplex paging type device, a Bluetooth radio link, or an IEEE 802.11-based link.
The first log record may have been previously annotated by a human user to include labels and values regarding items therein. For example, the first log record may be accompanied by key-value pairs indicating values such as a source IP address, a destination IP address, byte counts, usernames, device labels, timestamps of actions, dates of actions, duration values of actions, etc. The type of values annotated may vary and may of course depend on the source associated with the first log record.
FIG. 2 illustrates an exemplary log record 200 in accordance with one embodiment. This log record 200 may have been previously annotated with labels and values. Annotation list 202, for example, provides labels identifying various items in the log record 200. These labels identify “device1” as corresponding to a device label, “jsmith” as corresponding to a username, “establish TCP connection” as corresponding to an action performed by jsmith using device1, “1.2.3.4/56 to 7.8.9.0/80” as corresponding to ports, “23:59:01” as corresponding to a timestamp, and “00:00:02” as corresponding to a duration value.
Log record 200 is only exemplary and other types of log records may be considered in conjunction with the systems and methods described herein. The format of log records and types of data present in the log records may vary and may depend on the type of device or source associated with said log record.
Referring back to FIG. 1, the mapping module 120 may execute one or more of the conversion rule sets 122 to convert the received annotated log record into the second format.
The conversion rule set 122 that is executed may depend on the NLP package module 114 used to generate the trained NLP model 116. Accordingly, each conversion rule set 122 may perform different transformations or steps to convert the received annotated log records into a form suitable for the particular NLP package module 114.
A particular NLP package module 114 may require the received log record to be in a certain format or require certain information about the received log record. In some embodiments, an NLP package module 114 may require positional definitions of where particular types of values start and stop.
For example, the converted dataset 300 shown in FIG. 3 presents the data of the annotated log record 200 in such a format, with positional definitions indicating where each value of interest starts and stops.
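By way of illustration only, the following minimal Python sketch shows one way a conversion of this kind might be implemented, producing positional tuples of the sort described above. The key-value annotation dictionary, the convert_record helper, and the output layout are assumptions made for this example and are not prescribed by the embodiments described herein.

```python
# Illustrative sketch: convert a key-value annotated log record into a list
# of tuples recording where each labeled value starts and stops. The
# annotation format and output layout are assumptions for illustration.

def convert_record(raw_log: str, annotations: dict) -> tuple:
    """Return (raw_log, [(start, stop, label), ...]) for each annotated value."""
    spans = []
    for label, value in annotations.items():
        start = raw_log.find(value)       # locate the annotated value verbatim
        if start == -1:
            continue                      # skip values not present in the text
        spans.append((start, start + len(value), label))
    return raw_log, spans

# Example annotated record, loosely modeled on log record 200
raw = "device1 jsmith establish TCP connection 1.2.3.4/56 to 7.8.9.0/80 23:59:01 00:00:02"
notes = {
    "DEVICE": "device1",
    "USERNAME": "jsmith",
    "ACTION": "establish TCP connection",
    "TIMESTAMP": "23:59:01",
    "DURATION": "00:00:02",
}

converted = convert_record(raw, notes)
# -> ("device1 jsmith ...", [(0, 7, "DEVICE"), (8, 14, "USERNAME"), ...])
```

A list of such tuples could then be supplied to whichever NLP package module is in use, with a different conversion rule set producing a different layout for each package.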
The converted data such as the converted dataset 300 may be communicated to the processor 108. Specifically, the processor interface 112 may receive the converted dataset 300 and provide the dataset 300 to the NLP package module 114.
The NLP package module 114 may be tasked with executing an NLP model training process. This training process in effect allows the resulting trained NLP model 116 to internalize patterns and idiosyncrasies that relate meaningful labels to specific values within raw data logs.
In some embodiments, the NLP package module 114 may execute or otherwise be based on a convolutional neural network (CNN) or other type of machine learning framework useful in text sequencing or pattern identification.
A CNN may be trained on the items identified in log records and the resultant converted datasets such as those shown in FIG. 3. For example, FIG. 4 illustrates a CNN 400 receiving one or more converted datasets 402 for training in accordance with one embodiment.
During training, the CNN 400 may receive the converted dataset(s) 402 and convert them into a matrix to represent each item of interest and accompanying data as numbers. The CNN 400 may also analyze embedded items to learn or otherwise distinguish various classes of log records and data therein.
A CNN is only one type of framework that may be used for training an NLP model. Any other type of machine learning or artificial intelligence framework may be used as long as it can accomplish the features of the embodiments described herein.
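Purely as a non-limiting illustration, the following minimal PyTorch sketch shows one way a convolutional network could assign a label to each token of a converted log record. The vocabulary size, embedding width, label count, and layer dimensions are illustrative assumptions and do not reflect any particular NLP package module described herein.

```python
# Illustrative sketch of a CNN-based token labeler. All sizes are
# assumptions for the example; any comparable text-sequencing framework
# could be substituted.
import torch
import torch.nn as nn

class LogTokenCNN(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, num_labels=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the token sequence; padding keeps length fixed
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
        self.classify = nn.Linear(128, num_labels)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.embed(token_ids)                     # (batch, seq_len, embed)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, 128, seq_len)
        return self.classify(x.transpose(1, 2))       # (batch, seq_len, labels)

model = LogTokenCNN()
dummy_batch = torch.randint(0, 10_000, (4, 20))       # 4 records, 20 tokens each
logits = model(dummy_batch)                           # per-token label scores
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8),
                             torch.randint(0, 8, (4 * 20,)))
loss.backward()                                       # one illustrative training step
```

In practice, the token ids and per-token target labels would come from converted datasets such as the converted dataset 300, rather than the random stand-ins used here.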
Once trained, the NLP model 116 may ingest previously unencountered raw log records. The trained NLP model 116 may then, in response, return key-value pairs that include meaningful labels and values of interest. At this point, the NLP model 116 has been trained on sufficiently similar, though not necessarily identical, log record types so as to be capable of parsing previously unencountered records (or at least make an attempt at doing so).
Upon receiving a previously unencountered third log record 500, a character standardization module 502 and a tokenization module 504 may first perform any pre-processing steps. For example, the character standardization module 502 may convert all characters into lower case, and the tokenization module 504 may detect components such as slashes or other types of punctuation to break the third log record 500 into discrete components.
An item identification module 506 may recognize that the first portion of the third log record 500 corresponds to the particular device or source of the log record. Similarly, the item identification module 506 may recognize that a series of numbers separated by colons represents either a timestamp of an action or the duration of an action.
As another example, the item identification module 506 may understand that a series of integers separated by periods “.” and followed by a slash refers to an IP address and port number. For example, the item identification module 506 may recognize “99” in 5.5.7.8/99 and “80” in 2.1.0.3/80 as port numbers.
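As a non-limiting illustration, the following Python sketch mimics the pre-processing and pattern recognition described above using simple regular expressions. The specific patterns, function names, and output structure are assumptions for this example; a trained NLP model would learn such associations statistically rather than from hand-written rules.

```python
# Illustrative sketch of pre-processing and pattern recognition:
# lower-casing, punctuation-aware tokenization, and regex-based recognition
# of IP/port pairs and HH:MM:SS values. Patterns are assumptions.
import re

IP_PORT = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})/(\d{1,5})")
HHMMSS = re.compile(r"\b\d{2}:\d{2}:\d{2}\b")

def preprocess(record: str) -> list:
    record = record.lower()                 # character standardization
    return re.split(r"[\s/]+", record)      # tokenize on whitespace and slashes

def identify_items(record: str) -> dict:
    items = {"ip_port_pairs": [], "clock_values": []}
    for ip, port in IP_PORT.findall(record):
        items["ip_port_pairs"].append({"ip": ip, "port": port})
    # an HH:MM:SS value may be either a timestamp or a duration; a trained
    # model would disambiguate from surrounding context
    items["clock_values"] = HHMMSS.findall(record)
    return items

third_record = "device2 Jdoe establish TCP connection 5.5.7.8/99 to 2.1.0.3/80 08:15:30 00:00:05"
print(preprocess(third_record)[:4])   # ['device2', 'jdoe', 'establish', 'tcp']
print(identify_items(third_record))   # ports '99' and '80' recognized
```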
As seen in FIG. 5, a relationship identification module 508 may then identify relationships between the items identified in the third log record 500.
For example, the relationship identification module 508 may recognize that the user's username immediately follows the device label. That is, the user represented by the username “Jdoe” is the user of the device represented by the device label “device2.”
Referring back to FIG. 1, the training data supplied to the NLP package module 114 can be expanded over time to account for new sources of log records. Accordingly, the underlying trained NLP model 116 may continue to evolve to handle a greater variety of log record types.
FIG. 6 depicts a flowchart of a method 600 for training a language processing model in accordance with one embodiment. Step 602 involves receiving at an interface a first log record in a first format, wherein the first log record includes annotations describing items in the first log record. This log record may be associated with any type of source or device on a network such as, but not limited to, firewalls, printers, social media platforms, routers, PCs, laptops, tablets, mobile devices, switches, or the like.
As discussed previously, the first log record may have been previously annotated to include details describing items therein. These details may include data regarding actions performed by the associated source as well as interactions with other types of devices or sources.
Step 604 involves creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set. Step 604 may be performed by a mapping module such as the mapping module 120 of FIG. 1.
Step 606 involves providing the second log record to a processor executing instructions stored on a memory to provide a language processing model, resulting in a trained model configured to identify items in a third log record and relationships therebetween. The third log record may refer to a log record that has not been previously encountered.
The trained NLP model 116 may ingest and understand previously unencountered raw log records and identify items of interest therein. In addition to identifying items, the trained NLP model 116 may return key-value pairs that include meaningful labels and values of interest. For example, the trained NLP model may be configured to semantically map the third log record into desired log values suitable for analysis by other services or tools.
The trained NLP model may be unable to identify items and relationships with absolute certainty. Accordingly, the summary generation module 118 of FIG. 1 may output a probabilistic assessment regarding the items identified in the third log record and the relationships therebetween.
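As a non-limiting illustration, the following Python sketch shows one way per-token model scores could be turned into the kind of probabilistic assessment described above, pairing each identified item with a confidence value. The label set and the summarize helper are assumptions for this example.

```python
# Illustrative sketch of a probabilistic assessment: convert per-token label
# scores (such as those produced by the hypothetical LogTokenCNN above) into
# probabilities and keep the confidence alongside each identified item.
import torch

LABELS = ["DEVICE", "USERNAME", "ACTION", "IP", "PORT",
          "TIMESTAMP", "DURATION", "OTHER"]

def summarize(tokens, logits):
    """Pair each token with its most likely label and that label's probability."""
    probs = torch.softmax(logits, dim=-1)           # per-token distributions
    summary = []
    for token, dist in zip(tokens, probs):
        confidence, idx = dist.max(dim=-1)
        summary.append({"token": token,
                        "label": LABELS[idx],
                        "confidence": round(confidence.item(), 3)})
    return summary

tokens = ["device2", "jdoe", "08:15:30"]
fake_logits = torch.randn(len(tokens), len(LABELS))  # stand-in model output
for entry in summarize(tokens, fake_logits):
    print(entry)  # e.g. {'token': 'jdoe', 'label': 'USERNAME', 'confidence': 0.41}
```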
Step 608 involves providing the identified items in the previously unseen log records and the relationships therebetween to a threat detection module for analysis to detect malicious activity associated with the third log record. Once the items in a log record are identified and their meanings understood, they can be more meaningfully ingested by other products such as threat detection tools or other log searching tools or services.
In some embodiments, a processor such as the processor 108 of FIG. 1 may detect potentially malicious network activity associated with the third log record and issue an alert regarding the detected activity.
A user may investigate the data further and perform any appropriate mitigation steps. Additionally or alternatively, these mitigation steps may be implemented autonomously.
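As a non-limiting illustration, the following Python sketch shows how the key-value pairs returned by a trained model might be handed to a simple threat detection check. The watched-port set and the alerting rule are assumptions for this example and do not represent any particular threat detection module.

```python
# Illustrative sketch of handing parsed key-value pairs to a threat
# detection check. The port watchlist and alerting rule are assumptions.
SUSPICIOUS_PORTS = {"23", "445", "3389"}   # hypothetical watchlist

def detect(parsed_record: dict) -> list:
    """Return human-readable alerts for any suspicious parsed values."""
    alerts = []
    for pair in parsed_record.get("ip_port_pairs", []):
        if pair["port"] in SUSPICIOUS_PORTS:
            alerts.append(f"connection involving watched port {pair['port']} "
                          f"at {pair['ip']} (user: {parsed_record.get('username')})")
    return alerts

parsed = {"username": "jdoe",
          "ip_port_pairs": [{"ip": "5.5.7.8", "port": "3389"},
                            {"ip": "2.1.0.3", "port": "80"}]}
for alert in detect(parsed):
    print("ALERT:", alert)   # flags the 3389 connection for investigation
```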
The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and that various steps may be added, omitted, or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the present disclosure. The functions/acts noted in the blocks may occur out of the order shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Additionally or alternatively, not all of the blocks shown in any flowchart need to be performed and/or executed. For example, if a given flowchart has five blocks containing functions/acts, it may be the case that only three of the five blocks are performed and/or executed. In this example, any three of the five blocks may be performed and/or executed.
A statement that a value exceeds (or is more than) a first threshold value is equivalent to a statement that the value meets or exceeds a second threshold value that is slightly greater than the first threshold value, e.g., the second threshold value being one value higher than the first threshold value in the resolution of a relevant system. A statement that a value is less than (or is within) a first threshold value is equivalent to a statement that the value is less than or equal to a second threshold value that is slightly lower than the first threshold value, e.g., the second threshold value being one value lower than the first threshold value in the resolution of the relevant system.
Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of various implementations or techniques of the present disclosure. Also, a number of steps may be undertaken before, during, or after the above elements are considered.
Having been provided with the description and illustration of the present application, one skilled in the art may envision variations, modifications, and alternate embodiments falling within the general inventive concept discussed in this application that do not depart from the scope of the following claims.