Systems, methods, and apparatuses for analyzing a string of terms (e.g., a search query, text of an email, and the like) are provided. In some examples, a determination is made as to whether one or more terms in the string match a keyword. If so, various parts of speech of one or more terms in the string of terms may be determined. In some examples, a category of risk of the terms for which the part of speech is identified may also be determined. A risk rating may then be determined for the string of terms based on the relationship between the terms (e.g., the parts of speech) and the category or categories identified. In some examples, one or more additional actions may be implemented based on the risk rating.
1. An apparatus, comprising:
at least one processor; and
a memory storing computer-readable instructions that, when executed by the at least one processor, cause the apparatus to:
parse a string of terms into a plurality of individual terms;
determine whether the plurality of individual terms includes at least one term matching a keyword of a keyword listing;
responsive to determining that the plurality of individual terms includes at least one term matching the keyword, determine a part of speech of the at least one term matching the keyword;
responsive to determining that the part of speech of the at least one term is one of: a noun and a verb, identify the other of the noun and the verb in the plurality of individual terms;
identify a category of risk associated with the identified noun and a category of risk associated with the identified verb;
determine whether the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are a same category of risk;
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are the same category, determine a first risk rating of the string of terms including the identified noun and the identified verb, the first risk rating being based on the identified noun, the identified verb and the same category; and
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are different categories, determine a second risk rating of the string of terms including the identified noun and identified verb, the second risk rating being based on the identified noun, the identified verb, the identified category of risk associated with the noun and the identified category of risk associated with the verb, the second risk rating being different from the first risk rating.
15. One or more non-transitory computer-readable media having computer-executable instructions stored thereon that, when executed, cause at least one computing device to:
parse a string of terms into a plurality of individual terms;
determine whether the plurality of individual terms includes at least one term matching a keyword of a keyword listing;
responsive to determining that the plurality of individual terms includes at least one term matching the keyword, determine a part of speech of the at least one term matching the keyword;
responsive to determining that the part of speech of the at least one term is one of: a noun and a verb, identify the other of the noun and the verb in the plurality of individual terms;
identify a category of risk associated with the identified noun and a category of risk associated with the identified verb;
determine whether the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are a same category of risk;
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are the same category, determine a first risk rating of the string of terms including the identified noun and the identified verb, the first risk rating being based on the identified noun, the identified verb and the same category; and
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are different categories, determine a second risk rating of the string of terms including the identified noun and the identified verb, the second risk rating being based on the identified noun, the identified verb, the identified category of risk associated with the noun and the identified category of risk associated with the verb, the second risk rating being different from the first risk rating.
8. A method, comprising:
parsing, by a risk identification system including at least one computing device having at least one processor and at least one memory coupled to the processor, a string of terms into a plurality of individual terms;
determining, by the at least one computing device of the risk identification system, whether the plurality of individual terms includes at least one term matching a keyword of a keyword listing;
responsive to determining that the plurality of individual terms includes at least one term matching the keyword, determining a part of speech of the at least one term matching the keyword;
responsive to determining that the part of speech of the at least one term is one of: a noun and a verb, identifying the other of the noun and the verb in the plurality of individual terms;
identifying, by the at least one computing device of the risk identification system, a category of risk associated with the identified noun and a category of risk associated with the identified verb;
determining, by the at least one computing device of the risk identification system, whether the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are a same category of risk;
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are the same category, determining, by the risk identification system, a first risk rating of the string of terms including the identified noun and the identified verb, the first risk rating being based on the identified noun, the identified verb and the same category; and
responsive to determining that the identified category of risk associated with the identified noun and the identified category of risk associated with the identified verb are different categories, determining, by the risk identification system, a second risk rating of the string of terms including the identified noun and the identified verb, the second risk rating being based on the identified noun, the identified verb, the identified category of risk associated with the noun and the identified category of risk associated with the verb, the second risk rating being different from the first risk rating.
2. The apparatus of claim 1, wherein the instructions, when executed, further cause the apparatus to:
identify an adjective in the string of terms and determine a third risk rating of the string of terms based on the identified noun, identified verb, identified category of risk associated with the noun, identified category of risk associated with the verb and the identified adjective, the third risk rating being different from at least one of the first risk rating and the second risk rating.
3. The apparatus of
4. The apparatus of
5. The apparatus of
6. The apparatus of
7. The apparatus of
9. The method of claim 8, further comprising:
identifying, by the at least one computing device of the risk identification system, an adjective in the string of terms; and
determining, by the risk identification system, a third risk rating of the received string of terms based on the identified noun, identified verb, identified category of risk associated with the noun, identified category of risk associated with the verb and the identified adjective, the third risk rating being different from at least one of the first risk rating and the second risk rating.
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
16. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause the at least one computing device to:
identify an adjective in the string of terms; and
determine a third risk rating of the string of terms based on the identified noun, identified verb, identified category of risk associated with the noun, identified category of risk associated with the verb and the identified adjective, the third risk rating being different from at least one of the first risk rating and the second risk rating.
17. The one or more non-transitory computer-readable media of
18. The one or more non-transitory computer-readable media of
19. The one or more non-transitory computer-readable media of
20. The one or more non-transitory computer-readable media of
This application is a continuation of and claims priority to U.S. application Ser. No. 14/014,731, entitled “Risk Identification,” and filed Aug. 30, 2013, which is incorporated herein by reference in its entirety.
One important aspect of securing information is understanding how employees are using employer-provided computing devices. Accordingly, companies often monitor activity on these employer-provided devices. For instance, employers may monitor and evaluate search queries, such as those input into an Internet search engine. In another example, employers may monitor the email content of employees. Conventional systems for monitoring these activities analyze the content to determine whether any of the terms used match keywords identified as indicating potential issues. However, keyword matching alone results in many false positives, because a word matching a keyword may be used as different parts of speech that indicate different levels of risk. These conventional systems do not account for the use of terms as different parts of speech and, thus, can provide inaccurate results.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosure. The summary is not an extensive overview of the disclosure. It is neither intended to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the description below.
Aspects of the disclosure relate to methods, computer-readable media, and apparatuses for analyzing a string of terms (e.g., a search query, text of an email, and the like) and determining whether one of the terms matches a keyword. If so, various parts of speech of one or more terms in the string of terms may be determined. In some examples, a category of risk of the terms for which the part of speech is identified may also be determined. A risk rating may then be determined for the string of terms based on the relationship between the terms (e.g., the parts of speech) and the category or categories identified. In some examples, one or more additional actions may be implemented based on the risk rating.
The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, various embodiments in which the claimed subject matter may be practiced. It is to be understood that other embodiments may be utilized, and that structural and functional modifications may be made, without departing from the scope of the present claimed subject matter.
Various connections between elements are discussed in the following description. These connections are general and, unless specified otherwise, may be direct or indirect, wired or wireless; the specification is not intended to be limiting in this respect.
Various aspects of the arrangements described herein relate to identifying risk associated with a string of terms, such as a search query or text of an electronic communication. The string of terms may be input by a user, who may be an employee of an entity, and the computing device into which the user inputs the string of terms may be an entity-provided computing device for use during the course of business. The string of terms may be analyzed to determine whether one or more predefined keywords are used. If so, a part of speech of one or more terms within the string of terms may be determined in order to identify the risk associated with the string. For instance, as will be discussed more fully below, some terms may be used as different parts of speech (e.g., noun or verb), and the risk rating associated with a term may differ based on how it is used in the string of terms (e.g., the part of speech of the term in the string being analyzed). Once a risk rating has been determined, one or more additional actions or additional processing of the string may be implemented in order to further understand and/or mitigate the risk. These and various other arrangements will be discussed more fully below.
Computing system environment 100 may include computing device 101 having processor 103 for controlling overall operation of computing device 101 and its associated components, including random-access memory (RAM) 105, read-only memory (ROM) 107, communications module 109, and memory 115. Computing device 101 may include a variety of computer readable media. Computer readable media may be any available media that may be accessed by computing device 101, may be non-transitory, and may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, object code, data structures, program modules, or other data. Examples of computer readable media may include random access memory (RAM), read only memory (ROM), electronically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by computing device 101.
Although not required, various aspects described herein may be embodied as a method, a data processing system, or as a computer-readable medium storing computer-executable instructions. For example, a computer-readable medium storing instructions to cause a processor to perform steps of a method in accordance with aspects of the disclosed arrangements is contemplated. For example, aspects of the method steps disclosed herein may be executed on a processor on computing device 101. Such a processor may execute computer-executable instructions stored on a computer-readable medium.
Software may be stored within memory 115 and/or storage to provide instructions to processor 103 for enabling computing device 101 to perform various functions. For example, memory 115 may store software used by computing device 101, such as operating system 117, application programs 119, and associated database 121. Also, some or all of the computer executable instructions for computing device 101 may be embodied in hardware or firmware. Although not shown, RAM 105 may include one or more applications representing the application data stored in RAM 105 while computing device 101 is on and corresponding software applications (e.g., software tasks), are running on computing device 101.
Communications module 109 may include a microphone, keypad, touch screen, and/or stylus through which a user of computing device 101 may provide input, and may also include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual and/or graphical output. Computing system environment 100 may also include optical scanners (not shown). Exemplary usages include scanning and converting paper documents, e.g., correspondence, receipts, and the like, to digital files.
Computing device 101 may operate in a networked environment supporting connections to one or more remote computing devices, such as computing devices 141 and 151. Computing devices 141 and 151 may be personal computing devices or servers that include any or all of the elements described above relative to computing device 101. Computing devices 141 or 151 may be a mobile device (e.g., smart phone) communicating over a wireless carrier channel.
The network connections depicted in the accompanying figures are illustrative, and other means of establishing a communications link between the computing devices may be used.
The disclosure is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the disclosed embodiments include, but are not limited to, personal computers (PCs), server computers, hand-held or laptop devices, smart phones, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Computer network 203 may be any suitable computer network including the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), or any combination of the same. Communications links 202 and 205 may be any communications links suitable for communicating between workstations 201 and server 204 (e.g., a network control center), such as network links, dial-up links, wireless links, hard-wired links, as well as network types developed in the future, and the like. A virtual machine may be a software implementation of a computer that executes computer programs as if it were a standalone physical machine.
The risk identification system 300 may include one or more modules that may include hardware and/or software configured to perform various functions within the system 300. For instance, the system may include parsing module 304. The parsing module 304 may receive data including, for example, strings of terms, such as search queries, text from emails or other electronic communications, and the like, and may parse the strings into individual words. In some examples, the parsing module 304 may receive data (e.g., search queries, text from communications, and the like) from one or more computing devices 316a-316e. The computing devices 316 may be provided to a user (e.g., an employee) by the entity 302 (e.g., an employer of the employee) for use during the course of business and may include a smartphone 316a, personal digital assistant (PDA) 316b, tablet computer 316c, cell phone 316d, and computer terminal 316e, among other types of computing devices. As with many businesses, any use of the employer- or entity-provided computing device 316 may be monitored by the employer or entity. Accordingly, search queries, Internet usage, websites visited, content of email or other messages, and the like, may be monitored by the employer. In the arrangements described herein, the entity or employer may monitor these communications to identify a risk associated with any of the communications or activities of the user (e.g., an indication that a user intends to misuse company resources or sensitive information, an indication that a user is looking to leave his/her position, an indication that the user is attempting to circumvent one or more security measures implemented by the entity, and the like), as will be discussed more fully below.
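By way of a non-limiting illustration, the parsing step might be sketched in Python as follows; the function name and tokenization rule are hypothetical, as the disclosure does not prescribe a particular implementation:

```python
import re

def parse_terms(string_of_terms: str) -> list[str]:
    """Split a string of terms (e.g., a search query or email text)
    into individual lowercase terms, dropping punctuation."""
    return re.findall(r"[a-z']+", string_of_terms.lower())

print(parse_terms("Bypass website block"))  # ['bypass', 'website', 'block']
```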
The parsing module 304 may be connected to or in communication with a keyword module 306. The keyword module 306 may include data storage identifying one or more keywords. The keywords may be indicative of an increased risk. For instance, some example keywords may include, “block,” “bypass,” “confidential,” and the like. Several other keywords may also be stored in the keyword module 306. The individual words from the data received by the parsing module may be analyzed to determine whether one or more of the words matches a keyword stored in the keyword module 306. In some examples, keywords may be identified by an administrator or other overseer of the system 300. In other examples, the keywords may be automatically stored based on historical data indicating a risk associated with certain words.
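A minimal sketch of the keyword matching performed against the keyword module's listing might look like the following; the example keyword set is hypothetical:

```python
# Hypothetical keyword listing; a deployed system would load this from
# the keyword module's data storage or derive it from historical data.
KEYWORDS = {"block", "bypass", "confidential"}

def matching_keywords(terms: list[str]) -> set[str]:
    """Return the parsed terms that match an entry in the keyword listing."""
    return KEYWORDS.intersection(terms)

print(matching_keywords(["bypass", "website", "block"]))  # {'block', 'bypass'} (order may vary)
```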
The risk identification system 300 may further include a part of speech module 308. The part of speech module may be connected to or in communication with the parsing module 304 and/or the keyword module 306. The part of speech module 308 may identify a part of speech of one or more individual words identified in the received data. For instance, if one word of the received word string matches a keyword, the part of speech of the word may be identified by the part of speech module 308. This may aid in determining the intent of the user inputting the search query, drafting the email, and the like. For instance, the term “bypass” may be identified as a keyword and the part of speech module 308 may determine whether “bypass” is used as a noun or a verb in the string. This may aid in determining whether there is a risk or a level of risk associated with the word string, search query, and the like. For instance, if “bypass” is used as a noun in a search query (e.g., “bypass around I-495”) it may indicate less risk associated with the search query than if “bypass” is used as a verb (e.g., “bypass website block”).
In another example, other words in the string (e.g., words other than a word matching a keyword) may be analyzed to determine a part of speech as well. For instance, continuing the example above, the term “block” may, in some examples, not match a keyword but as the search query is analyzed, the part of speech of “block” may be determined by the part of speech module 308 to further determine a risk associated with the search query. Accordingly, if “block” is used as a noun (as above “bypass website block”) a greater risk may be associated with the search query than if “block” is used as a verb (e.g., “block crab grass growth”).
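One off-the-shelf way to approximate the part of speech module is a general-purpose tagger such as NLTK's; the disclosure does not name a particular tagger, so this is merely an illustrative sketch:

```python
import nltk

# One-time model downloads; resource names may vary across NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def tag_parts_of_speech(string_of_terms: str) -> list[tuple[str, str]]:
    """Return (term, Penn Treebank tag) pairs; 'NN*' marks nouns, 'VB*' verbs."""
    return nltk.pos_tag(nltk.word_tokenize(string_of_terms))

# The same keyword can surface with different tags in different strings.
print(tag_parts_of_speech("bypass website block"))
print(tag_parts_of_speech("the bypass around the highway"))
```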
The risk identification system 300 may further include a category module 310. The category module 310 may be connected to or in communication with one or more of the parsing module 304, keyword module 306, and/or part of speech module 308. The category module 310 may include one or more categories of terms that may indicate risk. For instance, keywords or terms may be sorted into categories such as intellectual property theft, information technology sabotage, unauthorized access, and/or behavior. Various other categories may be used without departing from the invention. In some examples, if two or more terms in a string are within the same category, that may indicate an increased risk associated with the string. For instance, if the identified noun and verb in a string are within a same category, there is a greater chance that the string is associated with activity not generally approved by the entity and, accordingly, a higher risk may be assigned to that string.
The risk identification system 300 may further include a risk rating module 312. The risk rating module 312 may be connected to or in communication with one or more of parsing module 304, keyword module 306, part of speech module 308 and/or category module 310. The risk rating module 312 may receive data from the one or more other modules and determine an overall risk associated with the word string. For instance, the risk rating module 312 may receive data associated with any keywords in the string, parts of speech of terms used, categories of the terms and whether any of the terms are in the same categories, and the like. A risk associated with the word string may be determined, and the determined risk may be used to identify further processing for the word string and/or any additional steps taken, as will be discussed more fully below.
The risk identification system 300 may further include data storage module 314. The data storage module 314 may include one or more databases storing terms, keywords, risk ratings, and the like. It may further include one or more processors and/or memory configured to identify additional keywords, refine risk ratings, and the like, based on historical data stored therein. For example, the data storage module 314 may be configured to provide rule-defined learning for the system to improve the accuracy of the risk ratings.
If, in step 402, a determination is made that one or more of the terms matches a keyword, the part of speech of the matching word may be determined in step 404. As discussed above, determining a part of speech of a word in the string may aid in identifying risk associated with the string. For instance, some words, when used as a first part of speech, may indicate little or no risk. However, when used as another part of speech, the word may indicate increased or high risk. For instance, the term “bypass” may be used as a noun in a variety of contexts that would indicate minimal risk (e.g., bypass on a highway, heart bypass, and the like). However, when used as a verb, “bypass” may indicate a user is attempting to circumvent one or more security measures put in place by the entity. For instance, the user may be conducting a search to identify ways to circumvent or bypass a website block instituted by the entity. Accordingly, determining a part of speech may aid in determining the intent of the user when the string of terms was used (e.g., was the user attempting to circumvent security measures or simply conducting research to avoid traffic).
In another example, the term “resume” may be used as a verb to indicate taking up an action, but may also be used as the noun “resume” (albeit without the accent marks of the French “résumé,” which often do not appear when the term is typed on English-language keyboards) to describe a summary of a person's work experience, and the like. A user searching for, or including in an email, the term “resume” may be using the term as a verb, which would likely indicate minimal or little risk. However, if the term is being used as a noun, it may indicate that the user is looking for another job or position and, in this instance, may be using entity resources to conduct a job search, apply for jobs, and the like, which would pose a higher risk to the entity (e.g., the person may be leaving the position, the person may have access to confidential or sensitive information, and the like).
In step 406, a part of speech of an additional word in the string is determined. For instance, if the keyword is identified as a verb, the system (such as the part of speech module 308) may identify the term used as a noun in the string. This may aid in providing additional information about the risk associated with the string. For example, if the term “bypass” matches a keyword and is identified as a verb, the system may identify the noun in the string. If the noun is “website,” that may indicate that the user is looking to bypass one or more Internet blocks implemented by the entity, and the string may pose a risk to the entity. Alternatively, if the noun is “traffic,” the user may simply be looking for ways to avoid traffic, and the string would appear to pose little risk to the entity.
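A rough sketch of identifying the complementary term, assuming Penn Treebank-style tags and a hypothetical helper name:

```python
def complement_term(tagged: list[tuple[str, str]], keyword: str) -> str | None:
    """Given (term, tag) pairs and a matched keyword, return the term playing
    the complementary role: the noun if the keyword is a verb, or the verb if
    the keyword is a noun."""
    keyword_tag = next((tag for term, tag in tagged if term == keyword), None)
    if keyword_tag is None:
        return None
    wanted = "NN" if keyword_tag.startswith("VB") else "VB"
    return next(
        (term for term, tag in tagged if term != keyword and tag.startswith(wanted)),
        None,
    )

tagged = [("bypass", "VB"), ("website", "NN"), ("block", "NN")]
print(complement_term(tagged, "bypass"))  # website
```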
In step 408, a category of the keyword and the at least one additional word may be identified or determined. As discussed above, various words (e.g., keywords, other common words associated with risk, and the like) may be sorted into categories, such as categories of type of risk. Some example categories may include:
1. Behavior—this category may include various keywords or other words indicating an action or behavior associated with the user that might involve risk. For instance, behavior may include terms indicating the user is looking for another job.
2. Information Technology Sabotage—this category may include various keywords or other words indicating the user may be a technical insider and may be looking to use his/her insider knowledge in an unauthorized manner. For instance, the user may be looking to download confidential or sensitive information to a portable drive in order to transmit it outside of the entity.
3. Unauthorized access—this category may include various words or keywords indicating the user intends or is involved in unauthorized activity such as theft, misuse of company information, and the like.
4. Intellectual property theft—this category may include various words or keywords indicating that the user is attempting or plans to attempt to bypass one or more controls implemented by the entity. For instance, the user may be attempting to access information he/she is not authorized to access or may be attempting to circumvent one or more security measures implemented by the entity.
Although four categories are described herein, more or fewer, as well as other, categories may be used with the risk identification system without departing from the invention.
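A minimal sketch of the category lookup, with entirely hypothetical category assignments standing in for the category module's data storage:

```python
# Entirely hypothetical category assignments; a deployed system would
# maintain these in the category module's data storage.
CATEGORIES = {
    "resume": "behavior",
    "wipe": "information technology sabotage",
    "bypass": "unauthorized access",
    "block": "unauthorized access",
    "download": "intellectual property theft",
    "confidential": "intellectual property theft",
}

def category_of(term: str) -> str | None:
    """Look up the category of risk associated with a term, if any."""
    return CATEGORIES.get(term)

print(category_of("bypass"), category_of("block"))  # both 'unauthorized access'
```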
In step 410, a determination may be made as to whether the determined category for the keyword (for which the part of speech has been determined) and the determined category for the at least one additional word (for which the part of speech has been determined) are the same category. In some examples, the keyword and the additional word being in the same category may indicate an increased or higher potential risk. For instance, if two words are within the same category, then it is more likely that the string of terms being analyzed is associated with user activity that is not appropriate on an entity-provided computing device (e.g., unauthorized, ill-advised, or the like). Accordingly, if, in step 410, a determination is made that the two words are within the same category, then, in step 412, a risk level or rating associated with the text being analyzed may be determined. The risk rating may be determined based on the identified terms, identified parts of speech, and/or the determined category.
If, in step 410, a determination is made that the terms are not in the same category, then a risk rating associated with the string being analyzed is determined in step 414. The risk level may be based, at least in part, on the determined part of speech and the determined category for each term.
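Steps 410 through 414 might be sketched as follows; the specific numeric ratings are hypothetical and only illustrate that same-category strings receive a higher rating:

```python
def rate_by_category(keyword_cat: str | None, other_cat: str | None) -> int:
    """Steps 410-414 in miniature: a higher rating when the keyword and the
    additional word fall into the same category of risk. The numeric values
    are hypothetical; only their ordering reflects the description above."""
    if keyword_cat is None or other_cat is None:
        return 2  # at most one term carries a category of risk
    if keyword_cat == other_cat:
        return 4  # same category: step 412, increased risk
    return 3      # different categories: step 414, elevated but lower risk

print(rate_by_category("unauthorized access", "unauthorized access"))  # 4
```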
Alternatively, if, in step 502, a determination is made that the string does include a term matching a keyword, then a noun and a verb in the string may be identified in step 504. For instance, the noun and verb may be identified by the system, such as by the part of speech module 308. In step 506, a category of each of the identified noun and the identified verb may be determined. The categories identified may be similar to the categories discussed above or may include other categories.
In step 508, a determination may be made as to whether the noun and verb are in the same category. Similar to the arrangements above, an identified noun and verb in the same category may indicate an increased risk relative to a string in which the noun and verb are in different categories. If the noun and verb are not in the same category, the risk associated with the string may be determined in step 514 based on the terms and the categories into which the terms (e.g., the identified noun and verb) are placed.
If, in step 508, the noun and verb are in the same category, an adjective may be identified in the string in step 510. Identification of the adjective (e.g., the term acting as the adjective in the sentence) may aid in identifying a risk level associated with the string because it may indicate the intent of the user. For instance, if the phrase “download confidential information” is input into an Internet search engine, the noun “information” and verb “download” may be in the same category and may indicate a risk. However, the addition of the adjective “confidential” raises that identified risk rating.
Once the adjective is identified and evaluated, a risk level may be determined for the string in step 512. The risk level may be determined based on the terms and the categories of each term. For instance, the relationship of the terms (e.g., parts of speech of each term, categories, and the like) may aid in identifying the risk level associated with the string being analyzed.
In step 606, a determination is made as to whether the part of speech of the word matching the keyword is a verb. If so, the system may identify a noun in the string in step 608. If, in step 606, the part of speech of the term matching the keyword is not a verb, a determination is made in step 610 as to whether the term matching the keyword is a noun. If not, the process moves to step 608 to identify the noun in the string. If, in step 610, the term is a noun, then, in step 612, a verb in the string may be identified. In some instances, identifying the noun and verb in a string may include identifying a category of the noun and verb, similar to the arrangements discussed above.
In step 614, a determination is made as to whether the noun and verb are in the same category. If not, a risk level or rating of the string is determined in step 618. The risk rating may be based on the terms, categories, and relationship between the terms. If, in step 614, the noun and verb are in the same category, an adjective in the string may be identified in step 616. The risk rating of the string may then be determined in step 618 and may be based on the terms (e.g., noun, verb, and adjective), category, and relationship between the terms.
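Putting the pieces together, a hedged end-to-end sketch of this flow; all keyword, category, and adjective listings, and the numeric ratings, are hypothetical stand-ins for the module data stores described above:

```python
KEYWORDS = {"block", "bypass", "confidential", "download"}
CATEGORIES = {
    "bypass": "unauthorized access",
    "block": "unauthorized access",
    "download": "intellectual property theft",
    "information": "intellectual property theft",
}
RISKY_ADJECTIVES = {"confidential", "proprietary"}

def rate_string(tagged: list[tuple[str, str]]) -> int:
    """Walk the flow: keyword match, noun/verb identification, category
    comparison, adjective check. 1 is the lowest risk, 5 the highest."""
    terms = [term for term, _ in tagged]
    if not KEYWORDS.intersection(terms):
        return 1  # no keyword match in the string
    noun = next((t for t, tag in tagged if tag.startswith("NN")), None)
    verb = next((t for t, tag in tagged if tag.startswith("VB")), None)
    if noun is None or verb is None:
        return 2  # keyword present, but no noun/verb pair to relate
    if CATEGORIES.get(noun) is None or CATEGORIES.get(noun) != CATEGORIES.get(verb):
        return 3  # noun and verb not in the same category (step 614 to 618)
    adjective = next((t for t, tag in tagged if tag.startswith("JJ")), None)
    return 5 if adjective in RISKY_ADJECTIVES else 4  # steps 616 and 618

# "download confidential information": same category plus a risky adjective.
print(rate_string([("download", "VB"), ("confidential", "JJ"), ("information", "NN")]))  # 5
```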
In step 620, any additional processing or actions associated with the string may be identified. For instance, based on the risk level or risk rating determined, one or more additional processing steps or further actions may be identified and/or implemented to mitigate the identified risk. Although the step of identifying additional processing is discussed in association with this method, similar additional processing may be identified in association with any of the other methods or arrangements described herein.
Column 702 identifies the various risk ratings that may be identified for a string of terms being analyzed. The ratings shown extend from 1 to 5 and may include 1 being the highest risk and 5 being the lowest risk, or vice versa, with 1 being the lowest risk and 5 being the highest risk. For simplicity, the remainder of the discussion assumes that 1 indicates the lowest risk and 5 indicates the highest risk.
Accordingly, once a string is analyzed and a risk rating is determined, any additional processing or actions may be determined from table 700 or similar tables. For instance, a risk rating of 1 may be identified for a string. The risk rating of 1 may be identified if, for example, there is no word in the string that matches a keyword. In response to a risk rating of 1, response 1 from additional action(s) column 704 may be implemented. Because rating 1 is the lowest risk, response 1 may include actions such as continuing to monitor the user as normal or taking no further action.
In another example, a risk rating of 3 may be identified in situations in which, for example, there is a keyword match but the noun and verb in the string are not in the same category. In this example, response 3 may be implemented and may include further review of search queries or communications associated with the user. Response 3 may also include placing tighter controls on the user (e.g., monitoring new communications more frequently, logging all activity of the user, and the like), further processing the string of terms to confirm the risk associated with the string of terms, and the like.
In yet another example, a risk rating of 5 may be identified in situations in which, for instance, there is a keyword match and the noun and verb are in the same category. A risk rating of 5 may also be identified when the string includes an adjective indicative of malicious or otherwise unauthorized intent of the user. Accordingly, response 5 may be implemented and the situation may be escalated to confront the user, seize his or her entity-provided computing devices, modify the user's permissions or access to information, and the like.
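A minimal sketch of mapping a determined rating to additional action(s) in the spirit of table 700; the specific responses are illustrative only:

```python
# Hypothetical mapping of risk ratings (1 = lowest) to additional action(s);
# a deployed system would maintain this in a table such as table 700.
RESPONSES = {
    1: "continue normal monitoring; no further action",
    2: "log the string for periodic review",
    3: "review the user's recent queries and communications; tighten controls",
    4: "flag for analyst review; log all activity of the user",
    5: "escalate: confront the user, seize devices, modify access permissions",
}

def respond(rating: int) -> str:
    """Select the additional action(s) for a determined risk rating."""
    return RESPONSES.get(rating, RESPONSES[max(RESPONSES)])

print(respond(3))
```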
The risk identification methods, systems, and the like described herein provide an accurate and objective way to identify risk associated with a user of a computing device. As discussed above, by determining the different parts of speech of terms in a string of terms, and thereby understanding the relationship between the terms, risk can be more accurately identified and additional processing steps may be better tailored to the particular risk. Because fewer false positives may be identified by the system described herein than by conventional systems and methods, a more efficient and accurate system and method of determining risk is provided.
Various aspects described herein may be embodied as a method, an apparatus, or as one or more computer-readable media storing computer-executable instructions. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Any and/or all of the method steps described herein may be embodied in computer-executable instructions stored on a computer-readable medium, such as a non-transitory computer readable medium. Additionally or alternatively, any and/or all of the method steps described herein may be embodied in computer-readable instructions stored in the memory of an apparatus that includes one or more processors, such that the apparatus is caused to perform such method steps when the one or more processors execute the computer-readable instructions. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of light and/or electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space).
Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure. For example, one of ordinary skill in the art will appreciate that the steps illustrated in the illustrative figures may be performed in other than the recited order, and that one or more steps illustrated may be optional in accordance with aspects of the disclosure. Further, one or more aspects described with respect to one figure or arrangement may be used in conjunction with other aspects associated with another figure or portion of the description.
Inventors: Blake, Adam James; Langsam, Peter J.; Zuhde, Jennifer Ann; Bradley, Thomas