Techniques for providing a people recommendation system for predicting and recommending relevant people (or other entities) to include in a conversation. In an exemplary embodiment, a plurality of conversation boxes associated with communications between a user and target recipients, or between other users and recipients, are collected and stored as user history. During a training phase, the user history is used to train encoder and decoder blocks in a de-noising auto-encoder model. During a prediction phase, the trained encoder and decoder are used to predict one or more recipients for a current conversation box composed by the user, based on contextual and other signals extracted from the current conversation box. The predicted recipients are ranked using a scoring function, and the top-ranked individuals or entities may be recommended to the user.
1. A method for execution by a computer processor coupled to a memory, the method comprising:
using an encoding function, encoding a vector comprising a pre-specified recipient group, a source user, and at least one context signal derived from a current communications item;
using a decoding function, decoding the encoded vector to generate a relevance group;
generating a recommendation comprising a member of the relevance group not in the pre-specified recipient group; and
training said encoding and decoding functions using a de-noising auto-encoder neural network model applied to a plurality of historical communications items, each of said plurality of historical communications items being associated with a data sample comprising a recipient group, a source user, and at least one context signal from the item;
said training comprising, for each data sample:
applying a corrupting function to the recipient group of the data sample to corrupt the recipient group of the data sample and generate a corrupted vector;
encoding the corrupted vector to generate an encoded training vector to extract features of the corrupted vector;
decoding the encoded training vector to generate a target recipient group corresponding to an estimate of the recipient group;
computing a loss function for the data sample based on the difference between the target recipient group and the recipient group; and
updating parameters of the encoding and decoding functions using the computed loss function for the data sample.
16. An apparatus comprising computer hardware configurable to execute:
means for, using an encoding function, encoding a vector comprising a pre-specified recipient group, a source user, and at least one context signal derived from a communications item;
means for, using a decoding function, decoding the encoded vector to generate a relevance group;
means for generating a recommendation comprising a member of the relevance group not in the pre-specified recipient group; and
means for training the encoding and decoding functions using a de-noising auto-encoder neural network model applied to a plurality of historical communications items, each of said plurality of historical communications items being associated with a data sample comprising a recipient group, a source user, and at least one context signal from the item;
said means for training comprising:
means for, for each data sample, applying a corrupting function to the recipient group of the data sample to corrupt the recipient group of the data sample and generate a corrupted vector;
means for encoding the corrupted vector to generate an encoded training vector to extract features of the corrupted vector;
means for decoding the encoded training vector to generate a target recipient group corresponding to an estimate of the recipient group;
means for computing a loss function for the data sample based on the difference between the target recipient group and the recipient group; and
means for updating parameters of the encoding and decoding functions using the computed loss function for each data sample.
9. An apparatus comprising a processor and a memory coupled to the processor, the memory storing instructions for causing the processor to execute:
a training block configured to train an encoding function and a decoding function using a plurality of historical communications items; and
a prediction block configured to generate a recipient recommendation using the encoding function to encode a vector comprising a pre-specified recipient group, a source user, and at least one context signal derived from a current communications item, and further using the decoding function to decode the encoded vector to generate a relevance group, wherein the recipient recommendation comprises a member of the relevance group not in the pre-specified recipient group;
the training block configured to train the encoding and decoding functions using a de-noising auto-encoder neural network model applied to the plurality of historical communications items, each of said plurality of historical communications items being associated with a data sample comprising a recipient group, a source user, and at least one context signal from the item, the training block configured to, for each data sample:
apply a corrupting function to the recipient group of the data sample to corrupt the recipient group of the data sample and generate a corrupted vector;
using the encoding function, encode the corrupted vector to generate an encoded training vector to extract features of the corrupted vector;
using the decoding function, decode the encoded training vector to generate a target recipient group corresponding to an estimate of the recipient group;
compute a loss function for the data sample based on the difference between the target recipient group and the recipient group; and
update parameters of the encoding and decoding functions using the computed loss function for the data sample.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The apparatus of
11. The apparatus of
12. The apparatus of
apply a scoring function to generate a score for each member of the relevance group not in the pre-specified recipient group;
generate the recipient recommendation as the scored member having the highest score.
13. The apparatus of
This application claims the benefit of U.S. Provisional Application No. 62/154,039, filed Apr. 28, 2015, and U.S. Provisional Application No. 62/156,362, filed May 4, 2015, the disclosures of which are hereby incorporated by reference.
Widespread communications applications like email, document sharing, and social networking allow more and more people to be connected. As users' contacts lists grow ever larger, it becomes more difficult to determine the most relevant people to receive a message or join in a conversation. Existing software applications may make people recommendations for recipients to include in a user-created conversation. However, such applications may only take into account basic signals, such as the first letters of a user-input name, or most frequently messaged contacts, etc.
It would be desirable to provide a system that learns user preferences for people to include in a message or conversation, based on the content and context of prior user communications. For example, when composing an email related to a specific business project, a user could be provided with recommendations of people who have previously communicated with the user regarding the business project. Similarly, when browsing posts or updates on a social network, a user could be provided with recommendations of people to add as contacts.
Accordingly, techniques are desired for utilizing prior user communications to identify people most relevant to certain contextual signals contained therein, and leverage such signals to generate people recommendations.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards techniques for generating people recommendations based on contextual features of a user-created item. In certain aspects, records of prior user communications are structured into a plurality of conversation boxes containing associated contextual features. During a training phase, the conversation boxes are used to train a prediction algorithm, such as a de-noising auto-encoder model, to derive optimal weights for assigning a recommended group of participants to a set of contextual features. During a prediction phase, the prediction algorithm functions to recommend a group of participants for a current conversation box. A scoring function may be used to identify a top-ranked participant. In a further aspect, techniques are provided for feedback adjustment of the prediction algorithm based on user acceptance or rejection of system-generated recommendations.
Other advantages may become apparent from the following detailed description and drawings.
Various aspects of the technology described herein are generally directed towards techniques for analyzing prior user communications to derive an algorithm for recommending participants relevant to a current communications item.
The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary aspects of the invention. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration,” and any aspect described herein as “exemplary” should not necessarily be construed as preferred or advantageous over other exemplary aspects. The detailed description includes specific details for the purpose of providing a thorough understanding of the exemplary aspects of the invention. It will be apparent to those skilled in the art that the exemplary aspects of the invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the novelty of the exemplary aspects presented herein. Note the term “conversation box” may also be used interchangeably herein with the term “communications item.”
It would be desirable to provide email, document sharing, social networking, and other communications software with the capability to intelligently predict suitable people to recommend to a user based on context. For example, when a user composes email relating to a certain task or project, the email software may intelligently predict people who are most relevant to such task or project, and recommend those people to the user as email recipients. Alternatively, when a user posts on a social networking platform relating to certain content, the software may intelligently predict people who are relevant to such content, and recommend those people to the user to include as recipients. Techniques of the present disclosure advantageously provide a people recommendation system for predicting and recommending relevant people based on user communications history and present context.
Note the term “people” as used herein is not meant to only denote one or more individual persons, but may also be understood to refer to any entity that can be recommended by the system for inclusion in a conversation. Thus mailing lists, social networking groups, etc., will also be understood to fall within the scope of “people” that may be recommended by the system.
Note
In an exemplary embodiment, a user composes a document that may be shared with other users. In this case, the present techniques may be used to identify and recommend such other users, based on contextual signals of the document. In an alternative exemplary embodiment, a user may create a meeting invitation to send to other users. In this case, the present techniques may be used to identify and recommend other users to receive the invitation. In yet another alternative exemplary embodiment, a user may share posts or tweets on a social networking platform, and thereby be provided with recommendations of additional people to add based on the content of the posts. Other exemplary embodiments may apply the present techniques to, e.g., people recommendation for text messages, instant messaging applications, video sharing applications, other sharing applications, etc. Such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure.
In
At block 220, user input 201a to application 210, as well as any communications items previously processed through application 210 (e.g., messages received from other people), is cumulatively stored by message/document history block 220 as user history 220a (also generally denoted herein as “a plurality of communications items”). In an exemplary embodiment, history 220a may include one or more data files that include all items cumulatively created or processed by application 210, or other applications 211 (i.e., distinct from application 210).
History 220a may include, e.g., messages (such as emails or messages on a social network) sent and received between the user and other persons, documents (e.g., with or without senders and/or recipients), profile entries, chat conversations (e.g., chat histories), calendar items, meeting requests, agendas, posts or updates on social messaging applications, and/or metadata (e.g., including time/date indicators, location indicators if available, etc.) associated with such items, etc. History 220a may be stored locally, e.g., on a local hard drive, or on a remote server.
In
Recommendation engine 230 analyzes parameters 230a and user history 220a to generate people recommendation(s) 230b for the current item. In particular, people recommendation(s) 230b may correspond to one or more additional people or other entities who the user may wish to include as recipient(s) of the current item. In the exemplary embodiment shown, recommendation engine 230 includes a history analysis engine 234, including conversation box structuring block 234.1 and algorithm training block 234.2. Block 234.1 structures user history 220a into a plurality of conversation boxes, as further described hereinbelow with reference to
In
It will be appreciated that conversation boxes may generally include any items from which fields may be extracted for algorithm training, e.g., emails, user profile entries (e.g., a user name, date of birth, age, etc.), non-shared local or online documents, etc. Conversation boxes may also correspond to other types of messages besides emails, e.g., text messages, entries such as online posts or feeds on social network sites, etc.
In an exemplary embodiment, a set of conversation boxes may contain different types of communications items. For example, in
For each conversation box, a set of relevant parameters is extracted. In particular, relevant parameters for each conversation box 310.i may be symbolically represented using the variable xi, wherein xi for arbitrary i is also denoted herein as a “vector” or “data sample.” For example, x1 corresponds to conversation box 1, xN corresponds to conversation box N, etc. Each data sample xi may further be composed of three different components extracted from conversation box 310.i: si (or “source user”), ci (or “at least one context signal”), and Ti (or “recipient group” or “recipient vector”).
In particular, si, or the “source user” set, denotes at least one source user to whom people recommendations are to be provided. For example, conversation box 1 in
ci denotes a contextual feature of the conversation box. For example, in conversation box 1, c1 may correspond to representations of the subject field, content of the body of the email, date, time, the form of Conversation Box 1 as an email, etc. It will be appreciated that the specific content of ci is described herein for illustrative purposes only, and is not meant to limit the scope of the present disclosure to any particular fields that may be extracted from, or any particular representation of the information present in, conversation boxes.
In an exemplary embodiment, ci may be represented as a multi-dimensional “N-gram” representation, wherein ci corresponds to a multi-dimensional vector, and individual dimensions of ci each correspond to specific combinations (e.g., “N-grams”) of N letters. For example, in a “3-gram” representation of c1 (i.e., N=3), a dimension of vector c1 corresponding to the three letters “mar” may have a value of 2, corresponding to the two occurrences of “mar” in the email 100 (i.e., once in the subject field 114 and once in the body 120). In alternative exemplary embodiments, individual dimensions of ci may correspond to individual words or word roots, etc.
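The character N-gram counting described above can be sketched as follows. This is a minimal illustration only: the example text is hypothetical, and a production system would additionally map each n-gram to a fixed dimension index of the vector ci.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count overlapping character n-grams in lowercased text.

    Sketch of the "3-gram" context representation: each distinct
    n-gram corresponds to one dimension of the context vector, and
    its count is the value of that dimension.
    """
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# "mar" occurs once in the subject and once in the body, so its count is 2.
counts = char_ngrams("subject: march report. see the march numbers in the body.")
```

A fixed vocabulary of n-grams (or hashing of n-grams into a fixed number of buckets) would turn these counts into the fixed-length vector ci assumed by the model.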
In an alternative exemplary embodiment, additional or alternative dimensions to the N-gram dimensions may be provided in ci. For example, certain individual dimensions in ci may correspond to the separate content of certain fields or positions within a conversation box, e.g., the subject field 114, the first sentence of body 120, etc. Alternatively or in conjunction, individual dimensions may correspond to, e.g., topic models specifying extracted topics, or semantic symbols corresponding to semantic vectors derived from a deep semantic similarity model (DSSM). In an alternative exemplary embodiment, certain dimensions of ci may correspond to, e.g., explicit text representations of information from the conversation box. Such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure.
Ti, or the “target participants” or “training recipient vector,” may indicate a group of one or more participants in the conversation box, excluding those already present in si. For example, in conversation box 1, T1 may correspond to a representation of a group consisting of recipients “Bob Jones” and “Dave Lee.”
In an exemplary embodiment, a “recipient-dimensional encoding” representation for Ti may be utilized, wherein Ti is encoded as a sparse binary vector, e.g., Ti=[0, 0, 0, 1, 0, 0, 1 . . . 0, 0], and wherein each dimension of Ti corresponds to a possible recipient. For example, non-zero (e.g., “1”) entries in Ti would correspond to those recipients that are involved in the i-th conversation box. Per the recipient-dimensional encoding representation, the total number T of dimensions of Ti may thus correspond to the total number of contacts in the source user's list of contacts.
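The recipient-dimensional encoding above can be sketched as a simple one-hot-per-contact vector. The contact names below are illustrative only.

```python
def encode_recipients(recipients, contact_list):
    """Recipient-dimensional encoding: a sparse binary vector with one
    dimension per contact in the source user's contact list; a 1 marks
    a recipient involved in the conversation box."""
    index = {name: d for d, name in enumerate(contact_list)}
    vec = [0] * len(contact_list)
    for name in recipients:
        vec[index[name]] = 1
    return vec

contacts = ["Alice Adams", "Bob Jones", "Carol Chu", "Dave Lee"]
# Conversation box 1 has recipients "Bob Jones" and "Dave Lee".
T1 = encode_recipients(["Bob Jones", "Dave Lee"], contacts)
```

The length of the vector equals the size of the contact list, matching the total number T of dimensions described above.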
Each data sample xi may thus be composed of a concatenation of the corresponding fields si, ci, Ti, and such concatenation may be expressed herein as [si, ci, Ti]. For example, x1=[s1, c1, T1] may express that the data sample corresponding to conversation box 1 or 310.1 includes the concatenation of the fields s1, c1, T1.
As a further example of a conversation box, conversation box 310.2 corresponds to a profile entry for “Bob Jones” in the contact list of source user “John Smith.” Accordingly, s2 may correspond to “John Smith,” T2 may correspond to “Bob Jones,” and c2 may correspond to representations of the parameters in the profile as illustratively shown.
Aggregating over all available conversation boxes 1 to N, or 310.1 to 310.N, produces a series of data samples {x1, . . . , xi, . . . , xN}, which may be collectively denoted as x, also denoted herein as the “aggregated data samples,” as indicated at block 320.
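The assembly of a data sample xi = [si, ci, Ti] and the aggregation over conversation boxes can be sketched as below. The toy dimensions and values are hypothetical, not taken from any real conversation box.

```python
def make_sample(s, c, T):
    """Concatenate source-user, context, and recipient vectors into one
    data sample x_i = [s_i, c_i, T_i]."""
    return list(s) + list(c) + list(T)

# Toy dimensions: 2 possible source users, 3 context features, 4 contacts.
s1 = [1, 0]           # source user (e.g., "John Smith", hypothetical)
c1 = [2, 0, 1]        # e.g., n-gram counts
T1 = [0, 1, 0, 1]     # two recipients marked in the contact list
x1 = make_sample(s1, c1, T1)
x = [x1]              # aggregated data samples over all conversation boxes
```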
In an exemplary embodiment, aggregated data samples x associated with user history 220a may be used to train a neural network-based algorithm for predicting relevant people or entities. In an exemplary embodiment, the algorithm may employ a ranking de-noising auto-encoder technique. Note the techniques described herein are for illustrative purposes only, and are not meant to limit the scope of the present disclosure to any particular techniques for predicting relevant people to a conversation. In alternative exemplary embodiments of the present disclosure, non-neural network-based methods may also be utilized for the purposes described herein.
In
At block 420 (also denoted herein as a “prediction block” or “prediction phase”), using optimum parameters 410a, people recommendation 230b is generated given current conversation parameters 230a. Current conversation parameters 230a may include fields s′, c′, T′, corresponding to current communications item x′.
In
At block 520, a corrupting function is applied to data sample xi to generate a corresponding corrupted vector {circumflex over (x)}i. The corrupting function acts to corrupt certain elements present in xi, e.g., in a random or deterministic pseudo-random manner. For example, a corrupting function may randomly select binary elements in xi and flip each selected element (e.g., 1 to 0 or 0 to 1).
In an exemplary embodiment, the corrupting function may be applied only to the Ti field (e.g., recipient vector) of xi. A corrupting rate, e.g., 10%-35%, may be associated with the corrupting function, wherein the corrupting rate may be defined as the percentage of bits that are corrupted. Note the corrupting rate may be adjustable depending on different types of data samples used.
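A minimal sketch of the corrupting function, applied only to the Ti field as described above. The independent per-bit flip is one plausible reading of the random corruption; the seeded generator is only for reproducibility of the sketch.

```python
import random

def corrupt_recipients(T, rate=0.2, rng=None):
    """Flip each bit of the recipient vector with probability `rate`
    (the corrupting rate, e.g., 10%-35% per the description above).
    Only the T field is corrupted; the s and c fields are left intact."""
    rng = rng or random.Random(0)
    return [1 - t if rng.random() < rate else t for t in T]

T = [0, 0, 0, 1, 0, 0, 1, 0]
T_corrupt = corrupt_recipients(T, rate=0.25)
```

At a corrupting rate of 0 the vector is unchanged; at a rate of 1 every bit is flipped.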
In view of the description hereinabove, a corrupting function may be understood to take a “typical” data sample xi, such as may correspond to a conversation box in user history 220a, and “corrupt” the recipient field Ti in a manner so as to emulate the presence of incomplete entries or deviations in the “typical” recipient group. Such incomplete entries or deviations may be, e.g., statistically similar to current conversation parameters 230a received from user 201 during composition of a new communications item, e.g., for which recipients entered by user 201 for the new conversation item may be incomplete or include incorrect recipients (e.g., corresponding to a “corrupted” Ti field). It will be appreciated that an object of training 500 is then to configure the prediction algorithm (e.g., further described hereinbelow with reference to
Following generation of corrupted data sample {circumflex over (x)}i, at block 530, an encoder f(•) is applied to {circumflex over (x)}i to generate an encoded vector hi=f ({circumflex over (x)}i). In an exemplary embodiment, the encoder f(•) may be implemented as, e.g., a weighted summary matrix followed by an activation function. The summary function may include, e.g., an affine transformation, or any other non-linear or linear functions, and the activation function may include, e.g., tanh, sigmoid, etc.
At block 540, vector hi may be decoded using decoder g(•) to generate an output real-valued vector yi=g (hi), wherein yi is also denoted herein as an “estimated relevance group.” In an exemplary embodiment, the decoder g(•) may be implemented as, e.g., a weighted summary matrix followed by an activation function.
It will be appreciated that vector yi may generally be designed to contain only elements representing estimates of target participants Ti in xi. For example, in exemplary embodiments, yi need not contain estimates of si and ci fields, as those fields need not be corrupted by the corrupting function at block 520.
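The encoder and decoder of blocks 530-540 can be sketched as affine maps followed by tanh, per the description above. The toy dimensions and random weight initialization are assumptions for illustration; the output yi estimates only the T field, as noted.

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_h, D_out = 9, 4, 4   # toy sizes: |x|, hidden units, |T|

# Encoder f and decoder g: weighted (affine) maps plus tanh activation.
W_enc = rng.normal(scale=0.1, size=(D_h, D_in)); b_enc = np.zeros(D_h)
W_dec = rng.normal(scale=0.1, size=(D_out, D_h)); b_dec = np.zeros(D_out)

def f(x):
    """Encoder: h = tanh(W_enc @ x + b_enc)."""
    return np.tanh(W_enc @ x + b_enc)

def g(h):
    """Decoder: y = tanh(W_dec @ h + b_dec); y estimates only the T field."""
    return np.tanh(W_dec @ h + b_dec)

x_hat = np.array([1, 0, 2, 0, 1, 0, 1, 1, 1], dtype=float)  # a corrupted sample
y = g(f(x_hat))
```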
At block 550, also denoted herein as a “loss function calculation block,” yi and (pre-corrupted) field Ti are compared with each other using a loss function, also denoted a reconstruction error function l(yi, Ti). In particular, l(yi, Ti) may quantify the difference between the output vector yi and the original non-corrupted target participants Ti. It will be appreciated that a goal of updating and/or adjusting weights of the encoder f(•) and the decoder g(•) may be to minimize or otherwise reduce the magnitude of the loss function l(yi, Ti), over a suitably broad range of data samples xi.
In an exemplary embodiment, l(yi, Ti) may correspond to a squared reconstruction error function (hereinafter “squared-loss function”) (Equation 1): l(yi, Ti) = Σ (yi − Ti)², wherein it will be understood that the summation is to be performed over all dimensions of yi, Ti.
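A worked instance of the squared-loss function of Equation 1, with illustrative values:

```python
import numpy as np

def squared_loss(y, T):
    """Equation 1: l(y_i, T_i) = sum over all dimensions of (y_i - T_i)^2."""
    y, T = np.asarray(y, float), np.asarray(T, float)
    return float(np.sum((y - T) ** 2))

# (0.9-1)^2 + (0.1-0)^2 + (0.2-0)^2 = 0.01 + 0.01 + 0.04 = 0.06
loss = squared_loss([0.9, 0.1, 0.2], [1, 0, 0])
```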
In an alternative exemplary embodiment, l(yi, Ti) may correspond to a ranking-based reconstruction loss function utilizing a negative log-likelihood softmax function (hereinafter “ranking-based function”) (Equation 2):
l(y_i, T_i) = −(1/|T_i|) Σ_{t∈T_i} log [ exp(λy_it) / (exp(λy_it) + Σ_{j∉T_i} exp(λy_ij)) ]
wherein λ is a smoothness term for the softmax function, 1/|T_i| normalizes for the number of participants in each sample Ti, y_it indicates the t-th dimension of yi, t∈Ti indicates a non-zero element in Ti (e.g., per the recipient-dimensional encoding representation for Ti), and j∉Ti indicates the zero elements in Ti.
It will be appreciated that a squared-error function may effectively capture a point-wise loss, while a softmax likelihood function may effectively capture a pair-wise loss. In particular, a point-wise function may be used to estimate the absolute score for each single person, while a pair-wise loss may effectively capture the relevant scores for a list of people. For example, in a scenario wherein the Top-N relevant people are to be recommended given a conversation box, a pair-wise function may preferably distinguish the top candidates with greater sensitivity.
Note the ranking-based function may generally correspond to the negative of the summation of the log likelihood of the softmax function, e.g., as summed over all individuals (e.g., non-zero entries) present in Ti. It will be appreciated that using a log likelihood based function may advantageously simplify the computational resources required to calculate Equation 2, as the logarithms are summed, whereas otherwise terms would need to be exponentiated and multiplied.
In an exemplary embodiment, to further simplify the computation of the ranking-based function, the summation in the denominator may be performed over random subsets Zi of the zero elements of Ti, rather than over all zero elements (Equation 3):
l(y_i, T_i) = −(1/|T_i|) Σ_{t∈T_i} log [ exp(λy_it) / (exp(λy_it) + Σ_{j∈Z_i} exp(λy_ij)) ]
In an exemplary embodiment, it may be assumed that the contribution of the non-sampled zero elements to the softmax denominator is negligible.
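The ranking-based function, with the optional sampled denominator, can be sketched as follows. The exact form is a reconstruction from the description above (normalization by |Ti|, softmax smoothness λ, denominator over zero elements or a random subset Zi thereof), not a verbatim formula from the source.

```python
import math
import random

def ranking_loss(y, T_idx, lam=1.0, num_neg=None, rng=None):
    """Negative log-likelihood softmax loss over the recipients in T
    (per Equation 2); if num_neg is given, the denominator sums over a
    random subset Z_i of the zero elements instead (per Equation 3)."""
    rng = rng or random.Random(0)
    zeros = [j for j in range(len(y)) if j not in T_idx]
    if num_neg is not None:
        zeros = rng.sample(zeros, min(num_neg, len(zeros)))
    loss = 0.0
    for t in T_idx:
        denom = math.exp(lam * y[t]) + sum(math.exp(lam * y[j]) for j in zeros)
        loss -= math.log(math.exp(lam * y[t]) / denom)
    return loss / len(T_idx)

# Scores for 4 contacts; contact 0 is the true recipient.
loss = ranking_loss([2.0, -1.0, 0.5, -0.5], T_idx={0})
```

Raising the score of a true recipient relative to the others lowers the loss, which is the pair-wise ranking behavior described above.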
In alternative exemplary embodiments, l(yi, Ti) may generally utilize any function known in the art of reinforcement learning, and such alternative exemplary embodiments are contemplated to be within the scope of the present disclosure. It will be appreciated that the manner in which the encoder/decoder f(•) and g(•) are updated during each iteration will depend on the reconstruction error function chosen.
In an exemplary embodiment, to update weights according to the squared-loss function shown in Equation 1 hereinabove, the following equations (Equations 4) may be used:
W_H^T = W_H^T − λα_i h_i^T; b_T = b_T − λα_i; (Equation 4a)
W_S^T = W_S^T − λβ_i s_i^T; W_C^T = W_C^T − λβ_i c_i^T; (Equation 4b)
W_T^T = W_T^T − λβ_i Ť_i^T; b_H = b_H − λβ_i; (Equation 4c)
α_i = (y_i − T_i)·(1 − y_i)·(1 + y_i); (Equation 4d)
β_i = (W_H α_i)·(1 − h_i)·(1 + h_i); (Equation 4e)
wherein Ť_i^T represents the corrupted (transposed) version of Ti, and wherein the variables W_H^T, W_S^T, W_T^T, α_i, β_i are related as follows in an exemplary embodiment (Equations 5):
f_s(s_i) = W_S^T s_i; f_c(c_i) = W_C^T c_i; f_T(Ť_i) = W_T^T Ť_i; (Equations 5a)
h_i = f(f_s(s_i), f_c(c_i), f_T(Ť_i)) = tanh(W_S^T s_i + W_C^T c_i + W_T^T Ť_i + b_H); (Equation 5b)
y_i = g(h_i) = tanh(W_H^T h_i + b_T). (Equation 5c)
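One training step per Equations 4-5 can be sketched in NumPy as below. The toy dimensions, weight initialization, and step size are assumptions; `lr` plays the role of the step size written λ in Equations 4 (distinct from the softmax smoothness term of Equation 2). Since each matrix is stored already transposed (e.g., `WS` holds W_S^T), the term W_H α_i of Equation 4e appears as `WH.T @ alpha`.

```python
import numpy as np

rng = np.random.default_rng(1)
Ds, Dc, Dt, Dh = 2, 3, 4, 5                 # toy dims: |s|, |c|, |T|, hidden
WS = rng.normal(scale=0.1, size=(Dh, Ds))   # W_S^T
WC = rng.normal(scale=0.1, size=(Dh, Dc))   # W_C^T
WT = rng.normal(scale=0.1, size=(Dh, Dt))   # W_T^T
WH = rng.normal(scale=0.1, size=(Dt, Dh))   # W_H^T
bH = np.zeros(Dh); bT = np.zeros(Dt)

def train_step(s, c, T_corrupt, T, lr=0.1):
    """One squared-loss weight update following Equations 4-5."""
    global WS, WC, WT, WH, bH, bT
    h = np.tanh(WS @ s + WC @ c + WT @ T_corrupt + bH)   # Equation 5b
    y = np.tanh(WH @ h + bT)                             # Equation 5c
    alpha = (y - T) * (1 - y) * (1 + y)                  # Equation 4d
    beta = (WH.T @ alpha) * (1 - h) * (1 + h)            # Equation 4e
    WH -= lr * np.outer(alpha, h); bT -= lr * alpha      # Equation 4a
    WS -= lr * np.outer(beta, s);  WC -= lr * np.outer(beta, c)      # Equation 4b
    WT -= lr * np.outer(beta, T_corrupt); bH -= lr * beta            # Equation 4c
    return float(np.sum((y - T) ** 2))                   # Equation 1 loss

s = np.array([1., 0.]); c = np.array([2., 0., 1.])
T = np.array([0., 1., 0., 1.]); T_hat = np.array([0., 1., 0., 0.])  # corrupted T
losses = [train_step(s, c, T_hat, T) for _ in range(50)]
# Repeated updates on the sample should drive the reconstruction loss down.
```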
In an alternative exemplary embodiment, to update weights according to the ranking-based function (e.g., Equation 2), the following equations (Equations 6) may be used:
wherein the exemplary embodiment of Equations 5 may again be used.
Once the weights and bias terms are updated, at block 560, it is checked whether training has been performed using all data samples xi in x. If not, method 500 proceeds to block 565, whereupon training at blocks 520-550 is again performed using the next available data sample xi, e.g., the corrupting and updating are repeated over all the plurality of communications items in the history. If yes, method 500 proceeds to block 570.
In the manner described with reference to blocks 520-550, based on cumulative training of the weights and other bias terms present in f(•) and g(•) using successive data samples in x, an optimal encoding function f*(•) and an optimal decoding function g*(•) are generated. Using these optimal functions, prediction may subsequently be performed to generate a predicted people group y′ given an arbitrary input vector x′, as further described hereinbelow.
In
At block 620, x′ is encoded using an encoding function f*(•) to generate an encoded vector h′. Note encoding may generally proceed similarly to block 530, described hereinabove with reference to data sample xi of the user history.
At block 630, h′ is passed to decoding function g*(•) to generate y′=g*(h′). It will be noted that y′ contains a field T′pred, also denoted herein as a “relevance group,” corresponding to the predicted recipients of the current conversation box.
At block 640, T′ (the people group already specified in x′) is compared to y′, in particular, the T′pred component of y′, to suggest a recipient to the user. Based on this comparison, there may be a set of additional people T′new not already present in T′, wherein T′new is also described as T′pred−(T′pred ∩T′). To determine which of the additional people in T′new is to be recommended as the one or more top recommendations, ranking may further be performed as described with reference to
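The set difference T′new = T′pred − (T′pred ∩ T′) of block 640 can be sketched as follows. The decoder-output threshold used to binarize T′pred is an assumption for illustration; the source instead ranks candidates with a scoring function, as described next.

```python
def new_recommendations(T_pred, T_current, threshold=0.0):
    """Return T'_new: indices of predicted recipients not already in the
    pre-specified group T'. Thresholding the decoder output is an
    illustrative assumption."""
    predicted = {j for j, score in enumerate(T_pred) if score > threshold}
    current = {j for j, t in enumerate(T_current) if t == 1}
    return predicted - current

T_pred = [0.8, 0.1, 0.6, -0.2]   # decoder output y' over four contacts
T_cur  = [1, 0, 0, 0]            # the user already added contact 0
candidates = new_recommendations(T_pred, T_cur)
```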
In
At block 720, a scoring function F[x′, t_new(j)] is applied to each person t_new(j), wherein j indexes the individual persons or entities in T′new. In an exemplary embodiment, the scoring function F[•] may correspond to the following (Equation 7):
h = tanh(W_S^T s + W_C^T c + W_T^T T + b_H); (Equation 7a)
F[x′, t_new] = (W_H^T h)_{t_new}; (Equation 7b)
wherein (W_H^T h)_{t_new} denotes the dimension of the vector W_H^T h corresponding to candidate t_new.
At block 730, the individuals are ranked according to their score F[•].
At block 740, an individual person or entity associated with the top-ranked score may be recommended to the user.
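Blocks 720-740 (scoring per Equation 7, ranking, and surfacing the top candidate) can be sketched as below. The toy weights are random stand-ins; in practice they come from the trained model, and reading the candidate's dimension of W_H^T h as its score follows the Equation 7 form given above.

```python
import numpy as np

def rank_candidates(s, c, T, candidates, WS, WC, WT, WH, bH):
    """Score each candidate t_new via Equation 7, then rank highest first:
    h = tanh(W_S^T s + W_C^T c + W_T^T T + b_H); score = (W_H^T h)_{t_new}."""
    h = np.tanh(WS @ s + WC @ c + WT @ T + bH)   # Equation 7a
    scores = WH @ h                              # W_H^T h; one entry per contact
    return sorted(candidates, key=lambda t: scores[t], reverse=True)

rng = np.random.default_rng(2)
WS = rng.normal(size=(3, 2)); WC = rng.normal(size=(3, 3))
WT = rng.normal(size=(3, 4)); WH = rng.normal(size=(4, 3)); bH = np.zeros(3)
ranked = rank_candidates(np.array([1., 0.]), np.array([2., 0., 1.]),
                         np.array([1., 0., 0., 0.]), [1, 2, 3],
                         WS, WC, WT, WH, bH)
# ranked[0] is the top-ranked recommendation to surface to the user first;
# on rejection (block 755), the next entry of `ranked` is offered.
```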
At block 750, it is determined whether the user accepts the recommendation(s) provided at block 740. It is noted that user acceptance of a recommendation automatically generates a new x′, whereby the new x′ includes an updated T′ field incorporating the accepted tnew. If yes (user accepts recommendation), then method 640.1 proceeds to block 760. If no, method 640.1 proceeds to block 755.
At block 755, the next highest ranked tnew may be recommended to the user. Following block 755, it may then be determined again whether the user accepts the recommendation, at block 750.
At block 760, the new x′ may be input to another iteration of prediction phase 300 to generate new people recommendation(s).
In an exemplary embodiment, depending on whether a user accepts or rejects a recipient recommendation, such information may be used to re-train the de-noising auto-encoding algorithms described hereinabove, e.g., with reference to
In
At block 820, it is determined whether user 201 accepts recommendation 230b or not. If yes, the method 800 proceeds to block 830. If no, the method 800 proceeds to block 840.
At block 830, as recommendation 230b is accepted by user 201, user history 220a is updated, and new parameters 230a for a next people recommendation may be received.
Alternatively, at block 840, as recommendation 230b is not accepted by user 201, method 800 will receive information from application 210 regarding the correct people (P*) to include for the current content parameters 230a, e.g., as indicated directly by the user. For example, in certain instances, system 200 may recommend a candidate recipient (230b) for an email (230a) being composed by user 201, and user 201 may reject the candidate recipient. User 201 may instead choose an alternative recipient (P*) as the correct recipient.
At block 860, based on the indication of the correct recipient (P*) as indicated by user 201, system 200 may perform real-time updating or training of system parameters using the data set defined by P* and current parameters 230a. In an exemplary embodiment, such updating or training may correspond to training the encoding and decoding functions f*(•) and g*(•) by defining a new data sample x*, wherein the T component of x* accounts for the indication of the correct recipient P* as indicated by user 201 at block 850.
At block 920, using a decoding function, the encoded vector is decoded to generate a relevance group.
At block 930, a recommendation is generated comprising a member of the relevance group not in the pre-specified recipient group.
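The encode/decode/recommend steps of blocks 920 and 930 can be sketched as follows, assuming the encoding and decoding functions are given as callables and the decoder emits one relevance score per candidate recipient (assumptions for illustration):

```python
import numpy as np

def recommend(x, encode, decode, pre_specified, top_k=3):
    """Encode the input vector, decode it into per-candidate relevance
    scores, and recommend the top-scoring candidates that are not
    already in the pre-specified recipient group."""
    h = encode(x)                          # encode the input vector
    scores = decode(h)                     # block 920: relevance scores
    ranked = np.argsort(scores)[::-1]      # highest relevance first
    # Block 930: exclude recipients the user has already specified.
    return [int(t) for t in ranked if int(t) not in pre_specified][:top_k]

# Toy usage with hypothetical linear encoder/decoder weights.
rng = np.random.default_rng(1)
n_people, n_hidden = 6, 4
W_enc = rng.standard_normal((n_hidden, n_people))
W_dec = rng.standard_normal((n_people, n_hidden))
x = np.zeros(n_people)
x[[0, 2]] = 1.0                            # pre-specified recipient group {0, 2}
recs = recommend(x, lambda v: np.tanh(W_enc @ v), lambda h: W_dec @ h,
                 pre_specified={0, 2})
```

Filtering out the pre-specified group before truncating to the top k guarantees that every recommendation is a member of the relevance group not already in the recipient group, as the claim requires.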
In this specification and in the claims, it will be understood that when an element is referred to as being “connected to” or “coupled to” another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” or “directly coupled to” another element, there are no intervening elements present. Furthermore, when an element is referred to as being “electrically coupled” to another element, it denotes that a path of low resistance is present between such elements, while when an element is referred to as being simply “coupled” to another element, there may or may not be a path of low resistance between such elements.
The functionality described herein can be performed, at least in part, by one or more hardware and/or software logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
Inventors: Wang, Ye-Yi; Guo, Chenlei; Gao, Jianfeng; Zou, Yang; Song, Xinying; Shen, Yelong; Remick, Brian D.; Stepp, Mariana; Byun, Byungki; Thiele, Edward; Ali, Mohammed Aatif; Gois, Marcus; Jetley, Divya; Friesen, Stephen
Assignee: Microsoft Technology Licensing, LLC (application filed Jul 28, 2015; assignments of assignors' interest executed Jul 23 – Aug 3, 2015).