A method and system to produce and train composite similarity functions for record linkage problems, including product normalization problems, is disclosed. In one embodiment, for a group of products in a plurality of products, a composite similarity function is constructed for the group of products from a weighted set of basis similarity functions. Training records are used to calculate the weights in the weighted set of basis similarity functions in the composite similarity function for the group of products. In another embodiment, a composite similarity function is applied to pairs of training records. The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. The composite similarity function includes a weighted set of basis similarity functions. A perceptron algorithm is used to modify the weights in the weighted set.
9. A system for training a composite similarity function to identify when two records refer to a common underlying subject, comprising:
one or more processors;
memory;
means for applying the composite similarity function ƒ*(R1,R2) to pairs of training records,
ƒ*(R1,R2)=ƒ[ƒ1(R1,R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein application of the composite similarity function ƒ*(R1,R2) provides a composite similarity number that is used to indicate whether two records relate to the common underlying subject,
wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, the common underlying subject selected from the group consisting of a product, a seller, a person, and a reference, and
wherein the composite similarity function ƒ*(R1,R2) includes a weighted set of basis similarity functions; and
means for using a perceptron algorithm to modify the respective weights αi in the weighted set.
7. A system for training a composite similarity function to identify when two records refer to a common underlying subject, comprising at least one computer, wherein said at least one computer is configured to:
apply the composite similarity function ƒ*(R1,R2) to pairs of training records,
ƒ*(R1,R2)=ƒ[ƒ1(R1,R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein application of the composite similarity function ƒ*(R1,R2) provides a composite similarity number that is used to indicate whether two records relate to the common underlying subject,
wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, the common underlying subject selected from the group consisting of a product, a seller, a person, and a reference, and
wherein the composite similarity function ƒ*(R1,R2) includes a weighted set of basis similarity functions; and
use a perceptron algorithm to modify the respective weights αi in the weighted set.
2. A computer-implemented method for training a composite similarity function to identify when two records refer to a common underlying subject, comprising:
at a computer comprising memory and one or more processors:
applying the composite similarity function ƒ*(R1,R2) to pairs of training records,
ƒ*(R1,R2)=ƒ[ƒ1(R1, R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein the composite similarity function ƒ*(R1,R2) is configured to provide a composite similarity number that is used to indicate whether two records relate to the common underlying subject,
wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, the common underlying subject selected from the group consisting of a product, a seller, a person, and a reference, and
wherein the composite similarity function ƒ*(R1,R2) includes a weighted set of basis similarity functions; and
using a perceptron algorithm to modify the respective weights αi in the weighted set.
1. A computer-implemented method for training a composite similarity function to identify when two records refer to a common underlying subject, comprising:
at a computer comprising memory and one or more processors:
applying the composite similarity function ƒ*(R1,R2) to pairs of training records,
wherein application of the composite similarity function ƒ*(R1,R2) provides a number that can be used to indicate whether two records relate to a common underlying subject, and
wherein the composite similarity function ƒ*(R1,R2) is a transform of a weighted linear combination of basis similarity functions:
ƒ*(R1,R2)=ƒtransform[α1ƒ1(R1,R2)+ . . . +αKƒK(R1,R2)],
wherein
ƒ*(R1,R2) is the composite similarity function, wherein the composite similarity function is configured to provide a composite similarity number that is used to indicate whether two records relate to the common underlying subject, and wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, which is selected from the group consisting of a product, a seller, a person, and a reference,
R1 and R2 are two records,
ƒtransform[ ] is a transform function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions; and
using an averaged perceptron algorithm to modify the respective weights αi in the weighted linear combination.
8. A machine readable medium having stored thereon data representing sequences of instructions for training a composite similarity function to identify when two records refer to a common underlying subject, which when executed by a computer, cause the computer to:
apply the composite similarity function ƒ*(R1,R2) to pairs of training records,
ƒ*(R1,R2)=ƒ[ƒ1(R1,R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein application of the composite similarity function ƒ*(R1,R2) is configured to provide a composite similarity number that can be used to indicate whether two records relate to the common underlying subject,
wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, the common underlying subject selected from the group consisting of a product, a seller, a person, and a reference, and
wherein the composite similarity function ƒ*(R1,R2) includes a weighted set of basis similarity functions; and
use a perceptron algorithm to modify the respective weights αi in the weighted set.
16. A system for producing a composite similarity function to identify when two records refer to a common underlying subject, comprising at least one computer, wherein said at least one computer is configured to:
for a group of subjects in a plurality of subjects,
construct the composite similarity function ƒ*(R1,R2) for the group of subjects from a weighted set of basis similarity functions,
ƒ*(R1,R2)=ƒ[ƒ1(R1,R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function, wherein the composite similarity function is configured to provide a composite similarity number that is used to indicate whether two records relate to the common underlying subject, and wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject, which is selected from the group consisting of a product, a seller, a person, and a reference,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein a basis similarity function ƒi(R1,R2) is configured to provide a numerical indication of the similarity of entries in corresponding fields in data records for subjects in the group of subjects; and
use training records to calculate the respective weights αi in the weighted set of basis similarity functions in the composite similarity function ƒ*(R1,R2) for the group of subjects.
10. A computer-implemented method for producing a composite similarity function configured to identify when two records refer to a common underlying subject, comprising:
at a computer comprising memory and one or more processors:
for a group of subjects in a plurality of subjects,
constructing the composite similarity function ƒ*(R1,R2) for the group of subjects from a weighted set of basis similarity functions,
ƒ*(R1,R2)=ƒ[ƒ1(R1,R2), . . . , ƒK(R1,R2), α1, . . . , αK] wherein
ƒ*(R1,R2) is the composite similarity function, wherein the composite similarity function is configured to provide a composite similarity number that is used to indicate whether two records relate to the common underlying subject, which is selected from the group consisting of a product, a seller, a person, and a reference, and wherein the composite similarity number is adapted to facilitate identification and display of records for the common underlying subject,
R1 and R2 are two records,
ƒ[ ] is a function,
ƒi(R1,R2) are respective basis similarity functions, and
αi are respective weights for respective basis similarity functions,
wherein a basis similarity function ƒi(R1,R2) is configured to provide a numerical indication of the similarity of entries in corresponding fields in data records for subjects in the group of subjects; and
using training records to calculate the respective weights αi in the weighted set of basis similarity functions in the composite similarity function ƒ*(R1,R2) for the group of subjects.
3. The method of claim 2, wherein
ƒ*(R1,R2)=ƒtransform[α1ƒ1(R1,R2)+ . . . +αKƒK(R1,R2)], wherein
ƒtransform[ ] is a transform function, and
α1ƒ1(R1,R2)+ . . . +αKƒK(R1,R2) is the weighted linear combination of basis similarity functions.
11. The method of claim 10, wherein
ƒ*(R1,R2)=ƒtransform[α1ƒ1(R1,R2)+ . . . +αKƒK(R1,R2)], wherein
ƒtransform[ ] is a transform function, and
α1ƒ1(R1,R2)+ . . . +αKƒK(R1,R2) is the weighted linear combination of basis similarity functions.
13. The method of
15. The method of
The disclosed embodiments relate generally to machine learning. More particularly, the disclosed embodiments relate to methods and systems to produce and train composite similarity functions for record linkage problems, including product normalization problems.
Record linkage is the problem of identifying when two (or more) references to an object are referring to the same entity (i.e., the references are “co-referent”). One example of record linkage is identifying whether two paper citations (which may be in different styles and formats) refer to the same actual paper. Addressing the record linkage problem is important in a number of domains where multiple users, organizations, or authors may describe the same item using varied textual descriptions.
Historically, one of the most studied problems in record linkage is determining whether two database records for a person are referring to the same real-life individual. In applications from direct marketing to survey response (e.g., the U.S. Census), record linkage is often seen as an important step in data cleaning in order to avoid waste and maintain consistent data.
More recently, record linkage has become an issue in several web applications. For example, the task of determining whether two paper citations refer to the same true publication is an important problem in online systems for scholarly paper searches, such as CiteSeer (http://citeseer.ist.psu.edu) and Google Scholar (http://scholar.google.com).
A new record linkage problem—called product normalization—arises in online comparison shopping. Here, two different websites may sell the same product, but provide different descriptions of that product to a comparison shopping database. (Note: Records containing product descriptions are also called “offers” herein.) Variations in the comparison shopping database records can occur for a variety of reasons, including spelling errors, typographical errors, abbreviations, or different but equivalent descriptions that are used to describe the same product. For example, in online comparison shopping, shopping bots like Froogle (http://froogle.google.com) and MySimon (http://www.mysimon.com) merge heterogeneous data from multiple merchant websites into one product database. This combined product database is then used to provide one common access point for the customer to compare product specifications, pricing, shipping, and other information. In such cases, two websites may have two different product offers that refer to the same underlying product, e.g., “Canon ZR 65 MC Camcorder” and “Canon ZR65 Digital MiniDV Camcorder.”
Thus, a comparison shopping engine is faced with the record linkage problem of determining which such offers are referring to the same true underlying product. Solving this product normalization problem allows the shopping engine to display multiple offers for the same product to a user who is trying to determine from which vendor to purchase the product. Accurate product normalization is also important for data mining tasks, such as analysis of pricing trends.
In online comparison shopping, the number of vendors and the sheer number of products (with potentially very different characteristics) make it very difficult to manually craft a single function that can adequately determine if two arbitrary offers are for the same product. Moreover, for different categories of products, different similarity functions may be needed that capture the notion of equivalence for each category. Hence, a method and system that provide for efficient production and training of similarity functions between offers and/or between product categories is needed.
Furthermore, in many record linkage tasks, such as product normalization, the records to be linked actually contain multiple fields (e.g., product name, description, manufacturer, price, etc.). Such records may either come in a pre-structured form (e.g., XML or relational database records), or such fields may have been extracted from an underlying textual description. Hence, a method and system that provide for efficient production and training of similarity functions between offers with multiple fields is also needed.
Another consideration in record linkage problems like product normalization is the fact that new data is continuously becoming available. As a result, a learning approach to the linkage problem in such settings should be able to readily use new training data without having to retrain on previously seen data.
Thus, it would be highly desirable to develop methods and systems that efficiently produce and train composite similarity functions for record linkage problems, including product normalization problems.
The present invention overcomes the problems described above.
One aspect of the invention is a computer-implemented method that involves, for a group of products in a plurality of products, constructing a composite similarity function for the group of products from a weighted set of basis similarity functions and using training records to calculate the weights in the weighted set of basis similarity functions in the composite similarity function for the group of products. A basis similarity function provides a numerical indication of the similarity of entries in corresponding fields in data records for products in the group of products.
Another aspect of the invention is a system comprising at least one computer. The at least one computer is configured to, for a group of products in a plurality of products, construct a composite similarity function for the group of products from a weighted set of basis similarity functions and use training records to calculate the weights in the weighted set of basis similarity functions in the composite similarity function for the group of products.
Another aspect of the invention involves a machine readable medium having stored thereon data representing sequences of instructions, which when executed by a computer, cause the computer to, for a group of products in a plurality of products, construct a composite similarity function for the group of products from a weighted set of basis similarity functions and use training records to calculate the weights in the weighted set of basis similarity functions in the composite similarity function for the group of products.
Another aspect of the invention involves a system that comprises, for a group of products in a plurality of products, means for constructing a composite similarity function for the group of products from a weighted set of basis similarity functions and means for using training records to calculate the weights in the weighted set of basis similarity functions in the composite similarity function for the group of products.
Another aspect of the invention is a computer-implemented method in which a composite similarity function is applied to pairs of training records. The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. The composite similarity function includes a weighted set of basis similarity functions. A perceptron algorithm is used to modify the weights in the weighted set.
Another aspect of the invention is a system comprising at least one computer. The at least one computer is configured to apply a composite similarity function to pairs of training records. The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. The composite similarity function includes a weighted set of basis similarity functions. The at least one computer is also configured to use a perceptron algorithm to modify the weights in the weighted set.
Another aspect of the invention involves a machine readable medium having stored thereon data representing sequences of instructions, which when executed by a computer, cause the computer to apply a composite similarity function to pairs of training records. The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. The composite similarity function includes a weighted set of basis similarity functions. When executed by a computer, the instructions also cause the computer to use a perceptron algorithm to modify the weights in the weighted set.
Another aspect of the invention involves a system that comprises means for applying a composite similarity function to pairs of training records. The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. The composite similarity function includes a weighted set of basis similarity functions. The system also comprises means for using a perceptron algorithm to modify the weights in the weighted set.
Thus, the invention efficiently produces and trains composite similarity functions for record linkage problems, including product normalization problems.
For a better understanding of the aforementioned aspects of the invention as well as additional aspects and embodiments thereof, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Methods and systems are described that show how to produce and train composite similarity functions for record linkage problems, including product normalization problems. Reference will be made to certain embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the embodiments, it will be understood that it is not intended to limit the invention to these particular embodiments alone. On the contrary, the invention is intended to cover alternatives, modifications and equivalents that are within the spirit and scope of the invention as defined by the appended claims.
Moreover, in the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these particular details. In other instances, methods, procedures, components, and networks that are well-known to those of ordinary skill in the art are not described in detail to avoid obscuring aspects of the present invention.
Each of the above identified modules and applications corresponds to a set of instructions for performing a function described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 206 may store a subset of the modules and data structures identified above. Furthermore, memory 206 may store additional modules and data structures not described above.
Producing Composite Similarity Functions
In record linkage, a function is created that is used to determine the degree of similarity between records. For example, any binary classifier that produces confidence scores can be used to estimate the overall similarity of a record pair (Ri1, Ri2) by classifying the corresponding feature vector xi and treating classification confidence as similarity.
Records in a product database generally have multiple attributes of different types, each of which has an associated similarity measure. For instance, string similarity measures like edit distance or cosine similarity can be used to compare textual attributes like product name and description. Numerical functions (e.g., relative difference) can be used for real-valued attributes like price. Customized similarity functions can be used for categorical attributes, e.g., tree proximity can be used as a similarity measure for a categorical attribute that corresponds to the location of an item in the product category hierarchy. (See the discussion of Offers A and B below for more details.)
An adaptive framework for learning similarity functions is beneficial because it lets a product normalization algorithm be domain-independent. Consider the following example. When performing record linkage for product normalization, equivalence of book titles and author names is generally highly indicative of co-referent book records. So, the weight of the string similarity measure corresponding to the product name attribute should be high for the book domain. On the other hand, product name similarity is often insufficient by itself to link records corresponding to offers for electronic products. For example, “Toshiba Satellite M35X-S309 notebook” and “Toshiba Satellite M35X-S309 notebook battery” have a high textual similarity but refer to different products. At the same time, for high-end electronic items, price similarity is an important indicator of offer equivalence—the notebook and the battery records have very different prices, indicating that they are not co-referent. Compared to the weights in the book domain, the weight of the basis similarity function corresponding to product name in this example should be lower and the weight of the basis function measuring price similarity should be higher. Thus, an adaptive framework that can learn a composite similarity function, customized for a particular group of products (e.g., a product category), from training data is useful for a general purpose product normalization algorithm.
Basis Functions
In some embodiments, a set of K basis functions f1 (R1, R2), f2 (R1, R2), . . . , fK (R1, R2) is defined, where each is a basis similarity function 240 between data in corresponding fields in two records R1 and R2. While some similarity functions may only take into account the data in individual fields of the records, other similarity functions may take into account data in multiple fields of the records. The methods disclosed herein do not require that the basis functions 240 operate only on single fields of records. Indeed, the methods presented here are general enough to make use of arbitrarily complex functions of two records, e.g., concatenations of multiple attributes. However, for clarity and easier applicability to real-world tasks, basis similarity functions of single fields are described here. In some embodiments, a composite similarity function, denoted f*, is produced from a linear combination (with corresponding weights αi and an additional threshold parameter α0) of the basis functions:
f*(R1, R2)=α0+α1f1(R1, R2)+α2f2(R1, R2)+ . . . +αKfK(R1, R2)
Values provided by f* are not constrained to be positive: the learning method described below assumes that the threshold α0 may take on a negative value, so that f* can return a negative value for pairs of records that are not equivalent.
In some embodiments, once trained, f* can be used to produce a similarity matrix S over all pairs of records. In turn, S can be used with a similarity based clustering algorithm to determine clusters, each of which contains a set of records that presumably should be linked. Each cluster can be interpreted as a set of records referring to the same true underlying item.
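As a concrete illustration of this step, the following Python sketch builds S from a trained f* and then groups records by simple single-link thresholding (connected components). The clustering choice is an assumption made here for illustration only — the description does not fix a particular similarity-based clustering algorithm — and the function and variable names are illustrative.

from itertools import combinations

def similarity_matrix(records, f_star):
    # S[i][j] = f*(R_i, R_j) over every pair of records
    n = len(records)
    S = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        S[i][j] = S[j][i] = f_star(records[i], records[j])
    return S

def cluster(S, threshold):
    # Single-link sketch: records whose pairwise similarity exceeds the
    # threshold fall into the same connected component (cluster).
    n = len(S)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if S[i][j] > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

Each returned group is then interpreted as a set of records referring to the same true underlying item.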
Pair Space Representation
Identifying co-referent records requires classifying every candidate pair of records as belonging to the class of matches M or to the class of nonmatches U. Given some domain ΔR from which each record is sampled, and K basis similarity functions fk: ΔR×ΔR→ℝ that operate on pairs of records, a pair-space vector xi∈ℝK+1 can be produced for every pair of records (Ri1, Ri2) as
xi=[1, f1(Ri1, Ri2), . . . , fK(Ri1, Ri2)]T
where the K values obtained from the basis similarity functions are concatenated with a default attribute that always has value 1, which corresponds to the threshold parameter α0. The superscript T denotes the matrix transpose, which makes xi a column vector (a (K+1)-by-1 matrix) rather than a row vector (a 1-by-(K+1) matrix).
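In Python, the pair-space mapping is a one-liner once the basis functions are given as callables; this is a minimal sketch with illustrative names, assuming records are simple field-to-value mappings.

from typing import Callable, List, Sequence

Record = dict  # illustrative: a record maps field names to values

def pair_space_vector(r1: Record, r2: Record,
                      basis_fns: Sequence[Callable[[Record, Record], float]]) -> List[float]:
    # x_i = [1, f_1(R_i1, R_i2), . . . , f_K(R_i1, R_i2)];
    # the leading 1 is the default attribute paired with the threshold weight alpha_0.
    return [1.0] + [f(r1, r2) for f in basis_fns]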
For a group of products in a plurality of products, a composite similarity function is constructed from a weighted set of basis similarity functions (e.g., by function generator 230) (302). As explained above and illustrated by example below, a basis similarity function 240 provides a numerical indication of the similarity of entries in corresponding fields in two data records for products in the group of products. In some embodiments, the composite similarity function is a transform of a weighted linear combination of basis similarity functions, such as a sigmoid function. In some embodiments, the basis similarity functions are kernel functions, which are well known in pattern analysis (e.g., see J. Shawe-Taylor and N. Cristianini, “Kernel Methods for Pattern Analysis”, Cambridge University Press, 2004).
For example, consider the following two offers with four attributes each:
Offer A:
attr1, Product Name: Canon EOS 20D Digital SLR Body Kit (Req. Lens) USA
attr2, Product Price: $1499.00
attr3, Product Description: Canon EOS 20d digital camera body (lens not included), BP511a battery, CG580 battery charger, USB cable, Video cable, instructions, warranty, 3 CDROM software discs, Wide strap.
attr4, Classified Category: 474 (Electronics->Cameras->Digital Cameras)
Offer B:
attr1, Product Name: Canon EOS 20d Digital Camera Body USA—Lens sold separately
attr2, Product Price: $1313.75
attr3, Product Description: Canon EOS 20D is a digital, single-lens reflex, AF/AE camera with built-in flash, providing 8.2 megapixel resolution and up to 23 consecutive frames at 5 fps.
attr4, Classified Category: 474 (Electronics->Cameras->Digital Cameras)
The attributes are of different types: attr1 and attr3 are textual (strings); attr2 is numeric; and attr4 is categorical. In the above example, the value “474” in attr4 is just an identifier, whose value corresponds to a specific category in a product hierarchy tree.
For product offers with these attributes, three types of basis functions 240 may be used—fcos, fnum, and fcat—each of which operates on attribute values of a particular type:
1. fcos (str1, str2): cosine similarity between string values str1 and str2:
fcos (str1, str2)=cos (TFIDF (str1), TFIDF (str2)), where TFIDF (str1) and TFIDF (str2) are Term Frequency–Inverse Document Frequency representations of str1 and str2 as numerical vectors, v1 and v2. These vectors have dimensionality equal to the total number of tokens (“words”) seen in the entire records database 220, but only the components that correspond to tokens present in a particular string are non-zero. For example, if the entire vocabulary has 20,000 different tokens, the string “Canon EOS” is represented by a 20,000-dimensional vector that has only two non-zero components, those corresponding to the ‘Canon’ and ‘EOS’ tokens. The cosine similarity of two vectors is defined as the dot product of the vectors divided by the product of their magnitudes: cos (v1, v2)=(v1·v2)/(∥v1∥ ∥v2∥).
2. fnum (n1, n2): one minus the relative difference between numeric values n1 and n2:
fnum (n1, n2)=1−|n1−n2|/((n1+n2)/2)
3. fcat (cat1, cat2): similarity between categorical values computed as the inverse of the hierarchy distance between categories:
fcat (cat1, cat2)=1/(1+Dist (cat1, cat2))
where Dist (cat1, cat2) is the distance between cat1 and cat2 in the category hierarchy—in other words, the number of categories between them in the tree.
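To make the three basis functions concrete, here is a minimal Python sketch. It simplifies the TF-IDF weighting to raw term frequencies (a real system would compute IDF statistics over the entire records database 220, as described above) and assumes a caller-supplied dist function for the category tree; all names are illustrative.

import math
from collections import Counter

def f_cos(str1, str2):
    # Cosine similarity between token-count vectors; a stand-in for the
    # TF-IDF vectors described above (IDF weighting omitted for brevity).
    v1, v2 = Counter(str1.lower().split()), Counter(str2.lower().split())
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm = math.sqrt(sum(c * c for c in v1.values())) * math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

def f_num(n1, n2):
    # One minus the relative difference between two numeric values.
    return 1.0 - abs(n1 - n2) / ((n1 + n2) / 2.0)

def f_cat(cat1, cat2, dist):
    # Inverse of the hierarchy distance; dist is assumed to return the
    # number of categories between cat1 and cat2 in the product tree.
    return 1.0 / (1.0 + dist(cat1, cat2))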
Note that one of ordinary skill in the art would recognize that other types of basis functions could be used beyond the three illustrated here. For example, other token-based or sequence-based string similarity functions, such as the string edit distance, could also be used to determine the similarity of product names and/or product descriptions.
If these three basis functions 240 are used on the four-attribute product descriptions for offers A and B, a 4-dimensional vector of similarity values, [v1 v2 v3 v4], is produced, where
v1=fcos (A.attr1, B.attr1),
v2=fnum (A.attr2, B.attr2),
v3=fcos (A.attr3, B.attr3), and
v4=fcat (A.attr4, B.attr4).
The actual similarity values computed by the basis functions 240 for offers A and B shown above are approximately the following: v1=0.7; v2=0.87; v3=0.08; and v4=1.0.
Now, assume that weights w1, w2, w3, and w4 corresponding to the basis similarity functions 240 for the four attributes have been learned, and that the resulting weighted similarity score is Sim (A, B)=(w1*v1)+(w2*v2)+(w3*v3)+(w4*v4)=0.77.
If the composite similarity function is the similarity score transformed by the sigmoid function, the following final score is obtained:
SimTransformed (A, B)=1/(1+exp(−Sim (A, B)))=1/(1+exp(−0.77))=0.68
where the composite similarity function, f*, is:
f*=1/(1+exp(−{(w1*fcos (A.attr1, B.attr1))+(w2*fnum (A.attr2, B.attr2))+(w3*fcos (A.attr3, B.attr3))+(w4*fcat (A.attr4, B.attr4))}))
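The learned weight values themselves are not given above, so the following Python sketch uses hypothetical weights (w1=0.50, w2=0.25, w3=0.05, w4=0.20), chosen only so that the weighted sum reproduces the 0.77 score in the example:

import math

v = [0.70, 0.87, 0.08, 1.00]  # basis similarity values for offers A and B
w = [0.50, 0.25, 0.05, 0.20]  # hypothetical learned weights (illustrative only)

sim = sum(wi * vi for wi, vi in zip(w, v))      # ~0.77
sim_transformed = 1.0 / (1.0 + math.exp(-sim))  # sigmoid transform, ~0.68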
As described below, training records are used to calculate the weights in the weighted set of basis similarity functions in the composite similarity function (e.g., by training module 250) (304). In some embodiments, the averaged perceptron algorithm is used to calculate the weights.
Training a Composite Similarity Function for Record Linkage
As noted above, any binary classifier that produces confidence scores can be used to estimate the overall similarity of a record pair (Ri1, Ri2) by classifying the corresponding feature vector xi and treating classification confidence as similarity. The classifier is typically trained using a corpus of labeled data in the form of pairs of records that are known to be either co-referent ((Ri1, Ri2) ∈M) or non-equivalent ((Ri1, Ri2) ∈U). Potential classifiers include, without limitation, the averaged perceptron, Naïve Bayes, decision trees, maximum entropy, and Support Vector Machines.
A composite similarity function is applied to pairs of training records (e.g., by training module 250) (402). The application of the composite similarity function provides a number that can be used to indicate whether two records relate to a common subject. In some embodiments, the common subject is a product. In other embodiments, the common subject is, without limitation: a seller; a person; a category, class, or other group of products; or a reference.
The composite similarity function includes a weighted set of basis similarity functions. In some embodiments, the composite similarity function is a transform of a weighted linear combination of basis similarity functions. In some embodiments, the basis similarity functions are kernel functions.
In some embodiments, a perceptron algorithm is used to modify the weights in the weighted set (e.g., by training module 250) (404). In some embodiments, the weights of each basis function in a linear combination are learned from labeled training data using a version of the voted perceptron algorithm, which is very efficient in an online learning setting for large volumes of streaming data. This algorithm can also be deployed in batch-mode learning using standard online-to-batch conversion techniques, and it has comparable empirical performance to state-of-the-art learning techniques such as boosting. In some embodiments, the perceptron algorithm is the averaged perceptron. In some embodiments, the perceptron algorithm is the voted perceptron.
The averaged perceptron algorithm, described in Table 1, is a space-efficient variation of the voted perceptron algorithm proposed and analyzed by Freund and Schapire. The averaged perceptron is a linear classifier that, when given an instance xi, generates a prediction of the form ŷi=sign (αavg·xi), where αavg is a vector of (K+1) real weights that is averaged over all weight vectors observed during the training process (as opposed to just using the final weight vector, as in the regular perceptron algorithm). Each of the weights corresponds to the importance of the corresponding basis similarity function, and the first component, αavg0, is the classification threshold separating the classes of co-referent and non-equivalent records. xi is the pair-space vector defined above. The label −1 is assigned to the class U of non-equivalent record pairs, and the label +1 is assigned to the class M of co-referent record pairs.
The averaged perceptron algorithm has several properties that make it particularly useful for large-scale streaming linkage tasks. First and foremost, it is an online learning algorithm: the similarity function parameters (weights) that it generates can be easily updated as more labeled examples become available without the need to retrain on all previously seen training data. Second, the averaged perceptron is a linear model that produces a hypothesis that is intuitive and easily interpretable by humans, which is an attractive property for a system to be deployed and maintained on a continuous real-world task. Third, the averaged perceptron is a discriminative classifier with strong theoretical performance guarantees.
Input: Training set of record pairs {(Ri1, Ri2, yi)}, yi ∈ {−1, +1},
number of epochs T,
similarity functions F = {fi(·,·)}i=1 to K
Output: Weight vector αavg = { αi}i = 0 to K
Algorithm:
Initialize αavg = α = 0
Initialize xi =[1, f1(Ri1, Ri2), . . . , fK (Ri1, Ri2)] for i = 1... M
For t = 1... T {
For i = 1... M {
Compute ŷi = sign (α · xi).
If ŷi≠ yi{
α = α + yi xi
}
αavg= αavg+ α
}
}
αavg = αavg / (T · M)
Table 1 shows the averaged perceptron training algorithm for learning the parameters (weights) αavg. Freund and Schapire have proved several theoretical properties of the algorithm, including the fact that the expected number of mistakes made by a classifier trained using the algorithm does not depend on the weight vector dimensionality. This is a useful feature of the algorithm because the freedom to vary the number of basis similarity functions and to extend them at will is highly desirable in many applications. Having theoretical guarantees that such additions will not harm performance allows for experimentation with different basis functions 240 without the fear that bad local optima will arise due to correlations between attributes.
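The procedure in Table 1 can be rendered directly in Python; the sketch below assumes the pair-space vectors have already been assembled into a matrix, and follows the pseudocode's variable names.

import numpy as np

def train_averaged_perceptron(X, y, epochs):
    # X: (M, K+1) matrix of pair-space vectors x_i = [1, f_1, . . . , f_K]
    # y: length-M array of labels in {-1, +1}
    # Returns the averaged weight vector alpha_avg (length K+1).
    M, d = X.shape
    alpha = np.zeros(d)
    alpha_avg = np.zeros(d)
    for _ in range(epochs):
        for i in range(M):
            y_hat = 1.0 if alpha @ X[i] >= 0 else -1.0  # sign(alpha . x_i)
            if y_hat != y[i]:
                alpha += y[i] * X[i]                    # mistake-driven update
            alpha_avg += alpha                          # accumulate every round
    return alpha_avg / (epochs * M)

A pair is then predicted co-referent when αavg·xi≥0, i.e., ŷi=sign (αavg·xi).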
The algorithm can also be viewed as minimizing the cumulative hinge loss suffered on a stream of examples. As every training record pair (Ri1, Ri2, yi) with a corresponding feature vector xi is presented to the learner, it incurs a (hinge) loss L(xi, yi)=max{−yi α·xi, 0}, and the vector of weights α is updated in the direction of the gradient to reduce the loss: α=α−∂L(xi, yi)/∂α. Intuitively, this training procedure corresponds to iterative evaluation of the prediction for every training pair; if the prediction differs from the true label, the weights are adjusted to correct for the error. This view can lead to variations of the algorithm using other loss functions, e.g., the log-loss Llog(xi, yi)=ln(1+exp(−yi α·xi)).
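The two updates described in this paragraph look like this in Python; a sketch under the same notation, with the learning rate for the log-loss variant an added assumption:

import numpy as np

def hinge_update(alpha, x, y):
    # L(x, y) = max{-y * (alpha . x), 0}; the gradient w.r.t. alpha on a
    # mistake is -y * x, so the step alpha += y * x matches Table 1.
    if y * (alpha @ x) <= 0:
        alpha = alpha + y * x
    return alpha

def log_loss_update(alpha, x, y, lr=1.0):
    # L_log(x, y) = ln(1 + exp(-y * (alpha . x)));
    # gradient step: alpha += lr * y * x / (1 + exp(y * (alpha . x)))
    margin = y * (alpha @ x)
    return alpha + lr * y * x / (1.0 + np.exp(margin))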
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Bilenko, Mikhail, Sahami, Mehran, Basu, Sugato