A statistical machine translation (SMT) system employs a conditional translation probability conditioned on the source language content. A model parameters optimization engine is configured to optimize values of parameters of the conditional translation probability using a translation pool comprising candidate aligned translations for source language sentences having reference translations. The model parameters optimization engine adds candidate aligned translations to the translation pool by sampling available candidate aligned translations in accordance with the conditional translation probability.
5. A non-transitory storage medium storing instructions executable on a digital processor to perform a method including (i) translating source language content in a source natural language to a target natural language based on a conditional translation probability conditioned on the source language content and (ii) tuning the conditional translation probability using a translation pool,
wherein the tuning includes:
selecting candidate aligned translations for a source language sentence having a reference translation by sampling available candidate aligned translations for the source language sentence in accordance with the conditional translation probability conditioned on the source language sentence; wherein the conditional translation probability for a translation (e,a) of source language content f is monotonically increasing with

$$\sum_{k=1}^{K} \lambda_k h_k(e,a,f)$$

where hk(. . .), k=1, . . . , K denotes a set of K feature functions and λk, k=1, . . . , K denotes tuning parameters of the conditional translation probability;
wherein the selecting comprises:
constructing a translation lattice representing the available candidate aligned translations for the source language sentence; and
sampling the available candidate aligned translations by sampling paths through the translation lattice from its root node to its final node, wherein each transition from a current node to a next node is selected based on the conditional translation probabilities of the available edges ei leading away from the current node; and
adding the selected candidate aligned translations to the translation pool.
1. A method comprising:
translating source language content in a source natural language to a target natural language using statistical machine translation (SMT) employing a conditional translation probability conditioned on the source language content; and
optimizing values of parameters of the conditional translation probability by an iterative optimization process operating on a translation pool, the optimizing including adding candidate aligned translations to the translation pool by sampling available candidate aligned translations for a source language sentence in accordance with the conditional translation probability; wherein the conditional translation probability is quantitatively equivalent to:

$$P(e,a \mid f) = \frac{\exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e,a,f)\right)}{\sum_{(e',a') \in L} \exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e',a',f)\right)}$$

where f denotes the source language sentence, L denotes the set of available candidate aligned translations, (e, a) denotes a candidate aligned translation for which the conditional probability is computed, hk(. . .), k=1, . . . , K denotes a set of K feature functions, and λk, k=1, . . . , K denotes the parameters of the conditional translation probability;
wherein the sampling comprises:
selecting a sampled candidate aligned translation by traversing a translation lattice representing the available candidate aligned translations for the source language sentence from its root node to its final node, wherein each transition from a current node to a next node is selected based on the conditional translation probabilities of the available edges ei leading away from the current node;
wherein the SMT and the optimizing are implemented by an SMT system embodied by at least one digital processor.
8. An apparatus comprising:
a statistical machine translation (SMT) system employing a conditional translation probability conditioned on the source language content; and
a model parameters optimization engine configured to optimize values of parameters of the conditional translation probability using a translation pool comprising candidate aligned translations for source language sentences having reference translations, the model parameters optimization engine adding candidate aligned translations to the translation pool by sampling available candidate aligned translations in accordance with the conditional translation probability; wherein the conditional translation probability is quantitatively equivalent to:

$$P(e,a \mid f) = \frac{\exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e,a,f)\right)}{\sum_{(e',a') \in L} \exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e',a',f)\right)}$$

where f denotes the source language sentence, L denotes the set of available candidate aligned translations, (e, a) denotes a candidate aligned translation for which the conditional probability is computed, hk(. . .), k=1, . . . , K denotes a set of K feature functions, and λk, k=1, . . . , K denotes the parameters of the conditional translation probability;
wherein the sampling comprises:
selecting a sampled candidate aligned translation by traversing a translation lattice representing the available candidate aligned translations for the source language sentence from its root node to its final node, wherein each transition from a current node to a next node is selected based on the conditional translation probabilities of the available edges ei leading away from the current node;
wherein the SMT system and the model parameters optimization engine are embodied by one or more digital processors.
2. The method as set forth in
3. The method as set forth in
4. The method as set forth in
$$(e^*, a^*) = \operatorname*{arg\,max}_{(e,a)} \sum_{k=1}^{K} \lambda_k^{opt} h_k(e,a,f)$$

where λkopt, k=1, . . . , K denotes the parameters of the conditional translation probability optimized by the optimizing, and (e*,a*) denotes the translation.
6. The storage medium as set forth in
7. The storage medium as set forth in
$$t^* = \operatorname*{arg\,max}_{t} \sum_{k=1}^{K} \lambda_k^{opt} h_k(t,f)$$

where λkopt, k=1, . . . , K denotes the parameters of the conditional translation probability optimized by the optimizing, and t* denotes the translation.
9. The apparatus as set forth in
10. The method as set forth in
11. An apparatus comprising:
a non-transitory storage medium as set forth in
a digital processor in operative communication with the non-transitory computer-readable storage medium and configured to execute instructions stored on the non-transitory computer-readable storage medium.
The following relates to the machine translation arts, the statistical machine translation arts, and so forth.
Machine (or automated) translation from a source language to a target language is known. For example, such machine translation may automatically translate a source-language sentence in English, French, Chinese, or another natural language, to a corresponding target-language sentence in another natural language. Some machine translation systems further include a user interface via which the machine translation is presented to a user as a proposed translation, which may be accepted, rejected, or modified by the user via the user interface.
In translation memory systems, a translation memory stores previously translated text as source language content and corresponding translated target language content, with corresponding textual units (e.g., words, phrases, sentences, or so forth) in the source and target languages associated together. When source-language content is received for translation, it is compared with the source-language contents of the translation memory. If a match, or approximate match, is found, the corresponding aligned target language content is presented to the user. If the match is approximate, the user may also be informed of the differences. The translation memory approach depends upon the memory contents being accurate and sufficiently comprehensive to encompass a usefully large portion of the source-language content received for translation.
Another known technique for machine translation is statistical machine translation (SMT). In this approach, a database of source language-target language phrase pairs is stored as a phrase table. (The term “phrase” as used herein and in the SMT literature generally is to be understood as a unit of text, e.g. a word or sequence of words, in some instances possibly including punctuation—the term “phrase” is not limited herein or in the SMT literature generally to grammatical phrases.) A translation model is provided or developed. This model comprises an aligned translation conditional probability. The “aligned translation” comprises one or more target language phrases in a particular sequence (i.e., alignment), with each target language phrase corresponding to a phrase of the source language content. In operation, the SMT generates candidate translations for received source language content to be translated by selecting target language phrases from the phrase table that match source language phrases of the source language content. The translation model is used to assess the candidate translations so as to select a translation having a high probability as assessed by the model. Since the number of candidate translations can be too large to search exhaustively, in some SMT configurations the translation model is used to guide the generation of candidate translations, for example by modifying previously generated candidate translations to generate new candidate translations having high probabilities as assessed by the model.
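By way of a non-limiting illustration, the following minimal Python sketch shows phrase-table-based candidate generation in the spirit of the foregoing description. The table contents and the names PHRASE_TABLE and generate_candidates are invented for the example, and only monotone (source-order) alignments are enumerated; an actual SMT decoder also permutes phrase order and prunes the search.

```python
from itertools import product

# Toy phrase table: each source phrase maps to its target-language options.
PHRASE_TABLE = {
    "the house": ["la maison", "la demeure"],
    "is red": ["est rouge"],
}

def generate_candidates(source_phrases):
    """Enumerate candidate translations by combining, in source order,
    every target option for each source phrase."""
    options = [PHRASE_TABLE[p] for p in source_phrases]
    for combo in product(*options):
        yield " ".join(combo)

print(list(generate_candidates(["the house", "is red"])))
# ['la maison est rouge', 'la demeure est rouge']
```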
Similarly to the translation memory approach, SMT depends on the comprehensiveness and accuracy of the phrase table. However, since the phrases are generally substantially shorter than the textual units of a translation memory, it is generally easier to generate an accurate and reasonably comprehensive phrase table. SMT also depends on the accuracy of the translation model. Toward this end, the translation model is generally constructed to be “tunable”, that is, the translation model includes model parameters that can be optimized based on a development dataset comprising source language sentences and corresponding aligned target language translations.
The following discloses various improvements in machine translation apparatuses and methods.
In some illustrative embodiments disclosed as illustrative examples herein, a method comprises: translating source language content in a source natural language to a target natural language using statistical machine translation (SMT) employing a conditional translation probability conditioned on the source language content; and optimizing values of parameters of the conditional translation probability by an iterative optimization process operating on a translation pool, the optimizing including adding candidate aligned translations to the translation pool by sampling available candidate aligned translations for a source language sentence in accordance with the conditional translation probability. The SMT and the optimizing are suitably implemented by an SMT system embodied by at least one digital processor.
In some illustrative embodiments disclosed as illustrative examples herein, a statistical machine translation (SMT) system is embodied by at least one digital processor performing the method of the immediately preceding paragraph. In some illustrative embodiments disclosed as illustrative examples herein, a storage medium stores instructions executable on a digital processor to perform the method of the immediately preceding paragraph.
In some illustrative embodiments disclosed as illustrative examples herein, a storage medium stores instructions executable on a digital processor to perform a method including (i) translating source language content in a source natural language to a target natural language based on a conditional translation probability conditioned on the source language content and (ii) tuning the conditional translation probability using a translation pool, wherein the tuning includes (I) selecting candidate aligned translations for a source language sentence having a reference translation by sampling available candidate aligned translations for the source language sentence in accordance with the conditional translation probability conditioned on the source language sentence, and (II) adding the selected candidate aligned translations to the translation pool.
In some illustrative embodiments disclosed as illustrative examples herein, an apparatus comprises: a statistical machine translation (SMT) system employing a conditional translation probability conditioned on the source language content; and a model parameters optimization engine configured to optimize values of parameters of the conditional translation probability using a translation pool comprising candidate aligned translations for source language sentences having reference translations, the model parameters optimization engine adding candidate aligned translations to the translation pool by sampling available candidate aligned translations in accordance with the conditional translation probability. The SMT system and the model parameters optimization engine are suitably embodied by one or more digital processors.
With reference to
It is also contemplated to omit a human user from the processing. For example, the source language content 20 may be generated automatically, for example by an application program that extracts the source language content 20 from a document. Similarly, the target language translation 22 may be utilized automatically, for example by the application program in constructing a translated version of the document in the target language.
For embodiments in which the SMT system 10 is embodied by one or more digital processors of one or more computers, it is to be understood that the one or more computers may include one or more desktop computers, one or more notebook computers, one or more network servers, one or more Internet servers, or various combinations thereof.
In some embodiments, the one or more computers include a user-interfacing computer including the user interfacing components 14, 16, and a separate network server computer having one or more digital processors configured to perform the statistical machine translation. In such embodiments, the user-interfacing computer is employed by a human user to formulate a translation request comprising content expressed in a source natural language. This content is communicated via the network (for example, a wired local area network, a wireless local area network, the Internet, some combination thereof, or so forth) to the server computer hosting the SMT system 10, and a translation of the content in a target natural language is then communicated back via the network to the user-interfacing computer, where the translation is displayed on the display 14 or is otherwise conveyed to the human user.
In some embodiments, the one or more computers include a single computer including the user interfacing components 14, 16, and also including the one or more digital processors configured to perform the statistical machine translation. In such embodiments, the computer is employed by a human user to formulate a translation request, the included one or more digital processors perform the statistical machine translation, and the resulting translation is displayed on the display 14 of the computer.
In some embodiments, a storage medium (not illustrated) stores instructions executable on a digital processor to perform the disclosed statistical machine translation techniques including the disclosed parameter optimization embodiments. The storage medium may include, for example, one or more of the following: a hard disk or other magnetic storage medium; an optical disk or other optical medium; a read-only memory (ROM), electronically erasable programmable read-only memory (EEPROM), flash memory, random access memory (RAM) or other electronic storage medium; a combination of two or more of the foregoing; or so forth.
With continuing reference to
The conditional translation probability 30 is parameterized, that is, includes model parameters 32 whose values can be adjusted to optimize or tune the conditional translation probability 30. By way of illustrative example, in the illustrative embodiments described herein the conditional translation probability 30 employs a log-linear model as follows:

$$P(e,a \mid f) = \frac{\exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e,a,f)\right)}{\sum_{(e',a') \in L} \exp\!\left(\sum_{k=1}^{K} \lambda_k h_k(e',a',f)\right)} \qquad (1)$$

where f denotes the source language content, L denotes a set of available candidate aligned translations, (e′,a′) denotes one translation belonging to the set L, (e,a) denotes a candidate aligned translation for which the conditional probability is computed, hk( . . . ), k=1, . . . , K denotes a set of K feature functions, and λk, k=1, . . . , K denotes the parameters 32 of the conditional translation probability.
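As a concrete, non-limiting illustration of Equation (1), the following sketch computes the conditional probability with generic callables. The names score and conditional_prob and the argument layout are assumptions made for the example, not part of the patent.

```python
import math

def score(ea, f, feature_funcs, lam):
    """Weighted feature score: sum over k of lambda_k * h_k(e, a, f)."""
    return sum(l * h(ea, f) for l, h in zip(lam, feature_funcs))

def conditional_prob(ea, pool, f, feature_funcs, lam):
    """Equation (1): a softmax of the candidate's score over the set L (pool)."""
    z = sum(math.exp(score(c, f, feature_funcs, lam)) for c in pool)
    return math.exp(score(ea, f, feature_funcs, lam)) / z
```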
To generate candidate aligned translations, a phrase table 34 containing source language-target language phrase pairs is used to construct possible aligned target language translations for multi-phrase source language content. For each source language phrase in the source language content, the phrase table 34 is searched for possible corresponding target language phrases, which are then combined with various alignments to generate candidate aligned translations.
With continuing reference to
With continuing reference to
With reference again to the illustrative log-linear conditional translation probability of Equation (1), it will be noticed that the denominator of Equation (1) does not depend on the particular candidate translation (e,a) for which the conditional probability is being computed. Thus, for the purpose of maximizing the conditional probability of Equation (1) respective to candidate translation (e,a), the denominator can be omitted, and the exponential in the numerator can also be dropped. Thus, the decoder 36 suitably selects the translation 22 as the optimal translation (e*,a*) as follows:

$$(e^*, a^*) = \operatorname*{arg\,max}_{(e,a)} \sum_{k=1}^{K} \lambda_k^{opt} h_k(e,a,f) \qquad (2)$$

where λkopt, k=1, . . . , K denotes values for the parameters 32 of the conditional translation probability 30 that are optimized for the translation task, and (e*,a*) denotes the translation 22.
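A minimal sketch of Equation (2), reusing the hypothetical score helper from the sketch above: because the denominator of Equation (1) is constant in (e,a) and the exponential is monotone, the decoder only needs the candidate with the highest linear score.

```python
def decode(pool, f, feature_funcs, lam_opt):
    """Equation (2): return the candidate (e, a) with the highest linear score."""
    return max(pool, key=lambda ea: score(ea, f, feature_funcs, lam_opt))
```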
With brief reference again to
As used with reference to Equation (2), and as used generally herein, terms such as “optimize”, “minimize”, “maximize”, and so forth are to be broadly construed as encompassing approximate or non-global optimization, minimization, maximization, or so forth. For example, in evaluating Equation (2) it is to be understood that the evaluation is over the set of available candidate aligned translations, which may not include the globally best translation. Moreover, even if the globally best translation is included in the set of available candidate aligned translations, the evaluation of Equation (2) may employ a non-exhaustive optimization that may identify a locally best translation rather than the globally best translation. Similar broad construction is understood to apply respective to optimization of the parameters 32 of the conditional translation probability.
The parameters λk, k=1, . . . , K 32 determine the relative importance of the different feature functions hk( . . . ), k=1, . . . , K in the global score of Equation (1) or Equation (2). The parameters 32 are in some embodiments suitably tuned by cross-validation. More generally, the parameters 32 are tuned by a model parameters optimization engine 40 using a development dataset 42 comprising source language sentences having reference translations in the target language, for example supplied by a human translator. Although the model parameters optimization engine 40 is diagrammatically shown in
The optimization performed by the model parameters optimization engine 40 is suitably a type of Minimum Error Rate Training (MERT). The algorithm starts by initializing the parameter vector λ.
The process implemented by the optimization engine 40 has some similarity with a “Best-N” MERT algorithm such as is described in Och, “Minimum error rate training in statistical machine translation”, in ACL '03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 160-167 (Sapporo, Japan, 2003), which is incorporated herein by reference in its entirety. In the Best-N MERT approach, the translation pool is updated in each iteration with a list of the N best-scoring candidate translations according to the model using the current values of the model parameters. In contrast, the approach of the optimization engine 40 is to add candidate aligned translations to the translation pool 44 for each source language sentence of the dataset 42 by sampling available candidate aligned translations for the source language sentence in accordance with the conditional translation probability 30 conditioned on the source language sentence.
The Best-N procedure provides computational efficiency, based on the observation that BLEU only depends on the translation receiving the highest score from the translation model in the translation pool. This in turn means that, for any given source language sentence, its contribution to BLEU changes only when the values of the parameters change in such a way that the candidate translation ranking highest according to the model switches from one candidate translation to another. This situation does not change when one considers all the source language sentences in a development set instead of just one: while varying the λ vector, the BLEU score changes only when there is a change at the top of the ranking of the alternatives for at least one source language sentence in the set. In other words, BLEU is piece-wise constant in λ.
The Best-N MERT algorithm assumes at each iteration that the set of candidates with a chance to make it to the top (for some value of the parameter vector λ) is well represented in the translation pool.
Thus, the Best-N approach is computationally efficient, but converges slowly if the initial parameter vector λ is far from the optimal value.
The process implemented by the optimization engine 40 also has some similarity with another MERT algorithm that is described in Macherey et al., “Lattice-based minimum error rate training for statistical machine translation”, in EMNLP '08: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 725-734 (Morristown, N.J., USA, 2008), which is incorporated herein by reference in its entirety. The approach of Macherey et al. extends the MERT algorithm so as to use the whole set of candidate translations compactly represented in the search lattice produced by the decoder, instead of only a N-best list of candidates extracted from it as in the Best-N approach. The use of the whole set of candidate translations is achieved via an elegant but relatively heavy dynamic programming algorithm that propagates sufficient statistics (called envelopes) throughout the whole search graph. The theoretical worst-case complexity of this algorithm reported in Macherey et al. is O(|V∥E|log|E|), where V and E are the vertex set and the edge set of the lattice respectively.
The approach of Macherey et al. overcomes deficiencies of the Best-N approach by using all available candidate translations. Thus, there is substantially less concern about the N-best list being biased toward the current value of the parameter vector λ.
A difference in the approach of the optimization engine 40 of
The approach of the optimization engine 40 of
With reference to
A current value for the parameter vector
The complete set of translations that can be produced using the phrase table 34 (also called the “reachable translations” herein) for the source sentence is represented in
The convex envelope of a sampling of the available candidate aligned translations (the larger dashed polygon) for the source language sentence, sampled from the translation lattice in accordance with the conditional translation probability, is indicated by the dotted polygon in
With continuing reference to
In sum, optimizing by adding candidate aligned translations to the translation pool through sampling of the available candidate aligned translations for a source language sentence in accordance with the conditional translation probability 30, as disclosed herein, provides translation pool additions that are substantially better (in terms of likelihood of movement toward the reference translation) than those of the N-best MERT approach, while having computational complexity comparable to the N-best MERT approach and substantially lower than the “whole lattice” approach of Macherey et al.
With reference to
In an operation 64, the model parameters optimization engine 40 (see
The first iteration of the operation 64 effectively initializes the translation pool 44, and so N sampled candidate translations are added to the translation pool 44 in the initial iteration. In subsequent iterations, the operation 64 of adding candidate translations to the translation pool 44 by sampling in accordance with the conditional translation probability entails merging the newly sampled candidate translations with the candidate translations already contained in the translation pool 44. Thus, if a newly sampled candidate translation is already in the translation pool 44, it is not “added again” but rather the new sampling is discarded. In this case the number of candidate translations added during that iteration is less than N. In an alternative approach, if a newly sampled candidate translation is already in the translation pool 44 then another candidate translation may be sampled so as to keep the number of added candidate translations at N for each iteration.
An operation 68 updates the model parameters of the parameter vector λ.
At a decision operation 72, a suitable stopping criterion is employed to determine whether further iterations should be performed. The stopping criterion can be based, for example, on the iteration-to-iteration improvement in the value of the BLEU metric 46, and/or on a maximum number of iterations, and/or on a threshold for the norm of the update to the parameter vector λ.
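The iterative flow just described can be summarized by the following non-limiting sketch. The callables build_lattice, sample_paths, optimize_lambda, and bleu_score stand in for the components described above and are assumptions of the example, not an actual API.

```python
def sampling_mert(dev_set, lam, build_lattice, sample_paths,
                  optimize_lambda, bleu_score,
                  n_samples=100, max_iters=20, tol=1e-4):
    """Per iteration: sample N candidates per source sentence from its
    translation lattice, merge them into the translation pool (duplicates
    are discarded by the set union), re-optimize the parameter vector on
    the grown pool, and stop when BLEU stops improving."""
    pools = {src: set() for src, _ref in dev_set}
    prev_bleu = None
    for _ in range(max_iters):
        for src, _ref in dev_set:
            lattice = build_lattice(src, lam)  # decoder search graph for src
            pools[src] |= set(sample_paths(lattice, lam, n_samples))  # operation 64
        lam = optimize_lambda(pools, dev_set, lam)   # operation 68 (e.g. line searches)
        cur_bleu = bleu_score(pools, dev_set, lam)
        if prev_bleu is not None and abs(cur_bleu - prev_bleu) < tol:
            break                                    # operation 72 stopping test
        prev_bleu = cur_bleu
    return lam
```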
A suitable implementation of the sampling operation 64 is described. In this approach, N candidates are sampled from the translation lattice according to the probability distribution over paths induced by the model 30, given the current setting of the λ vector.
Feature functions are incremental over the edges of the translation lattice. Accordingly, the non-normalized probability of a path including m edges or transitions is given by:

$$P(e_1, \ldots, e_m) \propto \exp\!\left(\sum_{i=1}^{m} \sigma(e_i)\right) \qquad (3)$$

where:

$$\sigma(e_i) = \sum_{k=1}^{K} \lambda_k h_k(e_i) \qquad (4)$$

is the score of the edge ei. With a minor notational change the score σ(ei) is also denoted herein as σ(ni,j), where the edge ei goes from translation lattice node ni to translation lattice node nj. Further denoted herein as σ(ni) is the score of node ni, that is, the logarithm of the cumulative unnormalized probability of all the paths in the lattice that go from node ni to a final node. The unnormalized probability of selecting node nj starting from ni can then be expressed recursively as follows:
$$S(n_j \mid n_i) \propto \exp\!\left(\sigma(n_j) + \sigma(n_{i,j})\right) \qquad (5)$$
The scores required to compute these sampling probabilities can be obtained by a backward pass in the lattice. Let Pi denote the set of successors of node ni. Then the total unnormalized log-probability of reaching a final state (i.e., with a complete translation) from ni is given by:

$$\sigma(n_i) = \log \sum_{n_j \in P_i} \exp\!\left(\sigma(n_j) + \sigma(n_{i,j})\right) \qquad (6)$$

where σ(ni)=0 is set if Pi={ }, that is, if ni is the final node of the translation lattice. At the end of the backward sweep, σ(n0) contains the logarithm of the unnormalized cumulative probability of all paths, that is, the logarithm of the partition function. Notice that this normalizing constant cancels out when computing local sampling probabilities for traversed nodes in the translation lattice.
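A minimal sketch of the backward sweep of Equation (6), under the assumption that the lattice is given as a successor map with per-edge scores and that nodes are visited in reverse topological order (final node first). The plain log-sum-exp here is for clarity; a numerically robust version would subtract the maximum before exponentiating.

```python
import math

def backward_scores(successors, edge_score, topo_order):
    """Equation (6). successors maps node -> list of next nodes; edge_score
    maps (n_i, n_j) -> sigma(n_i,j); topo_order lists nodes so every node
    appears after all of its successors. Returns sigma(n) = log of the
    cumulative unnormalized probability of all paths from n to the final node."""
    sigma = {}
    for n in topo_order:
        succ = successors.get(n, [])
        if not succ:
            sigma[n] = 0.0  # sigma(final node) = 0
        else:
            sigma[n] = math.log(sum(
                math.exp(sigma[m] + edge_score[(n, m)]) for m in succ))
    return sigma
```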
Once the transition probability is known for each node, as per Equation (5), candidate translations are sampled by starting in the root node of the translation lattice and at each step randomly selecting among its successors, until the final node is reached. The whole sampling procedure is repeated as many times as the number of samples sought (e.g., N times in the case of sampling N candidate translations). After collecting samples for each source sentence 50, the whole list is used to grow the translation pool 44 by merging the list with the already-present contents (if any) of the translation pool 44.
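Given the node scores, one candidate can then be sampled per Equation (5) by the following sketch; repeating it N times and merging the results into the pool corresponds to the per-sentence sampling of operation 64. The function name and argument layout are assumptions of the example.

```python
import math
import random

def sample_translation(root, successors, edge_score, sigma):
    """Sample one path (candidate translation): at each node, pick a successor
    with probability proportional to exp(sigma(n_j) + sigma(n_i,j)) per
    Equation (5). The partition function cancels, so no global normalization
    is needed."""
    path, node = [root], root
    while successors.get(node):
        succ = successors[node]
        weights = [math.exp(sigma[m] + edge_score[(node, m)]) for m in succ]
        node = random.choices(succ, weights=weights, k=1)[0]
        path.append(node)
    return path  # node sequence from root to final node
```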
An analysis of time complexity of the parameter optimization of
Since the number of edges leaving each node is bounded by a constant, |E|=Θ(|V|), and the method of Macherey et al. is O(|V|² log|V|). The maximum number of vertices in the translation lattice is limited by the capacity of the stacks—that is, |V|≤aJ, where a is the stack capacity and J is the length of the source sentence. This leads to a complexity of O(J² log J) for the inner loop of the method of Macherey et al.
The complexity is driven by the length of the source sentence in the case of the method of Macherey et al., and by the size of the translation pool in the case of both the N-best list method and the sampling method employed by the optimization engine 40 of
The foregoing addresses the complexity of the innermost loop, which searches for a global optimum along a line in the parameter space. This line search is repeated many times, and accordingly has a substantial impact on the overall complexity of each of the parameter optimization methods (top-N, sampling, or the method of Macherey et al.). In the following, the different methods are considered in terms of the operations that are performed as part of the outer iteration, that is, upon re-decoding the development set with a new parameter vector.
For the N-best list method, this outer iteration entails constructing an N-best list from the translation lattice. This can be done with a backward sweep in the translation lattice, in time linear in the size J of the sentence and in N. In the case of the method of Macherey et al., the outer iteration does not entail any operations at all, since the whole lattice is passed to the inner loop for envelope propagation.
The sampling method implemented by the optimization engine 40 using the sampling described herein with reference to Equations (3)-(6) entails sampling the translation lattice N times according to the conditional probability distribution 30 induced by the weights on its edges. The approach of Equations (3)-(6) uses dynamic programming to compute the posterior probabilities of traversing edges. In this phase each edge of the translation lattice is visited exactly once; hence this phase is linear in the number of edges in the lattice, and thus, under standard assumptions, in the length J of the sentence. Once posterior probabilities are computed for the translation lattice, N paths are sampled from it, each of which is composed of at most J edges (assuming all phrase pairs cover at least one source word). In order to select a new edge to traverse among all possible outgoing edges from the current node, the outgoing edges are suitably sorted into a binary search tree storing intervals of cumulative non-normalized probabilities, and a binary search is then performed on it with a uniformly generated random number. If |E′| is the number of successors of a node, the computational cost is O(|E′|log|E′|) to build the binary tree the first time a node is sampled (not all nodes are necessarily ever sampled), and then O(log|E′|) to traverse the tree for each sampling operation. The overall cost for sampling N paths is thus O(|E|+NJ(|E′|log(|E′|)+log(|E′|))). Under standard assumptions |E′| is a constant and |E|≈O(J), so the whole sampling is also O(NJ), which is the same complexity as for extracting the N-best list.
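The passage above describes a binary search tree over cumulative unnormalized probabilities. The following sketch achieves the same O(log |E′|) per-draw selection with a sorted cumulative array and Python's bisect module, an equivalent array-based substitute chosen for brevity; make_edge_sampler is a hypothetical name.

```python
import bisect
import random

def make_edge_sampler(weights):
    """Build a per-node sampler over unnormalized outgoing-edge weights.
    Precomputing the cumulative array costs O(|E'|); each draw is then an
    O(log |E'|) binary search."""
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    def draw():
        r = random.uniform(0.0, cum[-1])
        # uniform() may return the endpoint, so clamp to the last edge index
        return min(bisect.bisect_right(cum, r), len(cum) - 1)
    return draw

sampler = make_edge_sampler([0.5, 2.0, 1.0])  # unnormalized edge weights
print(sampler())                              # prints 0, 1, or 2
```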
Accordingly, it is concluded that the parameter optimization techniques disclosed herein, which employ sampling in accordance with the conditional translation probability 30 of the available candidate translations (for example, as encoded in a translation lattice), have computational complexity comparable with that of the N-best list approach. However, the sampling approaches disclosed herein provide substantially improved performance in terms of convergence speed, and improved robustness in terms of the likelihood of converging to a set of optimized parameters that is close to the ideal parameter values.
It will be appreciated that various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.