A system and method for transforming unstructured text into structured form is disclosed. The system and method include converting an input word sequence (e.g., a sentence) into tagged output that can then be easily converted into a structured format. The system may include a bidirectional recurrent neural network that can generate multiple labels for individual words or phrases. In some embodiments, a customized learning loss equation involving set similarity is used to generate the multiple labels.
1. A method of transforming unstructured text into structured form, comprising:
obtaining a word sequence, including at least a first word and a second word;
obtaining a first word embedding and a first part-of-speech (“POS”) tag embedding both corresponding to the first word;
obtaining a second word embedding and a second POS tag embedding both corresponding to the second word;
concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input;
using self-attention to process the first input and the second input through a bidirectional recurrent neural network (“RNN”) to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word;
using a loss to perform back-propagation to adjust weights of the bidirectional RNN, wherein the loss is based on both the vector of true labels and the vector of independent probabilities of predicted labels; and
wherein the loss is HL_diff, and wherein
HL_diff = average(y_t*(1−y_p) + (1−y_t)*y_p), where y_t is the vector of true labels and y_p is the vector of independent probabilities of predicted labels.
8. A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to transform unstructured text into structured form by:
obtaining a word sequence, including at least a first word and a second word;
obtaining a first word embedding and a first part-of-speech (“POS”) tag embedding both corresponding to the first word;
obtaining a second word embedding and a second POS tag embedding both corresponding to the second word;
concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input;
using self-attention to process the first input and the second input through a bidirectional recurrent neural network (“RNN”) to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word;
using a loss to perform back-propagation to adjust weights of the bidirectional RNN, wherein the loss is based on both the vector of true labels and the vector of independent probabilities of predicted labels; and
wherein the loss is HL_diff, and wherein
HL_diff = average(y_t*(1−y_p) + (1−y_t)*y_p), where y_t is the vector of true labels and y_p is the vector of independent probabilities of predicted labels.
15. A system for transforming unstructured text into structured form, comprising:
one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to:
obtain a word sequence, including at least a first word and a second word;
obtain a first word embedding and a first part-of-speech (“POS”) tag embedding both corresponding to the first word;
obtain a second word embedding and a second POS tag embedding both corresponding to the second word;
concatenate the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input;
use self-attention to process the first input and the second input through a bidirectional recurrent neural network (“RNN”) to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word;
use a loss to perform back-propagation to adjust weights of the bidirectional RNN, wherein the loss is based on both the vector of true labels and the vector of independent probabilities of predicted labels; and
wherein the loss is HL_diff, and wherein
HL_diff = average(y_t*(1−y_p) + (1−y_t)*y_p), where y_t is the vector of true labels and y_p is the vector of independent probabilities of predicted labels.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
9. The non-transitory computer-readable medium storing software of
10. The non-transitory computer-readable medium storing software of
11. The non-transitory computer-readable medium storing software of
12. The non-transitory computer-readable medium storing software of
saving a key pair including the first word and the first output in a database.
13. The non-transitory computer-readable medium storing software of
saving a key pair including the second word and the second output in a database.
14. The non-transitory computer-readable medium storing software of
16. The system of
17. The system of
18. The system of
19. The system of
save a key pair including the first word and the first output in a database.
20. The system of
save a key pair including the second word and the second output in a database.
The present disclosure generally relates to transforming unstructured text into structured form. More specifically, the present disclosure generally relates to a system and method for transforming unstructured text into structured form.
Textual data is often available in the form of documents, which can serve a variety of purposes, such as documentation, reports, surveys, and logs. The text in many documents is unstructured. Unstructured data is typically useful only after key information has been extracted from it in a structured form. However, extracting key information from unstructured text is subject to many errors. Additionally, applying only one label per word found within unstructured text captures only part of the information the text conveys. For example, if a biography states, “John Doe was born to German parents, and speaks German fluently,” and the word “German” is only labeled as a nationality, the fact that John Doe speaks the German language would be missed.
There is a need in the art for a system and method that addresses the shortcomings discussed above.
A system and method for transforming unstructured text into structured form is disclosed. The system and method include converting an input word sequence (e.g., a sentence) into tagged output that can then be easily converted into a structured format.
The disclosed system and method improve the accuracy of the results of transforming unstructured text into structured form by using a bidirectional recurrent neural network (“RNN”). The bidirectional RNN processes the input in a forward direction and in a backward direction, both with respect to time and the order of a word sequence that is received as an input. This type of bidirectional processing provides more context for each word or phrase input into the bidirectional RNN, thus helping to determine which, if any, label is appropriate for each word or phrase. The disclosed system and method further improve the accuracy of the results by providing as input to the bidirectional RNN a word (or word embedding) and its corresponding part-of-speech (“POS”) (or POS tag embedding). The word and its corresponding POS provide more context for each word, again helping to determine which, if any, label is appropriate for each word. In some embodiments, the disclosed system and method can generate multiple labels for individual words or phrases using a customized learning loss equation involving set similarity. Generating multiple labels can help provide more information in a structured form, since words may have different meanings depending upon how they are used. The customized learning loss equation is one reason multiple labels can be predicted using the disclosed method.
In some embodiments, the structured format may contain only applicable information. In other words, pronouns, articles, and other generic information may be filtered out and excluded from the structured format. Distilling the unstructured text into applicable information in a structured format enables easier, more efficient analysis, processing and/or use of the applicable information.
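As a rough illustration of this filtering step, the following Python sketch keeps only words whose predicted labels carry applicable information. The label names, the use of “other”/“none” to mark generic words, and the (word, labels) input format are assumptions made purely for illustration, not the exact output format of the disclosed system.

```python
# Minimal sketch: keep only words whose predicted labels carry applicable
# information, dropping generic labels such as "other" or "none" (assumed names).
GENERIC_LABELS = {"other", "none"}

def to_structured(tagged_words):
    """tagged_words: list of (word, [labels]) pairs -> {label: [words]}."""
    structured = {}
    for word, labels in tagged_words:
        for label in labels:
            if label not in GENERIC_LABELS:
                structured.setdefault(label, []).append(word)
    return structured

# "German" carries two labels, so it appears under both keys.
tagged = [("John", ["name"]), ("German", ["nationality", "language"]),
          ("speaks", ["other"])]
print(to_structured(tagged))
# {'name': ['John'], 'nationality': ['German'], 'language': ['German']}
```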
The transformation of unstructured data to a structured form can help businesses in certain domains. For example, in pharmacovigilance, where adverse effects of prescribed drugs are reported by patients or medical practitioners, this information can be used to detect signals of adverse effects. Collection, analysis, and reporting of these adverse effects by the drug companies is mandated by law. In most cases, it is easy for patients or medical practitioners to describe the side effects of their drugs in common, day-to-day language, as free-form text. However, extracting information from this free-form text is difficult. Thus, transforming the free-form text into a structured format enables easier processing of information, e.g., statistical analysis of structured data for signals of adverse effects.
Another domain that can benefit from transforming unstructured data into a structured form is the management of legal contracts, e.g., lease agreements in real estate. Lease agreements can be lengthy documents that are difficult to compare to one another. Accordingly, transforming the text of a lease agreement into a structured format can allow easier comparison of the terms of different lease agreements. This structured information can be further used for aggregate analytics and decision making by large real estate firms.
In one aspect, the disclosure provides a method of transforming unstructured text into structured form. The method may include obtaining a word sequence, including at least a first word and a second word. The method may further include obtaining a first word embedding and a first POS tag embedding both corresponding to the first word. The method may include obtaining a second word embedding and a second POS tag embedding both corresponding to the second word. The method may include concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input. The method may include using self-attention to process the first input and the second input through a bidirectional recurrent neural network (RNN) to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word.
In another aspect, the disclosure provides a non-transitory computer-readable medium storing software that may comprise instructions executable by one or more computers which, upon such execution, may cause the one or more computers to transform unstructured text into structured form by: obtaining a first word embedding and a first POS tag embedding both corresponding to the first word; obtaining a second word embedding and a second POS tag embedding both corresponding to the second word; concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input; and using self-attention to process the first input and the second input through a bidirectional RNN to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word.
In another aspect, the disclosure provides a system for transforming unstructured text into structured form, comprising one or more computers and one or more storage devices storing instructions that may be operable, when executed by the one or more computers, to cause the one or more computers to: obtain a word sequence, including at least a first word and a second word; obtain a first word embedding and a first POS tag embedding both corresponding to the first word; obtain a second word embedding and a second POS tag embedding both corresponding to the second word; concatenate the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input; and use self-attention to process the first input and the second input through a bidirectional RNN to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word.
Other systems, methods, features, and advantages of the disclosure will be, or will become, apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and this summary, be within the scope of the disclosure, and be protected by the following claims.
While various embodiments are described, the description is intended to be exemplary, rather than limiting, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the embodiments. Although many possible combinations of features are shown in the accompanying figures and discussed in this detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or substituted for any other feature or element in any other embodiment unless specifically restricted.
This disclosure includes and contemplates combinations with features and elements known to the average artisan in the art. The embodiments, features, and elements that have been disclosed may also be combined with any conventional features or elements to form a distinct invention as defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventions to form another distinct invention as defined by the claims. Therefore, it will be understood that any of the features shown and/or discussed in the present disclosure may be implemented singularly or in any suitable combination. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims.
The invention can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
To demonstrate how unstructured data can be more useful when transformed into structured data,
A set of structured data 204 corresponding to set of unstructured text 200 is provided as input to a structured data preprocessing module 206. Structured data preprocessing module 206 may preprocess set of structured data 204 by removing extra information, e.g., punctuation and non-alphanumeric characters, and words lacking a meaningful label, e.g., words tagged as “none.” Additionally or alternatively, structured data preprocessing module 206 may preprocess words in set of structured data 204. In some embodiments, preprocessing the words may include converting words in set of structured data 204 into embeddings. For example, pre-trained word vectors, such as GloVe, may be used to convert words into vectors. In such embodiments, preprocessing the words in set of structured data 204 may include initializing the words to GloVe embeddings and randomly initializing character embeddings.
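A rough sketch of this embedding step follows; the GloVe file path, vector dimension, and random-initialization scale are illustrative assumptions, not details fixed by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def load_glove(path):
    """Read pre-trained GloVe vectors from a whitespace-separated text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed(word, vectors, dim=100):
    # Words missing from the pre-trained file are randomly initialized.
    return vectors.get(word.lower(), rng.normal(scale=0.1, size=dim).astype(np.float32))

glove = load_glove("glove.6B.100d.txt")  # hypothetical path to a GloVe release
sentence = ["John", "Doe", "speaks", "German", "fluently"]
word_embeddings = [embed(w, glove) for w in sentence]
```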
The results of preprocessing set of unstructured text 200 and set of structured data 204 may be saved in a database 208. These results may be used to train machine learning model 210 to transform unstructured text into structured form. Training results in a trained model 212 that is capable of receiving unstructured text 214 as input, processing the unstructured text, and then outputting structured data 216. The results of preprocessing set of unstructured text 200 and set of structured data 204 may include preprocessing output 314 that may be saved in database 208. Preprocessing output 314 may include a key pair including a key, e.g., word or phrase in text, and a corresponding value, e.g., a label from a table.
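A minimal sketch of saving such key pairs is shown below, assuming a simple SQLite table whose file name, table name, and columns are chosen here purely for illustration.

```python
import sqlite3

# Illustrative sketch: store preprocessing output as key/value pairs, where the
# key is a word or phrase from the text and the value is its label from a table.
conn = sqlite3.connect("preprocessing_output.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS key_pairs (key TEXT, value TEXT)")
pairs = [("German", "nationality"), ("German", "language"), ("headache", "symptom")]
conn.executemany("INSERT INTO key_pairs (key, value) VALUES (?, ?)", pairs)
conn.commit()
conn.close()
```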
In some embodiments, the bidirectional RNN has a first hidden layer in which the word sequence is processed in a backward time order and generates a sequence of hidden state representations (bhT, . . . , bh1). For example, as shown in
In the embodiment shown in
Backward time order hidden state representation bh2 receives second input 404 and backward time order hidden state representation bh3, and forward time order hidden state representation fh2 receives second input 404. The output from both backward time order hidden state representation bh2 and forward time order hidden state representation fh2 is concatenated and passed to the next layer with attention α2. In this case, the output of the bidirectional RNN called “Labels2” is the single label of “other.”
Backward time order hidden state representation bh3 receives third input 406 and backward time order hidden state representation bh4, and forward time order hidden state representation fh3 receives third input 406. The output from both backward time order hidden state representation bh3 and forward time order hidden state representation fh3 is concatenated and passed to the next layer with attention α3. In this case, the output of the bidirectional RNN called “Labels3” is the single label of “other.”
Backward time order hidden state representation bh4 receives fourth input 408, and forward time order hidden state representation fh4 receives fourth input 408 and forward time order hidden state representation fh3. The output from both backward time order hidden state representation bh4 and forward time order hidden state representation fh4 is concatenated and passed to the next layer with attention α4. In this case, the output, called Labels4, includes two labels, “symptom” and “side effect.”
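A minimal sketch of this bidirectional recurrence follows, assuming an LSTM cell (the disclosure does not fix the cell type) and illustrative dimensions; PyTorch already returns the forward and backward hidden states concatenated at each position.

```python
import torch
import torch.nn as nn

# Sketch of the bidirectional recurrence described above: an LSTM reads the
# four concatenated inputs forward (fh1..fh4) and backward (bh4..bh1), and the
# two hidden states for each position are concatenated before being passed on.
input_dim, hidden_dim, seq_len = 120, 64, 4  # illustrative sizes
birnn = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)

inputs = torch.randn(1, seq_len, input_dim)   # first_input ... fourth_input
outputs, _ = birnn(inputs)
# outputs[:, t, :hidden_dim] is the forward state for position t+1 and
# outputs[:, t, hidden_dim:] is the backward state; they are already
# concatenated, giving a (1, 4, 2*hidden_dim) tensor for the next layer.
print(outputs.shape)  # torch.Size([1, 4, 128])
```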
In some embodiments, conditional random fields (“CRF”) exist between the labels. For example, first line 430 between Labels1 and Labels2 represents CRF. Second line 432 between Labels2 and Labels3 represents CRF. Third line 434 between Labels3 and Labels4 represents CRF.
In some embodiments, the word sequence may include a number. For example, as shown in
The method of transforming unstructured text into structured form may include obtaining a word embedding and a POS tag embedding both corresponding to the word for each word in a sequence. For example, method 500 includes obtaining a first word embedding and a first POS tag embedding both corresponding to the first word (operation 504). In another example, in the embodiment of
In some embodiments, the method of transforming unstructured text into structured form may include obtaining a second word embedding and a second POS tag embedding both corresponding to the second word. For example, method 500 includes obtaining a second word embedding and a second POS tag embedding both corresponding to the second word (operation 506). Obtaining a second word embedding and a second POS tag embedding both corresponding to the second word may be performed in the same ways discussed above with respect to obtaining a first word embedding and a first POS tag embedding both corresponding to the first word.
The method of transforming unstructured text into structured form may include concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input. For example, method 500 includes concatenating the first word embedding with the first POS tag embedding into a first input and the second word embedding with the second POS tag embedding into a second input (operation 508). In another example, as discussed above with respect to
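A minimal sketch of building such concatenated inputs follows; the POS tag set, embedding dimensions, and random word vectors are illustrative assumptions (the word vectors could instead come from the GloVe lookup sketched earlier).

```python
import numpy as np

# Sketch: each word embedding is joined with an embedding of that word's POS tag.
word_dim, pos_dim = 100, 20                    # illustrative dimensions
pos_tags = ["NOUN", "VERB", "ADJ", "ADV", "OTHER"]
rng = np.random.default_rng(0)
pos_embeddings = {tag: rng.normal(scale=0.1, size=pos_dim) for tag in pos_tags}

def make_input(word_embedding, pos_tag):
    # input = [word embedding ; POS tag embedding]
    return np.concatenate([word_embedding,
                           pos_embeddings.get(pos_tag, pos_embeddings["OTHER"])])

first_input = make_input(rng.normal(size=word_dim), "NOUN")   # for the first word
second_input = make_input(rng.normal(size=word_dim), "VERB")  # for the second word
print(first_input.shape)  # (120,)
```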
The method of transforming unstructured text into structured form may include using self-attention to process the first input and the second input through a bidirectional RNN to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word. For example, method 500 includes using self-attention to process the first input and the second input through a bidirectional RNN to generate a first output corresponding to the first input and a second output corresponding to the second input, wherein the first output includes at least two labels corresponding to the first word (operation 510). In another example, as discussed above with respect to
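A minimal sketch of this attention-plus-labeling step is shown below, using generic scaled dot-product self-attention over the bidirectional RNN outputs as a stand-in for the attention described above; the single-head choice, the dimensions, and the number of labels are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch: attention weights are computed over the bidirectional RNN outputs and
# used to mix context into each word's representation before the label layer.
hidden_dim, num_labels = 128, 4   # 2 * 64 from the RNN sketch; label count assumed
attn = nn.MultiheadAttention(embed_dim=hidden_dim, num_heads=1, batch_first=True)
label_layer = nn.Linear(hidden_dim, num_labels)

rnn_outputs = torch.randn(1, 4, hidden_dim)                      # from the bidirectional RNN
attended, weights = attn(rnn_outputs, rnn_outputs, rnn_outputs)  # self-attention
label_scores = label_layer(attended)                             # one score per label per word
print(label_scores.shape)  # torch.Size([1, 4, 4])
```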
While the embodiment of
At the output layer, a Sigmoid function is used to normalize each of the label prediction scores between 0 and 1. Using the Sigmoid function compresses the differences in the scores into numbers that can be more easily compared. The prediction scores are based on the probability that a word corresponds to a label. In other words, the prediction score indicates the accuracy of a prediction that a word corresponds to a label. For example, a prediction score of 0.9 means that there is a 90% probability that a word fits within a particular label. The higher the prediction score of a label, the more likely the label corresponds to a word. Each probability that a word fits a label is independent of the probability that the same word fits another label. Such a relationship between probabilities allows more than one label to be predicted for each word.
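A minimal sketch of this output layer follows; the threshold of 0.5, the label names, and the raw scores are assumptions for illustration.

```python
import numpy as np

# Sketch: a sigmoid turns each label's score into an independent probability,
# and every label whose probability clears a threshold is predicted, so a
# single word can receive more than one label.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

labels = ["symptom", "side effect", "drug", "other"]
scores = np.array([2.2, 2.0, -3.0, -1.5])   # raw scores for one word (illustrative)
probs = sigmoid(scores)                     # independent per-label probabilities
predicted = [l for l, p in zip(labels, probs) if p > 0.5]
print(predicted)  # ['symptom', 'side effect'] -> two labels for the same word
```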
In some embodiments, a custom loss equation is used to perform back-propagation to adjust the weights of the bidirectional RNN. The custom loss equation is as follows:
HL_diff = average(y_t*(1−y_p) + (1−y_t)*y_p),
where y_t is the vector of true labels and y_p is the vector of independent probabilities of predicted labels. This custom loss equation is differentiable.
In an example, a word has true labels [1, 0, 0, 1] and the model predicts the labels [0.9, 0.1, 0.2, 0.9]. The loss in this case is computed as average([1, 0, 0, 1]*[0.1, 0.9, 0.8, 0.1] + [0, 1, 1, 0]*[0.9, 0.1, 0.2, 0.9]) = average([0.1, 0.1, 0.2, 0.1]) = 0.125. It is a loss value, so better models have a lower loss.
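A minimal sketch of HL_diff that reproduces the worked example above (NumPy is used purely for illustration):

```python
import numpy as np

def hl_diff(y_true, y_pred):
    """Custom loss: average(y_t*(1 - y_p) + (1 - y_t)*y_p)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(y_true * (1 - y_pred) + (1 - y_true) * y_pred)

print(hl_diff([1, 0, 0, 1], [0.9, 0.1, 0.2, 0.9]))  # 0.125
```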
While various embodiments of the invention have been described, the description is intended to be exemplary, rather than limiting, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims.
Sengupta, Shubhashis, Deshmukh, Jayati, K. M., Annervaz