Inferring a natural language grammar is based on providing natural language understanding (NLU) data with concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application. An application grammar is then inferred from the concept annotations and the application ontology.
1. A computer-implemented method of automatically inferring a natural language grammar for use in a statistical semantic model (SSM) system, the method comprising, by a processor and associated memory:
providing, in storage memory communicatively coupled to the processor, natural language understanding (NLU) data with multi-level concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application, a plurality of the application-related concepts being domain-specific concepts;
relating, in the storage memory, mid-level concepts in the application ontology to two or more respective lower-level concepts, wherein each mid-level concept is derived from the two or more respective lower-level concepts and each lower-level concept has a respective associated existing grammar;
automatically inferring a new application grammar from the multi-level concept annotations and the application ontology, including inferring, directly from the multi-level concept annotations and indirectly from the application ontology, a grammar rule from each level of a multi-level annotation hierarchy;
a given grammar rule of the inferred grammar rules promoting at least one lower-level concept grammar rule to at least one higher-level concept grammar rule, the promoting resulting from the relating, in the storage memory, of a given mid-level concept to the two or more respective lower-level concepts, wherein each lower-level concept has a respective associated existing grammar;
in response to the promoting, accessing the storage memory to automatically reuse a given rule from the existing grammars associated with the two or more respective lower-level concepts related to the given mid-level concept to infer the new application grammar with respect to the at least one higher-level concept; and
employing the new application grammar, through access of the storage memory, in a semantic interpreter of an SSM system to determine a semantic interpretation of a user query.
9. A computer program product for automatically inferring a natural language grammar for use in a statistical semantic model (SSM) system, the computer program product comprising a non-transitory computer readable storage medium having instructions thereon for execution on at least one processor to:
provide, in storage memory communicatively coupled to the at least one processor, natural language understanding (NLU) data with multi-level concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application, a plurality of the application-related concepts being domain-specific concepts;
relate, in the storage memory, mid-level concepts in the application ontology to two or more respective lower-level concepts, wherein each mid-level concept is derived from the two or more respective lower-level concepts and each lower-level concept has a respective associated existing grammar;
automatically infer a new application grammar from the multi-level concept annotations and the application ontology, including inferring, directly from the multi-level concept annotations and indirectly from the application ontology, a grammar rule from each level of a multi-level annotation hierarchy;
cause a given grammar rule of the inferred grammar rules to promote at least one lower-level concept grammar rule to at least one higher-level concept grammar rule, the promoting resulting from the relating, in the storage memory, of a given mid-level concept to the two or more respective lower-level concepts, wherein each lower-level concept has a respective associated existing grammar;
in response to the promoting, access the storage memory to automatically reuse a given rule from the existing grammars associated with the two or more respective lower-level concepts related to the given mid-level concept to infer the new application grammar with respect to the at least one higher-level concept; and
employ the new application grammar, through access of the storage memory, in a semantic interpreter of an SSM system to determine a semantic interpretation of a user query.
20. A computer-implemented system for automatically inferring a natural language grammar for use in a statistical semantic model (SSM) system, the computer-implemented system comprising:
at least one processor; and
at least one memory with computer code instructions stored thereon, the at least one processor and the at least one memory, with the computer code instructions, configured to cause the system to implement:
an annotation module configured to:
provide, in storage memory communicatively coupled to the at least one processor, natural language understanding (NLU) data with multi-level concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application, a plurality of the application-related concepts being domain-specific concepts;
relate, in the storage memory, mid-level concepts in the application ontology to two or more respective lower-level concepts, wherein each mid-level concept is derived from the two or more respective lower-level concepts and each lower-level concept has a respective associated existing grammar;
an inference module configured to:
automatically infer a new application grammar from the multi-level concept annotations and the application ontology, including inferring, directly from the multi-level concept annotations and indirectly from the application ontology, a grammar rule from each level of a multi-level annotation hierarchy;
cause a given grammar rule of the inferred grammar rules to promote at least one lower-level concept grammar rule to at least one higher-level concept grammar rule, the promoting resulting from the relating, in the storage memory, of a given mid-level concept to the two or more respective lower-level concepts, wherein each lower-level concept has a respective associated existing grammar; and
in response to the promoting, access the storage memory to automatically reuse a given rule from the existing grammars associated with the two or more respective lower-level concepts related to the given mid-level concept to infer the new application grammar with respect to the at least one higher-level concept; and
the at least one processor further configured to provide the new application grammar to an SSM system, the new application grammar being employed in a semantic interpreter of the SSM system to determine a semantic interpretation of a user query.
2. The computer-implemented method according to
3. The computer-implemented method according to
4. The computer-implemented method according to
revising the concept annotations based on automatically parsing the nlu data and the concept annotations with the application grammar.
5. The computer-implemented method according to
automatically parsing new nlu data with the application grammar to develop concept annotations for the new nlu data.
6. The computer-implemented method according to
7. The computer-implemented method according to
8. The computer-implemented method according to
automatically parsing an input query with a grammar rule from the inferred grammar to extract features for semantic processing by a statistical learning machine arrangement.
10. The computer program product according to
11. The computer program product according to
12. The computer program product according to
revise the concept annotations based on automatically parsing the nlu data and the concept annotations with the application grammar.
13. The computer program product according to
automatically parse new nlu data with the application grammar to develop concept annotations for the new nlu data.
14. The computer program product according to
15. The computer program product according to
16. The computer program product according to
automatically parse an input query with a grammar rule from the inferred grammar to extract features for semantic processing by a statistical learning machine arrangement.
17. The computer-implemented method according to
18. The computer-implemented method according to
19. The computer-implemented method according to
The present invention relates to natural language understanding (NLU), and in particular, to automatic generation of NLU grammars from application ontologies.
Natural Language Processing (NLP) and Natural Language Understanding (NLU) involve using computer processing to extract meaningful information from natural language inputs such as human-generated speech and text. One recent application of such technology is processing speech and/or text queries on mobile devices such as smartphones.
U.S. Patent Publication 20110054899 describes a hybrid client-server NLU arrangement for a mobile device. Various example screen shots of the application interface 100 from one such mobile device NLU application, Dragon Mobile Assistant for Android, are shown in
An NLU application based on ASR utilizes a statistical language model to initially recognize the words, or likely words, that were uttered, based on probabilities such as the probability that an utterance is a given word given one or more previously recognized words. Some language models are specific to a topic domain, such as medical radiology or aircraft control. A language model is often built by analyzing a large set of representative sentences, phrases, or the like to obtain statistics about word occurrence frequency, which words tend to occur after other words or phrases, etc.
A recognition grammar acts to interpret the semantic meaning of the recognized words. In this context, a recognition grammar is a set of phrases that a system is prepared to recognize. Conceptually, the phrases in a grammar represent all legitimate utterances a user may make. If a user utterance is included in the grammar, the system recognizes the words of the utterance. If the user utters something that is not in the grammar, the utterance may be considered ungrammatical (“out-of-grammar”), and the system may not recognize the utterance correctly.
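The in-grammar versus out-of-grammar distinction can be illustrated with a toy sketch; the phrase set and function name below are hypothetical illustrations, not components of any system described here:

```python
# Toy illustration: a "grammar" reduced to a set of normalized phrases
# the system is prepared to recognize. Real recognition grammars are
# rule-based (e.g., SRGS), not flat phrase lists.
GRAMMAR = {
    "check my balance",
    "transfer funds",
    "pay my bill",
}

def is_in_grammar(utterance: str) -> bool:
    """Return True if the normalized utterance is covered by the grammar."""
    return utterance.strip().lower() in GRAMMAR

print(is_in_grammar("Check my balance"))   # in-grammar
print(is_in_grammar("show me my money"))   # out-of-grammar
```

An out-of-grammar utterance like the second one is exactly the case a hand-written grammar writer must try to anticipate.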
However, typically there are many ways a human can express a particular idea or command. For example, a user may order “two large pizzas, one with olives and the other with anchovies,” or the user may say she wants “one olive pizza and one anchovy pizza, both large.” Both utterances have the same meaning. Thus, a grammar writer's task involves predicting a set of phrases and encoding the phrases in the grammar. However, due to the variety of ways ideas and commands can be expressed, a grammar that accommodates a reasonable range of expressions can be quite large and difficult to design. Furthermore, the complexity of a grammar greatly affects speed and accuracy of an ASR system. Thus, complex grammars should be constructed with as much care as complex software programs. Grammar writing, however, is an unfamiliar task for most software developers, and creating a high-quality, error-free grammar requires somewhat different skills than programming in a language, such as Java or C++. For example, grammars are inherently non-procedural. Thus, many typical software development approaches are not applicable to grammar development.
In a speech-enabled NLU application, recognition slots are sometimes used to hold individual pieces of information from a recognized utterance. For example, in an automated banking system, slots may be defined for: (1) command-type (examples of which may include deposit, withdrawal, bill-payment and the like); (2) source-account (checking, savings or money-market); and (3) amount. An NLU application fills these slots with logical representations of recognized words and then passes the slots to application code for processing. For example, the phrases “the first of March” and “March the first” may cause a slot labeled date to be filled with “Mar01” or some other unambiguous date representation.
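The banking-slot example above can be sketched as follows; the slot name and the naive normalization logic are illustrative assumptions, not the actual application code:

```python
# Hypothetical sketch of slot filling: a recognized date phrase is mapped
# to an unambiguous representation (e.g., "Mar01") before being passed to
# application code. The normalization here is deliberately naive.
MONTHS = {"january": "Jan", "february": "Feb", "march": "Mar"}

def fill_date_slot(phrase):
    """Map phrases like 'the first of March' or 'March the first' to 'Mar01'."""
    # Drop the filler word "the" (naive: would also mangle words containing it).
    words = phrase.lower().replace("the", " ").split()
    month = next((MONTHS[w] for w in words if w in MONTHS), None)
    day = "01" if "first" in words else None
    if month and day:
        return month + day
    return None

print(fill_date_slot("the first of March"))  # Mar01
print(fill_date_slot("March the first"))     # Mar01
```

Both surface forms fill the date slot with the same canonical value, which is the point of slot-based representations.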
Developing NLU grammars is an expensive process that requires considerable time and effort from human experts working with large databases.
Embodiments of the present invention are directed to inferring a natural language grammar based on providing natural language understanding (NLU) data with concept annotations according to an application ontology characterizing a relationship structure between application-related concepts for a given NLU application. An application grammar is then inferred from the concept annotations and the application ontology.
The concept annotations may then be revised based on parsing the NLU data and the concept annotations with the application grammar. New NLU data can be parsed with the application grammar to develop concept annotations for the new NLU data; for example, using a structure tree of the concept annotations reflecting the application ontology. Inferring an application grammar may include inferring a back-off grammar from the annotations. Inferring an application grammar also may include incorporating one or more existing grammars for one or more of the application-related concepts. The inferred application grammar can be employed in an initial semantic interpreter for an NLU arrangement. The inferred grammar rules can be used to parse an input query to extract features for semantic processing by a statistical learning machine arrangement.
Embodiments of the present invention also include a computer program product in a non-transitory computer readable storage medium for execution on at least one processor of a method of inferring a natural language grammar, wherein the computer program product has instructions for execution on the at least one processor comprising program code for performing the method according to any of the above.
Embodiments of the present invention are based on developing NLU grammars for a new application or new vertical domain from an application/domain ontology and a hierarchy of concept annotations. Existing grammars for common concepts are reused instead of relying on costly experts and a time-consuming, wholly manual process. An annotation user provides concept annotations from high-level concepts (e.g., intentions) down to low-level concepts based on the concept relationships described by the ontology. Grammar rules can then be directly inferred.
For example,
Low-level concepts such as location, date, time, etc. are common to many pre-existing applications and can be present in a common ontology with corresponding grammar links. An annotation user 210 then only needs to add new domain-specific concepts to the ontology 204. In the example shown in
Once the ontology 204 has been defined, step 301, an annotation user 210 uses an annotation module 203 to annotate input training data 201 for the new application, step 302. The user 210 tags each available sentence in the training data 201 with the appropriate multi-level concept annotations from the defined ontology 204. The first level is the intent, if one is present. The lower concept levels then contain the specific parts of the intent that are linked to low-level concepts.
For example, the sentence I would like to fly from Montreal to New-York next Monday would be tagged as:
First level: <intent_fly> I would like to fly from Montreal to
New-York next Monday </intent_fly>
Second level: <intent_fly> I would like to fly from
<departure_location> Montreal </departure_location> to
<arrival_location> New-York </arrival_location> <departure_date>
next Monday </departure_date> </intent_fly>
Third level: <intent_fly> I would like to fly from <departure_location>
<location> Montreal </location> </departure_location> to
<arrival_location> <location> New-York </location>
</arrival_location> <departure_date> <date> next Monday </date>
</departure_date></intent_fly>
Fourth level: <intent_fly> I would like to fly from
<departure_location> <location><city> Montreal</city> </location>
</departure_location> to <arrival_location> <location> <city> New-York
</city> </location> </arrival_location> <departure_date> <date>
<order>next </order> <day_of_week> Monday
</day_of_week> </date>
</departure_date> </intent_fly>
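Because each annotation level is well-formed markup, its concept hierarchy can be inspected directly. The sketch below parses the fourth-level annotation with a standard XML parser; `concept_paths` is an illustrative helper, not part of the described system, and for simplicity it ignores the literal words between tags:

```python
import xml.etree.ElementTree as ET

# The fourth-level annotation from the example, as one well-formed string.
annotated = (
    "<intent_fly> I would like to fly from "
    "<departure_location><location><city> Montreal </city></location>"
    "</departure_location> to "
    "<arrival_location><location><city> New-York </city></location>"
    "</arrival_location> "
    "<departure_date><date><order> next </order>"
    "<day_of_week> Monday </day_of_week></date></departure_date>"
    "</intent_fly>"
)

def concept_paths(elem, prefix=""):
    """Yield (concept-path, text) pairs for every leaf concept in the tree.
    Mixed text between tags (the carrier phrase) is ignored in this sketch."""
    path = f"{prefix}/{elem.tag}" if prefix else elem.tag
    children = list(elem)
    if not children:
        yield path, (elem.text or "").strip()
    for child in children:
        yield from concept_paths(child, path)

for path, text in concept_paths(ET.fromstring(annotated)):
    print(path, "->", text)
```

Each printed path runs from the intent down to a low-level concept, mirroring the annotation hierarchy the inference step consumes.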
Given multiple levels of annotation from defining the application ontology, step 301, and annotating the training data 201 via the annotation module 203, step 302, a grammar inference module 205 can infer grammar rules for an NLU grammar 212 for the new application, step 303. Once there is multi-level annotation data 213, the grammar inference module 205 can create a grammar rule that refers to lower-level concept grammars and promotes those to the correct higher-level concepts. This can be done for each level of the annotation hierarchy. The new application grammar 212 is not inferred directly from the ontology 204, but rather indirectly, because the annotation module 203 is driven by the ontology 204 and the annotation user 210 can only select concepts that are linked in the ontology 204. During annotation, the UI layer 209 may only let the annotation user 210 select concepts that are linked together in the ontology 204.
The previous example sentence, I would like to fly from Montreal to New-York leaving next Monday, yields the inferred grammar path:
<item>
  I would like to fly from
  <ruleref uri="#LOCATION"/> <tag>departure_location=LOCATION.location</tag>
  to
  <ruleref uri="#LOCATION"/> <tag>arrival_location=LOCATION.location</tag>
  leaving
  <ruleref uri="#DATE"/> <tag>departure_date=DATE.date</tag>
  <tag>intent="intent_fly"</tag>
</item>
where the grammar rule LOCATION includes a grammar of city names:
<rule id="LOCATION" scope="private">
  <one-of>
    <item> <ruleref uri="City.grxml"/>
      <tag>location=City.city</tag>
    </item>
  </one-of>
</rule>
The grammar rule DATE includes absolute and relative date-defining phrases. Note that the grammar rules inferred from the first two levels represent full sentence-covering rules in the new application grammar 212. Those full-sentence rules come directly from the annotated data 213.
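The promotion step that produces such a sentence-covering rule can be sketched in Python; `infer_rule` and the `CONCEPT_GRAMMAR` mapping are hypothetical names for illustration, not components of the actual inference module:

```python
import re

# Hypothetical sketch: each annotated span in a second-level annotation is
# replaced by a reference to an existing lower-level concept grammar,
# yielding a sentence-covering rule template.
annotated = (
    "<intent_fly> I would like to fly from "
    "<departure_location> Montreal </departure_location> to "
    "<arrival_location> New-York </arrival_location> "
    "<departure_date> next Monday </departure_date> </intent_fly>"
)

# Which lower-level grammar each mid-level concept promotes (illustrative).
CONCEPT_GRAMMAR = {
    "departure_location": "LOCATION",
    "arrival_location": "LOCATION",
    "departure_date": "DATE",
}

def infer_rule(tagged):
    """Turn an annotated sentence into a rule template with rulerefs."""
    # Strip the outer intent tags but remember the intent name.
    m = re.match(r"<(\w+)>\s*(.*?)\s*</\1>$", tagged.strip(), re.S)
    intent, body = m.group(1), m.group(2)

    def to_ruleref(span):
        concept = span.group(1)
        grammar = CONCEPT_GRAMMAR[concept]
        return f'<ruleref uri="#{grammar}"/> <tag>{concept}={grammar}</tag>'

    # Replace each tagged span (and its literal words) with a rule reference.
    body = re.sub(r"<(\w+)>\s*.*?\s*</\1>", to_ruleref, body)
    return body + f' <tag>intent="{intent}"</tag>'

print(infer_rule(annotated))
```

The literal concept words (Montreal, New-York, next Monday) disappear from the template: they are handled by the reused lower-level grammars, which is the promotion described above.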
Another example sentence might be: When does flight one two three arrive. The annotation would be:
<request_arrival_time> When does flight
<flight_number> <cardinal_number> one two three
</cardinal_number> </flight_number> arrive
</request_arrival_time>
And the resulting full inferred sentence rule is:
<item>
  When does flight
  <ruleref uri="#CARDINAL_NUMBER"/>
  <tag>flight_number=CARDINAL_NUMBER.number</tag>
  arrive
  <tag>intent="request_arrival_time"</tag>
</item>
In the first inferred grammar example above, the words Montreal, New-York, next, and Monday are handled by the respective low-level concepts city, city, order, and day_of_week. It is therefore enough for the grammar inference module 205 simply to infer grammar rules that promote those low-level concepts; there is no need to create new rules that catch words and set values (such as a rule that returns city:montreal for the word Montreal). Some concept words, however, may not be covered by the low-level concept rules. Take for example the sentence:
The inferred new application grammar 212 as initially created covers full sentences. For robust NLU operation, the grammar inference module 205 also creates smaller back-off grammar rules, step 304. By using the top-level annotation of all the verified sentences in the annotated data 213, the grammar inference module 205 can extract part of an annotation that appears in the same context. For example:
Output slot format. The inferred grammar needs to generate an output that will be understood by the component using the NLU engine 214. That can be done using a meaning representation in JSON format as the output of the inferred grammar based on the annotation and the ontology. This meaning representation is the information extracted by the NLU engine 214 that will be sent to the next component, for example, Application Developer Kit (ADK) 215.
Suppose the intention intent_travel is associated with the following sentence:
I would like to fly from <departure_location> <location> <city>
Montreal</city> </location> </departure_location> to
<arrival_location> <location> <city> New-York </city> </location>
</arrival_location>.
The NLU engine 214 parses the sentence "I would like to fly from montreal to new york" with the application grammar 212, and the output in JSON format will be:
{ "INTENT_TRAVEL": {
    "DEPARTURE_LOCATION": { "LOCATION": { "CITY": "montreal" } },
    "ARRIVAL_LOCATION": { "LOCATION": { "CITY": "new york" } }
} }
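One way to derive such a nested, uppercase-keyed JSON meaning representation from a multi-level annotation is sketched below; `to_meaning` is an illustrative helper, not part of the described NLU engine:

```python
import json
import xml.etree.ElementTree as ET

# The annotated sentence from the intent_travel example, as one string.
annotated = (
    "<intent_travel> I would like to fly from "
    "<departure_location><location><city> montreal </city></location>"
    "</departure_location> to "
    "<arrival_location><location><city> new york </city></location>"
    "</arrival_location> </intent_travel>"
)

def to_meaning(elem):
    """Recursively convert a concept element into a nested dict; leaf
    concepts map to their text value."""
    children = list(elem)
    if not children:
        return (elem.text or "").strip()
    return {child.tag.upper(): to_meaning(child) for child in children}

root = ET.fromstring(annotated)
meaning = {root.tag.upper(): to_meaning(root)}
print(json.dumps(meaning, indent=2))
```

The resulting dict has the same shape as the JSON meaning representation shown above, ready to be sent to the next component.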
The application grammar 212 can also be applied to the annotated data 213, step 305, to evaluate the consistency of the annotations. Sentences in the annotated data 213 that are covered by the application grammar 212 parse fully. This can be done regularly during the user annotation session and permits significant annotation time savings: as the annotation user 210 progresses, more and more sentences in the annotated data 213 are already covered by the application grammar 212 developed from previous annotations. The annotation user 210 also can reimport and re-annotate data that is not yet covered by the inferred application grammar 212, step 306, and this can be iteratively repeated as many times as needed.
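The coverage check behind this iteration can be sketched as a simple loop; `parse` here is a toy stand-in for the real grammar parser, not the actual algorithm:

```python
# Hypothetical sketch of the annotation-consistency check: re-parse each
# annotated sentence with the inferred grammar and report those it does
# not yet cover, so the user re-annotates only those.
def parse(grammar, sentence):
    """Toy stand-in: a sentence 'parses' if some template's literal words
    all occur in the sentence. Real parsing applies the grammar rules."""
    words = set(sentence.lower().split())
    return any(set(t.lower().split()) <= words for t in grammar)

grammar_templates = ["i would like to fly from to"]
annotated_sentences = [
    "I would like to fly from Montreal to New-York",
    "When does flight one two three arrive",
]

uncovered = [s for s in annotated_sentences
             if not parse(grammar_templates, s)]
print(uncovered)  # sentences still needing annotation/inference
```

As more annotations feed the inference step, `grammar_templates` grows and the `uncovered` list shrinks, which is the time saving described above.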
Such inferred grammars can be used in a front-end initial semantic interpreter for an NLU arrangement such as a statistical semantic model (SSM) system. A user input query can be parsed by the inferred grammars to develop a semantic interpretation 705. If the grammar parsing of the input query is unsuccessful (no meaning returned), then the query can be passed to a statistical learning machine for an interpretation. For example,
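Independently of the figures, this grammar-first flow with statistical fallback can be sketched in toy form; both interpreter functions below are illustrative stand-ins, not the described components:

```python
# Toy sketch of the SSM front-end flow: try the inferred grammar first;
# if it returns no meaning, fall back to a statistical interpreter.
def grammar_interpret(query):
    """Stand-in grammar parser: returns a meaning dict or None."""
    if "fly from" in query.lower():
        return {"intent": "intent_fly"}
    return None  # out-of-grammar: no meaning returned

def statistical_interpret(query):
    """Stand-in for the statistical learning machine."""
    return {"intent": "unknown", "source": "statistical"}

def interpret(query):
    meaning = grammar_interpret(query)
    if meaning is not None:
        return meaning
    return statistical_interpret(query)

print(interpret("I would like to fly from Montreal"))
print(interpret("tell me a joke"))
```

The first query is covered by the inferred grammar; the second falls through to the statistical interpreter, matching the fallback behavior described above.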
Embodiments of the invention may be implemented in whole or in part in any conventional computer programming language. For example, preferred embodiments may be implemented in a procedural programming language (e.g., "C") or an object-oriented programming language (e.g., "C++", Python). Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
Embodiments can be implemented in whole or in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.
Peters, Stephen Douglas, Tremblay, Réal, Robillard, Serge, Tremblay, Jerome
Patent | Priority | Assignee | Title |
5878385, | Sep 16 1996 | ERGO LINGUISTIC TECHNOLOGIES, INC | Method and apparatus for universal parsing of language |
6615178, | Feb 19 1999 | Sony Corporation | Speech translator, speech translating method, and recorded medium on which speech translation control program is recorded |
7257565, | Mar 31 2000 | Microsoft Technology Licensing, LLC | Linguistic disambiguation system and method using string-based pattern training learn to resolve ambiguity sites |
7548847, | May 10 2002 | Microsoft Technology Licensing, LLC | System for automatically annotating training data for a natural language understanding system |
7617093, | Jun 02 2005 | Microsoft Technology Licensing, LLC | Authoring speech grammars |
7653545, | Jun 11 1999 | Telstra Corporation Limited | Method of developing an interactive system |
7869998, | Apr 23 2002 | Microsoft Technology Licensing, LLC | Voice-enabled dialog system |
8972445, | Apr 23 2009 | THE GORMAN FAMILY TRUST | Systems and methods for storage of declarative knowledge accessible by natural language in a computer capable of appropriately responding |
9767093, | Jun 19 2014 | Microsoft Technology Licensing, LLC | Syntactic parser assisted semantic rule inference |
20020111811, | |||
20020133347, | |||
20020156616, | |||
20030121026, | |||
20040083092, | |||
20040122653, | |||
20040220797, | |||
20040220809, | |||
20040243568, | |||
20050246158, | |||
20060074634, | |||
20060212841, | |||
20080059149, | |||
20080126078, | |||
20080133220, | |||
20080208584, | |||
20090076799, | |||
20090259459, | |||
20090276396, | |||
20090292530, | |||
20100145902, | |||
20100161316, | |||
20100332217, | |||
20110054899, | |||
20120109637, | |||
20130173247, | |||
20140040312, | |||
20140297282, | |||
20140337814, | |||
20150186504, | |||
20150370778, | |||
20160026608, | |||
WO2015195744, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Jul 08 2013 | TREMBLAY, JEROME | Nuance Communications, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 030801 | /0942 | |
Jul 08 2013 | PETERS, STEPHEN DOUGLAS | Nuance Communications, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 030801 | /0942 | |
Jul 08 2013 | ROBILLARD, SERGE | Nuance Communications, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 030801 | /0942 | |
Jul 09 2013 | TREMBLAY, REAL | Nuance Communications, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 030801 | /0942 | |
Jul 15 2013 | Nuance Communications, Inc. | (assignment on the face of the patent) | / | |||
Sep 20 2023 | Nuance Communications, Inc | Microsoft Technology Licensing, LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 065533 | /0389 |