A speech dialogue service apparatus including: a language analysis module tagging a part of speech (POS) of each respective word included in a sentence recorded in a predetermined text, syntactically analyzing the sentence by classifying a meaning of each respective word, and generating at least one semantic frame corresponding to the sentence according to a result of the syntactical analysis; and a dialogue management module analyzing an intention of the sentence corresponding to the at least one respective semantic frame, and generating a system response corresponding to the sentence intention by selecting a predetermined sentence intention according to whether an action corresponding to the intention of the respective sentence can be performed.
8. A method for a speech dialogue service comprising:
analyzing a morpheme of a sentence;
tagging a POS of each respective word included in the sentence recorded in a predetermined text;
classifying a meaning of each respective word;
parsing the sentence into at least one respective phrase;
using at least one processing device classifying each respective phrase into a theme, a parameter, and an action;
generating at least one semantic frame corresponding to the sentence;
restoring, using at least one processing device, the sentence by converting a respective phrase of the at least one semantic frame into a valid phrase or a default value, by analogizing a phrase determined to be omitted from the sentence with reference to previous utterance contents of a user;
analyzing an intention of the user according to the restored sentence;
generating at least one action list according to the intention of the user;
determining whether the generated action can be performed; and
generating a system response corresponding to the action being determined to be able to be performed.
9. A computer-readable recording medium including a computer-executable program to control at least one processing device to execute a speech dialogue service method, the method comprising:
analyzing a morpheme of a sentence;
tagging a POS of each respective word included in the sentence recorded in a predetermined text;
classifying a meaning of each respective word;
parsing the sentence into at least one respective phrase;
classifying each respective phrase into a theme, a parameter, and an action;
generating at least one semantic frame corresponding to the sentence;
restoring the sentence by converting a respective phrase of the at least one semantic frame into a valid phrase or a default value, by analogizing a phrase determined to be omitted from the sentence with reference to previous utterance contents of a user;
analyzing an intention of the user according to the restored sentence;
generating at least one action list according to the intention of the user;
determining whether the generated action can be performed; and
generating a system response corresponding to the action being determined to be able to be performed.
1. A speech dialogue service apparatus, comprising at least one processing device, the apparatus comprising:
a language analysis unit comprising the at least one processing device tagging a part of speech (POS) of each respective word included in a sentence recorded in a predetermined text, syntactically analyzing the sentence by classifying a meaning of each respective word, and generating at least one semantic frame corresponding to the sentence according to a result of the syntactical analysis; and
a dialogue management unit analyzing an intention of the sentence corresponding to each of the at least one semantic frame, and generating a system response corresponding to the sentence intention by selecting a predetermined sentence intention according to whether an action corresponding to the intention of the respective sentence can be performed,
wherein the language analysis unit comprises:
a POS tagging unit analyzing a morpheme of the sentence and tagging the POS of each respective word;
a syntax analysis unit classifying each respective word for each meaning and parsing the sentence into at least one phrase; and
a frame analysis unit classifying each respective phrase into a theme, a parameter, and an action, and generating at least one semantic frame corresponding to the sentence,
wherein the dialogue management unit comprises:
a context information unit restoring the sentence by converting a respective phrase of the at least one semantic frame into a valid phrase or a default value, by analogizing a phrase determined to be omitted from the sentence with reference to previous utterance contents of a user; and
a user intention analysis unit generating at least one action list by analyzing an intention of the user according to the restored sentence and selecting a predetermined action by determining whether a respective action can be performed.
2. The apparatus of
wherein the text comprises at least one natural language uttered by the user.
3. The apparatus of
a reference database maintaining a reference table in which one of the theme, the parameter, and the action is established as a reference domain and at least one valid phrase or default value corresponding to other domains in addition to the reference domain established with respect to a predetermined phrase is recorded; and
a focus stack in which a result of analyzing previous utterance contents in response to at least one user is recorded.
4. The apparatus of
5. The apparatus of
wherein the user intention analysis unit extracts the argument corresponding to the respective phrase included in the sentence with reference to the context model database and analyzes the intention of the user according to the sentence with reference to the sub-dialogue associated with the extracted argument.
6. The apparatus of
7. The apparatus of
This application claims priority from Korean Patent Application No. 10-2006-0020600, filed on Mar. 3, 2006, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a speech dialogue service apparatus and method, and more particularly, to a speech dialogue service apparatus and method of analyzing a dialogue style sentence including a natural language uttered by a user, analogizing omitted information of the sentence via dynamic context management, and analyzing and performing an intention of the user, thereby precisely analyzing and performing an uttered content of the user even when the user utters in the natural language as an ordinary dialogue instead of uttering only in a certain type of dialog capable of being recognized by a system.
2. Description of Related Art
Currently, technologies relating to home networks have been rapidly developing. Via home networks, home electronics such as a television, a video tape recorder, a telephone, a washer, and a refrigerator are connected to each other and users may enjoy various benefits by such network connections of home electronics.
In conventional technologies, to control home electronics by home networks, users must know the command system corresponding to each home electronics device in order to interact with that device. For example, users may directly control home electronics by using a remote control or a portable device.
However, as the so-called “Ubiquitous era” comes of age, methods of directly controlling home electronics by users as described above are being gradually substituted by methods in which home electronics are controlled by recognizing contents of dialogue uttered by users and executing corresponding operations.
Methods of controlling dialogue type home electronics include speech recognition technology for receiving and converting a speech of users into a text, and technology for applying a dialogue type order analyzed by speech recognition to home electronics connected via home networks.
However, according to the described conventional method of controlling home electronics by speech recognition, there is a restriction on the utterance contents of users. Specifically, users must utter only the few instructions capable of being recognized by a home electronics control system. Accordingly, users must be familiar with the instructions that the system can recognize in order to control home electronics.
Therefore, dialogue type speech recognition services in which a user may more freely utter an instruction in a natural language, and a system may recognize the uttered natural language instruction and control home electronics, are being developed. According to the dialogue type speech recognition services, the user does not need to know a certain instruction in advance and instead utters words that can be generally recognized, thereby easily controlling home electronics.
As a conventional dialogue type speech recognition service model, U.S. Pat. No. 6,604,090 and U.S. Patent Application No. 2002/0133347 disclose service models in which a keyword list is made by extracting a keyword from utterance contents of a user, a template corresponding to the keyword is extracted from a database, and a response is determined by comparing the templates with each other.
Also, U.S. Pat. Nos. 6,246,981 and 6,786,651 disclose service models in which expected dialogue forms are previously recorded and a response is provided according to a predetermined scenario corresponding to utterance contents of a user for each category, thereby recognizing an intention of the user.
However, in the aforementioned conventional dialogue type speech recognition services, since a natural language instruction uttered by a user is recognized by referring to standardized words previously inputted, such services are restricted to natural language analysis whose target resembles a literary style or a limited combination of sentences. Specifically, in most natural language sentences uttered by a user, a word or phrase is omitted, tenses are inconsistent, and/or word order is inverted. Accordingly, the meaning itself may be ambiguous, and a natural language constructed from such imperfect sentences cannot be precisely recognized by the conventional services.
Also, when analyzing the intention of the user according to a certain scenario, such services cannot cope with general dialogue environments in which the intention of the user changes frequently and cannot be estimated from circumstances.
Accordingly, development of a dialogue type speech recognition service model capable of inducing a more intelligent and natural dialogue by more precisely analyzing and responding to imperfect sentence contents of a natural language instruction uttered by the user is required.
An aspect of the present invention provides a speech dialogue service apparatus and method, in which utterance contents of a user, including a natural language, are analyzed by recognizing a semantic slot, thereby more precisely recognizing the utterance contents of the user regardless of a type of utterance of the user.
An aspect of the present invention also provides a speech dialogue service apparatus and method, in which utterance contents of a user are analyzed by managing a dynamic context, thereby more precisely analyzing an intention of the user regardless of the user.
An aspect of the present invention also provides a speech dialogue service apparatus and method, in which utterance contents of a user are precisely recognized by recognizing a semantic slot and managing a context, thereby always precisely recognizing an intention of the user and performing a corresponding service even when a word of a predetermined natural language is instantly uttered as soon as the word comes to mind, without having to remember an utterance type capable of being recognized by each individual system.
According to an aspect of the present invention, there is provided a speech dialogue service apparatus including: a language analysis module tagging a part of speech (POS) of each respective word included in a sentence recorded in a predetermined text, syntactically analyzing the sentence by classifying a meaning of each respective word, and generating at least one semantic frame corresponding to the sentence according to a result of the syntactical analysis; and a dialogue management module analyzing an intention of the sentence corresponding to each of the at least one semantic frame, and generating a system response corresponding to the sentence intention by selecting a predetermined sentence intention according to whether an action corresponding to the intention of the respective sentence can be performed.
According to another aspect of the present invention, there is provided a speech dialogue service method including: tagging a POS of each respective word included in a sentence recorded in a predetermined text; syntactically analyzing the sentence by classifying a meaning of each respective word; generating at least one semantic frame corresponding to the sentence according to a result of the syntactical analysis; analyzing an intention of the sentence corresponding to each respective semantic frame; selecting a predetermined sentence intention according to whether an action corresponding to the intention of the sentence can be performed; and generating a system response corresponding to the sentence intention.
According to another aspect of the present invention, there is provided a method of providing a speech dialogue service, including: recognizing and converting uttered speech into text; resolving an intention of the speech by analyzing a sentence in the text by tagging a part of speech (POS) of each word of the sentence, parsing the sentence into at least one phrase by classifying a meaning of each word and combining each word whose meaning is classified, generating at least one semantic frame corresponding to the sentence according to the meaning of each word in the sentence, determining an intention of the sentence corresponding to the at least one semantic frame, and generating a system response corresponding to the sentence intention by selecting a predetermined sentence intention according to whether an action corresponding to the determined intention can be executed; analyzing the intention and performing plan management with respect to the execution of the user's intended command according to the intention analysis, when analysis of the intention is completed; and inquiring about the intention of the speech when the intention is not correctly analyzed or when the determined intention cannot be executed.
According to other aspects of the present invention, there are provided computer-readable recording media in which programs for executing the aforementioned methods are recorded.
Additional and/or other aspects and advantages of the present invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
The above and/or other aspects and advantages of the present invention will become apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below to explain the present invention by referring to the figures.
In the present description, an entire configuration and a flow of operations of a speech dialogue service system according to an embodiment of the present invention will be briefly described by referring to
Referring to
The uttered contents formed of the natural language may have ambiguity. In the case of natural language, grammatical constraints are fewer than in an artificial language, such as a programming language of a computer, and an area of use is not restricted. Accordingly, the natural language has ambiguity in which clauses and syntaxes forming a sentence are analyzed as at least one part of speech (POS), syntax structure, or meaning, according to a context.
The ambiguity of the natural language is a phenomenon that sometimes requires that elements forming the natural language, such as clauses, sentences, or syntax structure, be analyzed more than once. The ambiguity of the natural language may be divided into lexical ambiguity, syntactic ambiguity, and semantic ambiguity.
The lexical ambiguity is an ambiguity in which a word or clause used in a sentence may be analyzed as more than one POS or morpheme. The syntactic ambiguity is an ambiguity in which one grammar structure may be analyzed more than once. The semantic ambiguity is an ambiguity in which a meaning of a word or clause may be analyzed more than once.
The speech dialogue service system recognizes and converts speech uttered by the user into text (110). A word or a sentence of the speech uttered by the user is analyzed using the text to determine the user's intention (i.e., intended command) (120). When the analysis of the word or sentence is completed, the speech dialogue service system performs dialogue management by analyzing an intention of the user according to the uttered content (130). When analysis of the user's intention is completed, the speech dialogue service system performs plan management with respect to the execution of the user's intended command (service performance) according to the intention analysis (140). Each service may be performed according to the plan management (150). Also, when it is determined that the intention of the user is not correctly analyzed in the dialogue management operation (130), a system response inquiring about the user's intention is made (160).
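The flow of operations (110) through (160) described above can be sketched as a simple pipeline. This is a minimal illustration only; the stage functions below are hypothetical stand-ins, not the patent's implementation, and each real stage would be far more elaborate.

```python
# Hypothetical stand-ins for the pipeline stages; each returns a trivial
# placeholder so the overall control flow (110)-(160) can be traced.

def recognize_speech(audio):
    # (110) speech recognition: assume the utterance is already text here.
    return audio

def analyze_language(text):
    # (120) word/sentence analysis: produce a trivial "semantic frame".
    return {"text": text}

def manage_dialogue(frame):
    # (130) dialogue management: treat non-empty text as an analyzable intention.
    return frame["text"] or None

def manage_plan(intention):
    # (140) plan management with respect to executing the intended command.
    return f"plan({intention})"

def perform_service(plan):
    # (150) service performance according to the plan management.
    return f"executed {plan}"

def run_pipeline(utterance):
    text = recognize_speech(utterance)
    frame = analyze_language(text)
    intention = manage_dialogue(frame)
    if intention is None:
        # (160) intention not correctly analyzed: inquire of the user.
        return "inquiry: please restate"
    plan = manage_plan(intention)
    return perform_service(plan)
```

Calling `run_pipeline("turn on TV")` walks the recognized text through every stage, while an empty (unanalyzable) utterance falls through to the inquiry response of operation (160).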
Referring to
The speech recognition module 210 recognizes and converts speech uttered by a user into a text. The speech recognition module 210 may be a general speech recognition apparatus capable of recognizing and converting the speech of the user into a predetermined text.
The language analysis module 220 may include a POS tagging unit 221, a syntax analysis unit 222, and a frame analysis unit 223. The language analysis module 220 may tag a POS to each word included in a sentence recorded in the text, may parse the sentence by classifying a meaning of the each word, and may generate at least one semantic frame corresponding to the sentence according to a result of the parsing.
The POS tagging unit 221 tags a POS of each word included in the sentence by analyzing a morpheme of the sentence. POS tagging is a process of assigning accurate POS information to each word according to the context in which the word is used in the sentence. POS tagging may generally be used as a preprocessing process for reducing excessive loads in the parsing operation due to lexical ambiguity.
Non-limiting examples of POS tagging methods include a rule-based POS tagging method and a statistic-based POS tagging method, both generally used in a natural language processing field.
In the statistic-based POS tagging method, the lexical ambiguity is stochastically solved by using probability or uncertainty obtained by analyzing a large amount of raw or tagged corpora including examples of real-life natural language and attached information, and by extracting statistical information with respect to the natural language.
Conversely, in the rule-based POS tagging method, a common theory or a determinate rule applicable to POS tagging is detected, and the lexical ambiguity is determinately solved by using the detected common theory or determinate rule. The POS tagging unit 221 may tag the POS by any method, including the rule-based POS tagging method, the statistic-based POS tagging method, and various other known POS tagging methods.
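In the spirit of the rule-based method just described, a toy tagger can resolve lexical ambiguity with a small lexicon plus determinate fallback rules. The lexicon entries and tag names below are illustrative assumptions, not the patent's tag set.

```python
# Toy rule-based POS tagger: a small lexicon plus determinate default rules.
# Lexicon contents and tag names are illustrative assumptions.

LEXICON = {
    "change": "VERB", "television": "NOUN", "channel": "NOUN",
    "into": "ADP", "no.": "NOUN", "11": "NUM",
}

def tag_pos(sentence):
    """Return (word, tag) pairs for each word of the sentence."""
    tags = []
    for word in sentence.lower().split():
        if word.isdigit():
            # Determinate rule: a digit string is always a numeral.
            tags.append((word, "NUM"))
        else:
            # Lexicon lookup, with NOUN as the determinate default tag.
            tags.append((word, LEXICON.get(word, "NOUN")))
    return tags
```

Running `tag_pos("Change television channel into No. 11")` yields one tag per word, mirroring the preprocessing role POS tagging plays before parsing.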
The syntax analysis unit 222 classifies each word for each meaning and parses the sentence into at least one phrase. The syntax analysis unit 222 may tag a basic meaning corresponding to each word by classifying the meaning of the word to which a POS is tagged. The syntax analysis unit 222 may classify the meaning of the word by referring to a predetermined word sense database (not shown) in which a general meaning of a word is recorded.
The syntax analysis unit 222 may parse the sentence into at least one phrase by combining each word whose meaning is classified. Specifically, words may be combined with each other by using the POS or the meaning tagged to each word. For example, when the sentence is “Change television channel into No. 11”, the words included in the sentence may be “change”, “television”, “channel”, “into”, “No.”, and “11”. In this case, the syntax analysis unit 222 may combine words performing the same role with each other by using the POS tagged to each word or the meaning classification, and may parse the sentence into phrases such as “change”, “television”, “channel”, and “into No. 11”.
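The phrase grouping in the “Change television channel into No. 11” example can be sketched with one simplified combining rule: an adposition attaches to the words that follow it. This rule is an assumption chosen to reproduce the example; the patent's actual combining logic is not specified at this level.

```python
# Sketch of phrase grouping over (word, tag) pairs. The single rule here
# (an ADP word absorbs the remaining tail of the sentence) is a simplifying
# assumption that reproduces the "into No. 11" grouping from the example.

def parse_phrases(tagged):
    """tagged: list of (word, tag) pairs; returns a list of phrase strings."""
    phrases = []
    i = 0
    while i < len(tagged):
        word, tag = tagged[i]
        if tag == "ADP":
            # Combine the adposition with the rest of the sentence tail.
            tail = [w for w, _ in tagged[i:]]
            phrases.append(" ".join(tail))
            break
        phrases.append(word)
        i += 1
    return phrases
```

Applied to the tagged example sentence, this produces the four phrases named in the text: “change”, “television”, “channel”, and “into No. 11”.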
The frame analysis unit 223 classifies the phrases into a theme, a parameter, and an action and generates at least one semantic frame corresponding to the sentence. When the phrases are parsed by the syntax analysis unit 222, the frame analysis unit 223 establishes a semantic slot corresponding to the respective phrase and substitutes the semantic slot for the respective phrase, thereby generating a semantic frame corresponding to the sentence. The semantic slot may be established as a theme slot, a parameter slot, and an action slot. Examples of the semantic slot will be described by referring to
As shown in
For example, when a user utters a sentence such as “Change television channel No. 18”, the sentence may be parsed into “change”, “television”, “channel”, and “No. 18” by the syntax analysis unit 222. The frame analysis unit 223 determines a type of slot to which a respective phrase is applied. Specifically, “television” and “No. 18” may be applied to the parameter slot, “channel” may be applied to the theme slot, and “change” may be applied to the action slot.
When the respective phrase is applied to a respective semantic slot as described above, the frame analysis unit 223 may analyze again the respective phrase. Specifically, “television” applied to the parameter may be analyzed again as “TV”, and “No. 18” applied to the parameter slot may be analyzed again as “18”. As described above, the frame analysis unit 223 may generate a semantic frame by analyzing again the respective phrase as a kind of domain-dependent language capable of being recognized by a system.
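The two frame-analysis steps above (applying each phrase to a theme, parameter, or action slot, then re-analyzing it into domain-dependent language such as “television” → “TV” and “No. 18” → “18”) can be sketched as a table-driven pass. The slot-membership and normalization tables are illustrative assumptions built from the example sentence.

```python
# Sketch of frame analysis: each phrase is applied to a semantic slot and
# re-analyzed into a domain-dependent value the system can recognize.
# SLOT_OF and DOMAIN_VALUE are illustrative tables, not the patent's data.

SLOT_OF = {"channel": "theme", "television": "parameter",
           "no. 18": "parameter", "change": "action"}
DOMAIN_VALUE = {"television": "TV", "no. 18": "18"}

def build_semantic_frame(phrases):
    """Fill theme/parameter/action slots from a list of parsed phrases."""
    frame = {"theme": [], "parameter": [], "action": []}
    for phrase in phrases:
        key = phrase.lower()
        slot = SLOT_OF.get(key)
        if slot:
            # Re-analyze the phrase into domain-dependent language.
            frame[slot].append(DOMAIN_VALUE.get(key, phrase))
    return frame
```

For the example “Change television channel No. 18”, this yields a frame whose theme slot holds “channel”, whose parameter slot holds “TV” and “18”, and whose action slot holds “change”.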
Also, the frame analysis unit 223 may generate a plurality of semantic frames in response to one sentence. Specifically, the plurality of semantic frames may be generated by differentiating a semantic slot applied to a respective phrase.
To generate the semantic frame as described above, the frame analysis unit 223 may maintain at least one semantic frame where a sense code is previously established in various methods, with respect to each of at least one control target device such as a TV, a refrigerator, a robot, an air conditioner, and a video player.
Referring back to
The dialogue management module 230 may analyze an intention of the sentence (i.e., an intended meaning) corresponding to each semantic frame, may select a predetermined sentence intention according to whether an action corresponding to each sentence intention can be performed, and may generate a system response corresponding to the sentence intention.
The context information unit 231 converts a respective phrase of the semantic frame into a valid phrase or a default value by referring to the reference database 234. For this, the reference database 234 may include reference tables in which one of the theme, the parameter, and the action is established as a reference domain and at least one valid phrase or default value corresponding to domains in addition to the reference domain established with respect to a predetermined phrase is recorded. An example of the reference table will be described by referring to
In
For example, when a sentence according to the contents of the utterance of the user is “Turn to MBC”, a reference table whose domain action is “setChannel” may be loaded as described above. In the reference table, “MBC” of the sentence may be recognized as MBC as it is or may be recognized as absolute-channel information of 11.
Also, the sentence does not include a target. Specifically, information with respect to the target whose channel is to be turned to MBC, that is, which of a plurality of TVs should have its channel changed, is omitted. In this case, as shown in the reference table of
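The reference-table lookup for the “setChannel” domain action can be sketched as a default-then-resolve pass: an omitted argument is restored with the table's default value, and a channel name may be resolved to absolute-channel information. The MBC → 11 mapping follows the example in the text, but the default target name “livingroomTV” and the table layout are purely illustrative assumptions.

```python
# Sketch of restoring omitted arguments from a reference table. The table
# layout and the default target "livingroomTV" are hypothetical; only the
# MBC -> 11 absolute-channel mapping comes from the example in the text.

REFERENCE_TABLE = {
    "setChannel": {
        "defaults": {"target": "livingroomTV"},   # assumed default value
        "channel": {"MBC": 11},                   # name -> absolute channel
    },
}

def restore_arguments(domain_action, args):
    """Merge user-supplied args over table defaults and resolve channels."""
    table = REFERENCE_TABLE[domain_action]
    restored = dict(table["defaults"])   # start from the default values
    restored.update(args)                # user-supplied values take priority
    name = restored.get("channel")
    if name in table["channel"]:
        # Recognize the name as absolute-channel information.
        restored["channel"] = table["channel"][name]
    return restored
```

For the utterance “Turn to MBC”, which omits the target, the call `restore_arguments("setChannel", {"channel": "MBC"})` fills in the default target and resolves “MBC” to channel 11.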
For example, when a sentence according to contents of an utterance of the user is “record Friends”, the context information unit 231 may load a reference table whose domain action is established as “setRecordBooking” from the reference database 234. In the reference table of
The absolute-channel information may be received via a predetermined server. Specifically, the sentence does not include the absolute-channel information. Accordingly, the context information unit 231 may access the predetermined server providing TV program information, may receive from the server the absolute-channel information indicating that the channel corresponding to “Friends” is 11, and may establish it in the reference table.
Recording start-time information and recording end-time information may also be received from the server. The context information unit 231 may receive from the server the information that the playing time of “Friends” is Monday from 10:00 to 11:00 and may establish it in the reference table.
The server may be externally located or may be a predetermined memory device included in the speech dialogue service apparatus according to an embodiment of the present invention. For example, in the case of broadcast data, the speech dialogue service apparatus may receive various pieces of program information at any time from each broadcasting station to record and maintain in the memory device.
As described by referring to
Also, the context information unit 231 may analogize a phrase determined as being omitted from the sentence by referring to previous utterance contents, and may restore the sentence. The previous utterance contents of the user may be recorded and maintained in the focus stack 235. In the focus stack 235, as in the reference table, the previous utterance contents of the user may be recorded according to a domain action or a type of argument. Also, the context information unit 231 may analogize an omitted argument value of the sentence from the value most recently recorded in the focus stack 235. In this way, omitted phrases can be deduced and used to resolve the user's intention.
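The focus-stack behavior just described can be sketched as a stack of previously analyzed argument sets, searched newest-first for any argument omitted from the current sentence. The class name and field names are illustrative assumptions.

```python
# Sketch of a focus stack: previously analyzed utterance arguments are
# pushed, and omitted arguments are analogized from the most recently
# recorded value. Names and structure are illustrative assumptions.

class FocusStack:
    def __init__(self):
        self._stack = []

    def push(self, args):
        """Record the argument values of an analyzed utterance."""
        self._stack.append(dict(args))

    def fill_omitted(self, args, needed):
        """Fill each needed-but-missing argument from the newest entry."""
        filled = dict(args)
        for key in needed:
            if key not in filled:
                for entry in reversed(self._stack):   # most recent first
                    if key in entry:
                        filled[key] = entry[key]
                        break
        return filled
```

If the user previously addressed the bedroom TV, a later utterance that omits the target inherits “bedroomTV” from the newest stack entry, while explicitly supplied arguments are left untouched.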
As described above, the context information unit 231 may restore the sentence by referring to the reference database 234 and the focus stack 235. In this case, after the sentence is restored, when a phrase whose meaning is ambiguous is included, or an omitted value exists in the sentence, the response control unit 233 may make an inquiry of the user with respect to the ambiguous phrase or the omitted value. The inquiry may be a dialog.
Referring back to
In the context model database 236, arguments corresponding to at least one phrase and a sub-dialogue that is a combination of the arguments according to the previous utterance contents of the user are recorded and maintained.
In this case, the user intention analysis unit 232 may extract the arguments corresponding to the respective phrase included in the sentence by referring to the context model database 236 and may analyze the intention of the user according to the sentence by referring to the sub-dialogue associated with the extracted arguments.
For example, when the sentence is “deliver a voice message”, the user intention analysis unit 232 may generate a sub-dialogue of a domain action, such as “deliverVoiceMessage” corresponding to “deliver” by referring to the context model database 236.
Accordingly, the user intention analysis unit 232 may analyze the intention of the user via the sub-dialogue, as in the reference table. In this case, recognizing that an opponent is omitted from the sub-dialogue, the user intention analysis unit 232 may inquire of the user about the opponent via the response control unit 233 and may establish the opponent from a response of the user.
Also, when the sub-dialogue associated with the established argument does not exist in the context model database 236, the user intention analysis unit 232 may generate and record the sub-dialogue corresponding to the argument in the context model database 236.
Also, the user intention analysis unit 232 may analyze more than one intention of the user. Specifically, more than one intention of the user may be established according to the method of combining the arguments and the ambiguity of meaning due to an omitted phrase. For example, when the utterance contents of the user is “TV”, the user intention analysis unit 232 may establish the intentions of the user according to a case of “turn on TV” and a case of “turn off TV”, respectively.
The user intention analysis unit 232 generates an action list corresponding to the at least one intention of the user. The user intention analysis unit 232 selects a predetermined action by determining whether a respective action can be performed. For example, the user intention analysis unit 232 may generate an action list including an action of “turn on TV” and an action of “turn off TV”. The user intention analysis unit 232 reads whether the TV is currently turned on or off. As a result of the reading, when the TV is turned on, the user intention analysis unit 232 may select the action of “turn off TV”. Of course, in this case, the user intention analysis unit 232 may make an inquiry to the user with respect to whether to turn the TV on or off.
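The action-list selection for the ambiguous utterance “TV” can be sketched as follows: both candidate actions are listed, and the current device state decides which one is performable. The boolean state flag stands in for reading the actual device and is an illustrative assumption.

```python
# Sketch of selecting a performable action from an action list, using the
# "TV" example: the current power state (read from the device in the real
# apparatus; a plain flag here) determines which candidate is selected.

def select_action(action_list, tv_is_on):
    """Pick the action consistent with the current TV power state."""
    for action in action_list:
        if action == "turn off TV" and tv_is_on:
            return action
        if action == "turn on TV" and not tv_is_on:
            return action
    return None   # no performable action: inquire of the user instead
```

With the TV currently on, the list `["turn on TV", "turn off TV"]` resolves to “turn off TV”; returning `None` corresponds to the case where the unit instead makes an inquiry to the user.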
As described above, when an action according to the intention of the user is selected by the user intention analysis unit 232, to perform a service according to the selected action, the service performance control module 240 may control a predetermined device for providing the service.
As described above, the speech dialogue service apparatus according to an embodiment of the present invention may analyze a dialogue of the user via analyzing a semantic slot and may recognize an intention of the user via managing a dynamic context. Accordingly, regardless of various types of utterance of the user, the intention of the user according to contents uttered by the user may be more precisely recognized. Also, there is no need to remember a type of utterance capable of being recognized by each individual system. Services may be provided to the user just by instantly uttering a desired dialog. Also, the utterance of the user may be processed, and an intelligent and dynamic speech dialogue service may be provided by a sub-dialogue.
The speech dialogue service apparatus recognizes and converts a speech uttered by a user into text (610). The speech dialogue service apparatus tags a POS of each respective word included in a sentence recorded in the text (620). The speech dialogue service apparatus analyzes a syntax of the sentence by classifying a meaning of each respective word (630).
The speech dialogue service apparatus generates at least one semantic frame corresponding to the sentence according to a result of the syntax analysis (640). The speech dialogue service apparatus analyzes an intention of the sentence corresponding to each respective semantic frame (650) and selects a predetermined sentence intention according to whether an action corresponding to each respective sentence intention can be performed (660). The speech dialogue service apparatus controls a predetermined device to perform a service according to the selected sentence intention (670). Also, the speech dialogue service apparatus may generate and provide a predetermined system response corresponding to the sentence intention to the user. The system response may include an inquiry to the user, caused by lexical ambiguity or impossibility of service performance.
The speech dialogue service method according to the present invention may be implemented as program instructions capable of being executed via various computer units and may be recorded in a computer-readable recording medium. The computer-readable medium may include program instructions, data files, and data structures, separately or in combination. The program instructions and the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those skilled in the computer software arts. Examples of the computer-readable media include magnetic media (e.g., hard disks, floppy disks, and magnetic tapes), optical media (e.g., CD-ROMs or DVDs), magneto-optical media (e.g., optical disks), and hardware devices (e.g., ROMs, RAMs, or flash memories) that are specially configured to store and execute program instructions. Examples of the program instructions include both machine code, such as that produced by a compiler, and files containing high-level language code that may be executed by the computer using an interpreter.
The above-described embodiments of the present invention provide a speech dialogue service apparatus and method, in which utterance contents of a user, given in a natural language, are analyzed by recognizing semantic slots, thereby more precisely recognizing the utterance contents of the user regardless of the type of utterance of the user.
The above-described embodiments of the present invention also provide a speech dialogue service apparatus and method, in which utterance contents of a user are analyzed by managing a dynamic context, thereby more precisely analyzing an intention of the user regardless of which user utters the contents.
The above-described embodiments of the present invention also provide a speech dialogue service apparatus and method, in which utterance contents of a user are precisely recognized by recognizing semantic slots and managing a context, thereby always precisely recognizing an intention of the user and performing a corresponding service, even when a word of a predetermined natural language is uttered as soon as it comes to mind, without having to remember the utterance types capable of being recognized by each individual system.
Although a few embodiments of the present invention have been shown and described, the present invention is not limited to the described embodiments. Instead, it would be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Cho, Jeong Mi, Kang, In Ho, Kwak, Byung Kwan
Patent | Priority | Assignee | Title |
10002189, | Dec 20 2007 | Apple Inc | Method and apparatus for searching using an active ontology |
10019994, | Jun 08 2012 | Apple Inc.; Apple Inc | Systems and methods for recognizing textual identifiers within a plurality of words |
10043516, | Sep 23 2016 | Apple Inc | Intelligent automated assistant |
10049663, | Jun 08 2016 | Apple Inc | Intelligent automated assistant for media exploration |
10049668, | Dec 02 2015 | Apple Inc | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
10049675, | Feb 25 2010 | Apple Inc. | User profiling for voice input processing |
10057736, | Jun 03 2011 | Apple Inc | Active transport based notifications |
10067938, | Jun 10 2016 | Apple Inc | Multilingual word prediction |
10074360, | Sep 30 2014 | Apple Inc. | Providing an indication of the suitability of speech recognition |
10078487, | Mar 15 2013 | Apple Inc. | Context-sensitive handling of interruptions |
10078631, | May 30 2014 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
10079014, | Jun 08 2012 | Apple Inc. | Name recognition system |
10083688, | May 27 2015 | Apple Inc | Device voice control for selecting a displayed affordance |
10083690, | May 30 2014 | Apple Inc. | Better resolution when referencing to concepts |
10089072, | Jun 11 2016 | Apple Inc | Intelligent device arbitration and control |
10101822, | Jun 05 2015 | Apple Inc. | Language input correction |
10102359, | Mar 21 2011 | Apple Inc. | Device access using voice authentication |
10108612, | Jul 31 2008 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
10127220, | Jun 04 2015 | Apple Inc | Language identification from short strings |
10127911, | Sep 30 2014 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
10134385, | Mar 02 2012 | Apple Inc.; Apple Inc | Systems and methods for name pronunciation |
10169329, | May 30 2014 | Apple Inc. | Exemplar-based natural language processing |
10170123, | May 30 2014 | Apple Inc | Intelligent assistant for home automation |
10176167, | Jun 09 2013 | Apple Inc | System and method for inferring user intent from speech inputs |
10185542, | Jun 09 2013 | Apple Inc | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
10186254, | Jun 07 2015 | Apple Inc | Context-based endpoint detection |
10192552, | Jun 10 2016 | Apple Inc | Digital assistant providing whispered speech |
10199051, | Feb 07 2013 | Apple Inc | Voice trigger for a digital assistant |
10223066, | Dec 23 2015 | Apple Inc | Proactive assistance based on dialog communication between devices |
10241644, | Jun 03 2011 | Apple Inc | Actionable reminder entries |
10241752, | Sep 30 2011 | Apple Inc | Interface for a virtual digital assistant |
10249300, | Jun 06 2016 | Apple Inc | Intelligent list reading |
10255566, | Jun 03 2011 | Apple Inc | Generating and processing task items that represent tasks to perform |
10255907, | Jun 07 2015 | Apple Inc. | Automatic accent detection using acoustic models |
10269345, | Jun 11 2016 | Apple Inc | Intelligent task discovery |
10276170, | Jan 18 2010 | Apple Inc. | Intelligent automated assistant |
10283110, | Jul 02 2009 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
10289433, | May 30 2014 | Apple Inc | Domain specific language for encoding assistant dialog |
10296160, | Dec 06 2013 | Apple Inc | Method for extracting salient dialog usage from live data |
10297253, | Jun 11 2016 | Apple Inc | Application integration with a digital assistant |
10303715, | May 16 2017 | Apple Inc | Intelligent automated assistant for media exploration |
10311144, | May 16 2017 | Apple Inc | Emoji word sense disambiguation |
10311871, | Mar 08 2015 | Apple Inc. | Competing devices responding to voice triggers |
10318871, | Sep 08 2005 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
10332518, | May 09 2017 | Apple Inc | User interface for correcting recognition errors |
10348654, | May 02 2003 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
10354011, | Jun 09 2016 | Apple Inc | Intelligent automated assistant in a home environment |
10354652, | Dec 02 2015 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
10356243, | Jun 05 2015 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
10366158, | Sep 29 2015 | Apple Inc | Efficient word encoding for recurrent neural network language models |
10381016, | Jan 03 2008 | Apple Inc. | Methods and apparatus for altering audio output signals |
10390213, | Sep 30 2014 | Apple Inc. | Social reminders |
10395654, | May 11 2017 | Apple Inc | Text normalization based on a data-driven learning network |
10403278, | May 16 2017 | Apple Inc | Methods and systems for phonetic matching in digital assistant services |
10403283, | Jun 01 2018 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
10410637, | May 12 2017 | Apple Inc | User-specific acoustic models |
10417037, | May 15 2012 | Apple Inc.; Apple Inc | Systems and methods for integrating third party services with a digital assistant |
10417266, | May 09 2017 | Apple Inc | Context-aware ranking of intelligent response suggestions |
10417344, | May 30 2014 | Apple Inc. | Exemplar-based natural language processing |
10417405, | Mar 21 2011 | Apple Inc. | Device access using voice authentication |
10431204, | Sep 11 2014 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
10438595, | Sep 30 2014 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
10445429, | Sep 21 2017 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
10446141, | Aug 28 2014 | Apple Inc. | Automatic speech recognition based on user feedback |
10446143, | Mar 14 2016 | Apple Inc | Identification of voice inputs providing credentials |
10446167, | Jun 04 2010 | Apple Inc. | User-specific noise suppression for voice quality improvements |
10453443, | Sep 30 2014 | Apple Inc. | Providing an indication of the suitability of speech recognition |
10474753, | Sep 07 2016 | Apple Inc | Language identification using recurrent neural networks |
10475446, | Jun 05 2009 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
10482874, | May 15 2017 | Apple Inc | Hierarchical belief states for digital assistants |
10490187, | Jun 10 2016 | Apple Inc | Digital assistant providing automated status report |
10496705, | Jun 03 2018 | Apple Inc | Accelerated task performance |
10496753, | Jan 18 2010 | Apple Inc.; Apple Inc | Automatically adapting user interfaces for hands-free interaction |
10497365, | May 30 2014 | Apple Inc. | Multi-command single utterance input method |
10503366, | Jan 06 2008 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
10504518, | Jun 03 2018 | Apple Inc | Accelerated task performance |
10509862, | Jun 10 2016 | Apple Inc | Dynamic phrase expansion of language input |
10515147, | Dec 22 2010 | Apple Inc.; Apple Inc | Using statistical language models for contextual lookup |
10521466, | Jun 11 2016 | Apple Inc | Data driven natural language event detection and classification |
10529326, | Dec 15 2011 | Microsoft Technology Licensing, LLC | Suggesting intent frame(s) for user request(s) |
10529332, | Mar 08 2015 | Apple Inc. | Virtual assistant activation |
10540976, | Jun 05 2009 | Apple Inc | Contextual voice commands |
10552013, | Dec 02 2014 | Apple Inc. | Data detection |
10553209, | Jan 18 2010 | Apple Inc. | Systems and methods for hands-free notification summaries |
10553215, | Sep 23 2016 | Apple Inc. | Intelligent automated assistant |
10567477, | Mar 08 2015 | Apple Inc | Virtual assistant continuity |
10568032, | Apr 03 2007 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
10572476, | Mar 14 2013 | Apple Inc. | Refining a search based on schedule items |
10580409, | Jun 11 2016 | Apple Inc. | Application integration with a digital assistant |
10592095, | May 23 2014 | Apple Inc. | Instantaneous speaking of content on touch devices |
10592604, | Mar 12 2018 | Apple Inc | Inverse text normalization for automatic speech recognition |
10593346, | Dec 22 2016 | Apple Inc | Rank-reduced token representation for automatic speech recognition |
10607140, | Jan 25 2010 | NEWVALUEXCHANGE LTD. | Apparatuses, methods and systems for a digital conversation management platform |
10607141, | Jan 25 2010 | NEWVALUEXCHANGE LTD. | Apparatuses, methods and systems for a digital conversation management platform |
10623347, | May 02 2003 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
10636424, | Nov 30 2017 | Apple Inc | Multi-turn canned dialog |
10642574, | Mar 14 2013 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
10643611, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
10652394, | Mar 14 2013 | Apple Inc | System and method for processing voicemail |
10657328, | Jun 02 2017 | Apple Inc | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
10657961, | Jun 08 2013 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
10657966, | May 30 2014 | Apple Inc. | Better resolution when referencing to concepts |
10659851, | Jun 30 2014 | Apple Inc. | Real-time digital assistant knowledge updates |
10671428, | Sep 08 2015 | Apple Inc | Distributed personal assistant |
10672399, | Jun 03 2011 | Apple Inc.; Apple Inc | Switching between text data and audio data based on a mapping |
10679605, | Jan 18 2010 | Apple Inc | Hands-free list-reading by intelligent automated assistant |
10681212, | Jun 05 2015 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
10684703, | Jun 01 2018 | Apple Inc | Attention aware virtual assistant dismissal |
10691473, | Nov 06 2015 | Apple Inc | Intelligent automated assistant in a messaging environment |
10692504, | Feb 25 2010 | Apple Inc. | User profiling for voice input processing |
10699717, | May 30 2014 | Apple Inc. | Intelligent assistant for home automation |
10705794, | Jan 18 2010 | Apple Inc | Automatically adapting user interfaces for hands-free interaction |
10706373, | Jun 03 2011 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
10706841, | Jan 18 2010 | Apple Inc. | Task flow identification based on user intent |
10714095, | May 30 2014 | Apple Inc. | Intelligent assistant for home automation |
10714117, | Feb 07 2013 | Apple Inc. | Voice trigger for a digital assistant |
10720160, | Jun 01 2018 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
10726832, | May 11 2017 | Apple Inc | Maintaining privacy of personal information |
10733375, | Jan 31 2018 | Apple Inc | Knowledge-based framework for improving natural language understanding |
10733982, | Jan 08 2018 | Apple Inc | Multi-directional dialog |
10733993, | Jun 10 2016 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
10741181, | May 09 2017 | Apple Inc. | User interface for correcting recognition errors |
10741185, | Jan 18 2010 | Apple Inc. | Intelligent automated assistant |
10747498, | Sep 08 2015 | Apple Inc | Zero latency digital assistant |
10748529, | Mar 15 2013 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
10748546, | May 16 2017 | Apple Inc. | Digital assistant services based on device capabilities |
10755051, | Sep 29 2017 | Apple Inc | Rule-based natural language processing |
10755703, | May 11 2017 | Apple Inc | Offline personal assistant |
10762293, | Dec 22 2010 | Apple Inc.; Apple Inc | Using parts-of-speech tagging and named entity recognition for spelling correction |
10769385, | Jun 09 2013 | Apple Inc. | System and method for inferring user intent from speech inputs |
10789041, | Sep 12 2014 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
10789945, | May 12 2017 | Apple Inc | Low-latency intelligent automated assistant |
10789959, | Mar 02 2018 | Apple Inc | Training speaker recognition models for digital assistants |
10791176, | May 12 2017 | Apple Inc | Synchronization and task delegation of a digital assistant |
10791216, | Aug 06 2013 | Apple Inc | Auto-activating smart responses based on activities from remote devices |
10795541, | Jun 03 2011 | Apple Inc. | Intelligent organization of tasks items |
10810274, | May 15 2017 | Apple Inc | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
10818288, | Mar 26 2018 | Apple Inc | Natural assistant interaction |
10839159, | Sep 28 2018 | Apple Inc | Named entity normalization in a spoken dialog system |
10847142, | May 11 2017 | Apple Inc. | Maintaining privacy of personal information |
10878809, | May 30 2014 | Apple Inc. | Multi-command single utterance input method |
10892996, | Jun 01 2018 | Apple Inc | Variable latency device coordination |
10904611, | Jun 30 2014 | Apple Inc. | Intelligent automated assistant for TV user interactions |
10909171, | May 16 2017 | Apple Inc. | Intelligent automated assistant for media exploration |
10909331, | Mar 30 2018 | Apple Inc | Implicit identification of translation payload with neural machine translation |
10928918, | May 07 2018 | Apple Inc | Raise to speak |
10930282, | Mar 08 2015 | Apple Inc. | Competing devices responding to voice triggers |
10942702, | Jun 11 2016 | Apple Inc. | Intelligent device arbitration and control |
10942703, | Dec 23 2015 | Apple Inc. | Proactive assistance based on dialog communication between devices |
10944859, | Jun 03 2018 | Apple Inc | Accelerated task performance |
10978090, | Feb 07 2013 | Apple Inc. | Voice trigger for a digital assistant |
10984326, | Jan 25 2010 | NEWVALUEXCHANGE LTD. | Apparatuses, methods and systems for a digital conversation management platform |
10984327, | Jan 25 2010 | NEWVALUEXCHANGE LTD. | Apparatuses, methods and systems for a digital conversation management platform |
10984780, | May 21 2018 | Apple Inc | Global semantic word embeddings using bi-directional recurrent neural networks |
10984798, | Jun 01 2018 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
11009970, | Jun 01 2018 | Apple Inc. | Attention aware virtual assistant dismissal |
11010127, | Jun 29 2015 | Apple Inc. | Virtual assistant for media playback |
11010550, | Sep 29 2015 | Apple Inc | Unified language modeling framework for word prediction, auto-completion and auto-correction |
11010561, | Sep 27 2018 | Apple Inc | Sentiment prediction from textual data |
11012942, | Apr 03 2007 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
11023513, | Dec 20 2007 | Apple Inc. | Method and apparatus for searching using an active ontology |
11025565, | Jun 07 2015 | Apple Inc | Personalized prediction of responses for instant messaging |
11037565, | Jun 10 2016 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
11048473, | Jun 09 2013 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
11069336, | Mar 02 2012 | Apple Inc. | Systems and methods for name pronunciation |
11069347, | Jun 08 2016 | Apple Inc. | Intelligent automated assistant for media exploration |
11080012, | Jun 05 2009 | Apple Inc. | Interface for a virtual digital assistant |
11087759, | Mar 08 2015 | Apple Inc. | Virtual assistant activation |
11120372, | Jun 03 2011 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
11126326, | Jan 06 2008 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
11126400, | Sep 08 2015 | Apple Inc. | Zero latency digital assistant |
11127397, | May 27 2015 | Apple Inc. | Device voice control |
11133008, | May 30 2014 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
11133953, | May 11 2018 | SHIVE, CATHERINE LOIS; STOOP, DIRK JOHN | Systems and methods for home automation control |
11140099, | May 21 2019 | Apple Inc | Providing message response suggestions |
11145294, | May 07 2018 | Apple Inc | Intelligent automated assistant for delivering content from user experiences |
11151899, | Mar 15 2013 | Apple Inc. | User training by intelligent digital assistant |
11152002, | Jun 11 2016 | Apple Inc. | Application integration with a digital assistant |
11169616, | May 07 2018 | Apple Inc. | Raise to speak |
11170166, | Sep 28 2018 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
11204787, | Jan 09 2017 | Apple Inc | Application integration with a digital assistant |
11217251, | May 06 2019 | Apple Inc | Spoken notifications |
11217255, | May 16 2017 | Apple Inc | Far-field extension for digital assistant services |
11227589, | Jun 06 2016 | Apple Inc. | Intelligent list reading |
11231904, | Mar 06 2015 | Apple Inc. | Reducing response latency of intelligent automated assistants |
11237797, | May 31 2019 | Apple Inc. | User activity shortcut suggestions |
11257504, | May 30 2014 | Apple Inc. | Intelligent assistant for home automation |
11269678, | May 15 2012 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
11281993, | Dec 05 2016 | Apple Inc | Model and ensemble compression for metric learning |
11289073, | May 31 2019 | Apple Inc | Device text to speech |
11301477, | May 12 2017 | Apple Inc | Feedback analysis of a digital assistant |
11307752, | May 06 2019 | Apple Inc | User configurable task triggers |
11314370, | Dec 06 2013 | Apple Inc. | Method for extracting salient dialog usage from live data |
11348573, | Mar 18 2019 | Apple Inc | Multimodality in digital assistant systems |
11348582, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
11350253, | Jun 03 2011 | Apple Inc. | Active transport based notifications |
11360641, | Jun 01 2019 | Apple Inc | Increasing the relevance of new available information |
11360739, | May 31 2019 | Apple Inc | User activity shortcut suggestions |
11380310, | May 12 2017 | Apple Inc. | Low-latency intelligent automated assistant |
11386266, | Jun 01 2018 | Apple Inc | Text correction |
11388291, | Mar 14 2013 | Apple Inc. | System and method for processing voicemail |
11405466, | May 12 2017 | Apple Inc. | Synchronization and task delegation of a digital assistant |
11410053, | Jan 25 2010 | NEWVALUEXCHANGE LTD. | Apparatuses, methods and systems for a digital conversation management platform |
11423886, | Jan 18 2010 | Apple Inc. | Task flow identification based on user intent |
11423908, | May 06 2019 | Apple Inc | Interpreting spoken requests |
11431642, | Jun 01 2018 | Apple Inc. | Variable latency device coordination |
11462215, | Sep 28 2018 | Apple Inc | Multi-modal inputs for voice commands |
11468282, | May 15 2015 | Apple Inc. | Virtual assistant in a communication session |
11475884, | May 06 2019 | Apple Inc | Reducing digital assistant latency when a language is incorrectly determined |
11475898, | Oct 26 2018 | Apple Inc | Low-latency multi-speaker speech recognition |
11488406, | Sep 25 2019 | Apple Inc | Text detection using global geometry estimators |
11495218, | Jun 01 2018 | Apple Inc | Virtual assistant operation in multi-device environments |
11496600, | May 31 2019 | Apple Inc | Remote execution of machine-learned models |
11500672, | Sep 08 2015 | Apple Inc. | Distributed personal assistant |
11526368, | Nov 06 2015 | Apple Inc. | Intelligent automated assistant in a messaging environment |
11532306, | May 16 2017 | Apple Inc. | Detecting a trigger of a digital assistant |
11556230, | Dec 02 2014 | Apple Inc. | Data detection |
11587559, | Sep 30 2015 | Apple Inc | Intelligent device identification |
11599331, | May 11 2017 | Apple Inc. | Maintaining privacy of personal information |
11599332, | Oct 26 2007 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
11610065, | Jun 12 2020 | Apple Inc | Providing personalized responses based on semantic context |
11638059, | Jan 04 2019 | Apple Inc | Content playback on multiple devices |
11656884, | Jan 09 2017 | Apple Inc. | Application integration with a digital assistant |
11657813, | May 31 2019 | Apple Inc | Voice identification in digital assistant systems |
11710482, | Mar 26 2018 | Apple Inc. | Natural assistant interaction |
11727219, | Jun 09 2013 | Apple Inc. | System and method for inferring user intent from speech inputs |
11798547, | Mar 15 2013 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
11810578, | May 11 2020 | Apple Inc | Device arbitration for digital assistant-based intercom systems |
11854539, | May 07 2018 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
11928604, | Sep 08 2005 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
11948567, | Dec 28 2018 | SAMSUNG ELECTRONICS CO , LTD | Electronic device and control method therefor |
8289283, | Mar 04 2008 | Apple Inc | Language input interface on a device |
8296383, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
8311838, | Jan 13 2010 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
8345665, | Oct 22 2001 | Apple Inc | Text to speech conversion of text messages from mobile communication devices |
8352268, | Sep 29 2008 | Apple Inc | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
8352272, | Sep 29 2008 | Apple Inc | Systems and methods for text to speech synthesis |
8355919, | Sep 29 2008 | Apple Inc | Systems and methods for text normalization for text to speech synthesis |
8359234, | Jul 26 2007 | Braintexter, Inc | System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system |
8364694, | Oct 26 2007 | Apple Inc. | Search assistant for digital media assets |
8380507, | Mar 09 2009 | Apple Inc | Systems and methods for determining the language to use for speech generated by a text to speech engine |
8396714, | Sep 29 2008 | Apple Inc | Systems and methods for concatenation of words in text to speech synthesis |
8458278, | May 02 2003 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
8527861, | Aug 13 1999 | Apple Inc. | Methods and apparatuses for display and traversing of links in page character array |
8543407, | Oct 04 2007 | SAMSUNG ELECTRONICS CO , LTD | Speech interface system and method for control and interaction with applications on a computing system |
8583418, | Sep 29 2008 | Apple Inc | Systems and methods of detecting language and natural language strings for text to speech synthesis |
8600743, | Jan 06 2010 | Apple Inc. | Noise profile determination for voice-related feature |
8614431, | Sep 30 2005 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
8620662, | Nov 20 2007 | Apple Inc.; Apple Inc | Context-aware unit selection |
8639516, | Jun 04 2010 | Apple Inc. | User-specific noise suppression for voice quality improvements |
8639716, | Oct 26 2007 | Apple Inc. | Search assistant for digital media assets |
8645137, | Mar 16 2000 | Apple Inc. | Fast, language-independent method for user authentication by voice |
8660849, | Jan 18 2010 | Apple Inc. | Prioritizing selection criteria by automated assistant |
8670979, | Jan 18 2010 | Apple Inc. | Active input elicitation by intelligent automated assistant |
8670985, | Jan 13 2010 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
8676904, | Oct 02 2008 | Apple Inc.; Apple Inc | Electronic devices with voice command and contextual data processing capabilities |
8677377, | Sep 08 2005 | Apple Inc | Method and apparatus for building an intelligent automated assistant |
8682649, | Nov 12 2009 | Apple Inc; Apple Inc. | Sentiment prediction from textual data |
8682667, | Feb 25 2010 | Apple Inc. | User profiling for selecting user specific voice input processing information |
8688446, | Feb 22 2008 | Apple Inc. | Providing text input using speech data and non-speech data |
8706472, | Aug 11 2011 | Apple Inc.; Apple Inc | Method for disambiguating multiple readings in language conversion |
8706503, | Jan 18 2010 | Apple Inc. | Intent deduction based on previous user interactions with voice assistant |
8712776, | Sep 29 2008 | Apple Inc | Systems and methods for selective text to speech synthesis |
8713021, | Jul 07 2010 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
8713119, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
8718047, | Oct 22 2001 | Apple Inc. | Text to speech conversion of text messages from mobile communication devices |
8719006, | Aug 27 2010 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
8719014, | Sep 27 2010 | Apple Inc.; Apple Inc | Electronic device with text error correction based on voice recognition data |
8731942, | Jan 18 2010 | Apple Inc | Maintaining context information between user interactions with a voice assistant |
8751238, | Mar 09 2009 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
8762156, | Sep 28 2011 | Apple Inc.; Apple Inc | Speech recognition repair using contextual information |
8762469, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
8768702, | Sep 05 2008 | Apple Inc.; Apple Inc | Multi-tiered voice feedback in an electronic device |
8775442, | May 15 2012 | Apple Inc. | Semantic search using a single-source semantic model |
8781836, | Feb 22 2011 | Apple Inc.; Apple Inc | Hearing assistance system for providing consistent human speech |
8799000, | Jan 18 2010 | Apple Inc. | Disambiguation based on active input elicitation by intelligent automated assistant |
8812294, | Jun 21 2011 | Apple Inc.; Apple Inc | Translating phrases from one language into another using an order-based set of declarative rules |
8862252, | Jan 30 2009 | Apple Inc | Audio user interface for displayless electronic device |
8892446, | Jan 18 2010 | Apple Inc. | Service orchestration for intelligent automated assistant |
8898568, | Sep 09 2008 | Apple Inc | Audio user interface |
8903716, | Jan 18 2010 | Apple Inc. | Personalized vocabulary for digital assistant |
8909545, | Jan 24 2008 | Braintexter, Inc. | System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system |
8930191, | Jan 18 2010 | Apple Inc | Paraphrasing of user requests and results by automated digital assistant |
8935167, | Sep 25 2012 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
8942986, | Jan 18 2010 | Apple Inc. | Determining user intent based on ontologies of domains |
8943089, | Oct 26 2007 | Apple Inc. | Search assistant for digital media assets |
8977255, | Apr 03 2007 | Apple Inc.; Apple Inc | Method and system for operating a multi-function portable electronic device using voice-activation |
8996376, | Apr 05 2008 | Apple Inc. | Intelligent text-to-speech conversion |
9053089, | Oct 02 2007 | Apple Inc.; Apple Inc | Part-of-speech tagging using latent analogy |
9075783, | Sep 27 2010 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
9104670, | Jul 21 2010 | Apple Inc | Customized search or acquisition of digital media assets |
9117447, | Jan 18 2010 | Apple Inc. | Using event alert text as input to an automated assistant |
9190062, | Feb 25 2010 | Apple Inc. | User profiling for voice input processing |
9262612, | Mar 21 2011 | Apple Inc.; Apple Inc | Device access using voice authentication |
9280610, | May 14 2012 | Apple Inc | Crowd sourcing information to fulfill user requests |
9300784, | Jun 13 2013 | Apple Inc | System and method for emergency calls initiated by voice command |
9305101, | Oct 26 2007 | Apple Inc. | Search assistant for digital media assets |
9311043, | Jan 13 2010 | Apple Inc. | Adaptive audio feedback system and method |
9318108, | Jan 18 2010 | Apple Inc.; Apple Inc | Intelligent automated assistant |
9330381, | Jan 06 2008 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
9330720, | Jan 03 2008 | Apple Inc. | Methods and apparatus for altering audio output signals |
9338493, | Jun 30 2014 | Apple Inc | Intelligent automated assistant for TV user interactions |
9361886, | Nov 18 2011 | Apple Inc. | Providing text input using speech data and non-speech data |
9368114, | Mar 14 2013 | Apple Inc. | Context-sensitive handling of interruptions |
9389729, | Sep 30 2005 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
9412392, | Oct 02 2008 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
9430463, | May 30 2014 | Apple Inc | Exemplar-based natural language processing |
9431006, | Jul 02 2009 | Apple Inc.; Apple Inc | Methods and apparatuses for automatic speech recognition |
9483461, | Mar 06 2012 | Apple Inc.; Apple Inc | Handling speech synthesis of content for multiple languages |
9495129, | Jun 29 2012 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
9501741, | Sep 08 2005 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
9502031, | May 27 2014 | Apple Inc.; Apple Inc | Method for supporting dynamic grammars in WFST-based ASR |
9535906, | Jul 31 2008 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
9547647, | Sep 19 2012 | Apple Inc. | Voice-based media searching |
9548050, | Jan 18 2010 | Apple Inc. | Intelligent automated assistant |
9576574, | Sep 10 2012 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
9582608, | Jun 07 2013 | Apple Inc | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
9619079, | Sep 30 2005 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
9620104, | Jun 07 2013 | Apple Inc | System and method for user-specified pronunciation of words for speech synthesis and recognition |
9620105, | May 15 2014 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
9626955, | Apr 05 2008 | Apple Inc. | Intelligent text-to-speech conversion |
9633004, | May 30 2014 | Apple Inc.; Apple Inc | Better resolution when referencing to concepts |
9633660, | Feb 25 2010 | Apple Inc. | User profiling for voice input processing |
9633674, | Jun 07 2013 | Apple Inc.; Apple Inc | System and method for detecting errors in interactions with a voice-based digital assistant |
9646609, | Sep 30 2014 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
9646614, | Mar 16 2000 | Apple Inc. | Fast, language-independent method for user authentication by voice |
9668024, | Jun 30 2014 | Apple Inc. | Intelligent automated assistant for TV user interactions |
9668121, | Sep 30 2014 | Apple Inc. | Social reminders |
9691383, | Sep 05 2008 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
9697820, | Sep 24 2015 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
9697822, | Mar 15 2013 | Apple Inc. | System and method for updating an adaptive speech recognition model |
9711141, | Dec 09 2014 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
9715875, | May 30 2014 | Apple Inc | Reducing the need for manual start/end-pointing and trigger phrases |
9721563, | Jun 08 2012 | Apple Inc.; Apple Inc | Name recognition system |
9721566, | Mar 08 2015 | Apple Inc | Competing devices responding to voice triggers |
9733821, | Mar 14 2013 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
9734193, | May 30 2014 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
9753912, | Dec 27 2007 | Great Northern Research, LLC | Method for processing the output of a speech recognizer |
9760559, | May 30 2014 | Apple Inc | Predictive text input |
9785630, | May 30 2014 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
9798393, | Aug 29 2011 | Apple Inc. | Text correction processing |
9805723, | Dec 27 2007 | Great Northern Research, LLC | Method for processing the output of a speech recognizer |
9818400, | Sep 11 2014 | Apple Inc.; Apple Inc | Method and apparatus for discovering trending terms in speech requests |
9842101, | May 30 2014 | Apple Inc | Predictive conversion of language input |
9842105, | Apr 16 2015 | Apple Inc | Parsimonious continuous-space phrase representations for natural language processing |
9858925, | Jun 05 2009 | Apple Inc | Using context information to facilitate processing of commands in a virtual assistant |
9865248, | Apr 05 2008 | Apple Inc. | Intelligent text-to-speech conversion |
9865280, | Mar 06 2015 | Apple Inc | Structured dictation using intelligent automated assistants |
9886432, | Sep 30 2014 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
9886953, | Mar 08 2015 | Apple Inc | Virtual assistant activation |
9899019, | Mar 18 2015 | Apple Inc | Systems and methods for structured stem and suffix language models |
9922642, | Mar 15 2013 | Apple Inc. | Training an at least partial voice command system |
9934775, | May 26 2016 | Apple Inc | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
9946706, | Jun 07 2008 | Apple Inc. | Automatic language identification for dynamic text processing |
9953088, | May 14 2012 | Apple Inc. | Crowd sourcing information to fulfill user requests |
9958987, | Sep 30 2005 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
9959870, | Dec 11 2008 | Apple Inc | Speech recognition involving a mobile device |
9966060, | Jun 07 2013 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
9966065, | May 30 2014 | Apple Inc. | Multi-command single utterance input method |
9966068, | Jun 08 2013 | Apple Inc | Interpreting and acting upon commands that involve sharing information with remote devices |
9971774, | Sep 19 2012 | Apple Inc. | Voice-based media searching |
9972304, | Jun 03 2016 | Apple Inc | Privacy preserving distributed evaluation framework for embedded personalized systems |
9977779, | Mar 14 2013 | Apple Inc. | Automatic supplementation of word correction dictionaries |
9986419, | Sep 30 2014 | Apple Inc. | Social reminders |
RE46139, | Mar 04 2008 | Apple Inc. | Language input interface on a device |
Patent | Priority | Assignee | Title |
6246981, | Nov 25 1998 | Nuance Communications, Inc | Natural language task-oriented dialog manager and method |
6314398, | Mar 01 1999 | Intertrust Technologies Corporation | Apparatus and method using speech understanding for automatic channel selection in interactive television |
6598018, | Dec 15 1999 | Intertrust Technologies Corporation | Method for natural dialog interface to car devices |
6604090, | Jun 04 1997 | MICRO FOCUS LLC | System and method for selecting responses to user input in an automated interface program |
6786651, | Mar 22 2001 | TYCO ELECTRONICS SERVICES GmbH; GOLDCUP 5514 AB UNAT TYCO ELECTRONICS SVENSKA HOLDINGS AB | Optical interconnect structure, system and transceiver including the structure, and method of forming the same |
6910004, | Dec 19 2000 | Xerox Corporation | Method and computer system for part-of-speech tagging of incomplete sentences |
7158930, | Aug 15 2002 | Microsoft Technology Licensing, LLC | Method and apparatus for expanding dictionaries during parsing |
20020133347, | |||
JP2001209662, | |||
JP2002251233, | |||
JP2002351492, | |||
JP4158476, | |||
KR19990047859, |
Executed on | Assignor | Assignee | Conveyance | Reel | Frame | Doc |
Aug 11 2006 | KWAK, BYUNG KWAN | SAMSUNG ELECTRONICS CO , LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018245 | /0271 |
Aug 11 2006 | CHO, JEONG MI | SAMSUNG ELECTRONICS CO , LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018245 | /0271 |
Aug 11 2006 | KANG, IN HO | SAMSUNG ELECTRONICS CO , LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018245 | /0271 |
Aug 28 2006 | Samsung Electronics Co., Ltd | (assignment on the face of the patent) | / |
Date | Maintenance Fee Events |
Feb 28 2011 | ASPN: Payor Number Assigned. |
Oct 23 2013 | ASPN: Payor Number Assigned. |
Oct 23 2013 | RMPN: Payor Number De-assigned. |
Jan 17 2014 | REM: Maintenance Fee Reminder Mailed. |
Jun 08 2014 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule |
Jun 08 2013 | 4 years fee payment window open |
Dec 08 2013 | 6 months grace period start (w surcharge) |
Jun 08 2014 | patent expiry (for year 4) |
Jun 08 2016 | 2 years to revive unintentionally abandoned end. (for year 4) |
Jun 08 2017 | 8 years fee payment window open |
Dec 08 2017 | 6 months grace period start (w surcharge) |
Jun 08 2018 | patent expiry (for year 8) |
Jun 08 2020 | 2 years to revive unintentionally abandoned end. (for year 8) |
Jun 08 2021 | 12 years fee payment window open |
Dec 08 2021 | 6 months grace period start (w surcharge) |
Jun 08 2022 | patent expiry (for year 12) |
Jun 08 2024 | 2 years to revive unintentionally abandoned end. (for year 12) |