There are provided a system and method for providing information using a spoken dialogue interface. The system includes a speech recognizer for transforming voice signals into sentences; a sentence analyzer for analyzing the sentences by their structural elements; a dialogue manager for extracting information on the user's speech acts or intentions from the structural elements and generating information on the system's speech acts or intentions for a response to the extracted information; a sentence generator for generating sentences based on the information on the system's speech acts or intentions for the response; a speech synthesizer for synthesizing the generated sentences into voices; an information extractor for extracting information required for the response from the Internet in real time; and a user modeling means for analyzing and classifying users' tendencies. Information demanded by a user can thus be detected in real time and provided through a voice interface, with versatile and familiar dialogues based on the user's tendencies.

Patent: 7,225,128
Priority: Mar 29, 2002
Filed: Mar 31, 2003
Issued: May 29, 2007
Expiry: Jul 27, 2025
Extension: 849 days
Entity: Large
1. A system for providing information using a spoken dialogue interface, comprising:
a speech recognizer for transforming voice signals into sentences;
a sentence analyzer for analyzing the sentences by their meaning structures;
a dialogue manager for extracting information on user speech acts or intentions from the meaning structures and generating information on system speech acts or intentions for a response to the extracted information on the user's speech acts or intentions;
a sentence generator for generating sentences based on the information on the system's speech acts or intentions for the response to the voice signals; and
a speech synthesizer for synthesizing the generated sentences into voices.
2. The system of claim 1, wherein the sentence analyzer includes a morphological analyzer for separating the sentences into their morphemes and tagging the separated morphemes and a syntactic analyzer for analyzing sentence structural elements based on relationships between the morphemes.
3. The system of claim 1, wherein the sentence analyzer further includes a semantic analyzer for transforming the sentences transformed from the voice signals into the meaning structures.
4. The system of claim 1, wherein the dialogue manager includes an intention analyzer for determining the user's speech acts or intentions from the meaning structures, and an intention generator for generating the system's speech acts or intentions for a response to the user's speech acts or intentions.
5. The system of claim 1, further comprising a query generator for generating query information based on information on the user's speech acts or intentions.
6. The system of claim 5, further comprising an information extractor for extracting information using the query information as key words and a user modeling means for modeling the user's tendencies from the user's dialogues.
7. The system of claim 1, further comprising a knowledge database for storing the information on the user's speech acts or intentions extracted from the meaning structures, and the information on the system's speech acts or intentions.
8. The system of claim 1, wherein the sentence generator includes a sentence structure generator for receiving the information on the system's speech acts or intentions and generating sentence structures and a morpheme generator for receiving the sentence structures and generating morphemes.
9. A method of providing information using a spoken dialogue interface, comprising the steps of:
(a) transforming voice signals into sentences;
(b) analyzing the sentences by their meaning structures;
(c) extracting information on user speech acts or intentions from the meaning structures, and generating information on system speech acts or intentions for a response to the extracted information on the user's speech acts or intentions;
(d) generating sentences based on the information on the system's speech acts or intentions for the response to the voice signals; and
(e) synthesizing the generated sentences into voices.
10. The method of claim 9, wherein step (b) includes (b1) separating the sentences into their morphemes and tagging the separated morphemes and (b2) analyzing structural elements of a sentence based on relationships between the morphemes.
11. The method of claim 9, wherein step (c) includes (c1) determining the user's speech acts or intentions from the meaning structures, (c2) searching through a dialogue case database based on information on the user's speech acts or intentions, (c3) calculating similarities of the detected dialogue cases using information on the user's speech acts or intentions and information on the user's tendencies, (c4) selecting the most similar dialogue case using information on the similarities and determining the system's speech acts or intentions for a system response, (c5) generating query information for a response, and (c6) receiving search results obtained through the query information and completing the system's speech acts or intentions.
12. The method of claim 9, wherein step (b) includes transforming the sentences transformed from the voice signals into the meaning structures.
13. The method of claim 9, wherein step (c) includes storing the information on the user's speech acts or intentions extracted from the meaning structures.
14. The method of claim 9, wherein step (d) includes (d1) generating sentence structures based on the information on the system's speech acts or intentions and (d2) generating morphemes for a response.
15. A computer readable recording medium that stores a program for the computer to implement the method claimed in claim 9.

This application claims the priority of Korean Patent Application No. 2002-17413, filed on Mar. 29, 2002, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.

1. Field of the Invention

The present invention relates to a conversational agent for interfacing between a human and a machine. In particular, the present invention relates to a system for providing information in voice signals that allows versatile dialogues by utilizing a knowledge database system capable of extracting desired information from the Internet in real time and storing users' dialogue records and tendencies, together with a number of dialogue cases. The present invention further relates to a method of providing information in voice signals.

2. Description of the Related Art

In conventional methods of providing information in voice signals, dialogues have been managed either by controlling state transitions between a user state and a system state based on keyword detection, or through a dialogue manager that uses scripts to determine system behaviors, and the knowledge database for managing the dialogues has been built off-line. Since information has not been updated in real time, the information that could be provided has been limited. Further, since only short-term dialogues have been supported and very limited, nearly identical dialogues have been repeated, user interest in such systems has not been sustained, and their applications have remained limited.

The present invention provides a system and method for providing information using a spoken dialogue interface, which analyze a user's voice signals and respond with information in voice signals.

Further, the present invention provides a computer readable recording medium on which a program for implementing the above-described method is recorded.

According to an aspect of the present invention, there is provided a system for providing information using a spoken dialogue interface, which includes a speech recognizer for transforming voice signals into sentences; a sentence analyzer for analyzing the sentences by their structural elements; a dialogue manager for extracting information on the user's speech acts or intentions from the structural elements and generating information on the system's speech acts or intentions for a response; a sentence generator for generating sentences based on the information on the system's speech acts or intentions for the response; a speech synthesizer for synthesizing the generated sentences into voices; an information extractor for extracting information required for the response from the Internet in real time; and a user modeling means for analyzing and classifying users' tendencies.

According to another aspect of the present invention, there is provided a method of providing information using a spoken dialogue interface, which includes the steps of (a) transforming voice signals into sentences; (b) analyzing the sentences by their structural elements; (c) extracting information on the user's speech acts or intentions from the structural elements and generating information on the system's speech acts or intentions for a response; (d) generating sentences based on the information on the system's speech acts or intentions for the response; and (e) synthesizing the generated sentences into voices.
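For illustration only, the following Python sketch shows how steps (a) through (e) might be wired together. The class, method, and parameter names (SpokenDialogueSystem, transcribe, analyze, plan, realize, synthesize) are assumptions made for this sketch and do not appear in the disclosure.

```python
# Illustrative wiring of steps (a)-(e); all names here are assumptions,
# not identifiers from the disclosure.
class SpokenDialogueSystem:
    def __init__(self, recognizer, analyzer, dialogue_manager, generator, synthesizer):
        self.recognizer = recognizer              # speech recognizer, step (a)
        self.analyzer = analyzer                  # sentence analyzer, step (b)
        self.dialogue_manager = dialogue_manager  # dialogue manager, step (c)
        self.generator = generator                # sentence generator, step (d)
        self.synthesizer = synthesizer            # speech synthesizer, step (e)

    def respond(self, voice_signal: bytes) -> bytes:
        sentence = self.recognizer.transcribe(voice_signal)      # (a) voice -> sentence
        meaning = self.analyzer.analyze(sentence)                # (b) sentence -> meaning structure
        system_intention = self.dialogue_manager.plan(meaning)   # (c) user intention -> system intention
        response = self.generator.realize(system_intention)     # (d) intention -> sentence
        return self.synthesizer.synthesize(response)            # (e) sentence -> voice
```

Each stage corresponds to one element of the claimed system, so the remaining sketches below fill in one stage at a time.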

The above aspects and advantages of the present invention will become more apparent by describing, in detail, preferred embodiments thereof with reference to the attached drawings in which:

FIG. 1 is a block diagram of a system for providing information using a spoken dialogue interface, according to a preferred embodiment of the present invention;

FIG. 2 is a flowchart of a method of providing information using a spoken dialogue interface, according to a preferred embodiment of the present invention;

FIG. 3 is a more detailed block diagram of the intention generator shown in FIG. 1; and

FIG. 4 is a flowchart for explaining operations of the intention generator shown in FIG. 3.

A preferred embodiment of the present invention will now be described with reference to FIGS. 1 and 2. FIG. 1 is a block diagram of a system for providing information using a spoken dialogue interface, according to a preferred embodiment of the present invention, and FIG. 2 is a flowchart of a method of providing information using a spoken dialogue interface, according to a preferred embodiment of the present invention.

When a user transmits voice signals, a speech recognizer 110 receives the voice signals, recognizes voices, and transforms the voice signals into sentences (STEP 210). A sentence analyzer 120 receives the sentences transformed through the speech recognizer 110 or sentences input through an input device such as a keyboard, and analyzes the sentences by their meaning structures (STEP 220). The sentence analyzer 120 includes a morphological analyzer 121 for separating the input sentences into morphemes and tagging the separated morphemes, a syntactic analyzer 123 for analyzing the structural elements of a sentence based on the relationship between the morphemes, and a semantic analyzer 125 for determining the meanings of the structural elements of a sentence and transforming them into meaning structures.
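The three-stage analysis performed by elements 121, 123, and 125 can be pictured with the following toy Python sketch. The lexicon, the grouping rules, and all function names are illustrative stand-ins, not the analyzers actually disclosed.

```python
# Toy three-stage analysis mirroring elements 121, 123, and 125 of FIG. 1;
# the tagger, grouping rules, and all names below are illustrative assumptions.
def morphological_analysis(sentence: str) -> list[tuple[str, str]]:
    """Element 121: separate the sentence into morphemes and tag them."""
    # A toy lexicon; real analyzers use dictionaries and statistical models.
    tags = {"what": "WH", "is": "VERB", "the": "DET", "weather": "NOUN", "today": "NOUN"}
    tokens = sentence.lower().strip("?.! ").split()
    return [(tok, tags.get(tok, "UNK")) for tok in tokens]

def syntactic_analysis(morphemes: list[tuple[str, str]]) -> dict:
    """Element 123: group tagged morphemes into structural elements."""
    return {
        "wh_word": [m for m, t in morphemes if t == "WH"],
        "predicate": [m for m, t in morphemes if t == "VERB"],
        "arguments": [m for m, t in morphemes if t == "NOUN"],
    }

def semantic_analysis(structure: dict) -> dict:
    """Element 125: map structural elements to a meaning structure."""
    return {
        "speech_act": "ask" if structure["wh_word"] else "state",
        "topic": structure["arguments"][0] if structure["arguments"] else None,
        "constraints": structure["arguments"][1:],
    }

meaning = semantic_analysis(syntactic_analysis(morphological_analysis("What is the weather today?")))
# -> {'speech_act': 'ask', 'topic': 'weather', 'constraints': ['today']}
```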

A dialogue manager 130 includes an intention analyzer 131 and an intention generator 133. The intention analyzer 131 receives the meaning structures and analyzes the type of speech act or intention, among asking, demanding, proposing, requesting, etc., that is included in the user's voice signals (STEP 230). The intention generator 133 generates a system speech act or intention, such as answering, refusing, or accepting, for a response to the analyzed user's speech act or intention (STEP 240). An information extractor 140 receives query information, and provides the intention generator 133 with information corresponding to the query information by searching for on-line information from the Internet or another network and off-line information from a knowledge database 145. A user modeling unit 150 receives information on the user's dialogues from the intention analyzer 131, analyzes the user's tendencies, and provides the analyzed result to the intention generator 133. The knowledge database 145 stores the records of dialogues between the user and the system, and the user's tendencies.
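A minimal sketch of how the dialogue manager 130 might coordinate the intention analyzer 131, the information extractor 140, the user modeling unit 150, and the knowledge database 145 during STEPs 230 and 240 follows. The element numbers in the comments refer to FIG. 1, but every class, method, and variable name is an assumption made for illustration.

```python
# Hypothetical dialogue-manager flow for STEPs 230-240; element numbers refer
# to FIG. 1, but every identifier below is an assumption.
USER_ACTS = {"ask", "demand", "propose", "request"}

class DialogueManager:
    def __init__(self, extractor, user_model, knowledge_db):
        self.extractor = extractor        # information extractor 140 (on-line/off-line search)
        self.user_model = user_model      # user modeling unit 150 (user tendencies)
        self.knowledge_db = knowledge_db  # knowledge database 145 (dialogue records)

    def plan(self, meaning: dict) -> dict:
        # Intention analyzer 131: classify the user's speech act (STEP 230).
        user_act = meaning["speech_act"]
        if user_act not in USER_ACTS:
            raise ValueError(f"unsupported speech act: {user_act}")

        tendencies = self.user_model.analyze(meaning)

        # Intention generator 133: choose a responding system act, such as
        # answering or refusing, and fill it with search results (STEP 240).
        query = {"topic": meaning.get("topic"), "constraints": meaning.get("constraints", [])}
        content = self.extractor.search(query)

        # Store the dialogue record and tendencies for later turns.
        self.knowledge_db.record(meaning, tendencies)
        return {"speech_act": "answer" if content else "refuse", "content": content}
```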

A sentence generator 160 receives information on the system's speech acts or intentions, and transforms the information into sentences (STEP 250). The sentence generator 160 includes a sentence structure generator 161 for generating sentence structures from the meaning structures regarding the system's speech acts or intentions, and a morphological generator 163 for receiving the sentence structures and generating morphemes to transform the sentence structures into sentences. A speech synthesizer 171 receives the sentences, synthesizes the sentences into voices, and outputs the synthesized voices (STEP 260). A character animation unit 173 receives the sentences and outputs motion pictures, so that the user feels as if communicating with a character in the motion pictures while obtaining information.
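In toy form, the two-stage generation by elements 161 and 163 might look as follows. The sentence patterns and function names are hypothetical; a real generator would rely on grammars and a morphological lexicon rather than string templates.

```python
# Toy realization of elements 161 and 163: sentence structures first, then
# morphemes; patterns and names are hypothetical stand-ins.
def generate_sentence_structure(intention: dict) -> dict:
    """Sentence structure generator (161): intention -> abstract structure."""
    if intention["speech_act"] == "answer":
        return {"pattern": "SUBJ PRED COMPL",
                "subject": intention["content"]["topic"],
                "complement": intention["content"]["value"]}
    return {"pattern": "REFUSAL"}

def generate_morphemes(structure: dict) -> str:
    """Morphological generator (163): structure -> surface sentence."""
    if structure["pattern"] == "REFUSAL":
        return "I am sorry, I cannot answer that."
    return f"The {structure['subject']} is {structure['complement']}."

sentence = generate_morphemes(generate_sentence_structure(
    {"speech_act": "answer", "content": {"topic": "weather", "value": "sunny"}}))
# -> "The weather is sunny."
```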

The intention generator 133 will now be described in more detail with reference to FIGS. 3 and 4. FIG. 3 is a more detailed block diagram of the intention generator 133 shown in FIG. 1, and FIG. 4 is a flowchart for explaining operations of the intention generator 133 shown in FIG. 3.

The intention generator 133 includes a dialogue case search unit 133-1, an intention type determination unit 133-3, and an intention content determination unit 133-5. The dialogue case search unit 133-1 receives information on the user's speech acts and intentions, and searches for multiple dialogue cases from a dialogue case database (STEP 410). The intention type determination unit 133-3 calculates similarities between the information on the user's speech acts or intentions and the dialogue cases using information on the user's tendencies (STEP 420), and selects the most similar dialogue case to determine a system speech act or intention type for a system response (STEP 430). The intention content determination unit 133-5 generates query information to complete the content portion of the selected intention type (STEP 440) and completes the information on the system's speech acts or intentions using the search results from the information extractor 140 (STEP 450). A dialogue case has a format in which the user's intentions and the system's intentions correspond to one another, and the dialogue case database stores a number of such dialogue cases.
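The case-based selection of STEPs 410 through 450 can be sketched as below. The weighted-overlap similarity is an assumption chosen for illustration, since the disclosure does not specify a similarity formula, and all identifiers are hypothetical.

```python
# Sketch of STEPs 410-450 with a weighted-overlap similarity; the similarity
# formula and all identifiers are assumptions, as the disclosure gives no formula.
from dataclasses import dataclass, field

@dataclass
class DialogueCase:
    user_intention: dict                 # e.g. {"speech_act": "ask", "topic": "weather"}
    system_act: str                      # e.g. "answer"
    tendency_profile: dict = field(default_factory=dict)

def overlap(a: dict, b: dict) -> int:
    """Count scalar key-value pairs shared by two dictionaries."""
    return sum(1 for k, v in a.items()
               if isinstance(v, (str, int, float)) and b.get(k) == v)

def generate_intention(user_intention: dict, tendencies: dict, case_db, extractor) -> dict:
    cases = case_db.search(user_intention)                   # STEP 410: search dialogue cases
    if not cases:
        return {"speech_act": "refuse", "content": None}

    def score(case: DialogueCase) -> float:                  # STEP 420: similarity from the
        return (overlap(user_intention, case.user_intention) # intention and user tendencies
                + 0.5 * overlap(tendencies, case.tendency_profile))

    best = max(cases, key=score)                             # STEP 430: most similar case
    query = {"topic": user_intention.get("topic"),           # STEP 440: query for the content
             "constraints": user_intention.get("constraints", [])}
    content = extractor.search(query)                        # STEP 450: complete the intention
    return {"speech_act": best.system_act, "content": content}
```

The 0.5 weight on tendency overlap is arbitrary; the point is only that both the current intention and the modeled user tendencies contribute to selecting the stored case.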

The present invention can be implemented as computer-readable code on a recording medium. The recording medium may be any kind of recording device in which computer-readable data is stored. Examples of the recording medium include ROM, RAM, CD-ROM, magnetic tape, hard discs, floppy discs, flash memory, optical data storage devices, and even carrier waves, for example, transmission over the Internet. Moreover, the recording medium may be distributed among computer systems that are interconnected through a network, and the code of the present invention may be stored and executed in the distributed system.

As described above, according to the present invention, information demanded by a user can be detected in real time and provided through a voice interface with versatile and familiar dialogues based on the user's tendencies. That is, as the records of dialogues with a user are stored and an adequate response to a query is provided, it is possible to hold the user's interest without repeating similar dialogues. Further, since a knowledge database can be built in real time, information can be updated and provided in real time.

While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims.

Lee, Hye-Jeong, Lee, Yong-Beom, Lee, Jae-Won, Park, Chan-min, Kim, Jeong-Su, Seo, Hee-kyoung

Cited By:
Patent | Priority | Assignee | Title
10510342 | Mar 08 2016 | Samsung Electronics Co., Ltd. | Voice recognition server and control method thereof
11341962 | May 13 2010 | Poltorak Technologies LLC | Electronic personal interactive device
11367435 | May 13 2010 | Poltorak Technologies LLC | Electronic personal interactive device
7650283 | Apr 12 2004 | Panasonic Intellectual Property Corporation of America | Dialogue supporting apparatus
8117022 | Dec 07 2006 | | Method and system for machine understanding, knowledge, and conversation
8370130 | Sep 01 2009 | Electronics and Telecommunications Research Institute | Speech understanding system using an example-based semantic representation pattern
8751240 | May 13 2005 | Microsoft Technology Licensing, LLC | Apparatus and method for forming search engine queries based on spoken utterances
References Cited:
Patent | Priority | Assignee | Title
5577164 | Jan 28 1994 | Canon Kabushiki Kaisha | Incorrect voice command recognition prevention and recovery processing method and apparatus
5615296 | Nov 12 1993 | Nuance Communications, Inc | Continuous speech recognition and voice response system and method to enable conversational dialogues with microprocessors
5644774 | Apr 27 1994 | Sharp Kabushiki Kaisha | Machine translation system having idiom processing function
5652828 | Mar 19 1993 | GOOGLE LLC | Automated voice synthesis employing enhanced prosodic treatment of text, spelling of text and rate of annunciation
5682539 | Sep 29 1994 | LEVERANCE, INC | Anticipated meaning natural language interface
5761637 | Aug 09 1994 | Kabushiki Kaisha Toshiba | Dialogue-sound processing apparatus and method
5797116 | Jun 16 1993 | Canon Kabushiki Kaisha | Method and apparatus for recognizing previously unrecognized speech by requesting a predicted-category-related domain-dictionary-linking word
6282507 | Jan 29 1999 | Sony Corporation; Sony Electronics, Inc. | Method and apparatus for interactive source language expression recognition and alternative hypothesis presentation and selection
6442524 | Jan 29 1999 | Sony Corporation; Sony Electronics Inc. | Analyzing inflectional morphology in a spoken language translation system
6647363 | Oct 09 1998 | Nuance Communications, Inc | Method and system for automatically verbally responding to user inquiries about information
6745161 | Sep 17 1999 | Microsoft Technology Licensing, LLC | System and method for incorporating concept-based retrieval within boolean search engines
6920420 | Aug 11 2000 | Industrial Technology Research Institute | Method for probabilistic error-tolerant natural language understanding
2002/0032564 (U.S. patent application publication)
Assignment Records:
Executed on | Assignor | Assignee | Conveyance | Reel/Frame | Doc
Mar 31 2003 | | Samsung Electronics Co., Ltd. | (assignment on the face of the patent) | |
Apr 18 2003 | KIM, JEONG-SU | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Apr 18 2003 | LEE, YONG-BEOM | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Apr 18 2003 | LEE, JAE-WON | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Apr 18 2003 | LEE, HYE-JEONG | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Apr 18 2003 | PARK, CHAN-MIN | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Apr 18 2003 | SEO, HEE-KYOUNG | SAMSUNG ELECTRONICS CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014167/0641 | pdf
Date Maintenance Fee Events
Oct 28 2010 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Aug 27 2013 | M2551: Payment of Maintenance Fee, 4th Yr, Small Entity.
Aug 27 2013 | R2551: Refund - Payment of Maintenance Fee, 4th Yr, Small Entity.
Nov 25 2014 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Jan 09 2015 | ASPN: Payor Number Assigned.
Oct 23 2018 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
May 29 2010 | 4 years fee payment window open
Nov 29 2010 | 6 months grace period start (w surcharge)
May 29 2011 | patent expiry (for year 4)
May 29 2013 | 2 years to revive unintentionally abandoned end (for year 4)
May 29 2014 | 8 years fee payment window open
Nov 29 2014 | 6 months grace period start (w surcharge)
May 29 2015 | patent expiry (for year 8)
May 29 2017 | 2 years to revive unintentionally abandoned end (for year 8)
May 29 2018 | 12 years fee payment window open
Nov 29 2018 | 6 months grace period start (w surcharge)
May 29 2019 | patent expiry (for year 12)
May 29 2021 | 2 years to revive unintentionally abandoned end (for year 12)