A method for streaming action fulfillment receives audio data corresponding to an utterance where the utterance includes a query to perform an action that requires performance of a sequence of sub-actions in order to fulfill the action. While receiving the audio data, but before receiving an end of speech condition, the method processes the audio data to generate intermediate automated speech recognition (ASR) results, performs partial query interpretation on the intermediate ASR results to determine whether the intermediate ASR results identify an application type needed to perform the action and, when the intermediate ASR results identify a particular application type, performs a first sub-action in the sequence of sub-actions by launching a first application to execute on the user device where the first application is associated with the particular application type. The method, in response to receiving an end of speech condition, fulfills performance of the action.
1. A computer-implemented method executed on data processing hardware that causes the data processing hardware to perform operations comprising:
receiving audio data corresponding to at least a portion of an utterance spoken by a user, the utterance comprising a query to perform an action;
processing, using a speech recognizer, the audio data to generate a sequence of intermediate automated speech recognition (ASR) results;
based on the sequence of intermediate ASR results, launching both a first application and a second application to execute in parallel on a user device, the first application and the second application both associated with a particular application type;
displaying, in a graphical user interface of the user device, a first graphical element and a second graphical element simultaneously, the first graphical element representing the first application executing on the user device and the second graphical element representing the second application executing on the user device; and
in response to detecting an end of speech condition, fulfilling performance of the action using one of the first application or the second application.
10. A system comprising:
data processing hardware; and
memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising:
receiving audio data corresponding to at least a portion of an utterance spoken by a user, the utterance comprising a query to perform an action;
processing, using a speech recognizer, the audio data to generate a sequence of intermediate automated speech recognition (ASR) results;
based on the sequence of intermediate ASR results, launching both a first application and a second application to execute in parallel on a user device, the first application and the second application both associated with a particular application type;
displaying, in a graphical user interface of the user device, a first graphical element and a second graphical element simultaneously, the first graphical element representing the first application executing on the user device and the second graphical element representing the second application executing on the user device; and
in response to detecting an end of speech condition, fulfilling performance of the action using one of the first application or the second application.
2. The computer-implemented method of
performing partial query interpretation on the sequence of intermediate ASR results to determine that the sequence of intermediate ASR results identifies the particular application type needed to perform the action,
wherein launching both the first application and the second application is based on determining that the sequence of intermediate ASR results identifies the particular application type needed to perform the action.
3. The computer-implemented method of
receiving a user input indication indicating selection of one of the first graphical element or the second graphical element displayed in the graphical user interface; and
in response to receiving the user input indication:
maintaining execution of the one of the first application or the second application that is represented by the selected one of the first graphical element or the second graphical element; and
ceasing execution of the other one of the first application or the second application that is represented by the one of the first graphical element or the second graphical element that was not selected by the user input indication.
4. The computer-implemented method of
5. The computer-implemented method of
6. The computer-implemented method of
receiving the audio data corresponding to at least the portion of the utterance comprises receiving the audio data corresponding to an initial portion of the utterance; and
the operations further comprise, after launching the first application and the second application and prior to detecting the end of speech condition:
receiving additional audio data corresponding to a remaining portion of the utterance;
processing the additional audio data corresponding to the remaining portion of the utterance to generate a final ASR result; and
determining that the final ASR result identifies one of the first application or the second application to use for fulfilling the action.
7. The computer-implemented method of
8. The computer-implemented method of
9. The computer-implemented method of
11. The system of
performing partial query interpretation on the sequence of intermediate ASR results to determine that the sequence of intermediate ASR results identifies the particular application type needed to perform the action,
wherein launching both the first application and the second application is based on determining that the sequence of intermediate ASR results identifies the particular application type needed to perform the action.
12. The system of
receiving a user input indication indicating selection of one of the first graphical element or the second graphical element displayed in the graphical user interface; and
in response to receiving the user input indication:
maintaining execution of the one of the first application or the second application that is represented by the selected one of the first graphical element or the second graphical element; and
ceasing execution of the other one of the first application or the second application that is represented by the one of the first graphical element or the second graphical element that was not selected by the user input indication.
13. The system of
14. The system of
15. The system of
receiving the audio data corresponding to at least the portion of the utterance comprises receiving the audio data corresponding to an initial portion of the utterance; and
the operations further comprise, after launching the first application and the second application and prior to detecting the end of speech condition:
receiving additional audio data corresponding to a remaining portion of the utterance;
processing the additional audio data corresponding to the remaining portion of the utterance to generate a final ASR result; and
determining that the final ASR result identifies one of the first application or the second application to use for fulfilling the action.
16. The system of
17. The system of
This U.S. patent application is a continuation of, and claims priority under 35 U.S.C. § 120 from, U.S. patent application Ser. No. 17/247,334, filed on Dec. 8, 2020. The disclosure of this prior application is considered part of the disclosure of this application and is hereby incorporated by reference in its entirety.
This disclosure relates to streaming action fulfillment based on partial hypotheses.
Users frequently interact with voice-enabled devices, such as smart phones, smart watches, and smart speakers, through digital assistant interfaces. These digital assistant interfaces enable users to complete tasks and obtain answers to questions they have, all through natural, conversational interactions. Ideally, when conversing with a digital assistant interface, a user should be able to communicate as if the user were talking to another person, via spoken requests directed toward their voice-enabled device running the digital assistant interface. The digital assistant interface provides these spoken requests to an automated speech recognizer, which processes and recognizes the spoken request so that an action can be performed.
Digital assistant interfaces are moving onto mobile devices in which much of the speech recognition processing occurs on-device without incurring the added latency of connecting to a server via a network and sending audio data to the server to perform speech recognition using cloud-based speech recognition servers. In addition to improving latency, other benefits of on-device speech recognition include improved reliability and privacy. As such, digital assistant interfaces are becoming deeply integrated with various applications and operating systems running on mobile devices, thereby enabling a user to control their mobile device solely using their voice. However, these applications installed on user devices may themselves be slow, unreliable, or require network access to servers, thereby throttling the benefits that the on-device processing capabilities of the digital assistant interface afford and leading to a sluggish user experience.
One aspect of the disclosure provides a method for streaming action fulfillment. The method includes receiving, at data processing hardware, audio data corresponding to an utterance spoken by a user of a user device where the utterance includes a query to perform an action where the query requires performance of a sequence of sub-actions in order to fulfill the action. While receiving the audio data, but before receiving an end of speech condition, the method also includes processing, by the data processing hardware, using a speech recognizer, a first portion of the received audio data to generate a first sequence of intermediate automated speech recognition (ASR) results. While receiving the audio data, but before receiving an end of speech condition, the method further includes performing, by the data processing hardware, partial query interpretation on the first sequence of intermediate ASR results to determine whether the first sequence of intermediate ASR results identifies an application type needed to perform the action and, when the first sequence of intermediate ASR results identifies a particular application type, performing, by the data processing hardware, a first sub-action in the sequence of sub-actions by launching a first application to execute on the user device where the first application is associated with the particular application type. The method additionally includes, in response to receiving an end of speech condition, fulfilling, by the data processing hardware, performance of the action. Determining the end of speech condition may include detecting, using the speech recognizer, at least a minimum duration of non-speech in the received audio.
Implementations of the disclosure may include one or more of the following optional features. In some implementations, while receiving the audio data before receiving the end of speech condition and after launching the first application, the method also includes processing, by the data processing hardware, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results, performing, by the data processing hardware, the partial query interpretation on the second sequence of intermediate ASR results to identify a search query for content in the first application, and performing, by the data processing hardware, a second sub-action in the sequence of sub-actions by instructing the first application to perform the search query for the content in the first application. In these implementations, in response to launching the first application to execute on the user device, the method also includes displaying, by the data processing hardware, in a graphical user interface of the user device, an initial screen having a search field and/or multiple graphical elements each representing different categories of content in the first application. Here, instructing the first application to perform the search query for the content includes entering, into the search field of the initial screen, text characterizing the search query for the content in the first application or selecting, from among the multiple graphical elements of the initial screen, the graphical element representing the category of content that includes the content specified by the search query. These implementations also may include, after the first application performs the search query for the content, displaying, by the data processing hardware, in the graphical user interface, a second screen presenting one or more results that include the content specified by the search query. In these implementations, the method may further include receiving, at the data processing hardware, a user input indication indicating selection of one of the results presented in the second screen and, in response to receiving the user input indication, detecting, by the data processing hardware, the end of speech condition. These implementations may also include, after the first application performs the search query for the content, processing, by the data processing hardware, using the speech recognizer, a third portion of the received audio data to generate a third sequence of intermediate ASR results, performing, by the data processing hardware, the partial query interpretation on the third sequence of intermediate ASR results to determine whether to update the search query for more specific content, and, when the partial query interpretation performed on the third sequence of intermediate ASR results determines to update the search query for more specific content, performing, by the data processing hardware, a third sub-action in the sequence of sub-actions by instructing the first application to update the search query for the more specific content.
In some configurations, performing the partial query interpretation on the first sequence of intermediate ASR results may determine that the first sequence of intermediate ASR results identifies the particular application type and fails to specify a slot value associated with naming a specific application for use in fulfilling the action. Here, performing the first sub-action in the sequence of sub-actions includes launching the first application as a default application associated with the particular application type. In these configurations, while receiving the audio data before receiving the end of speech condition and after launching the first application, the method also includes processing, by the data processing hardware, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results and performing, by the data processing hardware, the partial query interpretation on the second sequence of intermediate ASR results to determine whether the second sequence of intermediate ASR results identifies a second application to use for fulfilling the action. Additionally, in these configurations, when the second sequence of intermediate ASR results identifies the second application for use in fulfilling the action, the method further includes rolling back, by the data processing hardware, performance of the first sub-action by ceasing execution of the first application on the user device and re-performing, by the data processing hardware, the first sub-action in the sequence of sub-actions by launching the second application to execute on the user device.
In some examples, while receiving the audio data before receiving the end of speech condition and after launching the first application, the method further includes processing, by the data processing hardware, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results and performing, by the data processing hardware, the partial query interpretation on the second sequence of intermediate ASR results to identify a second sub-action in the sequence of sub-actions. In these examples, while receiving the audio data before receiving the end of speech condition and after launching the first application, the method also includes determining, by the data processing hardware, a rollback feasibility score associated with the second sub-action, where the rollback feasibility score indicates a likelihood that a user experience will be degraded if executing the second sub-action has to be rolled back and, when the rollback feasibility score satisfies a rollback feasibility threshold, performing, by the data processing hardware, the second sub-action. When the rollback feasibility score does not satisfy the rollback feasibility threshold, the method may delay performing, by the data processing hardware, the second sub-action until the end of speech condition is received. Also in these examples, the method may further include determining, by the data processing hardware, a confidence score of the second sub-action identified by performing the partial query interpretation on the second sequence of intermediate ASR results and, when the confidence score of the second sub-action fails to satisfy a confidence threshold, prompting, by the data processing hardware, the user to confirm whether the second sub-action is correctly identified.
Another aspect of the disclosure provides a system for streaming action fulfillment. The system includes data processing hardware and memory hardware in communication with the data processing hardware. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving audio data corresponding to an utterance spoken by a user of a user device where the utterance includes a query to perform an action where the query requires performance of a sequence of sub-actions in order to fulfill the action. While receiving the audio data, but before receiving an end of speech condition, the operations also include processing, using a speech recognizer, a first portion of the received audio data to generate a first sequence of intermediate automated speech recognition (ASR) results. While receiving the audio data, but before receiving an end of speech condition, the operations further include performing partial query interpretation on the first sequence of intermediate ASR results to determine whether the first sequence of intermediate ASR results identifies an application type needed to perform the action and, when the first sequence of intermediate ASR results identifies a particular application type, performing a first sub-action in the sequence of sub-actions by launching a first application to execute on the user device where the first application is associated with the particular application type. The operations additionally include, in response to receiving an end of speech condition, fulfilling performance of the action. Determining the end of speech condition may include detecting, using the speech recognizer, at least a minimum duration of non-speech in the received audio.
Implementations of the disclosure may include one or more of the following optional features. In some implementations, while receiving the audio data before receiving the end of speech condition and after launching the first application, the operations also include processing, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results, performing the partial query interpretation on the second sequence of intermediate ASR results to identify a search query for content in the first application, and performing a second sub-action in the sequence of sub-actions by instructing the first application to perform the search query for the content in the first application. In these implementations, in response to launching the first application to execute on the user device, the operations also include displaying, in a graphical user interface of the user device, an initial screen having a search field and/or multiple graphical elements each representing different categories of content in the first application. Here, instructing the first application to perform the search query for the content includes entering, into the search field of the initial screen, text characterizing the search query for the content in the first application or selecting, from among the multiple graphical elements of the initial screen, the graphical element representing the category of content that includes the content specified by the search query. These implementations also may include, after the first application performs the search query for the content, displaying, in the graphical user interface, a second screen presenting one or more results that include the content specified by the search query. In these implementations, the operations may further include receiving a user input indication indicating selection of one of the results presented in the second screen and, in response to receiving the user input indication, detecting the end of speech condition. These implementations may also include, after the first application performs the search query for the content, processing, using the speech recognizer, a third portion of the received audio data to generate a third sequence of intermediate ASR results, performing the partial query interpretation on the third sequence of intermediate ASR results to determine whether to update the search query for more specific content, and, when the partial query interpretation performed on the third sequence of intermediate ASR results determines to update the search query for more specific content, performing a third sub-action in the sequence of sub-actions by instructing the first application to update the search query for the more specific content.
In some configurations, performing the partial query interpretation on the first sequence of intermediate ASR results may determine that the first sequence of intermediate ASR results identifies the particular application type and fails to specify a slot value associated with naming a specific application for use in fulfilling the action. Here, performing the first sub-action in the sequence of sub-actions includes launching the first application as a default application associated with the particular application type. In these configurations, while receiving the audio data before receiving the end of speech condition and after launching the first application, the operations also include processing, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results and performing the partial query interpretation on the second sequence of intermediate ASR results to determine whether the second sequence of intermediate ASR results identifies a second application to use for fulfilling the action. Additionally, in these configurations, when the second sequence of intermediate ASR results identifies the second application for use in fulfilling the action, the operations further include rolling back performance of the first sub-action by ceasing execution of the first application on the user device and re-performing the first sub-action in the sequence of sub-actions by launching the second application to execute on the user device.
In some examples, while receiving the audio data before receiving the end of speech condition and after launching the first application, the operations further include processing, using the speech recognizer, a second portion of the received audio data to generate a second sequence of intermediate ASR results and performing the partial query interpretation on the second sequence of intermediate ASR results to identify a second sub-action in the sequence of sub-actions. In these examples, while receiving the audio data before receiving the end of speech condition and after launching the first application, the operations also include determining a rollback feasibility score associated with the second sub-action, where the rollback feasibility score indicates a likelihood that a user experience will be degraded if executing the second sub-action has to be rolled back and, when the rollback feasibility score satisfies a rollback feasibility threshold, performing the second sub-action. When the rollback feasibility score does not satisfy the rollback feasibility threshold, the operations may delay performing the second sub-action until the end of speech condition is received. Also in these examples, the operations may further include determining a confidence score of the second sub-action identified by performing the partial query interpretation on the second sequence of intermediate ASR results and, when the confidence score of the second sub-action fails to satisfy a confidence threshold, prompting the user to confirm whether the second sub-action is correctly identified.
The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other aspects, features, and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
Conventionally, digital assistant interfaces that enable users to complete tasks and obtain answers to questions they have through natural, conversational interactions have required the processing of servers to support speech recognition and language understanding models. These server-based models were not suitable for execution on device due to their size, which could exceed the available storage or memory on the device. However, recent advancements in recurrent neural networks have enabled the development of new speech recognition and language understanding models of drastically reduced size (e.g., less than half a gigabyte) suitable for storage and processing on-device. As such, digital assistant interfaces are moving onto mobile devices in which much of the speech recognition processing occurs on-device without incurring the added latency to connect to a server via a network and send audio data to the server to perform speech recognition using cloud-based speech recognition servers. User experience is drastically enhanced since transcriptions of speech can occur in real-time and without a network connection. In addition to improving latency, other benefits of on-device speech recognition include improved reliability and privacy. As such, digital assistant interfaces are becoming deeply integrated with various applications and operating systems running on mobile devices, thereby enabling a user to control their mobile device solely using their voice. For example, a user could multi-task across multiple applications running on the device, such as creating a calendar invite, finding and sharing a photo with friends, or dictating an email. However, these applications installed on user devices may themselves be slow, unreliable, or require network access to servers, thereby throttling the benefits that the on-device digital assistant interface affords and leading to a sluggish user experience.
Generally, in speech recognition systems, speech endpointing is the process of determining which part of incoming audio contains speech by determining a beginning and an end of an utterance. The part of incoming audio corresponding to speech is provided to a speech recognizer to obtain a speech recognition result or a transcript of the audio. User perceived latency of speech recognition is the time from when the user stops speaking until the speech recognition result or transcript is output, often output for display on a screen of a user device. The recent advancements made to run speech recognition models on-device have allowed for real-time speech recognition results (e.g., streaming transcription) to display as the user speaks and before endpointing determines the end of the utterance.
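As a rough illustration of the endpointing behavior described above, the following sketch declares an end-of-speech condition once trailing non-speech exceeds a minimum duration. It is a minimal sketch under stated assumptions: the frame length, the threshold value, and the class name are illustrative, not values or components from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Endpointer:
    """Declares an end-of-speech condition after a minimum duration of non-speech."""
    min_silence_ms: int = 700   # illustrative threshold, not a value from the disclosure
    frame_ms: int = 20          # duration of each audio frame fed to the endpointer
    _silence_ms: int = 0

    def process_frame(self, frame_contains_speech: bool) -> bool:
        """Returns True once trailing non-speech reaches the minimum duration."""
        if frame_contains_speech:
            self._silence_ms = 0          # speech resets the silence counter
        else:
            self._silence_ms += self.frame_ms
        return self._silence_ms >= self.min_silence_ms

# Example: speech for 10 frames, then silence until the endpoint fires.
endpointer = Endpointer()
frames = [True] * 10 + [False] * 50
endpoint_at = next(i for i, f in enumerate(frames) if endpointer.process_frame(f))
print(f"end-of-speech condition detected at frame {endpoint_at}")
```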
Yet, while the user perceived latency of the actual speech recognition has been improved by processing speech on-device, a user perceived latency of fulfillment of user queries/commands still exists since an endpointing decision identifying the end of the utterance has to be made before the actual fulfillment can take place. For example, a command for “Play rock music playlist on YouTube Music” spoken by a user to a digital assistant interface executing on the user's phone requires performance of multiple sub-actions in order to fulfill the complete action of streaming the user's rock music playlist from the YouTube Music service for audible output through an acoustic speaker. These sub-actions include launching the YouTube Music application on the user's phone, accessing a search box in the YouTube Music application to input a query for the rock music playlist or switching to a playlists tab in the YouTube Music application to select the rock music playlist, and finally executing audible playback of the rock music playlist from the YouTube Music application. In this example, the system has to wait for the endpointer to endpoint the end of the utterance and obtain the final speech recognition result before initiating the first sub-action that needs to be fulfilled, e.g., launching the YouTube Music application.
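The sub-action sequence for this example could be represented as an ordered list that a conventional system only begins executing after the endpoint. The decomposition and helper functions below are hypothetical placeholders for the behavior described in the example, not a real application API.

```python
# Hypothetical decomposition of "Play rock music playlist on YouTube Music" into the
# ordered sub-actions described above; the helper functions are placeholders.
def launch_app(name: str) -> None:
    print(f"launching {name}")

def open_playlist(app: str, playlist: str) -> None:
    print(f"opening '{playlist}' in {app}")

def start_playback(app: str, playlist: str) -> None:
    print(f"playing '{playlist}' from {app}")

sub_actions = [
    lambda: launch_app("YouTube Music"),                             # sub-action 1
    lambda: open_playlist("YouTube Music", "rock music playlist"),   # sub-action 2
    lambda: start_playback("YouTube Music", "rock music playlist"),  # sub-action 3
]

# A conventional system runs this loop only after the endpointer declares the end of speech.
for sub_action in sub_actions:
    sub_action()
```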
Implementations herein are directed to generating intermediate speech recognition results from a user utterance commanding performance of an action and using the intermediate speech recognition results to perform multiple sub-actions related to the final action while the user is still speaking the utterance. The leveraging of available intermediate speech recognition results to identify and to perform sub-actions related to a final action before the user finishes speaking drastically reduces user perceived latency since waiting for a final endpoint is not required before initiating performance of a first sub-action in a sequence of sub-actions that need to be performed in order to fulfill the final action.
Yet, intermediate recognition results of “Play”, “rock music”, and “playlist” occur before the final endpoint and are available to identify possible sub-actions related to the query. For instance, the intermediate recognition result of “Play” can indicate that the user wants to play media content, which could include video content such as television programming, movies, or video clips, or audio content from one of multiple music applications on the user device. Next, the intermediate recognition result of “rock music” provides context that narrows the type of media content the user wants to play to audio content. Now, the possible available sub-actions are opening either a streaming radio application, a Spotify application, or a YouTube application on the user's phone that all can potentially output rock music. For instance, the streaming radio, Spotify, and YouTube applications all include their own rock music channels, while the Spotify and YouTube applications each include respective rock music playlists for the user. Once the intermediate recognition result of “playlist” becomes available, the available possible sub-actions are now more specific to indicate that a rock music playlist needs to be accessed on the Spotify application or the YouTube application.
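A minimal sketch of this progressive narrowing follows, assuming a hypothetical catalog of applications tagged with coarse capabilities. The tags, keyword rules, and the "TV app" entry are illustrative stand-ins for the partial query interpretation described above, not the disclosure's interpretation model.

```python
# Illustrative narrowing of candidate applications as partial hypotheses grow.
CATALOG = {
    "TV app":          {"plays_music": False, "has_user_playlists": False},
    "streaming radio": {"plays_music": True,  "has_user_playlists": False},
    "Spotify":         {"plays_music": True,  "has_user_playlists": True},
    "YouTube":         {"plays_music": True,  "has_user_playlists": True},
}

def candidates(partial_hypothesis: str):
    """Returns the applications still consistent with the partial hypothesis."""
    text = partial_hypothesis.lower()
    apps = dict(CATALOG)
    if "rock music" in text:          # music context rules out apps that cannot play music
        apps = {n: a for n, a in apps.items() if a["plays_music"]}
    if "playlist" in text:            # a user playlist requires playlist support
        apps = {n: a for n, a in apps.items() if a["has_user_playlists"]}
    return sorted(apps)

for partial in ["Play", "Play rock music", "Play rock music playlist"]:
    print(f"{partial!r:32} -> {candidates(partial)}")
```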
The device 110 may correspond to any computing device associated with the user 10 and capable of capturing audio from the environment 100. Some examples of user devices 110 include, but are not limited to, mobile devices (e.g., mobile phones, tablets, laptops, e-book readers, etc.), computers, wearable devices (e.g., smart watches), music players, casting devices, smart appliances (e.g., smart televisions) and internet of things (IoT) devices, remote controls, smart speakers, etc. The device 110 includes data processing hardware 112d and memory hardware 112m in communication with the data processing hardware 112d and storing instructions, that when executed by the data processing hardware 112d, cause the data processing hardware 112d to perform one or more operations related to speech processing.
The device 110 further includes an audio subsystem with an audio capturing device (e.g., an array of one or more microphones) 114 for capturing and converting audio within the speech environment 100 into electrical signals (e.g., audio data 14).
The device 110 may also include a display 116 to display graphical user interface (GUI) elements (e.g., windows, screens, icons, menus, etc.) and/or graphical content. For example, the device 110 may load or launch applications 118, 118a-n that generate GUI elements or other graphical content for the display 116. Moreover, the elements generated in the display 116 may be selectable by the user 10 and also serve to provide some form of visual feedback to processing activities and/or operations occurring on the device 110. Furthermore, since the device 110 is a voice-enabled device 110, the user 10 may interact with elements generated on the display 116 using various voice commands. For instance, the display 116 may depict a menu of options for a particular application 118 and the user 10 may use the interface 200 to select an option through speech.
In some implementations, the device 110 communicates via a network 120 with a remote system 130. The remote system 130 may include remote resources 132, such as remote data processing hardware 134 (e.g., remote servers or CPUs) and/or remote memory hardware 136 (e.g., remote databases or other storage hardware). The device 110 may utilize the remote resources 132 to perform various functionality related to speech processing. For instance, some portion of the assistant interface 200 may reside on the remote system 130. In one example, a speech recognizer 210 executes locally on the device 110 to provide on-device automated speech recognition (ASR). In another example, the speech recognizer 210 resides on the remote system 130 to provide server-side ASR for the assistant interface 200. In yet another example, functionality of the speech recognizer 210 is split across the device 110 and the server 130 (e.g., the device 110 and the server 130 perform parallel ASR processing).
The speech-enabled interface 200 may field the query/command 22 conveyed in the spoken utterance 20 captured in streaming audio 15 by the device 110. The speech-enabled interface 200 (also referred to as the interface 200 or the assistant interface 200) generally facilitates receiving audio data 14 corresponding to an utterance 20 captured by the audio capturing device 114, coordinating speech processing on the audio data 14, performing semantic interpretation on the audio data 14 to identify a query 22 to perform an action 24, and performing a sequence of sub-actions 26, 26a-n in order to fulfill the action 24. The interface 200 may execute on the data processing hardware 112d of the device 110. When the microphone 114 of the device 110 captures an utterance 20 in streaming audio 15 and converts the audio into audio data 14, the audio data 14 corresponding to the utterance 20 is relayed to the interface 200 such that the interface 200 can perform speech recognition and semantic interpretation to identify and ultimately fulfill a query 22 conveyed in the spoken utterance 20. Although traditionally, fulfillment systems have had to wait for an endpointer (e.g., part of the speech recognizer or a separate component) to endpoint the end of an utterance 20 to obtain the final speech recognition result before initiating performance of an action 24 specified by the query/command 22, here, the interface 200 fulfills sub-actions 26 associated with the final action 24 as partial speech recognition results (e.g., partial hypotheses) are generated and before the user 10 is finished speaking the query 22 in the utterance 20. By leveraging partial speech recognition results, the interface 200 may provide the user 10 with real-time or near real-time feedback as to sub-actions 26 the interface 200 is performing in order to fulfill an action 24 specified by the query 22 once the utterance 20 is endpointed. In other words, the interface 200 strives to perform sub-actions 26 interpreted from intermediate speech recognition results for the audio data 14 while the user 10 is actively speaking the utterance 20.
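The following sketch shows, under stated assumptions, how intermediate recognition results could flow from a recognizer to an interpreter so that an executor performs sub-actions before the endpoint. The generator-based recognizer, the keyword-matching interpreter, and the "holiday photos" decomposition are illustrative stand-ins, not the actual speech recognizer 210, interpreter 220, or executor 230.

```python
# Minimal streaming pipeline: intermediate ASR results drive sub-actions before the endpoint.
def speech_recognizer(audio_chunks):
    """Yields growing intermediate ASR results, one per audio chunk."""
    hypothesis = []
    for chunk in audio_chunks:
        hypothesis.append(chunk)            # pretend each chunk decodes to one word
        yield " ".join(hypothesis)

def interpreter(intermediate_result: str):
    """Partial query interpretation: maps a partial hypothesis to a sub-action, if any."""
    if intermediate_result.endswith("photos"):
        return ("launch_app", "photo gallery")
    if intermediate_result.endswith("holiday"):
        return ("search", "holiday")
    return None

def executor(sub_action) -> None:
    print("executing sub-action:", sub_action)

audio_chunks = ["show", "me", "photos", "from", "my", "holiday"]  # simulated streaming audio
for intermediate in speech_recognizer(audio_chunks):
    sub_action = interpreter(intermediate)
    if sub_action is not None:
        executor(sub_action)                # fulfilled while the user is still "speaking"
print("endpoint reached; final fulfillment for:", intermediate)
```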
In some implementations, to perform a sequence of sub-actions 26 in order to fulfill an action 24, the interface 200 interfaces with a plurality of applications 118, 118a-n on the device 110 or accessible to the device 110. An application 118 generally refers to any application 118 that is configured to run on the device 110. Some types of applications 118 include media applications (e.g., video streaming applications, audio streaming applications, media player applications, media gallery applications, etc.), word processing applications, navigation applications, social media applications, communication applications (e.g., messaging applications, email applications, etc.), financial applications, organizational applications (e.g., address book applications), retail applications, entertainment applications (e.g., news applications, weather applications, sport applications), casting applications, etc. The interface 200 may be integrated with these applications 118 to enable the user 10 to control applications on the device 110 using his or her voice. For example, the interface 200 is an application programming interface (API) or any other type of program or application configured to execute the functionality of the interface 200.
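One way such integration might be organized is a registry keyed by application type, as in the hypothetical sketch below; the type labels, application names, and default flag are illustrative assumptions rather than the disclosure's integration mechanism.

```python
# Hypothetical registry mapping application types to applications installed on the device.
from collections import defaultdict

class ApplicationRegistry:
    def __init__(self):
        self._by_type = defaultdict(list)

    def register(self, app_type: str, app_name: str, default: bool = False) -> None:
        entry = {"name": app_name, "default": default}
        if default:
            self._by_type[app_type].insert(0, entry)   # keep the default application first
        else:
            self._by_type[app_type].append(entry)

    def apps_for_type(self, app_type: str):
        return [e["name"] for e in self._by_type[app_type]]

registry = ApplicationRegistry()
registry.register("media", "local media player", default=True)
registry.register("media", "video streaming app")
registry.register("communication", "email app", default=True)
print(registry.apps_for_type("media"))   # ['local media player', 'video streaming app']
```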
The interpreter 220 receives one or more sequences of intermediate speech recognition results 212 and performs partial query interpretation on the one or more intermediate speech recognition results 212. For instance, the interpreter 220 performs semantic interpretation (e.g., grammar interpretation) on a sequence of intermediate speech recognition results 212 to understand a portion of the utterance 20 and its context to identify any candidate sub-actions 26 that may be associated with a final action 24 to be specified once the query 22 is revealed when the user 10 finishes speaking the utterance 20. Here, because the interpreter 220 is interpreting a sequence of intermediate speech recognition results 212 that corresponds to only a portion of the query 22, the interpreter 220 is only able to derive the context of a sub-action 26 from the sequence of intermediate speech recognition results 212 corresponding to a portion of the utterance 20. Accordingly, the result of the partial query interpretation performed on a sequence of intermediate speech recognition results 212 includes an interpretation 222 that may characterize a sub-action 26 that the query/command 22 will involve in order to fulfill a final action 24 that remains unknown until the user is finished speaking the utterance 20. When the interpreter 220 performs partial query interpretation, the interpretation 222 may suffer from some missing information due to the inherent fact that the interpretation is unable to contextualize the entirety of the utterance 20. Stated differently, the sub-actions 26 characterized by interpretations 222 may become increasingly specific as the number of sequences of intermediate speech recognition results 212 generated from the audio data 14 increases. For this reason, the interpreter 220 may form as complete an interpretation 222 as possible by deriving a missing intent from the available information from a sequence of intermediate speech recognition results 212. For example, the interpreter 220 may perform partial query interpretation on an initial sequence of intermediate speech recognition results 212 to identify a particular application type needed to perform an action 24, but fails to specify a slot value associated with naming a specific application 118 for use in fulfilling the action 24 since the interpreter 220 cannot confidently identify the specific application 118 from the initial sequence of intermediate speech recognition results 212. In this example, the executor 230 may be configured to perform a first sub-action 26 by launching a default application 118 associated with the particular application type since the slot value associated with naming the specific application 118 is not specified (e.g., empty).
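A sketch of the default-application behavior just described, assuming a hypothetical interpretation structure with an application-type field and an optional slot naming a specific application; the default mapping and field names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interpretation:
    """Result of partial query interpretation over a sequence of intermediate ASR results."""
    application_type: str
    application_name: Optional[str] = None   # slot value; None when no specific app is named yet

# Illustrative per-type defaults; not values from the disclosure.
DEFAULT_APPS = {"photo": "local photo gallery", "music": "default music player"}

def first_sub_action(interp: Interpretation) -> str:
    # When the slot naming a specific application is empty, fall back to the
    # default application registered for the identified application type.
    app = interp.application_name or DEFAULT_APPS[interp.application_type]
    return f"launch {app}"

print(first_sub_action(Interpretation(application_type="photo")))
print(first_sub_action(Interpretation(application_type="music", application_name="Spotify")))
```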
In some examples, a subsequent interpretation 222 may void a prior interpretation 222. For instance, if the first interpretation 222, 222a led to the executor 230 launching two plausible photo applications 118 in parallel where one was a local photo gallery application and the other was a third-party photo gallery accessible via a browser application, the second interpretation 222, 222b that identifies that the filter of time relates specifically to the user 10 (e.g., based on the word “my” preceding the word “holiday”) would void/roll back the action of opening the third-party photo gallery because that gallery will not include any user-specific photos.
In some configurations, the interpreter 220 uses an interpretation model that generates a confidence level for a given interpretation 222. In some implementations, the interpreter 220 generates multiple possible interpretations 222 for the same sequence of intermediate speech recognition results 212 and each possible interpretation 222 may have a respective confidence level. Furthermore, the speech recognizer 210 may generate multiple different candidate sequences of intermediate speech recognition results 212 for a same portion of audio data 14 and the interpreter 220 may generate one or more possible interpretations 222 for each candidate sequence of intermediate speech recognition results 212. Yet in some approaches, the interface 200 may only want to pursue a limited number of interpretations 222 (e.g., one interpretation 222) or interpretations 222 that indicate a confidence level above some interpretation confidence threshold. Here, when the interpreter 220 generates multiple possible interpretations 222 for a given sequence of intermediate speech recognition results 212 with confidence levels satisfying an interpretation confidence threshold, the executor 230 may process respective sub-actions 26 characterized by the possible interpretations 222 in parallel. With the sub-actions 26 processing in parallel, the interface 200 may graphically display each parallel track on the display 116 and enable the user 10 to select a particular track, or even modify his or her utterance 20 to change the behavior of the interpreter 220 and/or executor 230.
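The confidence gating and parallel execution described here might look like the following sketch; the threshold value, the candidate interpretations, and the thread-based parallelism are assumptions for illustration rather than the disclosure's implementation.

```python
# Keep only interpretations above a confidence threshold and run their sub-actions in parallel.
from concurrent.futures import ThreadPoolExecutor

INTERPRETATION_CONFIDENCE_THRESHOLD = 0.6   # illustrative value

candidate_interpretations = [
    {"sub_action": "launch local photo gallery", "confidence": 0.82},
    {"sub_action": "open third-party photo gallery in browser", "confidence": 0.71},
    {"sub_action": "launch video player", "confidence": 0.30},
]

def perform(sub_action: str) -> str:
    return f"performed: {sub_action}"

retained = [c for c in candidate_interpretations
            if c["confidence"] >= INTERPRETATION_CONFIDENCE_THRESHOLD]

with ThreadPoolExecutor() as pool:   # parallel tracks the user could pick from
    for result in pool.map(perform, (c["sub_action"] for c in retained)):
        print(result)
```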
In some configurations, when the utterance 20 is complete or endpointed, the executor 230 performs complete fulfillment of the action 24. Here, the utterance 20 may be endpointed when the speech recognizer 210 detects some designated minimum duration of time of non-speech in the audio data 14. The executor 230 may perform a highest confidence action 24 based on the full speech recognition result for the entirety of the audio data 14 (or utterance 20). In these configurations, the executor 230 may roll back (rescind or terminate) previous sub-actions 26 that occurred prior to the fulfillment of the entire action 24. For instance, the executor 230 rolls back one or more sub-actions 26 that are inconsistent with the entire action 24. Generally speaking, the sub-action execution process aims to be in a final state that matches the final state of the execution process of the full action 24. However, depending on the query/command 22, this may not always be the case. Hence, rollbacks allow the interface 200 to flexibly accommodate different scenarios.
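A minimal sketch of the rollback bookkeeping described above, assuming each speculative sub-action is recorded with an undo callable; the consistency test is a deliberately simple name check and does not reflect how the executor 230 actually decides consistency.

```python
# Record each speculative sub-action with an undo callable, then rescind the ones that
# turn out to be inconsistent with the fully endpointed action.
executed = []   # (description, undo_fn) pairs, in execution order

def perform(description: str, undo_fn) -> None:
    print("perform:", description)
    executed.append((description, undo_fn))

perform("launch third-party photo gallery", lambda: print("undo: close third-party gallery"))
perform("launch local photo gallery", lambda: print("undo: close local gallery"))

final_action = "show my holiday photos in the local photo gallery"

for description, undo in reversed(executed):      # roll back newest-first
    if "third-party" in description and "third-party" not in final_action:
        undo()                                    # inconsistent with the final action
```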
In some implementations, the executor 230 may roll back previous sub-actions 26 based not only on the entire action 24, but on other sub-actions 26 prior to the utterance 20 being endpointed. To illustrate, as described above, the interpreter's first interpretation 222 may have led to two plausible photo-related types of applications 118 where one was a local photo gallery application and the other was a third-party photo gallery in a browser application. Yet later interpretations 222 of sub-actions 26 confirmed that the application 118 could not have been the third-party photo gallery application because it would not contain photos of the user 10. In this respect, the executor 230 would roll back the first sub-action 26 for the launching of the third-party photo gallery in favor of the local photo gallery application. If the executor 230 actually launched the third-party photo gallery, but did not also launch the local photo gallery application, the executor 230, based on the later interpretations 222, may roll back the first sub-action 26 for the launching of the third-party photo gallery by ceasing execution of the third-party photo gallery and instead re-perform the first sub-action 26 by launching the local photo gallery application.
Additionally or alternatively, there may be certain sub-actions 26 that the interface 200 is not able to roll back, or that rolling back the sub-action 26 would negatively impact the user's experience. For example, when the sub-action 26 is to purchase an item on a retail application 118, the interface 200 may not be able to roll back such a purchase or do so without user intervention. Here, a sub-action 26 that the interface 200 is not able to roll back may be referred to as an irreversible sub-action 26. For these irreversible sub-actions 26, the executor 230 may prompt the user 10 for authorization or action confirmation while executing an irreversible sub-action 26. Another approach to irreversible sub-actions 26 is to identify an irreversible sub-action 26 and, when a sub-action 26 is identified as irreversible, the executor 230 waits to perform complete fulfillment of the action 24. That is, it may be safer to have the entire context of a full speech recognition result interpreted rather than a partial speech recognition result 212. In yet another approach, before the utterance 20 is endpointed, but after the executor 230 launches an application 118, the executor 230 may determine a rollback feasibility score for a sub-action 26. Here, the rollback feasibility score indicates a likelihood that the user's experience will be degraded or detrimentally impacted if the executor 230 rolls back the sub-action 26. When the rollback feasibility score satisfies a rollback feasibility threshold, the executor 230 may proceed to execute the sub-action 26. On the other hand, when the rollback feasibility score fails to satisfy the rollback feasibility threshold, the executor 230 may roll back the sub-action 26 or delay the roll back of the sub-action 26 to determine whether the complete fulfillment of the action 24 indicates that the sub-action 26 should be rolled back or not, and roll it back accordingly.
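The feasibility gating and the handling of irreversible sub-actions might be sketched as follows; the score values, the threshold, and the reading of "satisfies the threshold" as the sub-action being safe enough to execute speculatively are assumptions made for illustration.

```python
# Gate speculative execution on a rollback-feasibility score; confirm or defer
# irreversible sub-actions (e.g., a purchase) rather than executing them speculatively.
ROLLBACK_FEASIBILITY_THRESHOLD = 0.5   # illustrative value

def handle_sub_action(name: str, rollback_feasibility: float, irreversible: bool = False) -> str:
    if irreversible:
        return f"{name}: prompt user for confirmation / wait for end of speech"
    if rollback_feasibility >= ROLLBACK_FEASIBILITY_THRESHOLD:
        return f"{name}: execute now, roll back later if needed"
    return f"{name}: delay until the end of speech condition"

print(handle_sub_action("open playlists tab", rollback_feasibility=0.9))
print(handle_sub_action("clear shopping cart", rollback_feasibility=0.2))
print(handle_sub_action("purchase item", rollback_feasibility=0.0, irreversible=True))
```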
Because the user 10 may see results of sub-actions 26 being performed by the interface 200 while the user 10 is speaking, the user 10 may, in some circumstances, endpoint an utterance manually before the user finishes speaking the complete utterance. Namely, when the user 10 is seeking a particular result from the device 110, if the particular result is displayed for the user 10 prior to completing the utterance 20, the user 10 may abandon the completion of the utterance 20 since the purpose has already been achieved.
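A sketch of treating a user selection as a manual endpoint, under the assumption of a simple event queue that mixes partial ASR results with UI events; the event names and queue-based plumbing are illustrative.

```python
# A tap on a displayed result is treated as the end-of-speech condition, even though
# more partial ASR results are still arriving.
import queue

events = queue.Queue()
events.put(("asr_partial", "show my holiday"))
events.put(("ui_selection", "Holiday album, photo 3"))   # user taps a result mid-utterance
events.put(("asr_partial", "show my holiday photos from"))

end_of_speech = False
while not events.empty() and not end_of_speech:
    kind, payload = events.get()
    if kind == "ui_selection":
        end_of_speech = True                 # selection detected -> declare end of speech
        print("endpoint via user selection:", payload)
    else:
        print("still streaming:", payload)
```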
The executor 230 may also use the second interpretation 222b and the third interpretation 222c as a validity check to determine if any of the sub-actions 26 in the first set of sub-actions 26a-d should be rolled back (e.g., terminated). Since two of the applications 118c-d were applications for video (e.g., streaming video or a local video player), performing the validity check against the second interpretation 222b and the third interpretation 222c results in the executor 230 terminating the video-based applications 118c-d that were launched based on a third and a fourth sub-action 26, 26c-d. With this being the case, executing both interpretations 222b-c only has to occur at the first application 118a and the second application 118b. This execution process therefore forms a second set of sub-actions 26e-h where four parallel action sequences are occurring.
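The validity check that prunes parallel tracks might be sketched as follows; the track structure and the audio/video tags are assumptions standing in for the second and third interpretations described above.

```python
# Applications launched speculatively from an earlier interpretation are terminated
# when a later interpretation contradicts them.
tracks = [
    {"app": "first music app",     "media": "audio", "running": True},
    {"app": "second music app",    "media": "audio", "running": True},
    {"app": "streaming video app", "media": "video", "running": True},
    {"app": "local video player",  "media": "video", "running": True},
]

later_interpretation = {"required_media": "audio"}   # e.g., derived from "rock music"

for track in tracks:
    if track["media"] != later_interpretation["required_media"]:
        track["running"] = False                     # roll back: cease execution of this app
        print("terminated:", track["app"])

active = [t["app"] for t in tracks if t["running"]]
print("remaining parallel tracks:", active)
```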
The computing device 400 includes a processor 410, memory 420, a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430. Each of the components 410, 420, 430, 440, 450, and 460, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 410 can process instructions for execution within the computing device 400, including instructions stored in the memory 420 or on the storage device 430 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 480 coupled to high speed interface 440. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 400 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 420 stores information non-transitorily within the computing device 400. The memory 420 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 420 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 400. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes.
The storage device 430 is capable of providing mass storage for the computing device 400. In some implementations, the storage device 430 is a computer-readable medium. In various different implementations, the storage device 430 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 420, the storage device 430, or memory on processor 410.
The high speed controller 440 manages bandwidth-intensive operations for the computing device 400, while the low speed controller 460 manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller 440 is coupled to the memory 420, the display 480 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 450, which may accept various expansion cards (not shown). In some implementations, the low-speed controller 460 is coupled to the storage device 430 and a low-speed expansion port 490. The low-speed expansion port 490, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 400 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 400a or multiple times in a group of such servers 400a, as a laptop computer 400b, or as part of a rack server system 400c.
Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.