A system automatically frames locations by detecting a user's presence within a virtual detection space. The system detects sound in the detection space and converts the sound into electrical signals. The electrical signals are converted into digital signals at common or periodic sampling rates. The system identifies speech segments in the digital signals and attenuates noise-like components within or adjacent to them. The system identifies the physical location of the speech source generating the speech segments and automatically adjusts the camera framing based on the estimated location of that active speech source.
22. A computer implemented method of controlling an electronic device in an absence of a physical contact with the electronic device, comprising:
designating an interactive space into a virtual detection space and a blocking area, wherein the blocking area prevents the electronic device from tracking users and conveying audio and images captured by a camera and a microphone array;
sampling an aural signal received by the microphone array correlated with a noise event within a virtual detection space of the camera;
correlating a sample of the aural signal with attributes of a noise signal, wherein the correlating includes determining that a positive correlation level exceeds a predetermined threshold value;
modeling spectral components of the sample aural signal correlated with the noise signal to generate a noise signal model for the virtual detection space captured by the camera;
updating a background noise when a speech segment is undetected and when a measurement of the noise signal is at or below a median noise measurement in the virtual detection space;
detecting a user's presence within the virtual detection space of the camera while the electronic device is in a standby state by detecting noise components within the virtual detection space by the noise signal model;
transitioning the electronic device to an interactive state when the user's presence is detected;
detecting speech segments in the detection space and converting the speech segments into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying the speech segments in the digital signals;
attenuating an input comprising the audio and the images from the blocking area to render a conditioned signal;
locating a physical location of a speech source generating the speech segments;
adjusting the camera automatically on the physical location of the speech source generating the speech segments;
and transmitting the conditioned signal to a remote destination.
1. A computer implemented method of controlling an electronic device in an absence of a physical contact with the electronic device, comprising:
designating an interactive space into a virtual detection space and a blocking area;
the blocking area designated to prevent the electronic device from tracking users and conveying audio and images captured by a camera and a microphone array;
sampling an aural signal received by the microphone array correlated with a noise event within a virtual detection space of the camera;
correlating a sample of the aural signal with attributes of a noise signal, wherein the correlating includes determining that a positive correlation level exceeds a predetermined threshold value;
modeling spectral components of the sample aural signal correlated with the noise signal to generate a noise signal model for the virtual detection space captured by the camera;
updating a background noise when a speech segment is undetected and when a measurement of the noise signal is at or below a median noise measurement in the virtual detection space;
detecting a user's presence within the virtual detection space of the camera while the electronic device is in a standby state by detecting noise components within the virtual detection space by the noise signal model;
transitioning the electronic device to an interactive state when the user's presence is detected;
detecting speech segments in the detection space and converting the speech segments into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying the speech segments in the digital signals;
attenuating an input comprising the audio and the images from the blocking area to render a conditioned signal;
locating a physical location of a speech source generating the speech segments;
adjusting the camera automatically on the physical location of the speech source generating the speech segments;
and transmitting the conditioned signal to a remote destination.
10. An electronic device, comprising:
a display;
a processor in communication with the display;
and a computer program stored in a non-transitory memory executed by the processor that causes actions to be carried out through instructions for:
monitoring an interactive space comprising a detection space and a blocking area;
the blocking area prevents the electronic device from tracking users and conveying audio signals and images captured by a camera and a microphone array;
sampling an input signal received by the microphone array correlated with a noise event within a detection space of the camera;
correlating a sample of the input signal with attributes of an audio noise signal, wherein the correlating includes determining that a positive correlation level exceeds a predetermined threshold value;
modeling spectral components of the sample input signal to generate a noise signal model for the detection space captured by the camera;
updating a background noise measurement when a speech segment is undetected and only when a noise measurement of the noise signal is equal to or below an average noise measurement of a plurality of prior background noise measurements in the detection space;
detecting a user's presence within the detection space of the camera while the electronic device is in a standby state by detecting noise components within the detection space by the noise signal model;
transitioning the electronic device to an interactive state when the user's presence is detected within the detection space when noise components are detected within the detection space by the noise signal model;
detecting speech in the detection space and converting the speech into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying speech segments in the digital signals;
attenuating the noise components and an updated background noise measurement in the digital signals and aural signals and images from the blocking area to render a conditioned signal;
locating a physical location of a speech source generating the speech segments;
adjusting the camera automatically based on the physical location of the speech source generating the speech segments and a physical location of participants as participants enter or leave the detection space;
and transmitting the conditioned signal to a remote destination.
23. An electronic device, comprising:
a display;
a processor in communication with the display;
and a computer program stored in a non-transitory memory executed by the processor that causes actions to be carried out through instructions for:
monitoring an interactive space comprising a detection space and a blocking area, wherein the blocking area prevents the electronic device from tracking users and conveying audio signals and images captured by a camera and a microphone array;
sampling an input signal received by the microphone array correlated with a noise event within a detection space of the camera;
correlating a sample of the input signal with attributes of an audio noise signal, wherein the correlating includes determining that a positive correlation level exceeds a predetermined threshold value;
modeling spectral components of the sample input signal to generate a noise signal model for the detection space captured by the camera;
updating a background noise measurement when a speech segment is undetected and only when a noise measurement of the noise signal is equal to or below an average noise measurement of a plurality of prior background noise measurements in the detection space;
detecting a user's presence within the detection space of the camera while the electronic device is in a standby state by detecting noise components within the detection space by the noise signal model;
transitioning the electronic device to an interactive state when the user's presence is detected within the detection space when noise components are detected within the detection space by the noise signal model;
detecting speech in the detection space and converting the speech into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying speech segments in the digital signals;
attenuating the noise components and an updated background noise measurement from the digital signals and aural signals and images from the blocking area to render a conditioned signal;
locating a physical location of a speech source generating the speech segments;
adjusting the camera automatically based on the physical location of the speech source generating the speech segments and a physical location of participants as participants enter or leave the detection space;
and transmitting the conditioned signal to a remote destination.
2. The computer implemented method of
3. The computer implemented method of
4. The computer implemented method of
5. The computer implemented method of
6. The computer implemented method of
7. The computer implemented method of
extracting features of the participant from within the bounding box when a predicted score exceeds a predetermined threshold;
identifying the speech source by a classification that renders a highest confidence score;
and identifying the physical location of the speech source based on a relative position of the speech source to images of a plurality of objects captured by the camera.
8. The computer implemented method of
generating cross-correlation and phase transform values at a plurality of time delays associated with a sensing direction of a plurality of microphone pairs processing the aural signal;
and generating an image of reverberation effects within the interactive space;
where the phase transform determines a time difference of arrival of a signal between the microphone pair.
9. The computer implemented method of
comprising:
generating a cross-correlation and phase transform values at a plurality of time delays associated with a sensing direction of a plurality of microphone pairs processing the aural signal;
where the phase transform determines a time difference of arrival of a signal between the microphone pair;
and generating an image showing reverberation effects within the interactive space.
11. The electronic device of
12. The electronic device of
13. The electronic device of
14. The electronic device of
15. The electronic device of
16. The electronic device of
extracting the facial features from a bounding box when a predicted score exceeds a predetermined threshold;
identifying an active speaker by a classification and confidence score;
and identifying a physical location of the active speaker based on a relative position of the active speaker to images of a plurality of other objects captured by the camera in the interactive space.
17. The electronic device of
generating a cross-correlation and phase transform values at a plurality of time delays associated with a sensing direction of a plurality of microphone pairs processing the aural signal;
where the phase transform determines a time difference of arrival of a signal between the microphone pair;
and generating an image showing reverberation effects within the interactive space.
18. The electronic device of
19. The electronic device of
20. The electronic device of
21. The electronic device of
This application claims the benefit of priority from U.S. Provisional Application No. 62/991,852, filed Mar. 19, 2020, titled “Auto-framing Through Speech and Video Localizations”, which is herein incorporated by reference.
This application relates to auto-framing, and specifically to an integrative control system that optimizes framing through speech and video localizations.
Video conferencing typically involves sharing images among geographically separated participants. Through cameras and microphones, the systems capture video and relay it to other participants. The simultaneous content shared among the participants is often constrained by the setup of the equipment. For example, many users are not familiar with camera and microphone setups, so the equipment is often not properly configured.
Manual control of video conferencing equipment during a meeting does not help: instructions are often complex, adjustments require technical know-how, and changing setups during a meeting can be distracting. Often, the framing controls are not intuitive, and many systems cannot track active speakers through an entire meeting.
An intelligent video conferencing control system and process (referred to as a system or systems) provide natural and seamless tracking while improving the perceptual quality of speech shared with participants. The systems provide autonomous audio and video control by acquiring, conditioning, assimilating, and compressing digital audio and video content and transmitting that content to remote destinations. Using integrative technology that includes a speech locator and an optional video locator, the systems process speech to provide automatic image and audio control while providing simultaneous communication among geographically separated participants.
Some systems also allow users across networks to work together on content and documents that are simultaneously displayed to all users as though they were all gathered around a physical whiteboard. These alternative systems allow a single set of files at one location to be accessed and modified by all participants. Through speech enhancements, the systems improve the perceptual quality of voiced speech by removing unwanted noise and dampening background noise received by an array of input devices. Some systems do not remove all of the noise from the signals, in order to maintain the natural sound conditions participants are accustomed to. The devices may be configured to sense the directional response of participants' voices and voice patterns by processing the time difference of arrival of speech, exclusively or in part. Control of the video conferencing system is based on portions of the aural spectrum that are further compressed and transmitted over one or more networks, such as a wide area network or the Internet.
Some systems model spectral and temporal characteristics of undesired signals and remove random transient signals (e.g., non-periodic signals) and/or persistent signals (e.g., periodic or continuous signals) that correspond to one or more undesired signal characteristics, such as noise. When the undesired characteristics are detected, they are substantially removed or dampened, rendering a cleaner sound and improving the perceptual quality of the voiced signal. The processed voice and desired signals enable the systems to automatically control and adjust the system, such as the panning, tilting, and zooming of one or more cameras that may be part of the video conferencing system. The control ensures high-resolution views, speech with clean and improved perceptual quality, and cleaner desired sounds that are conveyed to the geographically remote participants without distracting, burdening, or requiring participants to adjust any equipment. Further, the systems are self-calibrating, making it unnecessary for users to calibrate or recalibrate the systems when they are first used, used in different surroundings, and/or used in new environments.
A touchless user interface enables all users to control the systems with little or no training, regardless of their backgrounds or speaking styles. The systems are immediately accessible and, in some systems, provide visual cues of gesture commands and/or voice commands that may frame, view, track, and enhance the accuracy of focusing on the presenters automatically.
Upon detection, the systems transition from a stand-by state (e.g., a dormant state) to an interactive state (e.g., an active state). The transition occurs in real time (e.g., waking up at the same rate the detection occurs, with no delay) in some systems, and in near-real time in other systems. Delay is minimized in some systems by loading application software in the background. In these systems, background loading improves system responsiveness by eliminating a move-and-wait operating state that is perceived by some users as slow and sluggish, and thus impractical for commercial use.
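For illustration only, the following minimal Python sketch shows one way a standby-to-interactive transition with background application loading could be organized; the class and method names (ConferencingDevice, on_presence_detected, _load_applications) are hypothetical and are not part of the disclosure.

```python
import threading
from enum import Enum, auto


class DeviceState(Enum):
    STANDBY = auto()
    INTERACTIVE = auto()


class ConferencingDevice:
    """Minimal sketch of a standby-to-interactive transition."""

    def __init__(self):
        self.state = DeviceState.STANDBY
        # Load application software in the background while the device idles
        # in standby, so the later wake-up appears nearly instantaneous.
        self._preload = threading.Thread(target=self._load_applications, daemon=True)
        self._preload.start()

    def _load_applications(self):
        # Placeholder for loading codecs, classifiers, and UI assets.
        pass

    def on_presence_detected(self):
        # Transition only when the presence detector reports a user.
        if self.state is DeviceState.STANDBY:
            self._preload.join()          # ensure background loading has finished
            self.state = DeviceState.INTERACTIVE
```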
Some alternative systems render optional acknowledgement notifications, such as audible acknowledgements made through a synthesized sound via a speech synthesis engine (e.g., a high-fidelity text-to-speech engine that converts a textual acknowledgement into voiced speech) and/or visual acknowledgements rendered on a display 402.
Through algorithms and trained classifiers, the systems auto-frame participants captured by the camera 116 based on the number and location of the participants in the interactive space. Some systems focus on active participants, who may be identified by their active voice and/or frequent gestures, regardless of whether they are near-side or far-side talkers. Some alternative systems zoom in on active participants (e.g., enlarging their captured images) while optimizing the framing to include all of the participants present in the interactive space, again based on the number of participants and their locations. The camera 116 naturally re-adjusts its pan, tilt, and/or zoom settings and zooms in on participants at a natural and periodic rate based on the number of active speakers and augmented by video data, so that little is missed within the interactive space, meetings include all participants in the captured video images, and the meetings are more intimate. In these systems, participants seem closer to the viewers because non-active spaces are excluded from the video images transmitted among the geographically separated participants through the pan, tilt, and/or zoom settings.
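As one hedged illustration of framing that keeps every participant in view, the sketch below computes a crop enclosing all participant bounding boxes; the function name, margin value, and box format are illustrative assumptions, and the crop would still need to be mapped to actual pan, tilt, and zoom commands.

```python
def frame_participants(boxes, frame_w, frame_h, margin=0.1):
    """Return a crop (x, y, w, h) that encloses all participant boxes.

    `boxes` is a list of (x, y, w, h) participant detections in pixels;
    the margin keeps some headroom so the framing is not too tight.
    """
    if not boxes:
        return 0, 0, frame_w, frame_h          # no one detected: show the full view
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    pad_x = (x1 - x0) * margin
    pad_y = (y1 - y0) * margin
    x0 = max(0, x0 - pad_x)
    y0 = max(0, y0 - pad_y)
    x1 = min(frame_w, x1 + pad_x)
    y1 = min(frame_h, y1 + pad_y)
    return x0, y0, x1 - x0, y1 - y0
```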
With presence detected by a presence detector 102 and sound captured and sampled via a cache and sampler 104, the systems detect noise and/or voice via a detector 106 and enhance voiced speech by dampening undesired signals, such as the level of background noise and other noises detected in the input, via a noise attenuator 108. Speech comprises voiced speech, such as vowels, and unvoiced speech, such as consonants. Voiced speech has a regular harmonic structure, meaning it has harmonic peaks weighted by a spectral envelope; unvoiced speech lacks a harmonic structure. Aural signals include non-periodic noises, periodic noises, and voiced and/or unvoiced speech.
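As a hedged illustration of exploiting the harmonic structure of voiced speech, the following sketch flags a frame as voiced when its normalized autocorrelation shows a strong peak in the pitch range; the pitch range and threshold are assumed values, and the disclosure does not mandate this particular detector.

```python
import numpy as np


def is_voiced(frame, sample_rate, f_lo=70.0, f_hi=400.0, threshold=0.3):
    """Flag a frame as voiced when it shows a clear harmonic (pitch) peak.

    Voiced speech has a regular harmonic structure, so its normalized
    autocorrelation has a strong peak at the pitch lag; unvoiced speech and
    most noise do not.  The pitch range and threshold here are illustrative.
    """
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    energy = float(np.dot(frame, frame))
    if energy < 1e-10:
        return False                                  # silence
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:] / energy
    lag_lo = int(sample_rate / f_hi)                  # shortest lag in the pitch range
    lag_hi = min(int(sample_rate / f_lo), len(ac) - 1)
    if lag_hi <= lag_lo:
        return False                                  # frame too short to test
    return bool(ac[lag_lo:lag_hi].max() > threshold)
```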
In alternative systems, voice and noise segments are identified by an identifier 208 when the sampled input signal is correlated with known noise events and/or is uncorrelated with speech by a correlator 302. A correlation between the spectral and/or temporal shape of a sampled signal and a previously modeled shape, or between previously stored attributes of noise and/or modeled signal attributes of voiced and unvoiced speech, may identify a potential noise segment and/or speech segment. When the correlation or uncorrelation level exceeds a predetermined threshold value, the signal segment is classified by the classifier 206 and marked as noise or as an undesired signal, which may also identify a human presence. When speech is not identified, some alternate systems measure the nearly continuous noise present near each of the microphones that comprise the microphone array 404 to estimate the background noise. The background noise measurement may be updated continuously when voiced and unvoiced segments are not detected, and is not updated at intervals when transient noise events are identified. Thereafter, the background noise may be dampened in part to improve the perceptual quality of speech. A transient noise event is identified when a noise measurement exceeds an average or median of the prior background noise measurements.
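A minimal sketch of the conditional background-noise update described above follows, assuming per-frame noise-power measurements; the class name, history length, and smoothing factor are illustrative choices rather than disclosed parameters.

```python
from collections import deque

import numpy as np


class BackgroundNoiseEstimator:
    """Sketch of a conditional background-noise update."""

    def __init__(self, history=50, alpha=0.05):
        self.history = deque(maxlen=history)   # recent per-frame noise measurements
        self.estimate = None                   # running background-noise estimate
        self.alpha = alpha                     # smoothing factor (illustrative)

    def update(self, noise_power, speech_detected):
        # Only adapt when no voiced/unvoiced segment is detected and the new
        # measurement does not exceed the median of prior measurements, which
        # keeps transient noise events out of the background estimate.
        if speech_detected:
            return self.estimate
        if self.history and noise_power > np.median(self.history):
            return self.estimate               # likely a transient noise event
        self.history.append(noise_power)
        if self.estimate is None:
            self.estimate = noise_power
        else:
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * noise_power
        return self.estimate
```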
With sampled signals marked, a noise attenuator 108 dampens or attenuates the noise (including portions of the background noise) and noise-like components in the sampled signal regardless of the amplitude of the incoming signal. When the identifier 208 marks noise or undesired signals, a modeler (not shown) models the temporal and spectral components of the noise and undesired signals and generates a noise and/or undesired signal model, or alternatively, stores attributes of those conditions in a data warehouse 606.
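One common way to dampen, rather than fully remove, modeled noise is to apply a gain to the magnitude spectrum with a spectral floor. The sketch below assumes magnitude spectra as inputs and uses illustrative attenuation and floor values; it is offered as a sketch consistent with the dampening described above, not as the noise attenuator 108's exact method.

```python
import numpy as np


def dampen_noise(spectrum, noise_model, attenuation=0.7, floor=0.1):
    """Attenuate modeled noise in a magnitude spectrum without removing it all.

    `spectrum` and `noise_model` are magnitude spectra of the current frame
    and of the modeled noise/undesired signal.  A spectral floor keeps some
    residual background so the result still sounds natural.
    """
    gain = 1.0 - attenuation * np.minimum(noise_model / (spectrum + 1e-12), 1.0)
    gain = np.maximum(gain, floor)            # never dampen below the floor
    return gain * spectrum
```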
With noise and undesired signals dampened, a locator 110 executes an acoustic localization through the microphone array 404, which comprises several microphones equidistant from each other. The time difference of arrival between microphones is processed to determine the direction of arrival of the speech signals.
Using a steered response power with phase transform, the system estimates the time difference of arrival between microphones of the microphone array 404. The steered response power is a real-valued spatial vector defined by the field of view (a.k.a., a view of the interactive space) of a specific array. A high maximum in the steered response power estimates the location of the sound source in the field of view. The steered response power is computed for each direction sensed by the microphone pairs that comprise the microphone array 404 to generate a cumulative generalized cross-correlation with phase transform value across the pairs of microphones at the time delays associated with the established sensing directions. The phase transform effectively weights the generalized cross-correlation processed to determine the time difference of arrival.
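The sketch below shows a conventional generalized cross-correlation with phase transform (GCC-PHAT) estimate of the time difference of arrival for a single microphone pair, which is the per-pair building block that the cumulative steered-response-power computation described above aggregates across pairs; the epsilon guard and the optional delay limit are illustrative.

```python
import numpy as np


def gcc_phat(sig_a, sig_b, sample_rate, max_tau=None):
    """Estimate the time difference of arrival between two microphone signals.

    Computes the generalized cross-correlation with phase transform
    (GCC-PHAT): the cross-spectrum is whitened by its magnitude so only the
    phase (delay) information remains, then transformed back to find the
    correlation peak.  Returns the delay in seconds; the sign indicates
    which microphone the sound reached first.
    """
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:                   # optionally limit to plausible delays
        max_shift = min(int(sample_rate * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift # lag of the correlation peak
    return shift / float(sample_rate)
```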
By computing the steered response power for points in the interactive space, a steered response power image is generated that renders images of the whole observable interactive space. The rendered images show signal energy distributions and the associated reverberation effects. To reduce the processing bandwidth required to extract the global maximum and locate the high maximum of the steered response power, the systems apply a stochastic region contraction that iteratively reduces the search volume. The process begins by searching the entire interactive space or the whole field of view, stochastically analyzes the function over the volume by selecting a predetermined number of points, and thereafter contracts the volume into smaller volumes containing the desired high maximum; this continues recursively until the high maximum is located. The algorithm is then repeated continuously or periodically so that speech source locations are precisely identified and updated in memory, and so that the algorithm precisely reflects and tracks the changing speech sources, monitored conditions, and dynamics of the interactive space.
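A minimal sketch of a stochastic region contraction over a steered-response-power function follows; the point counts, iteration limit, and stopping tolerance are assumed values, and `srp` stands in for whatever steered-response-power evaluation the system uses.

```python
import numpy as np


def stochastic_region_contraction(srp, bounds, n_points=200, n_best=20,
                                  iters=8, rng=None):
    """Locate the high maximum of an SRP function by contracting the search volume.

    `srp` maps an (x, y, z) point to its steered-response-power value and
    `bounds` is ((x_lo, x_hi), (y_lo, y_hi), (z_lo, z_hi)).  Each iteration
    samples the current volume, keeps the best-scoring points, and shrinks
    the volume to the box enclosing them; parameters are illustrative.
    """
    rng = rng or np.random.default_rng()
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    best_point = None
    for _ in range(iters):
        pts = rng.uniform(lo, hi, size=(n_points, 3))
        scores = np.array([srp(p) for p in pts])
        top = pts[np.argsort(scores)[-n_best:]]       # highest-power samples
        best_point = top[-1]
        lo, hi = top.min(axis=0), top.max(axis=0)     # contract to the enclosing box
        if np.all(hi - lo < 1e-3):                    # volume small enough: stop
            break
    return best_point
```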
To enhance accuracy, some locators 110 generate estimates of the high maxima in each of the regions monitored by the microphone pairs, along with a measurement of their uncertainties. Once the high maxima are estimated, the estimates are combined by applying a weighted average, with more weight given to estimates associated with higher certainties. Like the steered response power with phase transform processes, this algorithm is recursive and runs in real time, continuously processing the previously calculated state and an associated uncertainty matrix and continuously updating the speech source localizations.
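As a hedged scalar illustration of the recursive, certainty-weighted update described above, the following sketch fuses a prior location estimate with a new one using inverse-variance weighting; the full system would operate on multi-dimensional states with an uncertainty matrix rather than a single scalar variance.

```python
def fuse_estimate(prev_est, prev_var, new_est, new_var):
    """Combine a prior location estimate with a new one, weighted by certainty.

    The estimate with the smaller variance (higher certainty) receives the
    larger weight, and the fused variance shrinks accordingly, so repeated
    updates tighten the localization over time.
    """
    gain = prev_var / (prev_var + new_var)
    est = prev_est + gain * (new_est - prev_est)
    var = (1.0 - gain) * prev_var
    return est, var
```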
In some systems, an optional augmentor 112 supplements or confirms the estimate of the active speaker's (e.g., the sound source's) location. The augmentor 112 processes video streams rendered from single or multiple cameras 116 using machine learning and tracking algorithms.
Rather than relying on a single machine learning algorithm to detect and classify active human speakers (or alternately, active participants), some optional augmentors 112 also process the video images using a second (different) type of machine learning algorithm in parallel to improve the accuracy and speed of the system's active speaker recognitions. In these augmentors 112, another optional classifier predicts bounding boxes enclosing a desired participant's head and/or mouth, using dimensions and clusters as anchor boxes to support active speaker recognition. The systems predict four coordinates for each bounding box (e.g., for each participant's tracked mouth/head). Applying a logistic regression, a predicted object score is generated. When a bounding box's object score exceeds a predetermined threshold, a feature extraction is executed by a feature extractor processing the video images using successive 3×3 and 1×1 convolutional layers (e.g., fifty-three convolutional layers in an exemplary machine learning algorithm) until a predetermined mean-squared error is achieved. Each of the second-type gesture classifiers is trained on full video images captured by the camera(s) 116 using a multi-scaling process to render more trained classifiers that produce recognition predictions and confidence scores. Once trained, the classifiers process the captured video images in real time.
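A minimal sketch of the thresholding and highest-confidence selection described above and in the next paragraph follows; the detection dictionary fields and the 0.5 threshold are hypothetical, since the actual classifiers' output format is not specified here.

```python
def select_active_speaker(detections, score_threshold=0.5):
    """Keep bounding boxes whose predicted object score passes the threshold
    and return the detection with the highest classifier confidence.

    `detections` is a list of dicts such as
    {"box": (x, y, w, h), "object_score": 0.9, "confidence": 0.8, "label": "speaker"};
    the field names and threshold are illustrative assumptions.
    """
    candidates = [d for d in detections if d["object_score"] > score_threshold]
    if not candidates:
        return None                            # no box passed the object-score gate
    return max(candidates, key=lambda d: d["confidence"])
```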
In operation, the extracted features of the active speakers in the video image are processed by the various types of classifiers, and the identifications with the highest confidence scores are selected by the processor 602.
Based on the predictions of the locator 110 and the optional augmentor 112, the estimated location of the human sound source is known. When the predictions vary, a composite estimate may be derived. In these alternate systems, the estimates are combined using a weighted average, with more weight given to estimates that have a higher certainty and less weight given to estimates having a lower certainty; the weighted average of the various estimates provides the estimate of the active speech source location.
With the active speakers identified, modes are selected and control signals are generated by a controller 114 that drives the one or more pan-tilt-zoom cameras 116. In response to the control signals, the camera 116 automatically adjusts its modes and the framing of participants as participants actively speak, move about, or come in and out of the interactive area. By panning, tilting, and/or zooming, the controller 114 ensures that all meeting participants are captured in the camera's video images and, in some systems, focuses in on or enlarges video images of the active speakers. The systems may focus on one speaker in the camera frame (known as a solo mode) when there is only one participant, on two to three speakers (known as a debate mode) when there are two to three participants, and on four or more speakers (known as a panel mode) when there are more than three participants.
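The solo/debate/panel selection maps directly to the participant count; a trivial sketch is shown below, with the mode strings used as illustrative labels only.

```python
def select_mode(participant_count):
    """Map the number of participants in frame to a camera framing mode."""
    if participant_count <= 1:
        return "solo"        # focus on the single participant
    if participant_count <= 3:
        return "debate"      # frame two to three participants
    return "panel"           # frame four or more participants
```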
In operation, the systems identify meeting participants and filter out incorrect predictions in 502 and 504, as disclosed in U.S. Provisional Application 62/900,232, titled Gesture Control Systems, which is incorporated by reference. Supplemental or alternative functionality may be rendered by OpenPose and YOLOv3 tracking software, for example, in alternate systems. The systems detect the number and location of the participants in an interactive space and focus the camera(s) at 506. Using the audio locator technology and the optional video locator technology described herein, the system selects modes and automatically adjusts the camera's framing gradually (e.g., not abruptly) by adjusting the pan, tilt, and/or zoom settings of the camera 116 at a natural rate (e.g., a scheduled rate) and, in some alternate systems, shares perceptually improved speech with the various geographically separated participants. The automatic and gradual adjustments occur asynchronously as people move about the interactive space or enter and leave it. The process is recursive: it continuously monitors the interactive space and adjusts the video framing, optimizing the framing by locating active speakers and making viewers feel closer to their geographically remote participants.
The memory 604 and/or storage disclosed may retain an ordered listing of executable instructions for implementing the functions described above in non-transitory computer code. The machine-readable medium may selectively be, but is not limited to, an electronic, a magnetic, an optical, an electromagnetic, an infrared, or a semiconductor medium. A non-exhaustive list of examples of a machine-readable medium includes: a portable magnetic or optical disk, a volatile memory such as a Random-Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or Flash memory), or a database management system. The memory 604 may comprise a single device or multiple devices that may be disposed on one or more dedicated memory devices or disposed on a processor or other similar device. The term “engine” is intended to broadly encompass a processor or a portion of a program that executes or supports events such as the static and dynamic recognition events and processes. When functions, steps, etc. are said to be “responsive to” or occur “in response to” another function or step, etc., the functions or steps necessarily occur as a result of another function or step, etc. It is not sufficient that a function or act merely follow or occur subsequent to another.
Alternate systems are not limited to the particular hardware and machine learning algorithms described above. Other suitable hardware and machine learning algorithms can be used. Furthermore, the systems are not limited to physically static systems. Rather, the systems can be used in mobile devices and operate across distributed networks. The systems illustratively disclosed herein suitably may be practiced in the absence of any element (including hardware and/or software) which is not specifically disclosed herein. They may operate in the absence of those elements. Further, the various elements described in each of the many systems described herein are regarded as divisible with regard to the individual elements described, rather than inseparable as a whole. In other words, alternate systems encompass any variation and combination of elements described herein and may be made or used without the various elements described (e.g., they may operate in the absence of one or more of the elements disclosed herein or shown in the figures).
An intelligent camera control system and process provide natural and seamless active speaker tracking while improving the perceptual quality of speech shared with geographically separated participants. The systems provide autonomous audio and video control by acquiring, conditioning, assimilating, and compressing digital audio and video content and transmitting that content to remote destinations. Using integrative technology that includes an active speech locator and an optional video locator, the systems process speech to provide automatic image and audio control while providing simultaneous communication among geographically separated participants through multimodal operations.
The subject-matter of the disclosure may also relate, among others, to the following aspects (the aspects are referenced by numbers):
1. A computer implemented method of controlling an electronic device in an absence of a physical contact with the electronic device, comprising:
detecting a user's presence within a virtual detection space of a camera while the electronic device is in a standby state;
transitioning the electronic device to an interactive state when the user's presence is detected;
detecting sound in the detection space and converting the sound into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying speech segments in the digital signals;
attenuating noise components in the digital signals;
locating a physical location of a speech source generating the speech segments; and
adjusting the camera automatically on the speech source generating the speech segments.
2. The computer implemented method of aspect 1 further comprising rendering an acknowledgement in response to the virtual detection via a speech synthesis engine.
3. The computer implemented method of any of aspects 1 to 2, further comprising converting the digital signals into a plurality of cepstral coefficients.
4. The computer implemented method of aspect 3 further comprising identifying a human presence in response to processing the cepstral coefficients.
5. The computer implemented method of any of aspects 1 to 4 where the speech segments are identified by correlating spectral shapes of the digital signals attributed with voiced and unvoiced speech.
6. The computer implemented method of any of aspects 1 to 5 where the locating a physical location of a speech source comprises an acoustic localization executed by an acoustic locator.
7. The computer implemented method of aspect 6 where the locating a physical location of a speech source comprises a video localization executed by a video locator.
8. The computer implemented method of aspect 7 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment.
9. The computer implemented method of aspect 6 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment.
10. An electronic device, comprising:
a display;
a processor in communication with the display; and
a computer program stored in a non-transitory memory executed by the processor that causes actions to be carried out through instructions for:
detecting a user's presence within a virtual detection space of a camera while the electronic device is in a standby state;
transitioning the electronic device to an interactive state when the user's presence is detected;
detecting sound in the detection space and converting the sound into electrical signals;
converting the electrical signals into digital signals at periodic intervals;
identifying speech segments in the digital signals;
attenuating noise-like components in the digital signals;
locating a physical location of a speech source generating the speech segments; and
adjusting the camera automatically on the speech source generating the speech segments.
11. The electronic device of aspect 10 further comprising instructions for rendering an acknowledgement in response to the virtual detection via a speech synthesis engine.
12. The electronic device of any of aspects 10 to 11 further comprising instructions for converting the digital signals into a plurality of cepstral coefficients.
13. The electronic device of aspect 12 further comprising instructions for identifying a human presence in response to processing the cepstral coefficients.
14. The electronic device of any of aspects 10 to 13 where the speech segments are identified by correlating spectral shapes of the digital signals attributed with voiced and unvoiced speech.
15. The electronic device of any of aspects 10 to 14 further comprising instructions where the locating a physical location of a speech source comprises an acoustic localization executed by an acoustic locator.
16. The electronic device of any of aspects 10 to 15 where the locating the physical location of a speech source comprises a video localization executed by a video locator.
17. The electronic device of aspect 16 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment.
18. The electronic device of aspect 15 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment.
19. The electronic device of any of aspects 10 to 18 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment and a stochastic region contraction.
20. The electronic device of any of aspects 10 to 19 where the locating a physical location of a speech source is based on detecting a maximum in a steered response power segment, a stochastic region contraction, and a video classifier.
Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the disclosure, and be protected by the following claims.
Liu, Wei, Guo, Jin, Zhou, Yuchen, Dong, Jinxin, Tung, Sally