Techniques are provided for QR decomposition (QRD) based minimum variance distortionless response (MVDR) adaptive beamforming. A methodology implementing the techniques according to an embodiment includes receiving signals from a microphone array, identifying signal segments that include a combination of speech and noise, and identifying signal segments that include noise in the absence of speech. The method also includes calculating a QRD and an inverse QRD (IQRD) of the spatial covariance matrix of the noise components. The method further includes estimating a relative transfer function (RTF) associated with the source of the speech, based on the noisy speech signal segments, the QRD, and the IQRD. The method further includes estimating a multichannel speech-presence-probability (SPP) on whitened input signals based on the IQRD. The method further includes calculating beamforming weights, for the microphone array, based on the RTF and the IQRD, to steer a beam in the direction associated with the speech source.
1. A processor-implemented method for audio beamforming, the method comprising:
identifying, by a processor-based system, a first set of segments of a plurality of audio signals received from an array of one or more microphones, the first set of segments comprising a combination of a speech signal and a noise signal;
identifying, by the processor-based system, a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal;
calculating, by the processor-based system, a QR decomposition (QRD) of a spatial covariance matrix, and an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments;
estimating, by the processor-based system, a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and
calculating, by the processor-based system, a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
2. The method of claim 1, further comprising transforming the plurality of audio signals to the frequency domain, using a Fourier transform.
3. The method of claim 1, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
4. The method of claim 1, wherein the QRD and the IQRD are calculated using a Cholesky decomposition.
5. The method of claim 1, further comprising updating the spatial covariance matrix based on a recursive average of previously calculated spatial covariance matrices.
6. The method of claim 1, wherein the RTF estimation further comprises:
calculating a spatial covariance matrix based on the identified first set of segments;
estimating an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and
normalizing the estimated eigenvector to a selected reference microphone of the array of microphones.
7. The method of claim 1, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
8. The method of claim 1, further comprising applying the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
9. A system for audio beamforming, the system comprising:
a noisy speech indicator circuit to identify a first set of segments of a plurality of audio signals received from an array of microphones, the first set of segments comprising a combination of a speech signal and a noise signal;
a noise indicator circuit to identify a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal;
a noise tracking circuit to calculate a QR decomposition (QRD) of a spatial covariance matrix, and to calculate an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments;
a speech tracking circuit to estimate a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and
a weight calculation circuit to calculate a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
10. The system of claim 9, further comprising a short time Fourier transform (STFT) circuit to transform the plurality of audio signals to the frequency domain, using a Fourier transform.
11. The system of claim 9, wherein the noise tracking circuit further comprises a QR decomposition circuit to calculate the QRD using a Cholesky decomposition, and an inverse QR decomposition circuit to calculate the IQRD using the Cholesky decomposition.
12. The system of claim 9, wherein the speech tracking circuit further comprises:
a noisy speech covariance update circuit to calculate a spatial covariance matrix based on the identified first set of segments;
an eigenvector estimation circuit to estimate an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and
a scaling and transformation circuit to normalize the estimated eigenvector to a selected reference microphone of the array of microphones.
13. The system of claim 9, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
14. The system of claim 9, further comprising a beamformer circuit to apply the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and sum the scaled audio signals to generate an estimate of the speech signal.
15. The system of claim 9, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
16. At least one non-transitory computer readable storage medium having instructions encoded thereon that, when executed by one or more processors, result in the following operations for audio beamforming, the operations comprising:
identifying a first set of segments of a plurality of audio signals received from an array of microphones, the first set of segments comprising a combination of a speech signal and a noise signal;
identifying a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal;
calculating a QR decomposition (QRD) of a spatial covariance matrix, and an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments;
estimating a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and
calculating a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
17. The computer readable storage medium of claim 16, the operations further comprising pre-processing the plurality of audio signals to transform the audio signals to the frequency domain, the pre-processing including performing a Fourier transform on the audio signals.
18. The computer readable storage medium of claim 16, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
19. The computer readable storage medium of claim 16, wherein the QRD and the IQRD are calculated using a Cholesky decomposition.
20. The computer readable storage medium of claim 16, the operations further comprising updating the spatial covariance matrix based on a recursive average of previously calculated spatial covariance matrices.
21. The computer readable storage medium of claim 16, wherein the RTF estimation further comprises the operations of:
calculating a spatial covariance matrix based on the identified first set of segments;
estimating an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and
normalizing the estimated eigenvector to a selected reference microphone of the array of microphones.
22. The computer readable storage medium of claim 16, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
23. The computer readable storage medium of claim 16, the operations further comprising applying the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
Audio and speech processing techniques are being used in a growing number of application areas including, for example, speech recognition, voice-over-IP, and cellular communications. Methods for speech enhancement are often desired to mitigate the effects of noisy and dynamic environments that can be associated with these applications. The deployment of microphone arrays is becoming more common with advancements in technology, enabling the use of multichannel processing and beamforming techniques to improve signal quality. These multichannel processing techniques, however, can be computationally expensive.
Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent in light of this disclosure.
Generally, this disclosure provides techniques for adaptive acoustic beamforming in a dynamic environment, where a speaker of interest, noise sources, and the microphone array may all (or some subset thereof) be in motion relative to one another. Beamforming weights are calculated and updated, with improved efficiency, using a QR Decomposition (QRD) based minimum variance distortionless response (MVDR) process. The application of these beamforming weights to the microphone array enables a beam to be steered so that the moving speech source (and/or noise sources, as the case may be) can be tracked, resulting in improved quality of the received speech signal, in the presence of noise. As will be appreciated, a QR decomposition (sometimes referred to as QR factorization) generally refers to the decomposition of a given matrix into a product QR, where Q represents an orthogonal matrix and R represents a right triangular matrix.
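As a purely illustrative sketch (not part of the claimed subject matter), the decomposition and its defining properties can be demonstrated in a few lines of NumPy:

```python
import numpy as np

A = np.random.randn(4, 4)        # an arbitrary real matrix
Q, R = np.linalg.qr(A)           # QR decomposition (factorization)

assert np.allclose(Q @ R, A)            # A is recovered as the product QR
assert np.allclose(Q.T @ Q, np.eye(4))  # Q is orthogonal
assert np.allclose(R, np.triu(R))       # R is right (upper) triangular
```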
The disclosed techniques can be implemented, for example, in a computing system or a software product executable or otherwise controllable by such systems, although other embodiments will be apparent. The system or product is configured to perform QR based MVDR acoustic beamforming. In accordance with an embodiment, a methodology to implement these techniques includes receiving audio signals from an array of microphones, identifying signal segments that include a combination of speech and noise, and identifying other signal segments that include noise in the absence of speech. The identification is based on a multichannel speech-presence-probability (SPP) model using whitened input signals. The method also includes calculating a QRD and an inverse QRD (IQRD) of a spatial covariance matrix generated from the speech-free noise segments. The method further includes estimating a relative transfer function (RTF) associated with the source of the speech. The RTF calculation is based on the noisy speech signal segments and on the QRD and the IQRD, as will be described in greater detail below. The method further includes calculating beamforming weights for the microphone array, the calculation based on the RTF and the IQRD, to steer a beam in the direction associated with the source of the speech.
As will be appreciated, the techniques described herein may allow for improved acoustic beamforming with relatively fast and efficient tracking of a speech or noise source, without degradation of noise reduction capabilities, compared to existing methods that can introduce noise bursts into speech segments during highly dynamic scenarios. The disclosed techniques can be implemented on a broad range of platforms including laptops, tablets, smart phones, workstations, personal computers, and speaker phones, for example. These techniques may further be implemented in hardware or software or a combination thereof.
In general, one or more of the speech source 102, the noise sources 104, and the platform 130 (or the sensor array 106) may be in motion relative to one another. At a high level, the sensor array 106 receives acoustic signals x_1(n), …, x_M(n) through the M microphones, where n denotes the discrete time index. Each received signal includes a combination of the speech source signal s(n), which has been modified by an acoustic transfer function resulting from its transmission through the environment to the microphone, and the noise signal v(n). The symbol x(n) is a vector representation of the signals x_1(n), …, x_M(n). The received signal x(n) can be expressed as

x(n) = h(n) * s(n) + v(n)

where h(n) is a vector of the acoustic impulse responses h_1(n), …, h_M(n) associated with transmission to each of the M microphones, and the * operator indicates convolution.
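As a concrete rendering of this signal model (illustrative only; the random impulse responses and white noise below are stand-ins, not values suggested by this disclosure), the following sketch synthesizes M noisy microphone channels:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 16000                        # microphones, time samples
s = rng.standard_normal(N)             # speech source signal s(n) (stand-in)
h = rng.standard_normal((M, 32))       # acoustic impulse responses h_1(n), ..., h_M(n)
v = 0.1 * rng.standard_normal((M, N))  # additive noise v_m(n) at each microphone

# x_m(n) = (h_m * s)(n) + v_m(n): convolve the source with each room
# response, add noise, and stack into the vector signal x(n).
x = np.stack([np.convolve(h[m], s)[:N] + v[m] for m in range(M)])
```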
Beamformer weight calculation circuit 110 is configured to efficiently calculate (and update) the weights w(n) from current and previous received signals x(n), using a QRD-based MVDR process, as will be described in greater detail below. The beamforming filters w(n) are calculated in the Fourier transform domain and denoted w(k): M-dimensional vectors with complex-valued elements w_1(k), …, w_M(k). These beamforming filters scale and phase shift the signals from each of the microphones. Beamformer circuit 108 is configured to apply those weights to the signals received from each of the microphones, to generate a signal y(k), which is an estimate of the speech signal s(k) through the steered beam 120. The application of the beamforming weights has the effect of focusing the array 106 on the current position of the speech source 102 and reducing the impacts of the noise sources 104. The signal estimate y(k) is transformed back to the time domain using an inverse short time Fourier transform (ISTFT) and may then be provided to an audio processing system 112, which can be configured to perform speech recognition and act in some desired manner based on the speech content of the signal estimate y(n).
The audio signals received from the microphones are transformed to the short time Fourier transform (STFT) domain (by STFT circuit 510, described in connection with FIG. 5). In the STFT domain, the received signal can be expressed as

x(l,k) = h(l,k) s(l,k) + v(l,k)

where l is a time (frame) index and k is a frequency bin index. The resulting signal estimate, after beamforming, can be expressed using similar notation as

y(l,k) = w^H(l,k) x(l,k)

where (·)^H denotes the conjugate-transpose operation.
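The beamforming operation itself reduces to a per-bin inner product. A minimal sketch with random stand-in STFT frames (and, for simplicity, time-invariant per-bin weights w(k)):

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, K = 4, 100, 257   # microphones, time frames, frequency bins
x = rng.standard_normal((M, L, K)) + 1j * rng.standard_normal((M, L, K))
w = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

# y(l,k) = w^H(k) x(l,k), evaluated independently in every frequency bin
y = np.einsum('mk,mlk->lk', w.conj(), x)
```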
The calculation of weights w is described now with reference to the whitening circuit 202, multichannel SPP circuit 200, noise tracking circuit 204, speech tracking circuit 210, noise indicator circuit 206, noisy speech indicator circuit 208, and weight calculation circuit 212.
Whitening circuit 202 is configured to calculate a whitened multichannel signal z, in which the noise component v in x is transformed by S^{-H} into a spatially white noise component with unit variance:

z(l,k) = S^{-H}(l,k) x(l,k)
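A sketch of this whitening step, assuming the convention Φ_vv = S^H S (so that the whitened noise covariance S^{-H} Φ_vv S^{-1} reduces to the identity); the Cholesky factorization below is one way to obtain S:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4
V = rng.standard_normal((M, 1000)) * np.arange(1.0, M + 1.0)[:, None]
Phi_vv = (V @ V.conj().T) / V.shape[1]   # spatial covariance of the noise

Lf = np.linalg.cholesky(Phi_vv)          # Phi_vv = Lf Lf^H (lower factor)
S = Lf.conj().T                          # take S = Lf^H, so Phi_vv = S^H S

x = rng.standard_normal(M)               # one input frame (single bin)
z = np.linalg.solve(S.conj().T, x)       # z = S^{-H} x: noise is now white
```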
Noise tracking circuit 204 is configured to track the noise source component of the received signals over time. With reference now to FIG. 3, noise tracking circuit 204 includes a QR decomposition circuit 304 and an inverse QR decomposition circuit 306.
QR decomposition (QRD) circuit 304 is configured to calculate the matrix decomposition of the spatial covariance matrix Φ_vv of the noise components, into its square root matrices S and S^H, from the input signal x:

S(l,k), S^H(l,k) ← QRD(x(l,k))

Inverse QR decomposition (IQRD) circuit 306 is configured to calculate the matrix decomposition of Φ_vv into its inverse square root matrices S^{-1} and S^{-H}:

S^{-1}(l,k), S^{-H}(l,k) ← IQRD(x(l,k))
In some embodiments, the QRD and IQRD calculations may be performed using a Cholesky decomposition, or other suitable techniques that will be apparent in light of the present disclosure, which can be performed efficiently with a computational complexity on the order of M^2.
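The text does not spell out the exact recursion, but a standard rank-one Cholesky update achieves the stated O(M^2) cost per frame. The following sketch (the forgetting factor λ and the lower-triangular convention are assumptions) maintains the factor as noise-only frames arrive:

```python
import numpy as np

def chol_update(Lf, x, lam):
    """Rank-one update of a lower-triangular Cholesky factor in O(M^2).

    Given Phi = Lf Lf^H, returns the factor of lam*Phi + (1-lam)*x x^H.
    This is one standard routine consistent with the recursive noise
    tracking described in the text (not quoted from it); the upper
    factor S of the text, with Phi = S^H S, is recovered as Lf^H.
    """
    Lf = np.sqrt(lam) * Lf.astype(complex)              # fresh scaled copy
    x = np.sqrt(1.0 - lam) * np.asarray(x, dtype=complex).copy()
    M = Lf.shape[0]
    for k in range(M):
        d = Lf[k, k].real                 # Cholesky diagonal is real-positive
        r = np.hypot(d, abs(x[k]))        # rotated diagonal entry
        if k + 1 < M:
            col = Lf[k+1:, k].copy()
            # Complex Givens rotation zeroing x[k], applied to column k and x
            Lf[k+1:, k] = (d * col + np.conj(x[k]) * x[k+1:]) / r
            x[k+1:] = (d * x[k+1:] - x[k] * col) / r
        Lf[k, k] = r
    return Lf
```

Maintaining the factors this way, and applying S^{-1} and S^{-H} through triangular solves, avoids the O(M^3) cost of refactoring the covariance matrix at every frame.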
Returning now to FIG. 2, speech tracking circuit 210 is configured to estimate the relative transfer function associated with the speech source. With reference to FIG. 4, a noisy speech covariance update circuit updates the spatial covariance matrix Φ_zz of the whitened signal z, over the identified noisy speech segments, as a recursive average:

Φ_zz(l,k) = λ Φ_zz(l−1,k) + (1−λ) z(l,k) z^H(l,k)

where λ is a forgetting factor that controls the effective averaging window.
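As a sketch, this recursion is a one-line exponentially weighted average (the default forgetting factor value below is a tuning assumption):

```python
import numpy as np

def update_cov(Phi_zz, z, lam=0.95):
    # Phi_zz(l) = lam * Phi_zz(l-1) + (1 - lam) * z z^H
    return lam * Phi_zz + (1.0 - lam) * np.outer(z, z.conj())
```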
Continuing with reference to FIG. 4, an eigenvector estimation circuit is configured to estimate the eigenvector g associated with the direction of the source of the speech signal, based on the columns of Φ_zz − I, where I is the identity matrix, e_m is a selection vector that extracts the m-th column of an M×M matrix for m = 1, …, M, and ρ is a scale factor to align the amplitudes and phases of the columns of Φ_zz − I.
Transformation circuit 406 is configured to generate the RTF estimate h̃ by transforming the eigenvector g back to the domain of the microphone array and normalizing it to the reference microphone.
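The closed-form expressions for this estimate are not reproduced in the text above, so the following sketch should be read as one plausible realization rather than the patented formula: for whitened inputs, Φ_zz ≈ I + SNR·g g^H, so every column of Φ_zz − I is proportional to g. The column alignment rule and the reference-microphone normalization below are assumptions consistent with, but not quoted from, the description:

```python
import numpy as np

def estimate_rtf(Phi_zz, S, ref=0):
    """Sketch: eigenvector estimate g from Phi_zz - I, then the RTF h-tilde."""
    M = Phi_zz.shape[0]
    C = Phi_zz - np.eye(M)                  # columns ~ scaled copies of g
    num = C.conj().T @ C[:, 0]              # align each column m to column 0
    rho = num / (np.sum(np.abs(C)**2, axis=0) + 1e-12)
    g = (C * rho).mean(axis=1)              # amplitude/phase-aligned average
    h = S.conj().T @ g                      # back to the microphone domain (S^H g)
    return h / h[ref]                       # normalize to the reference microphone
```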
Returning to FIG. 2, the calculation of the speech presence probability is now described.
Multichannel SPP circuit 200 is configured to calculate a speech probability that incorporates both spatial coherence and signal-to-noise ratio. The calculations, which are described below, reuse previously computed terms (e.g., z) for increased efficiency.
A generalized likelihood ratio μ is calculated from the whitened signals; the calculation involves the matrix trace operation Tr and an a priori (known or estimated) speech absence probability q. A speech presence probability p is then calculated from μ.
Noise indicator circuit 206 marks the signal segment as noise in the absence of speech if

p ≤ τ_v

and noisy speech indicator circuit 208 marks the signal segment as a combination of noise and speech if

p ≥ τ_s

where τ_v and τ_s are predefined noise and noisy speech confidence thresholds, respectively, for the speech presence probability.
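Since the closed-form μ and p expressions are not reproduced above, the sketch below only illustrates how the two indicator circuits partition frames given p; the threshold values are illustrative assumptions, not values from this disclosure:

```python
def classify_frame(p, tau_v=0.2, tau_s=0.8):
    """Partition a frame by its speech presence probability p."""
    if p <= tau_v:
        return "noise"         # noise in the absence of speech
    if p >= tau_s:
        return "noisy_speech"  # combination of speech and noise
    return "uncertain"         # neither indicator fires; skip updates
```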
Returning to FIG. 2, weight calculation circuit 212 is configured to calculate the beamforming weights w based on a multiplicative product of the estimated RTF h̃ and the IQRD factors S^{-1} and S^{-H}.
The beamforming weights w are calculated to steer a beam of the array of microphones in a direction associated with the source of the speech signal, and a null in the direction of the noise source. The time-domain signal estimate is then recovered through an inverse STFT:

y(n) = ISTFT(w^H(l,k) x(l,k))
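A sketch of the per-bin weight computation, using the textbook MVDR closed form w = Φ_vv^{-1} h̃ / (h̃^H Φ_vv^{-1} h̃) with Φ_vv^{-1} = S^{-1} S^{-H} applied as two linear solves (this closed form is consistent with, though not quoted from, the description above):

```python
import numpy as np

def mvdr_weights(h, S):
    """w = Phi_vv^{-1} h / (h^H Phi_vv^{-1} h), with Phi_vv = S^H S."""
    u = np.linalg.solve(S.conj().T, h)   # u = S^{-H} h
    t = np.linalg.solve(S, u)            # t = S^{-1} S^{-H} h = Phi_vv^{-1} h
    return t / (h.conj() @ t)            # distortionless: w^H h = 1
```

Because the weights depend on Φ_vv only through S^{-1} and S^{-H}, no explicit covariance matrix inversion is needed per frame.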
Methodology
As illustrated in FIG. 6, the method begins by identifying a first set of segments of the audio signals received from the array of microphones, the first set comprising a combination of a speech signal and a noise signal, and by identifying a second set of segments comprising the noise signal in the absence of speech.
At operation 630, calculations are performed to generate a QR decomposition (QRD) and an inverse QR decomposition (IQRD) of the spatial covariance matrix of the noise-only segments. In some embodiments, the QRD and the IQRD may be calculated using a Cholesky decomposition.
At operation 640, a relative transfer function (RTF), associated with the speech signal of the noisy speech segments, is estimated. The estimation is based on the noisy speech segments, the QRD, and the IQRD.
At operation 650, a set of beamforming weights is calculated based on a multiplicative product of the estimated RTF and the IQRD. The beamforming weights are configured to steer a beam of the array of microphones in the direction of the source of the speech signal. In some embodiments, the source of the speech signal may be in motion relative to the array of microphones, and the beam may be steered dynamically to track the moving speech source.
Of course, in some embodiments, additional operations may be performed, as previously described in connection with the system. For example, the audio signals received from the array of microphones may first be transformed into the frequency domain, using a Fourier transform. In some embodiments, the identification of the noisy speech segments and the noise-only segments may be based on a generalized likelihood ratio calculation.
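Pulling the pieces together, the following hedged end-to-end sketch processes one STFT frame in a single frequency bin, reusing the illustrative helpers sketched earlier (chol_update, update_cov, estimate_rtf, mvdr_weights, classify_frame). The function speech_presence_prob is a hypothetical stand-in, since the closed-form SPP expression is not reproduced in this text; the state-dictionary layout and parameter values are likewise assumptions:

```python
import numpy as np

def speech_presence_prob(z, Phi_zz):
    """Hypothetical stand-in for the multichannel SPP computation."""
    snr_proxy = max(float((np.abs(z)**2).mean() - 1.0), 0.0)  # white noise has unit power
    return snr_proxy / (1.0 + snr_proxy)

def process_frame(x, state, lam=0.95):
    """One frame, one frequency bin. state holds Lf (noise Cholesky factor,
    Phi_vv = Lf Lf^H), Phi_zz (whitened covariance), and h (current RTF)."""
    Lf, Phi_zz, h = state["Lf"], state["Phi_zz"], state["h"]
    S = Lf.conj().T                            # Phi_vv = S^H S
    z = np.linalg.solve(S.conj().T, x)         # whitened input z = S^{-H} x
    p = speech_presence_prob(z, Phi_zz)        # multichannel SPP (stand-in)
    label = classify_frame(p)
    if label == "noise":
        Lf = chol_update(Lf, x, lam)           # track the noise QRD/IQRD
    elif label == "noisy_speech":
        Phi_zz = update_cov(Phi_zz, z, lam)    # track whitened covariance
        h = estimate_rtf(Phi_zz, Lf.conj().T)  # re-estimate the RTF
    w = mvdr_weights(h, Lf.conj().T)           # updated beamforming weights
    state.update(Lf=Lf, Phi_zz=Phi_zz, h=h)
    return w.conj() @ x, state                 # y(l,k) = w^H x(l,k)
```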
Example System
In some embodiments, platform 130 may comprise any combination of a processor 720, a memory 730, beamforming system 108, 110, audio processing system 112, a network interface 740, an input/output (I/O) system 750, a user interface 760, a sensor (microphone) array 106, and a storage system 770. As can be further seen, a bus and/or interconnect 792 is also provided to allow for communication between the various components listed above and/or other components not shown. Platform 130 can be coupled to a network 794 through network interface 740 to allow for communications with other computing devices, platforms, or resources. Other componentry and functionality not reflected in the block diagram of FIG. 7 will be apparent in light of this disclosure.
Processor 720 can be any suitable processor, and may include one or more coprocessors or controllers, such as a graphics processing unit, an audio processor, or a hardware accelerator, to assist in control and processing operations associated with system 700. In some embodiments, the processor 720 may be implemented as any number of processor cores. The processor (or processor cores) may be any type of processor, such as, for example, a microprocessor, an embedded processor, a digital signal processor (DSP), a graphics processor (GPU), a network processor, a field programmable gate array, or other device configured to execute code. The processors may be multithreaded cores in that they may include more than one hardware thread context (or "logical processor") per core. Processor 720 may be implemented as a complex instruction set computer (CISC) or a reduced instruction set computer (RISC) processor. In some embodiments, processor 720 may be configured as an x86 instruction set compatible processor.
Memory 730 can be implemented using any suitable type of digital storage including, for example, flash memory and/or random access memory (RAM). In some embodiments, the memory 730 may include various layers of memory hierarchy and/or memory caches as are known to those of skill in the art. Memory 730 may be implemented as a volatile memory device such as, but not limited to, a RAM, dynamic RAM (DRAM), or static RAM (SRAM) device. Storage system 770 may be implemented as a non-volatile storage device such as, but not limited to, one or more of a hard disk drive (HDD), a solid-state drive (SSD), a universal serial bus (USB) drive, an optical disk drive, a tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device. In some embodiments, storage 770 may comprise technology to provide increased storage performance and enhanced protection for valuable digital media when multiple hard drives are included.
Processor 720 may be configured to execute an Operating System (OS) 780 which may comprise any suitable operating system, such as Google Android (Google Inc., Mountain View, Calif.), Microsoft Windows (Microsoft Corp., Redmond, Wash.), Apple OS X (Apple Inc., Cupertino, Calif.), Linux, or a real-time operating system (RTOS). As will be appreciated in light of this disclosure, the techniques provided herein can be implemented without regard to the particular operating system provided in conjunction with system 700, and therefore may also be implemented using any suitable existing or subsequently-developed platform.
Network interface circuit 740 can be any appropriate network chip or chipset which allows for wired and/or wireless connection between other components of computer system 700 and/or network 794, thereby enabling system 700 to communicate with other local and/or remote computing systems, servers, cloud-based servers, and/or other resources. Wired communication may conform to existing (or yet to be developed) standards, such as, for example, Ethernet. Wireless communication may conform to existing (or yet to be developed) standards, such as, for example, cellular communications including LTE (Long Term Evolution), Wireless Fidelity (Wi-Fi), Bluetooth, and/or Near Field Communication (NFC). Exemplary wireless networks include, but are not limited to, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, cellular networks, and satellite networks.
I/O system 750 may be configured to interface between various I/O devices and other components of computer system 700. I/O devices may include, but not be limited to, user interface 760 and sensor array 106 (e.g., an array of microphones). User interface 760 may include devices (not shown) such as a display element, touchpad, keyboard, mouse, and speaker. I/O system 750 may include a graphics subsystem configured to perform processing of images for rendering on a display element. The graphics subsystem may be a graphics processing unit or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple the graphics subsystem and the display element. For example, the interface may be any of a high definition multimedia interface (HDMI), DisplayPort, wireless HDMI, and/or any other suitable interface using wireless high definition compliant techniques. In some embodiments, the graphics subsystem could be integrated into processor 720 or any chipset of platform 130.
It will be appreciated that in some embodiments, the various components of the system 700 may be combined or integrated in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.
Beamforming system 108, 110 is configured to perform QRD-MVDR based adaptive acoustic beamforming, as described previously. Beamforming system 108, 110 may include any or all of the circuits/components illustrated in the figures described above.
In some embodiments, these circuits may be installed local to system 700, as shown in the example embodiment of FIG. 7.
In various embodiments, system 700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennae, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the radio frequency spectrum and so forth. When implemented as a wired system, system 700 may include components and interfaces suitable for communicating over wired communications media, such as input/output adapters, physical connectors to connect the input/output adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and so forth. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted pair wire, coaxial cable, fiber optics, and so forth.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices, digital signal processors, FPGAs, logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power level, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.
The various embodiments disclosed herein can be implemented in various forms of hardware, software, firmware, and/or special purpose processors. For example, in one embodiment at least one non-transitory computer readable storage medium has instructions encoded thereon that, when executed by one or more processors, cause one or more of the beamforming methodologies disclosed herein to be implemented. The instructions can be encoded using a suitable programming language, such as C, C++, object oriented C, Java, JavaScript, Visual Basic .NET, Beginner's All-Purpose Symbolic Instruction Code (BASIC), or alternatively, using custom or proprietary instruction sets. The instructions can be provided in the form of one or more computer software applications and/or applets that are tangibly embodied on a memory device, and that can be executed by a computer having any suitable architecture. In one embodiment, the system can be hosted on a given website and implemented, for example, using JavaScript or another suitable browser-based technology. For instance, in certain embodiments, the system may leverage processing resources provided by a remote computer system accessible via network 794. In other embodiments, the functionalities disclosed herein can be incorporated into other software applications, such as, for example, audio and video conferencing applications, robotic applications, smart home applications, and fitness applications. The computer software applications disclosed herein may include any number of different modules, sub-modules, or other components of distinct functionality, and can provide information to, or receive information from, still other components. These modules can be used, for example, to communicate with input and/or output devices such as a display screen, a touch sensitive surface, a printer, and/or any other suitable device. Other componentry and functionality not reflected in the illustrations will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware or software configuration. Thus, in other embodiments system 700 may comprise additional, fewer, or alternative subcomponents as compared to those included in the example embodiment of FIG. 7.
The aforementioned non-transitory computer readable medium may be any suitable medium for storing digital information, such as a hard drive, a server, a flash memory, and/or random access memory (RAM), or a combination of memories. In alternative embodiments, the components and/or modules disclosed herein can be implemented with hardware, including gate level logic such as a field-programmable gate array (FPGA), or alternatively, a purpose-built semiconductor such as an application-specific integrated circuit (ASIC). Still other embodiments may be implemented with a microcontroller having a number of input/output ports for receiving and outputting data, and a number of embedded routines for carrying out the various functionalities disclosed herein. It will be apparent that any suitable combination of hardware, software, and firmware can be used, and that other embodiments are not limited to any particular system architecture.
Some embodiments may be implemented, for example, using a machine readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform methods and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, process, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, such as memory, removable or non-removable media, erasable or non-erasable media, writeable or rewriteable media, digital or analog media, hard disk, floppy disk, compact disk read only memory (CD-ROM), compact disk recordable (CD-R) memory, compact disk rewriteable (CD-RW) memory, optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high level, low level, object oriented, visual, compiled, and/or interpreted programming language.
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like refer to the action and/or process of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (for example, electronic) within the registers and/or memory units of the computer system into other data similarly represented as physical quantities within the registers, memory units, or other such information storage transmission or displays of the computer system. The embodiments are not limited in this context.
The terms “circuit” or “circuitry,” as used in any embodiment herein, are functional and may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuitry may include a processor and/or controller configured to execute one or more instructions to perform one or more operations described herein. The instructions may be embodied as, for example, an application, software, firmware, etc. configured to cause the circuitry to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a computer-readable storage device. Software may be embodied or implemented to include any number of processes, and processes, in turn, may be embodied or implemented to include any number of threads, etc., in a hierarchical fashion. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system-on-a-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc. Other embodiments may be implemented as software executed by a programmable control device. In such cases, the terms “circuit” or “circuitry” are intended to include a combination of software and hardware such as a programmable control device or a processor capable of executing the software. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.
Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood by an ordinarily-skilled artisan, however, that the embodiments may be practiced without these specific details. In other instances, well known operations, components and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments. In addition, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described herein. Rather, the specific features and acts described herein are disclosed as example forms of implementing the claims.
The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.
Example 1 is a processor-implemented method for audio beamforming, the method comprising: identifying, by a processor-based system, a first set of segments of a plurality of audio signals received from an array of one or more microphones, the first set of segments comprising a combination of a speech signal and a noise signal; identifying, by the processor-based system, a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal; calculating, by the processor-based system, a QR decomposition (QRD) of a spatial covariance matrix, and an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments; estimating, by the processor-based system, a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and calculating, by the processor-based system, a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
Example 2 includes the subject matter of Example 1, further comprising transforming the plurality of audio signals to the frequency domain, using a Fourier transform.
Example 3 includes the subject matter of Examples 1 or 2, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
Example 4 includes the subject matter of any of Examples 1-3, wherein the QRD and the IQRD are calculated using a Cholesky decomposition.
Example 5 includes the subject matter of any of Examples 1-4, further comprising updating the spatial covariance matrix based on a recursive average of previously calculated spatial covariance matrices.
Example 6 includes the subject matter of any of Examples 1-5, wherein the RTF estimation further comprises: calculating a spatial covariance matrix based on the identified first set of segments; estimating an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and normalizing the estimated eigenvector to a selected reference microphone of the array of microphones.
Example 7 includes the subject matter of any of Examples 1-6, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
Example 8 includes the subject matter of any of Examples 1-7, further comprising applying the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
Example 9 is a system for audio beamforming, the system comprising: a noisy speech indicator circuit to identify a first set of segments of a plurality of audio signals received from an array of microphones, the first set of segments comprising a combination of a speech signal and a noise signal; a noise indicator circuit to identify a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal; a noise tracking circuit to calculate a QR decomposition (QRD) of a spatial covariance matrix, and to calculate an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments; a speech tracking circuit to estimate a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and a weight calculation circuit to calculate a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
Example 10 includes the subject matter of Example 9, further comprising a STFT circuit to transform the plurality of audio signals to the frequency domain, using a Fourier transform.
Example 11 includes the subject matter of Examples 9 or 10, wherein the noise tracking circuit further comprises a QR decomposition circuit to calculate the QRD using a Cholesky decomposition, and an inverse QR decomposition circuit to calculate the IQRD using the Cholesky decomposition.
Example 12 includes the subject matter of any of Examples 9-11, wherein the speech tracking circuit further comprises: a noisy speech covariance update circuit to calculate a spatial covariance matrix based on the identified first set of segments; an eigenvector estimation circuit to estimate an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and a scaling and transformation circuit to normalize the estimated eigenvector to a selected reference microphone of the array of microphones.
Example 13 includes the subject matter of any of Examples 9-12, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
Example 14 includes the subject matter of any of Examples 9-13, further comprising a beamformer circuit to apply the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
Example 15 includes the subject matter of any of Examples 9-14, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
Example 16 is at least one non-transitory computer readable storage medium having instructions encoded thereon that, when executed by one or more processors, result in the following operations for audio beamforming, the operations comprising: identifying a first set of segments of a plurality of audio signals received from an array of microphones, the first set of segments comprising a combination of a speech signal and a noise signal; identifying a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal; calculating a QR decomposition (QRD) of a spatial covariance matrix, and an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments; estimating a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and calculating a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
Example 17 includes the subject matter of Example 16, further comprising the operation of pre-processing the plurality of audio signals to transform the audio signals to the frequency domain, the pre-processing including performing a Fourier transform on the audio signals.
Example 18 includes the subject matter of Examples 16 or 17, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
Example 19 includes the subject matter of any of Examples 16-18, wherein the QRD and the IQRD are calculated using a Cholesky decomposition.
Example 20 includes the subject matter of any of Examples 16-19, further comprising the operation of updating the spatial covariance matrix based on a recursive average of previously calculated spatial covariance matrices.
Example 21 includes the subject matter of any of Examples 16-20, wherein the RTF estimation further comprises the operations of: calculating a spatial covariance matrix based on the identified first set of segments; estimating an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and normalizing the estimated eigenvector to a selected reference microphone of the array of microphones.
Example 22 includes the subject matter of any of Examples 16-21, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
Example 23 includes the subject matter of any of Examples 16-22, further comprising the operations of applying the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
Example 24 is a system for audio beamforming, the system comprising: means for identifying a first set of segments of a plurality of audio signals received from an array of one or more microphones, the first set of segments comprising a combination of a speech signal and a noise signal; means for identifying a second set of segments of the plurality of audio signals, the second set of segments comprising the noise signal; means for calculating a QR decomposition (QRD) of a spatial covariance matrix, and an inverse QR decomposition (IQRD) of the spatial covariance matrix, the spatial covariance matrix based on the second set of identified segments; means for estimating a relative transfer function (RTF) associated with the speech signal of the first set of identified segments, the estimation based on the first set of identified segments, the QRD, and the IQRD; and means for calculating a plurality of beamforming weights based on a multiplicative product of the estimated RTF and the IQRD, the beamforming weights to steer a beam of the array of microphones in a direction associated with a source of the speech signal.
Example 25 includes the subject matter of Example 24, further comprising means for transforming the plurality of audio signals to the frequency domain, using a Fourier transform.
Example 26 includes the subject matter of Examples 24 or 25, wherein the calculated beamforming weights are to steer a beam of the array of microphones to track motion of the source of the speech signal relative to the array of microphones.
Example 27 includes the subject matter of any of Examples 24-26, wherein the QRD and the IQRD are calculated using a Cholesky decomposition.
Example 28 includes the subject matter of any of Examples 24-27, further comprising means for updating the spatial covariance matrix based on a recursive average of previously calculated spatial covariance matrices.
Example 29 includes the subject matter of any of Examples 24-28, wherein the RTF estimation further comprises: means for calculating a spatial covariance matrix based on the identified first set of segments; means for estimating an eigenvector associated with the direction of the source of the speech signal, the eigenvector estimation based on the calculated spatial covariance matrix based on the identified first set of segments; and means for normalizing the estimated eigenvector to a selected reference microphone of the array of microphones.
Example 30 includes the subject matter of any of Examples 24-29, wherein the identifying of the first set of segments and the second set of segments, of the plurality of audio signals, is based on a generalized likelihood ratio calculation.
Example 31 includes the subject matter of any of Examples 24-30, further comprising means for applying the calculated beamforming weights as scale factors to the plurality of audio signals received from the array of microphones and summing the scaled audio signals to generate an estimate of the speech signal.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner, and may generally include any set of one or more elements as variously disclosed or otherwise demonstrated herein.
Inventors: Vered Bar Bracha; Anna Barnov; Morag Agmon; Shmulik Markovich-Golan