A system and method for enhancing audio reproduced by an audio reproduction device is described. A plurality of convolution coefficients are generated for a predefined space. A digital audio signal is modified based on the generated plurality of convolution coefficients to generate a convolved digital audio signal. The convolved digital audio signal is converted to a convolved analog audio signal. The convolved analog audio signal is fed to the audio reproduction device.
7. A method for enhancing audio reproduced by an audio reproduction device, including:
generating a plurality of convolution coefficients for a predefined space;
modifying a digital audio signal based on the generated plurality of convolution coefficients for the predefined space, to generate a convolved digital audio signal;
generating a convolved analog audio signal based on the convolved digital audio signal;
feeding the convolved analog audio signal to the audio reproduction device; and
adding a correction to the digital audio signal with mid-side modification based on a middle-side filter, before generating the convolved analog audio signal.
17. A system for enhancing audio reproduced by an audio reproduction device, including:
a plurality of convolution coefficients for a predefined space is generated;
a digital audio signal is modified based on the generated plurality of convolution coefficients for the predefined space, to generate a convolved digital audio signal;
a convolved analog audio signal is generated based on the convolved digital audio signal;
the convolved analog audio signal is fed to the audio reproduction device; and
a correction is added to the convolved digital audio signal with mid-side modification based on a middle-side filter, before the convolved analog audio signal is generated.
11. A system for enhancing audio reproduced by an audio reproduction device, including:
a plurality of convolution coefficients for a predefined space is generated, based on both direct and reflected sound waves from the predefined space, for at least two channels;
at least two channels of digital audio signal are modified based on the generated plurality of convolution coefficients for the predefined space, to generate a convolved digital audio signal for the at least two channels of digital audio signal with effect of the predefined space;
a correction is added to the generated convolved digital audio signal with mid-side modification based on a middle-side filter to generate a modified convolved digital audio signal;
at least two channels of convolved analog audio signal with effect of the predefined space are generated based on the at least two channels of modified convolved digital audio signal with effect of the predefined space; and
the at least two channels of convolved analog audio signal with effect of the predefined space are fed to the audio reproduction device.
1. A method for enhancing audio reproduced by an audio reproduction device, including:
generating a plurality of convolution coefficients for a predefined space, based on both direct and reflected sound waves from the predefined space, for at least two channels;
modifying at least two channels of digital audio signal received for reproduction based on the generated plurality of convolution coefficients for the predefined space, to generate a convolved digital audio signal for the at least two channels of digital audio signal with effect of the predefined space;
adding a correction to the generated convolved digital audio signal with mid-side modification based on a middle-side filter to generate a modified convolved digital audio signal;
generating at least two channels of convolved analog audio signal with effect of the predefined space, based on the modified convolved digital audio signal with effect of the predefined space, for the at least two channels of digital audio signal;
and feeding the at least two channels of convolved analog audio signal with effect of the predefined space to the audio reproduction device, for reproduction.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
optimizing frequency response characteristics of the modified analog signal for a left driver and a right driver of the audio reproduction device, by an analog output tuner.
8. The method of
9. The method of
10. The method of
12. The system of
14. The system of
15. The system of
16. The system of
an analog output tuner configured to optimize frequency response characteristics of the modified analog signal for a left driver and a right driver of the audio reproduction device.
18. The system of
19. The system of
20. The system of
This application claims priority to provisional patent application No. 62/873,803 filed on Jul. 12, 2019, entitled “SYSTEM AND METHOD FOR AN AUDIO REPRODUCTION DEVICE”, which is incorporated herein by reference in its entirety.
The present invention relates generally to an audio reproduction device, and, more particularly, to a headphone.
A system and method for an audio reproduction device is described. Audio sound reproduction devices may include headphones and earbuds. Humans have evolved to hear sounds within physical spaces. The physical configuration of our two ears, our head between them, and the ways in which we perceive sound are the result of the interface with, and the physical characteristics of, the environment within which sounds are created and transported. However, since the introduction of the Walkman® in 1979, headphones (and later earbuds) have become very popular ways to enjoy listening to sound. By closely coupling two sound transducers with our two ears independently, all of the environmental effects and the natural perception of sound are circumvented. This creates a synthetic, artificial listening environment, and substantially changes our psychoacoustic interpretation of the sounds that we hear.
Further, entertainment content such as music and film soundtracks is typically created in carefully designed physical environments (studios and sound stages). Therefore, when listening to the resulting music or film soundtracks through headphones, our psychoacoustic experience is typically significantly different from that intended by the creators, producers or editors of the content. This presents numerous problems. In some examples, creating content using headphones is highly challenging, therefore requiring carefully designed studio spaces and expensive monitor loudspeakers. In some examples, a listener's psychoacoustic experience while consuming audible content is different when accessed through loudspeakers versus headphones. There is a need to solve one or more of these problems. It is with these needs in mind that this disclosure arises.
In one embodiment, a method for enhancing audio reproduced by an audio reproduction device is disclosed. A plurality of convolution coefficients are generated for a predefined space. A digital audio signal is modified based on the generated plurality of convolution coefficients to generate a convolved digital audio signal. The convolved digital audio signal is converted to a convolved analog audio signal. The convolved analog audio signal is fed to the audio reproduction device.
In another embodiment, a system for enhancing audio reproduced by an audio reproduction device is disclosed. A plurality of convolution coefficients are generated for a predefined space. A digital audio signal is modified based on the generated plurality of convolution coefficients to generate a convolved digital audio signal. The convolved digital audio signal is converted to a convolved analog audio signal. The convolved analog audio signal is fed to the audio reproduction device.
This brief summary has been provided so that the nature of the disclosure may be understood quickly. A more complete understanding of the disclosure can be obtained by reference to the following detailed description of the preferred embodiments thereof in connection with the attached drawings.
The foregoing and other features of several embodiments are now described with reference to the drawings. In the drawings, the same components have the same reference numerals. The illustrated embodiments are intended to illustrate but not limit the invention. The drawings include the following Figures:
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The embodiments herein disclose an audio reproduction device. Referring now to the drawings, where similar reference characters denote corresponding features consistently throughout the figures, various examples of this disclosure are described.
According to an example of this disclosure, real-time convolution is applied to the digital sound signals, with separate convolution functions for each incoming channel and for each ear. For example, with a two-channel stereo signal, convolutions will be applied in real time for the left channel to the left ear, sometimes referred to as the LL convolution, the left channel to the right ear, sometimes referred to as the LR convolution, the right channel to the left ear, sometimes referred to as the RL convolution, and the right channel to the right ear, sometimes referred to as the RR convolution.
In one example, each convolution function applies pre-calculated coefficients, associated with the impulse response data from a specific physical space. The number of coefficients for each convolution set can be calculated as follows: n=s*t, where n is the number of coefficients per convolution set, s is the sample rate of the digital signal source in samples per second, and t is the maximum convolution time in seconds. For example, with a signal sample rate of 64,000 samples per second and 0.25 seconds of maximum convolution time, n=16,000 coefficients are required per convolution set.
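Purely as an illustration, and not as part of the claimed implementation, the four convolution paths and the coefficient count described above might be sketched as follows. NumPy/SciPy are assumed, and the coefficient arrays h_ll, h_lr, h_rl and h_rr are hypothetical placeholders standing in for coefficients measured from a real space.

```python
# Illustrative sketch only: four-path (LL, LR, RL, RR) convolution of a
# two-channel signal with pre-calculated coefficient sets for a modeled space.
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 64_000                  # s: samples per second
max_conv_time = 0.25                  # t: maximum convolution time in seconds
n = int(sample_rate * max_conv_time)  # n = s * t = 16,000 coefficients per set

# Hypothetical coefficient sets, one per path, each of length n.
h_ll = np.zeros(n); h_ll[0] = 1.0     # placeholder: pass-through response
h_lr = np.zeros(n)
h_rl = np.zeros(n)
h_rr = np.zeros(n); h_rr[0] = 1.0

def convolve_stereo(left, right):
    """Apply the LL, LR, RL and RR convolutions and mix the results per ear."""
    out_left = fftconvolve(left, h_ll)[:len(left)] + fftconvolve(right, h_rl)[:len(right)]
    out_right = fftconvolve(left, h_lr)[:len(left)] + fftconvolve(right, h_rr)[:len(right)]
    return out_left, out_right
```

FFT-based convolution is used here only to keep the sketch short; the disclosure does not prescribe a particular convolution algorithm or block size for real-time operation.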
In one example, a non-linear bass distortion (NLBD) function generator is used to digitally generate a controlled harmonic distortion (sometimes referred to as CH distortion) associated with physical subwoofers. The digital NLBD function generator includes a low-pass filter to separate only the low frequencies, a circuit to generate even and/or odd harmonics, and another low-pass filter. The generated CH distortion is then mixed with the original signal.
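As a non-authoritative sketch of the NLBD chain just described (low-pass, harmonic generation, low-pass, mix), the following assumes SciPy Butterworth filters; the cutoff frequency, filter order, harmonic gains and mix level are illustrative assumptions rather than values taken from the disclosure.

```python
# Illustrative NLBD sketch: isolate low frequencies, generate even and odd
# harmonics with a simple nonlinearity, smooth them, then mix with the original.
import numpy as np
from scipy.signal import butter, lfilter

def lowpass(x, cutoff_hz, fs, order=4):
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return lfilter(b, a, x)

def nlbd(x, fs, cutoff_hz=120.0, even_gain=0.3, odd_gain=0.2, mix=0.5):
    bass = lowpass(x, cutoff_hz, fs)                      # first low-pass: bass only
    harmonics = even_gain * bass**2 + odd_gain * bass**3  # even (x^2) and odd (x^3) harmonics
    harmonics = lowpass(harmonics, 4 * cutoff_hz, fs)     # second low-pass on the generated content
    return x + mix * harmonics                            # mix CH distortion with the original signal
```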
In one example, a middle-side filter (MS filter) circuit is used to adjust the physical separation of the original sound source, which may be referred to as the perceived “sound stage”. In the case of a stereo signal, the middle-side filter determines the perceived distance between the right and left virtual speakers within this sound stage. One implementation of an MS filter includes summing the signals from the left and right channels to create a “middle” signal. It also includes calculating the difference between the signals from the left and right channels to create a separate “side” signal. The middle channel then contains just the information that appears in both the left and right channels, and the side channel contains all the information that differs between the left and right channels. In other words, the middle signal represents sounds that would be perceived by a listener to be emanating mainly from a center location. Similarly, the side signal represents sounds that would be perceived by a listener to be emanating from either the left or right sides of the perceived sound stage. Therefore, by independently amplifying or attenuating the middle and side signals, it is possible to emphasize or reduce sounds that appear to originate from either the center or the left and right sides of the perceived sound stage. Among other things, this has the effect of determining how far apart the virtual speakers are located within the perceived sound stage. After applying the amplification or attenuation, the middle and side signals are summed together and divided by 2 to re-create the left signal, and the side signal is subtracted from the middle signal and the result divided by 2 to re-create the right signal.
Given:
L=left signal
R=right signal
M=middle signal
S=side signal
MG=center gain; >1 represents amplification, 0<MG<1 represents attenuation
SG=side gain; >1 represents amplification, 0<SG<1 represents attenuation
Then:
M=MG*(L+R) Equation 1
S=SG*(L−R) Equation 2
Finally:
Recreated Left Signal L′=0.5*(M+S) Equation 3
Recreated Right Signal R′=0.5*(M−S) Equation 4
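For illustration only, Equations 1 through 4 translate directly into the following sketch; the gain values passed to the function are placeholders, not recommended settings.

```python
# Illustrative mid-side filter implementing Equations 1-4.
import numpy as np

def ms_filter(left, right, mid_gain=1.0, side_gain=1.0):
    m = mid_gain * (left + right)     # Equation 1: middle signal
    s = side_gain * (left - right)    # Equation 2: side signal
    left_out = 0.5 * (m + s)          # Equation 3: recreated left signal
    right_out = 0.5 * (m - s)         # Equation 4: recreated right signal
    return left_out, right_out
```

With both gains set to 1, the recreated signals equal the original left and right channels; raising side_gain above 1 widens the perceived sound stage, while raising mid_gain emphasizes centered content.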
A combination of one or more of the convolution coefficients, the CH distortion and the MS filter may be applied to the original digital sound. Such a corrected digital sound may assist in recreating the perception of listening to sound as if it were being reproduced by loudspeakers in a defined (modeled) space. For example, the LL, LR, RL and RR convolutions emulate the sounds that would be received by the listener's ears within the modeled space. Instead of perceiving a narrow phantom center channel, the listener's brain reconstructs the processed left and right analog signals reproduced by the left and right headphone drivers into natural left and right channels, and enables reconstruction of an accurate center channel.
To generate the required convolution coefficients, the desired (modeled) space must be evaluated. Now, referring to
A left ear microphone 306 and a right ear microphone 308 are selectively placed within the desired space 300, for example, at locations that may substantially correspond to a listener's left ear and right ear respectively.
Now, referring to
For example, the signal received at the left ear microphone 306 from the left speaker 302 is deconvolved to generate the LL coefficients. The signal received at the right ear microphone 308 from the left speaker 302 is deconvolved to generate the LR coefficients.
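The disclosure does not prescribe a particular deconvolution algorithm. Purely as an illustration, one common frequency-domain approach to estimating a set of impulse-response coefficients from a known test stimulus and the signal recorded at an ear microphone is sketched below; the function name and the regularization constant eps are assumptions.

```python
# Illustrative frequency-domain deconvolution: estimate impulse-response
# coefficients from a known stimulus and the recording at an ear microphone.
import numpy as np

def estimate_coefficients(stimulus, recorded, n_coeffs, eps=1e-8):
    size = len(stimulus) + len(recorded)
    S = np.fft.rfft(stimulus, size)
    R = np.fft.rfft(recorded, size)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)   # regularized spectral division
    h = np.fft.irfft(H, size)
    return h[:n_coeffs]                           # keep n = s * t coefficients per set
```

Running such an estimate once per speaker and microphone pair would yield the LL, LR, RL and RR coefficient sets referenced above.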
Referring to
Now, referring to
For example, the signal received at the left ear microphone 306 from the right speaker 304 is deconvolved to generate the RL coefficients. Referring to
In one example, a digital signal processor may be configured to modify an input signal based on the convolution coefficients measured for a modeled space. Now, referring to
The communication management engine 402 is configured to communicate with external devices, for example, computing device 416, over a wired connection 418 or a wireless connection 420. In one example, the communication management engine 402 is configured to communicate with the computing device 416 and receive various parameters for configuring the audio system 400, including the digital signal processor 408. In one example, the communication management engine 402 is configured to receive digital audio signal to be reproduced by the audio system 400, over the wired connection 418 or wireless connection 420. The received digital audio signal (for example, two channel digital audio signals L and R) is fed to the DSP 408.
The analog input tuner 404 is configured to communicate with an analog sound source 422, for example, over an analog wired connection 424, to receive audio signal to be reproduced by the audio system 400. In one example, a two-channel audio signal (left and right) is received. The analog input tuner 404 is configured to optimize impedance and frequency response characteristics of the analog audio signal received from the analog audio source 422. The output of the analog input tuner 404 is fed to the A/D converter 406, to generate digital audio signal (for example, two channel digital audio signals L and R). The digital audio signal is fed to the DSP 408.
The DSP 408 processes the received digital audio signal, applying modifications to the received digital audio signal, based on the convolution coefficients, generated CH distortion and the middle-side filter (MS filter) digital settings. Modified digital audio signal is then fed to the D/A converter 410 to generate modified analog audio signal. The modified analog audio signal in some examples may be amplified by the amplifier 412 to generate an amplified modified analog audio signal. The amplified modified analog audio signal is then fed to an analog output tuner 414. The analog output tuner 414 feeds the amplified modified analog audio signal to left driver 426 and right driver 428, for reproduction of the amplified modified analog audio signal. As one skilled in the art appreciates, if the amplifier 412 is not used, the modified analog audio signal will be fed to the analog output tuner 414 which in turn will feed the modified analog audio signal to the left driver 426 and the right driver 428, for reproduction of the modified analog audio signal. The analog output tuner 414 is configured to optimize impedance and frequency response characteristics of the modified analog audio signal for the left driver 426 and the right driver 428.
Having described the general operation of the audio system 400, functions and features of the DSP 408 will now be described. In general, the DSP 408 is configured to receive digital audio signal (for example, as L and R signals) from the A/D converter 406 (for audio received from an analog audio source) or the communication management engine 402 (for audio received from a digital audio source). The DSP 408 then selectively modifies the received digital audio signal to generate the modified digital audio signal and output the modified digital audio signal, to be fed to the D/A converter 410.
The DSP 408 includes a coefficients and parameters data store 430, a selected convolution coefficients data store 432, a selected DSP filter parameters data store 434, an LL convolution generator 436, an LR convolution generator 438, an RL convolution generator 440, an RR convolution generator 442, a CH distortion generator 444 and a middle-side filter circuit 446. The coefficients and parameters data store 430 stores various coefficients and parameters for one or more modeled spaces. In one example, various coefficients and parameters are received by the communication management engine 402, from an external computing device, and loaded into the coefficients and parameters data store 430.
When a specific modeled space is selected, corresponding coefficients and parameters are retrieved from the coefficients and parameters data store 430 and selectively loaded into the selected convolution coefficients data store 432 and the selected DSP filter parameters data store 434. As one skilled in the art appreciates, the selected convolution coefficients data store 432 and the selected DSP filter parameters data store 434 may be configured to be high speed memory, so that data may be retrieved from them at a speed to process the data in real time.
The LL convolution generator 436, the LR convolution generator 438, the RL convolution generator 440 and the RR convolution generator 442 selectively retrieve the selected convolution coefficients from the selected convolution coefficients data store 432 and apply the appropriate convolution to each of the channels (L and R) of the digital audio signal to generate a convolved digital audio signal. The convolved digital audio signal is then fed to the D/A converter 410, to generate a modified analog audio signal.
In one example, the CH distortion generator 444 adds CH distortion to the convolved digital audio signal. The middle-side filter circuit 446, based on the selected parameters, applies an appropriate correction to the convolved digital audio signal with CH distortion, to generate the modified digital audio signal. The modified digital audio signal is then fed to the D/A converter 410, to generate a modified analog audio signal.
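Tying the pieces together, a minimal sketch of this DSP stage could chain the modeled-space convolution, the CH distortion and the mid-side correction before the digital-to-analog conversion. It reuses the hypothetical convolve_stereo, nlbd and ms_filter helpers from the earlier sketches and is not the patented implementation; the gain values are placeholders.

```python
# Illustrative DSP pipeline for the stage described above: convolution
# (LL/LR/RL/RR), controlled harmonic distortion, then mid-side correction.
# Assumes the convolve_stereo, nlbd and ms_filter sketches defined earlier.

def dsp_process(left, right, fs):
    conv_left, conv_right = convolve_stereo(left, right)  # apply modeled-space convolution
    dist_left = nlbd(conv_left, fs)                       # add CH distortion per channel
    dist_right = nlbd(conv_right, fs)
    return ms_filter(dist_left, dist_right,               # mid-side sound-stage correction
                     mid_gain=1.0, side_gain=1.2)         # placeholder gains
```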
In one example, the audio system 400 may be selectively placed within an enclosure of an audio reproduction device 448. The audio reproduction device 448 may be a headphone with the left driver 426 and the right driver 428. Additionally, any power source needed to operate the audio system 400 may also be selectively placed within the enclosure of the audio reproduction device 448.
Now, referring to
Now, referring to
In block S604, a digital audio signal is modified based on the generated plurality of convolution coefficients, to generate a convolved digital audio signal. For example, as previously described with reference to
In block S606, a convolved analog audio signal is generated based on the generated convolved digital audio signal. For example, as previously described with reference to
In block S608, the generated convolved analog audio signal is fed to an audio reproduction device. For example, the generated convolved analog audio signal is fed to the audio reproduction device 448, as previously described.
People who create professional audio content, including but not limited to musicians, recording engineers, producers, sound producers and mixers, often struggle due to the limitations of traditional headphones. This requires them to seek professionally-treated physical spaces to deliver professional-sounding content. Such a space includes high-fidelity loudspeakers and carefully designed positioning and geometry of hard surfaces within the room, such as walls, ceilings and other reflective objects which shape the sound. The result is a space that delivers an optimal sound experience with the listener located at a well-defined location, sometimes referred to as the “sweet spot” in the room. However, it is not practical for many audio professionals to utilize sonically-treated spaces, such as recording studios. These spaces typically cost money, may be in inconvenient locations, and require advance reservations. Yet many professionals prefer to work with headphones.
The physical space emulation described in this disclosure enables creating all of the effects of a professionally-treated physical space within headphones, whenever and wherever inspiration strikes. By modeling multiple different recording studio spaces and allowing the user to alternately select among them, the content creator can even test their work in different virtual studios with the same set of headphones, even if the studios are geographically dispersed. For example, a recording engineer can test their work in an emulated studio located in Los Angeles, another in London, and a third in Nashville, all with the same set of headphones.
Our perception is trained to sense stereo sound in three-dimensional space. Traditional stereo headphones isolate our two ears and destroy that perception. Many people prefer to perceive sound with the sensation of an emulated 3D space. For example, with the emulation described in this disclosure, music sounds more natural, less fatiguing, and generally more enjoyable. Since most music is created in carefully designed recording studios, adding emulation of a studio space to music allows the listener to enjoy a sonic experience that is similar to that intended by the producer, recording engineer and artist who created it. Additionally, live venue spaces can also be emulated, allowing the listener to experience music as if she were hearing it in a dance club, concert hall, outside concert venue, or any other physical space which can be modeled.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.
Curd, Steven Elliott, Curd, Wendy Susan