Methods and systems for positioning audio signals on virtual soundstages. The mobile device may include one or more orientation components that generate orientation data tracking directional changes, which are used to position an audio signal on a virtual soundstage. A listener of the sound from the audio signal may be associated with the orientation data such that the positioning of the audio signal provides virtual acoustic presence on the virtual soundstage. The audio signal is received; the audio signal may be associated with a plurality of audio channels. The orientation data for positioning the audio signal is also received. A position for the audio signal on the virtual soundstage is determined based in part on the orientation data. The audio signal is then positioned on the virtual soundstage.
|
1. Non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed, enable a computing device to perform a method for positioning audio signals on virtual soundstages, the method comprising:
receiving, at a mobile device, an audio signal;
interpreting, using a decoder of the mobile device, a configuration mapping of the audio signal to a virtual soundstage,
wherein the configuration mapping comprises audio channels of the audio signal mapped to the virtual soundstage to simulate surround sound on a plurality of virtual speakers on the virtual soundstage,
wherein the configuration mapping includes audio cues and position indicators,
wherein the decoder interprets each of the audio cues and position indicators;
receiving orientation data from an orientation component at the mobile device, the orientation data used for positioning the audio signal; and
maintaining a position for the audio signal on the virtual soundstage based on the orientation data and the configuration mapping, wherein maintaining the position for the audio signal is based on:
(1) identifying individual parsed sound components associated with the plurality of virtual speakers; and
(2) retaining a source of sound components associated with the plurality of virtual speakers relative to a change in the mobile device position as captured in the orientation data.
18. A system for positioning audio signals on virtual soundstages, the system comprising:
an orientation component configured for:
(1) generating orientation data of a mobile device, wherein the orientation data tracks directional changes of the mobile device; and
(2) communicating the orientation data for positioning an audio signal based on the orientation data, wherein the orientation data includes multidimensional positioning data; and
a positioning component configured for:
(1) using a decoder to interpret a configuration mapping of the audio signal to a virtual soundstage,
wherein the configuration mapping comprises audio channels of the audio signal mapped to the virtual soundstage to simulate surround sound on a plurality of virtual speakers on the virtual soundstage,
wherein the configuration mapping includes audio cues and position indicators, and
wherein the audio cues comprise spatial cues and timing cues that simulate the audio signal as originating from multiple speakers on the virtual soundstage;
(2) receiving the orientation data from the orientation component, the orientation data used for positioning the audio signal; and
(3) maintaining a position for the audio signal on the virtual soundstage based on the orientation data and the configuration mapping, wherein maintaining the position for the audio signal is based on:
(1) identifying individual parsed sound components associated with the plurality of virtual speakers; and
(2) retaining a source of sound components associated with the plurality of virtual speakers relative to a change in the mobile device position as captured in the orientation data.
13. Non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed, enable a computing device to perform a method for positioning audio signals on virtual soundstages, the method comprising:
receiving, at a mobile phone or tablet, an audio signal having a first set of channels;
storing the audio signal having the first set of channels in a memory of the mobile phone or tablet;
generating, at the mobile phone or tablet, from the audio signal having the first set of channels, an audio signal having a second set of channels;
referencing, at the mobile phone or tablet, orientation data from the mobile phone or tablet, the orientation data utilized for positioning the audio signal having the second set of channels;
generating, at the mobile phone or tablet, a virtual surround sound audio signal, wherein generating the virtual surround sound audio signal comprises:
(1) determining, using a decoder, position indicators based on the orientation data for positioning the audio signal on a plurality of virtual speakers on a virtual soundstage; and
(2) determining, using the decoder, audio cues for simulating virtual surround sound for the audio signal;
communicating, from the mobile phone or tablet, the virtual surround sound audio signal comprising the position indicators and the audio cues to headphones operably connected to the mobile phone or tablet, wherein the virtual surround sound audio signal is output to two audio channels that simulate a plurality of virtual audio channels on the plurality of virtual speakers on the virtual soundstage;
referencing a change in the orientation data from an orientation component at the mobile phone or tablet;
maintaining a position for the audio signal on the virtual soundstage based on the change in orientation data and the position indicators and the audio cues, wherein maintaining the position of the audio signal is based on:
(1) identifying individual parsed sound components associated with the plurality of virtual speakers; and
(2) retaining a source of sound components associated with the plurality of virtual speakers relative to a change in the mobile device position as captured in the orientation data.
2. The media of
4. The media of, wherein the orientation component comprises at least one of:
a magnetometer;
an accelerometer; and
a gyroscope.
5. The media of
7. The media of
8. The media of
9. The media of
10. The media of
11. The media of
12. The media of
15. The media of
16. The media of
17. The media of
19. The system of
20. The system of
|
A high-level overview of the invention is provided here to disclose and to introduce a selection of concepts that are further described below in the detailed-description section. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
In brief and at a high level, this disclosure describes, among other things, systems and methods for positioning audio signals on virtual soundstages. A mobile device may include one or more orientation components that are used to generate orientation data. The orientation data include directional changes that are used in positioning an audio signal on a virtual soundstage. A listener of the sound from the audio signal may be associated with the orientation data such that the positioning of the audio signal provides virtual acoustic presence on the virtual soundstage. The virtual soundstage may include a plurality of speakers that are used to simulate the acoustic presence of the listener on the virtual soundstage. In operation, the audio signal is received; the audio signal may be associated with a plurality of audio channels. In embodiments, the audio signal is further converted to a virtual surround sound audio signal using aural cues; thus, virtual acoustic presence includes positioning simulated surround sound. The orientation data for positioning the audio signal is received from the one or more orientation components. A position for the audio signal on the virtual soundstage is determined based in part on the orientation data. The audio signal is then positioned on the virtual soundstage.
Illustrative embodiments of the present invention are described in detail below with reference to the attached drawing figures.
The subject matter of select embodiments of the present invention is described with specificity herein to meet statutory requirements. But the description itself is not intended to define what we regard as our invention, which is what the claims do. The claimed subject matter might be embodied in other ways to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. Throughout this disclosure, several acronyms and shorthand notations are used to aid the understanding of certain concepts pertaining to the associated system and services. These acronyms and shorthand notations are intended to help provide an easy methodology of communicating the ideas expressed herein and are not meant to limit the scope of the present invention.
Embodiments of our technology may be embodied as, among other things, a method, system, or set of instructions embodied on one or more computer-readable media. Computer-readable media include both volatile and nonvolatile media, removable and non-removable media, and contemplate media readable by a database, a switch, and various other network devices. Computer-readable media include media implemented in any way for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Media examples include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These technologies can store data momentarily, temporarily, or permanently.
A mobile device generally refers to a handheld computing device (e.g., handsets, smartphones, or tablets). It may include a display screen with touch input and/or a miniature keyboard. The mobile device may run an operating system and various types of software. In embodiments of the present invention, a mobile device includes a headphone or a headset, which may be combined with a microphone. Headphones may include functional features (e.g., processor, input/output port, memory, and orientation components) usually associated with a mobile device. Headphones may provide a range of functionality including game audio for video games. Mobile devices receive audio signals from either an internal audio source or an external audio source. For example, a tablet may have audio files stored in memory on the tablet, which are then played back at the tablet, or a smartphone may use wired or wireless technology to play back audio files stored at an external location. The audio source may refer to either a device communicating the audio signal or an audio file used to generate the audio signal. For example, headphones may be plugged into an external device, which then communicates the audio signal from the external device to the headphones, or a tablet may store an audio format (e.g., MP3) in memory that is communicated as an audio signal.
An audio signal generally refers to a representation of sound. Audio signals may be characterized in parameters such as bandwidth, power, and voltage levels. An audio signal may alternatively be represented as Pulse Code Modulation (PCM), which digitally represents sampled audio signals. Conventionally, PCM is the standard form of digital audio in computers. Sound may be stored in a variety of audio formats or physical methods used to store data. In some cases, sound may be presented as stereophonic sound (or stereo, as it is more commonly known), which provides direction and perspective to sound using two independent audio channels. In other cases, sound may alternatively be provided as multichannel sound (e.g., surround sound) that includes more than two audio channels that surround the listener. Generally, sound, such as stereo sound or surround sound, is perceived based on psychoacoustics. Psychoacoustics describes sound perception as a function of both the ear and the brain. In this regard, the sound a listener hears is not limited to the mechanical phenomenon of the ear registering the sound but also includes the way the brain of the listener makes meaning of the sound. Psychoacoustics also includes how a listener locates sound. Sound localization involves the brain locating the source of sound using differences in intensity, spectral cues, and timing cues. As such, psychoacoustics plays an important role in how a listener perceives sound in a physical space where the person is present.
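By way of illustration only, the following minimal sketch (in Python, with illustrative values not drawn from the disclosure) shows what a PCM representation is: a continuous signal is sampled at a fixed rate and quantized to integers, here a 440 Hz sine wave sampled at 44.1 kHz and quantized to 16 bits.

import numpy as np

# Illustrative parameters -- not values from the disclosure.
SAMPLE_RATE_HZ = 44_100      # samples taken per second
FREQUENCY_HZ = 440.0         # pitch of the test tone
DURATION_S = 1.0

# Sample the continuous signal at the fixed rate...
t = np.arange(int(SAMPLE_RATE_HZ * DURATION_S)) / SAMPLE_RATE_HZ
analog = np.sin(2 * np.pi * FREQUENCY_HZ * t)

# ...and quantize each sample to a 16-bit integer: the PCM representation.
pcm = np.round(analog * 32767).astype(np.int16)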
Surround sound may provide enriched sound reproduction quality of an audio source, in that additional channels from speakers surround the listener. Conventionally, surround sound is presented from a listener's forward arc. Surround sound perception is a function of sound localization: a listener's ability to identify the location or origin of a detected sound in direction and distance. Surround sound may use different types of media, including Digital Video Discs (DVD) and High Definition Television (HDTV) broadcasts encoded in compressed DOLBY DIGITAL and DTS formats. Surround sound or multichannel audio techniques are used to reproduce content as varied as music, speech, and natural and synthetic sounds for cinema, television, broadcasting, video games, or computers.
Surround sound can be created by using surround sound recording microphone techniques and/or mixing in surround sound for playback on an audio system using speakers encircling the listener to play the audio from different directions. Generating surround sound may further include mapping each source channel onto its own speaker. In this regard, the audio signal channels can be identified and applied to respective speakers. The audio signal may encode the mapping information such that the surround sound is rendered for playing by a decoder processing the mapping information into audio signals that are sent to different speakers. Surround sound may also include low-frequency effects that require only a fraction of the bandwidth of the other audio channels. This is usually the 0.1 channel in surround sound notation (e.g., 5.1 or 7.1). Low-frequency effects are directed to a speaker specifically designed for low-pitched sounds (e.g., a subwoofer).
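By way of a non-limiting illustration, a channel-to-speaker mapping of the kind described might be represented as follows; the channel names and structure are assumptions of this sketch, and the azimuth angles follow a common 5.1 layout rather than values fixed by the disclosure.

# Hypothetical configuration mapping for a 5.1 signal: each source channel is
# assigned its own virtual speaker, given as an azimuth on the virtual
# soundstage (0 degrees = directly ahead, positive = toward the right).
SURROUND_5_1_MAPPING = {
    "front_left":     {"azimuth_deg": -30.0},
    "center":         {"azimuth_deg":   0.0},
    "front_right":    {"azimuth_deg":  30.0},
    "surround_left":  {"azimuth_deg": -110.0},
    "surround_right": {"azimuth_deg":  110.0},
    # The 0.1 (low-frequency effects) channel is not strongly localized and is
    # routed to the subwoofer rather than positioned on the soundstage.
    "lfe":            {"azimuth_deg": None},
}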
Further, surround sound can be presented as simulated surround sound (e.g., virtual surround sound) in a two-dimensional sound field with headphones. Simulated surround sound may include surround sound achieved by mastering techniques that use digital signal processing analysis of stereo recordings to parse out individual sounds into component panorama positions. As such, mobile devices may be configured to manipulate audio signals to change sound perception. Mobile devices may utilize technology in the form of chipsets and/or software that support digital processing to simulate surround sound from a 2-channel stereo input or other multichannel input (e.g., DOLBY HEADPHONE technology or DTS SURROUND SENSATION technology). The technology includes algorithms that create an acoustic illusion of surround sound. Such technology may be incorporated into any type of audio or video product normally featuring a headphone outlet. The technology may be implemented using a chipset (e.g., DOLBY ADSST-MELODY 1000) that accepts a number of digital audio formats for digital audio processing. The technology may alternatively be implemented using software or application-specific integrated circuits.
Digital analysis techniques enable providing surround sound on stereo headphones. A virtual surround sound environment may be created in real-time using any set of two-channel stereo headphones. In operation, the analysis technique can take a multichannel input (including a 2-channel input) and send as output a 2-channel stereo signal that includes audio cues intended to place the input channels in a simulated virtual soundstage. The signal processing may create the sensation of multiple loudspeakers in a room. For example, DOLBY DIGITAL technology provides signal processing technology that delivers 7.1-channel surround sound over any pair of headphones for richer, more spacious headphone audio. Digital analysis techniques are based on algorithms that determine how sounds with different points of origin, or a single sound, interact with different parts of the body. The algorithm is essentially a group of rules that describe how the head-related transfer function (HRTF) and other factors change the shape of the sound wave. HRTF refers to a response that characterizes how an ear receives a sound from a point in space. A listener estimates the location of a source by taking cues derived from one ear and comparing cues received at both ears. Among the differences are time differences of arrival and intensity differences. HRTF describes how a given sound wave input, which may be defined in frequency and source location, is filtered by the diffraction and reflection properties of the head, pinna, and torso before the sound reaches the ears. With an appropriate HRTF, the signals required at the eardrums for the listener to perceive sound from any direction may be calculated.
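The following is a minimal sketch of such an analysis technique, assuming that per-channel head-related impulse responses (HRIRs, the time-domain counterpart of the HRTF) are available from some HRTF data set; the function and argument names are illustrative and do not implement any particular vendor's technology.

import numpy as np

def render_binaural(channels, hrirs):
    """Mix a multichannel input down to a 2-channel binaural output.

    channels: dict of channel name -> mono sample array (numpy).
    hrirs:    dict of channel name -> (left_ir, right_ir), the head-related
              impulse responses for that channel's virtual position.
    Both structures are assumptions of this sketch; a real decoder would
    derive them from the configuration mapping and an HRTF data set.
    """
    length = max(len(sig) for sig in channels.values())
    left = np.zeros(length)
    right = np.zeros(length)
    for name, signal in channels.items():
        ir_left, ir_right = hrirs[name]
        # Convolving with the HRIR imposes the interaural time and intensity
        # differences that make the channel appear to come from its position.
        l = np.convolve(signal, ir_left)[:length]
        r = np.convolve(signal, ir_right)[:length]
        left[:len(l)] += l
        right[:len(r)] += r
    return np.stack([left, right])   # the 2-channel stereo output signal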
At a basic level, the process adds aural cues to sound waves, convincing the brain to interpret the sound as though it came from multiple speakers on a virtual soundstage (e.g., five sources instead of two). In this regard, virtual surround sound creates the perception that there are more sources of sound than are actually present. A virtual soundstage refers to the simulated physical environment created by the surround sound experience. Virtual surround sound produces a multichannel surround sound experience on the virtual soundstage without the need for actual physical speakers. Virtual surround sound through headphones provides a perceived surround sound experience on the virtual soundstage.
Embodiments of the present invention provide an efficient method for positioning audio signals on virtual soundstages such that a listener experiences virtual surround sound that is augmented by providing virtual acoustic presence. Acoustic presence may be simulated based on audio cues that are used to manipulate sound to provide virtual surround sound on the virtual soundstages, and on orientation data referenced from a mobile device orientation component to further position the sound on the virtual soundstage. For example, when a listener who is listening to virtual surround sound turns (e.g., 30° from an initial position), the virtual soundstage is maintained relative to the listener. In the case of a surround sound recording of multiple audio signals from multiple musicians, a listener may audibly or virtually face different audio signals/musicians on a virtual surround sound stage as the listener turns. In a gaming context, a listener may identify a sound's position relative to the listener's viewing position on the screen. As such, embodiments of the present invention provide for positioning the audio signal on the virtual soundstage such that simulating virtual surround sound further incorporates orientation data to maintain the virtual soundstage with reference to the listener's change in orientation as captured in the orientation data from the mobile device.
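A minimal sketch of this behavior follows, assuming virtual speaker positions are expressed as azimuths and the listener's turn is available as a yaw angle; all names and the sign convention are illustrative assumptions, not the disclosure's method.

def compensate_for_turn(speaker_azimuths_deg, listener_yaw_deg):
    """Hold the virtual speakers fixed in space as the listener turns.

    When the listener turns 30 degrees to the right, every source must appear
    30 degrees further to the left relative to the head, so the azimuth used
    to select the rendering cues is the stage azimuth minus the listener yaw.
    """
    return {
        name: (az - listener_yaw_deg + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
        for name, az in speaker_azimuths_deg.items()
    }

# Example: after a 30-degree turn to the right, the center channel (formerly
# dead ahead at 0 degrees) is rendered 30 degrees to the listener's left.
relative = compensate_for_turn({"center": 0.0, "front_right": 30.0}, 30.0)
# relative == {"center": -30.0, "front_right": 0.0}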
For purposes of the detailed discussion below, a mobile phone including one or more orientation components is described. Further, while embodiments of the present invention may generally refer to the components described, it is understood that an implementation of the techniques described may be extended to cases with different components carrying out the steps described herein. It is contemplated that embodiments of the present invention may utilize orientation data from the mobile device (e.g., mobile handset or headphones). A mobile device may include one or more orientation components. An orientation component may refer to a component used to obtain directional changes made at the mobile device. An orientation component may be implemented as software or hardware or a combination thereof. By way of example, a mobile device may be embedded with a gyroscope, an accelerometer, a magnetometer, or a user interface; each of these components may provide orientation data (e.g., positional changes of the mobile device) communicated for positioning surround sound. Any other variations and combinations of orientation components are contemplated within the scope of embodiments of the present invention.
In a first aspect of the present invention, computer-readable media are provided having computer-executable instructions embodied thereon that, when executed, enable a computing device to perform a method for positioning audio signals on virtual soundstages. The method includes receiving an audio signal. The method also includes receiving orientation data from an orientation component at a mobile device, the orientation data used for positioning the audio signal. The method further includes determining a position for the audio signal on a virtual soundstage based on the orientation data. The method also includes positioning the audio signal on the virtual soundstage.
In a second aspect of the present invention, computer-readable media are provided having computer-executable instructions embodied thereon that, when executed, enable a computing device to perform a method for positioning audio signals on virtual soundstages. The method includes receiving an audio signal having a first set of channels. The method also includes generating, from the audio signal having the first set of channels, an audio signal having a second set of channels. The method further includes referencing orientation data for positioning the audio signal having the second set of channels. The method also includes generating a virtual surround sound audio signal based on the orientation data and the audio signal having the second set of channels. Generating the virtual surround sound audio signal comprises: determining position indicators based on the orientation data for positioning the audio signal on the virtual soundstage and determining audio cues for simulating virtual surround sound for the audio signal. The method further includes communicating the virtual surround sound audio signal comprising the position indicators and the audio cues. The virtual surround sound audio signal is output to two audio channels that simulate a plurality of virtual audio channels on the virtual soundstage.
In a third aspect of the present invention, a system is provided for positioning audio signals on virtual soundstages. The system includes an orientation component configured for generating orientation data of the mobile device. The orientation data tracks directional changes of the mobile device. The orientation component also communicates the orientation data for positioning the audio signal. The orientation data includes multidimensional positioning data. The system also includes a positioning component configured for generating a virtual surround sound audio signal based on a received audio signal. Generating the virtual surround sound audio signal comprises: determining position indicators based on the orientation data for positioning the audio signal on a virtual soundstage and determining audio cues for simulating virtual surround sound for the audio signal. The positioning component is also configured for communicating the virtual surround sound audio signal comprising the position indicators and the audio cues as audio signals onto a virtual soundstage.
Turning now to
Memory 112 might take the form of one or more of the aforementioned media. Thus, we will not elaborate more here, only to say that memory component 112 can include any type of medium that is capable of storing information in a manner readable by a computing device. Processor 114 might actually be multiple processors that receive instructions and process them accordingly. Presentation component 116 includes the likes of a display and a speaker, as well as other components that can present information (such as a lamp (LED), or even lighted keyboards).
Radio 117 represents a radio that facilitates communication with a wireless telecommunications network. Illustrative wireless telecommunications technologies include Long Term Evolution (LTE) and Evolved Data Optimized (EVDO) and the like. In some embodiments, radio 117 might also facilitate other types of wireless communications including Wi-Fi communications.
Input/output port 118 might take on a variety of forms. Illustrative input/output ports include a USB jack, stereo jack, infrared port, proprietary communications ports, and the like. Input/output components 120 include items such as keyboards, microphones, touchscreens, and any other item usable to directly or indirectly input data into mobile device 100. Power supply 122 includes items such as batteries, fuel cells, or any other component that can act as a power source to power mobile device 100.
In embodiments, mobile device 202 may include a client service (not shown) that facilitates carrying out aspects of the technology described herein. The client service may be a resident application on the mobile device, a portion of the firmware, a stand-alone website, or a combined application/web offering that is used to facilitate generating and transmitting information relevant to positioning audio signals on virtual soundstages. Whenever we speak of an application, software, or the like, we are really referring to one or more computer-readable media that are embodied with a set of computer-executable instructions that facilitate various actions to be performed. For readability purposes, we will not always include this lengthy terminology.
An audio signal, in accordance with embodiments of the present invention, may be received from an audio source (e.g., external audio source 206 and internal audio source 210). Audio signals refer to a representation of sound that can be characterized in parameters such as bandwidth, power, and voltage levels. Sound may be stored in a variety of audio formats or physical methods used to store data. Sound may be communicated wirelessly using the communications link 204a as discussed above. In some cases, sound may be communicated using a wired link 204b. A wired link generally refers to a physical electrical connection between a source and a destination of the audio signal. The physical electrical connection may be an electrical conductor that carries the audio signal from the source to the destination. Wired connections are well known in the art; as such, they are not further discussed herein. The external audio source 206 and the internal audio source 210 may communicate an audio signal to a component (e.g., the positioning component) at the mobile device, and the component then facilitates positioning the audio signal. By way of example, a mobile device may have audio files stored in memory of the mobile device, or an external storage device may wirelessly communicate an audio signal to the headphones. Any other variations and combinations of audio sources are contemplated within the scope of embodiments of the present invention.
The mobile device 202 includes a user interface component 220. Such a user interface component can control interface features associated with positioning audio signals on virtual soundstages. The user interface component 220 includes a variety of different types of interfaces, such as a touchscreen interface, a voice interface, a gesture interface, and a direct manipulation interface. The user interface component 220 may further include controls to calibrate and to turn on and off the positioning capabilities. The user interface component 220 can include orientation defaults and orientation presets for simplifying particular orientation configurations for the mobile device. The user interface component 220 can provide controls for selecting one or more orientation components used in referencing orientation data of the mobile device. In embodiments, the user interface component 220 may function to directly provide orientation data communicated via the user interface. Further, the user interface component 220 may receive information for calibrating specific features of the virtual soundstage (e.g., 5.1, 7.1, or 11.1 surround sound) and indicating thresholds for the one or more orientation components. Any other variations and combinations of user interface features and controls are contemplated within the scope of embodiments of the present invention.
With continued reference to
Orientation data at the orientation component 230 may be captured using several different methods. By way of example, the orientation component 230 of the mobile device may include one or more orientation data units (not shown), such as an interface, a gyroscope, an accelerometer, or a magnetometer, each of which provides orientation data (e.g., positional changes of the mobile device) communicated for positioning surround sound. The orientation component 230 may comprise a sensor that measures position changes and converts them into a signal that may be interpreted. The sensors may be calibrated with different sensitivities and thresholds to properly execute embodiments of the present invention. It is further contemplated within embodiments of the present invention that a mobile device 202 may include any number and type of orientation data units. Each type of orientation data unit may generate different types of orientation data, which may be factored into a positioning algorithm. It is further contemplated that the orientation data from a first orientation data unit may overlap with orientation data from a second orientation data unit. Whether distinct or overlapping, the orientation data from the different types of orientation data units may be combined in performing calculations for positioning the audio signal.
An accelerometer, for example, may measure the linear acceleration of the device. The accelerometer measures proper acceleration; that is, it measures acceleration relative to a free-falling frame of reference. In particular, when the mobile device 202 is static, in whatever orientation, the orientation data represents the force of gravity acting on the device and corresponds to the roll and pitch of the device (in the X and Y directions at least). But while in motion, the orientation data represents the acceleration due to gravity plus the acceleration of the device itself relative to its rest frame. An accelerometer may measure the magnitude and direction of acceleration and can be used to sense the orientation of the mobile device 202.
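As a minimal sketch of the static case just described, roll and pitch may be recovered from the gravity vector as follows; the axis conventions are assumptions of this sketch, and the formulas hold only while the device is at rest.

import math

def roll_pitch_from_gravity(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer reading.

    Valid only while the device is at rest, when the measurement is dominated
    by gravity; under motion the device's own acceleration is superimposed
    and these formulas no longer isolate orientation.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch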
A gyroscope refers to an exemplary orientation data unit for measuring and/or maintaining orientation based on principles of angular momentum. The angular momentum data represents rotational inertia and rotational velocity about an axis of the mobile device. For example, with the inclusion of a gyroscope, a user may simply move the mobile device 202 or even rotate the mobile device 202 and receive orientation data representing the directional changes. A gyroscope may sense motion including vertical and horizontal rotation. The accelerometer measurements of a mobile device 202 may be combined with the gyroscope measurements to create orientation data for a plurality of axes, for example, six axes: up and down, left and right, forward and backward, as well as the roll, pitch, and yaw rotations.
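The disclosure does not fix a fusion method; one classic way to combine the two measurements is a complementary filter, sketched below with an illustrative blending coefficient.

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer estimates of one rotation angle.

    The gyroscope's angular rate is integrated for fast response but drifts
    over time; the accelerometer's gravity-derived angle is drift-free but
    noisy while the device moves. Blending the two trades those errors off.
    The coefficient alpha = 0.98 is an illustrative tuning value.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle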
The mobile device 202 may include another exemplary orientation data unit, a magnetometer, which measures the strength and/or direction of magnetic fields. A magnetometer may be integrated into circuits installed on a mobile device. A magnetometer on a mobile device 202 can be used to measure the magnetic field in the three-dimensional space around the mobile device 202. The orientation data of the magnetometer may be combined with that of any of the other orientation components to generate different types of orientation data. For example, an accelerometer may measure the linear acceleration of the device so that it can report its roll and pitch, but combined with the magnetometer, the orientation data may include roll, pitch, and yaw measurements. Moreover, orientation data may be used to define particular gesture classifications that can be communicated and interpreted to execute predefined positioning features of the audio signal. For example, turning the mobile device 202 sideways may be associated with a specific predefined effect on the position of the audio signal on the virtual soundstage. Any other variations and combinations of mobile device based gestures are contemplated within the scope of embodiments of the present invention.
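A sketch of how magnetometer data combined with accelerometer-derived roll and pitch can yield the yaw measurement is shown below; the sign and axis conventions are assumptions of this sketch.

import math

def tilt_compensated_yaw(mx, my, mz, roll, pitch):
    """Estimate heading (yaw, radians) from a magnetometer reading.

    The measured field vector is de-rotated through the roll and pitch taken
    from the accelerometer, so the heading remains valid when the device is
    not held level.
    """
    # Project the field into the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)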
In embodiments, the user interface component 220 may be configured to directly receive orientation information in a two-dimensional or three-dimensional space, in this regard also functioning as an orientation data unit of the orientation component 230. A user interface component 220 may include a direct manipulation interface of a virtual soundstage where orientation data is captured based on inputs to the interface. The user interface component 220 may also receive discrete entries for a plurality of dimensions, which are converted into orientation data for processing. In addition, orientation data may further be captured based on elements featured in video. Particular elements in video may be identified for determining changes in orientation, thereby generating orientation data that may be referenced for positioning an audio signal on the virtual soundstage. By way of example, a video game (e.g., a first-person shooter game) may include a video element (e.g., a video game character) whose positioning is used to generate orientation data; thus, the audio signal is positioned based on the directional changes of this video element. Any other variations and combinations of sources of orientation data for positioning audio signals are contemplated within the scope of embodiments of the present invention.
With continued reference to
The positioning component 240 is responsible for creating and for orienting audio signals on the virtual soundstage with the changing orientation of the mobile device 202, or portions thereof, associated with a listener. In one embodiment, the positioning component 240 may include a decoder for interpreting the configuration mapping between the audio channels of the audio signal and the speakers on the virtual soundstage. The audio channels are mapped such that the audio signal may be rendered for playing on headphones that simulate surround sound on a virtual soundstage. In particular, the mapping may include audio cues and position indicators for simulating surround sound and acoustic presence by maintaining the source of a sound based on orientation data for the mobile device. Maintaining the position of the source of a sound may include identifying individual parsed-out sound components associated with a speaker on the virtual soundstage and retaining the source of the sound components relative to the change in orientation of the listener as captured by the orientation component on the mobile device 202.
The virtual surround sound environment may be created in real-time or live using any set of two-channel stereo headphones, and changed in real-time as the orientation of the mobile device 202 changes. Basically, the sound of the virtual soundstage moves synchronously as the listener turns. It is contemplated that embodiments of the present invention may also include stored virtual surround environments and configurations that may be played back on-demand. Virtual surround sound can include a multichannel audio signal that is mixed down to a 2-channel audio signal. The 2-channel audio signal may be digitally filtered using virtual surround sound algorithms. The filtered audio signal may be converted into an analog audio signal by a digital-to-analog converter (DAC). The analog audio signal may further be amplified by an amplifier and output to left and right channels, i.e., 2-channel speakers. Since the 2-channel audio signal carries three-dimensional (3D) audio data, a listener can perceive a surround effect.
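As a minimal sketch of the mixdown stage described above, a 5.1 signal may be folded into 2 channels as follows; the -3 dB ITU-style gains are an illustrative choice rather than values from the disclosure, and the virtual surround filtering, digital-to-analog conversion, and amplification would follow this stage.

import numpy as np

def downmix_5_1_to_stereo(fl, fr, c, lfe, sl, sr,
                          center_gain=0.7071, surround_gain=0.7071):
    """Fold a 5.1 signal (numpy arrays, one per channel) into 2 channels.

    The LFE channel is omitted here, as many downmix conventions do;
    handling of the 0.1 channel varies.
    """
    left = fl + center_gain * c + surround_gain * sl
    right = fr + center_gain * c + surround_gain * sr
    stereo = np.stack([left, right])
    peak = np.max(np.abs(stereo))
    # Normalize only if the mix clips, keeping levels sane before filtering
    # and digital-to-analog conversion.
    return stereo / peak if peak > 1.0 else stereo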
At the positioning component 240, the analysis technique and algorithms may take the orientation data and a multichannel audio input and send as output a 2-channel stereo signal that includes the 3D audio data, as both position indicators and audio cues within the virtual soundstage, intended to place the input channels in a simulated virtual soundstage. Position indicators may be based on orientation data received from the orientation component 230, and aural cues may be based on HRTF functions applied to the audio signal. In particular, the orientation component 230 can determine the position of the listener as captured by the mobile device. In embodiments, the position comprises a location (e.g., a location variable) of a listener in, for example, n-dimensional space, and/or a direction of the listener (e.g., a direction variable) in, for example, cardinal coordinates (N, S, E, W). The orientation changes can be determined using the one or more orientation data units that capture a change in the position, i.e., the location and/or direction of the mobile device associated with the listener. For example, a change in location can be captured in x, y, z coordinates and a change in direction captured in cardinal directions. Any variations of representations of positional changes and combinations thereof are contemplated in embodiments of the present invention. In this regard, the orientation data is communicated to the positioning component. The orientation component may communicate a first original position and a second position, and/or a change from the first original position to the second position, and the orientation data is incorporated into positioning virtual surround sound on a virtual soundstage.
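A hypothetical message format for this orientation data, covering both the absolute-positions case and the delta case, might look as follows; every field and function name here is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class OrientationSample:
    """One reading of the listener's position: a location plus a direction."""
    x: float
    y: float
    z: float
    heading_deg: float   # e.g., 0 = N, 90 = E, 180 = S, 270 = W

def orientation_delta(first, second):
    """The change from a first original position to a second position, as the
    positioning component may receive it instead of two absolute samples."""
    return OrientationSample(
        second.x - first.x,
        second.y - first.y,
        second.z - first.z,
        (second.heading_deg - first.heading_deg) % 360.0,
    )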
The positioning component 240 is configured to apply the algorithms to the orientation data and the audio signal to develop position indicators and aural cues added to the sound waves, convincing the brain to experience virtual acoustic presence as though the sound came from multiple speakers in particular positions on a virtual soundstage. For example, DOLBY DIGITAL technology provides signal processing technology that delivers 7.1-channel surround sound over any pair of headphones for richer, more spacious headphone audio. Further, the change in position, captured at the orientation component, is referenced, and the positioning component maintains the positioning of the surround sound elements. For example, an algorithm at the positioning component receives the change in position, and in real-time the psychoacoustic calculations are maintained based on the previous position relative to the change in position.
Several variations of calculations to maintain the source of surround sound in providing acoustic presence are contemplated within embodiments of the present invention. In particular, the positioning information, i.e., location and direction, is processed into the virtual surround sound audio signal. One or more of the position indicators and aural cues of the virtual surround sound are processed with one or more of the different types of orientation data from the orientation component. For example, the location information x, y, z and the direction information N, S, E, W may be used to recalibrate the virtual surround sound to maintain the source of sounds as the user moves. In this regard, the processing calculations may maintain the virtual surround sound with reference only to location or direction, depending on the orientation data received from the orientation component 230. Further, the virtual surround sound experience is transformed by the orientation data in magnitude and direction of sound as recalculated for the position indicators and aural cues based on processing the orientation data at the positioning component.
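One such calculation, sketched here under illustrative coordinate conventions, recomputes a virtual speaker's head-relative direction and distance from the listener's new location and heading; the distance can then scale the magnitude of the recalculated cues.

import math

def speaker_bearing_and_distance(speaker_xy, listener_xy, listener_heading_deg):
    """Recompute a virtual speaker's position indicators after the listener moves.

    Given the speaker's fixed stage location and the listener's new location
    and heading (from the orientation data), return the head-relative azimuth
    (direction of the cue) and the distance (which can scale its magnitude).
    The y-axis is taken as stage north; all conventions are assumptions.
    """
    dx = speaker_xy[0] - listener_xy[0]
    dy = speaker_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    bearing_deg = math.degrees(math.atan2(dx, dy))        # angle from stage north
    relative_az = (bearing_deg - listener_heading_deg + 180.0) % 360.0 - 180.0
    return relative_az, distance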
The positioning component 240 further leverages the mapping information associated with surround sound. As the surround sound format may include a mapping of each source channel onto its own virtual speaker on the virtual soundstage, the algorithms may efficiently derive positioning information from the orientation data, as described in embodiments of the present invention, while factoring in the mapping information of the audio signal to particular speakers. As such, the audio signal may utilize the mapping information in generating the position indicators and aural cues for playing the audio signal. In particular, positioning the audio signal may further include positioning low-frequency effects directed to a speaker specifically designed for low-pitched sounds (e.g., a subwoofer).
With continued reference to
Turning to
Now assume, on the second virtual soundstage 320, of
Referring to
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments of our technology have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations and are contemplated within the scope of the claims.