Embodiments of the present invention provide an audio-based position tracking system. The position tracking system comprises one or more speakers, an array of microphones and a computing device. The speaker is located at a fixed position and transmits an audio signal. The microphone array is mounted upon a moving object and receives the audio signal. The computing device determines a position of the moving object as a function of the delay of the audio signal received by each microphone in the array.
5. A method of tracking comprising:
transmitting simultaneously a first non-audible signal from a first speaker and a second non-audible signal from a second speaker;
transmitting an audible signal from the first speaker substantially simultaneously with the first and second non-audible signals;
receiving said first and second non-audible signals at a plurality of microphones;
determining a delay for each of said received first and second non-audible signals for each of said plurality of microphones; and
determining at least one of a relative position and a relative orientation of said plurality of microphones as a function of said determined delays.
16. A sound wave-based tracking system comprising:
a speaker at a fixed location for automatically transmitting a given signal combined with one or more other signals, wherein said given signal has a given frequency above an audible range and said other signals have frequencies in the audible range;
a plurality of microphones mounted upon an object for receiving said given signal; and
a computing device for determining at least one of a position and an orientation of said object from a delay of said given signal received by each of said plurality of microphones, wherein said delay is determined as a function of a time delay of said signal received by each of said plurality of microphones relative to a reference signal.
1. A sound wave-based tracking system comprising:
a speaker at a fixed location for automatically transmitting a given signal combined with one or more other signals, wherein said given signal has a given frequency above an audible range and said other signals have frequencies in the audible range;
a plurality of microphones mounted upon an object for receiving said given signal; and
a computing device for determining at least one of a position and an orientation of said object from a delay of said given signal received by each of said plurality of microphones, wherein said signal comprises a marker and wherein said delay is determined as a function of a delay of said marker received by each of said plurality of microphones relative to said marker of a reference signal.
9. A computing system comprising:
a plurality of speakers for transmitting one or more sound waves in the audible range, and wherein a first one of the plurality of speakers automatically transmits a first signal at a first frequency above the audible range substantially simultaneously with said one or more sounds in the audible range and a second one of the plurality of speakers automatically transmits a second signal at a second frequency above the audible range substantially simultaneously with the first signal and said one or more sounds in the audible range;
a plurality of microphones mounted on an assembly for receiving said first and second signals; and
a computing device coupled to control said speakers and coupled to receive said first and second signals from each of said plurality of microphones, said computing device for determining at least one of a relative position and a relative orientation of said assembly based on delay differences of said first and second signals received from each of said plurality of microphones.
2. The sound wave-based tracking system according to
3. The sound wave-based tracking system according to
4. The sound wave-based tracking system according to
6. The method of tracking according to
said first non-audible signal comprises a sine wave having a first frequency; and
said second non-audible signal comprises a sine wave having a second frequency.
7. The method of tracking according to
8. The method of tracking according to
10. The computing system as described in
11. The computing system as described in
12. The computing system as described in
13. The computing system as described in
14. The computing system as described in
17. The sound wave-based tracking system according to
18. The sound wave-based tracking system according to
19. The sound wave-based tracking system according to
Embodiments of the present invention relate to tracking the position and/or orientation of a moving object, and more particularly to an audio-based computer implemented system and method of tracking position and/or orientation.
Traditionally, audio-based tracking methods have been limited to determining the location of a moving sound source. Such methods comprise mounting a sound source on a moving object. The location of the moving object is determined by tracking the audio signal with an array of microphones at known fixed locations. The sound source (e.g., speakers) requires power to generate the necessary audio signals. The sound source is also relatively heavy. Therefore, conventional audio-based tracking methods have not been utilized for head tracking applications such as gaming environments and the like.
Head tracking has been utilized in three dimensional animation, virtual gaming and simulators. Conventional computer implemented devices that track the location of a user's head utilize gyroscopes, optical systems, accelerometers and/or video based methods and systems. Accordingly, they tend to be relatively heavy, expensive and/or require substantial processing resources. Therefore, it is unlikely that any of the prior art systems would be used in the gaming environment due to cost factors.
Embodiments of the present invention are directed toward a system and method of tracking position and/or orientation of an object (e.g., a user's head) utilizing audio signals. In one embodiment, the system comprises a computing device, a stereo microphone (e.g., two microphones) and a stereo speaker system (e.g., two speakers). The stereo microphones may be mounted on the object (e.g., user). The stereo speakers are generally positioned at fixed locations (e.g., on top of a table or desk). A computer generated sine wave is transmitted from the stereo speakers to the stereo microphones. The system can determine the position (e.g., between the speakers) and/or the orientation (e.g., in one or more planes) of the microphone array. The position and/or orientation of the object is determined as a function of the time delay between the audio signals received at each microphone. Therefore, the position and/or orientation of the user's head can be determined and tracked in real-time by the system.
In one embodiment, the tracking system comprises one or more speakers, an array of microphones and a computing device. The speaker may be located at a fixed position and transmits an audio signal (e.g., sine wave or any other wave of known pattern). The microphone array is mounted upon an object and receives the audio signal. The computing device comprises a sine wave generator, a delay comparison engine and a position/orientation engine, all of which may be implemented in a computer system or game console unit. The sine wave generator is communicatively coupled to the speakers. The delay comparison engine is communicatively coupled to the array of microphones. The position/orientation engine is communicatively coupled to the delay comparison engine. The position/orientation engine determines a position and/or orientation of the object as a function of the delay of the audio signal received by each microphone in the array. In one embodiment, the position and/or orientation information can be determined in real-time and provided to a software application for real-time response thereto.
In one embodiment, the method of tracking a position comprises transmitting an audio signal from a speaker. The audio signal is received at a plurality of microphones. A delay of the received audio signal is determined for each of the plurality of microphones. A real-time relative position and/or orientation of the plurality of microphones is determined as a function of the determined delay.
In accordance with embodiments of the present invention, the determined position and/or orientation may be utilized as an input of a computing device or software application. For example, the determined position and/or orientation may be utilized for feedback in a simulator or virtual reality gaming application, or to control an application executing on the computing device. In addition, the determined position and/or orientation may also be utilized to control the position of a cursor (e.g., pointing device or mouse) of the computing device. Accordingly, a headset containing an array of microphones may allow a user having a mobility impairment to operate the computing device. The computing device may be a personal computer, a gaming console, a portable or handheld computer, a cell phone or any other intelligent unit.
Furthermore, embodiments of the present invention are advantageous in that the microphone array is lightweight, requires very little power, and is inexpensive. Moreover, this equipment is consistent with many existing gaming applications. The low power requirements and light weight of the microphone array are also advantageous for wireless implementations. Furthermore, the high frequency of the sine wave advantageously provides sufficient resolution and reduces latency of the position and/or orientation calculations. The high frequency of the sine wave is also resistant to interference from other computer and environmental sounds.
The present invention is illustrated by way of example and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
Reference will now be made in detail to the embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it is understood that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present invention.
Referring to
The array of microphones 130, 131 is mounted upon an object (e.g., a user). The microphones 130, 131 are lightweight, require little power and are inexpensive. Thus, the microphone array is readily adapted for mounting upon the user (e.g., as a headset, etc.). The low power requirement and lightweight features of the microphones 130, 131 also readily enable wireless implementations. Although shown as a desktop computer, device 110 could be any intelligent computing device (e.g., laptop computer, handheld device, cell phone, gaming console, etc.).
Each microphone 130, 131 receives the audio signal 140, 141 transmitted from the one or more speakers 120, 121. The relative position and/or orientation of the object (e.g., the user's head) is determined as a function of the delay (e.g., time delay) between the audio signals 140, 141 received at each microphone 130, 131. This information is communicated back to device 110 by a wired or wireless medium. Any well-known triangulation algorithm may be applied by the computing device 110 to determine the position and/or orientation of the microphones, and thereby the user. Accordingly, the triangulation algorithm determines the position and/or orientation as a function of the delay between the audio signals 140, 141 received at each microphone 130, 131. Determining position and/or orientation is intended herein to mean determining the position, location, locus, locality, place, orientation, direction, alignment, bearing, aspect, movement, motion, action and/or the relative change thereof, or the like.
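Although the patent does not supply code, the geometry behind the delay-to-orientation step can be sketched as follows. This is a hedged illustration, not the patent's algorithm: it assumes a far-field source, so the inter-microphone path difference equals the microphone spacing times the sine of the bearing angle. The function name and constant are invented for the example.

```python
import math

# Approximate speed of sound, matching the figure used later in the text.
SPEED_OF_SOUND_CM_PER_S = 34_500.0

def yaw_from_tdoa(delay_s, mic_spacing_cm):
    """Bearing angle in degrees implied by an inter-microphone delay,
    under a far-field assumption: path difference = spacing * sin(angle)."""
    path_difference_cm = SPEED_OF_SOUND_CM_PER_S * delay_s
    # Clamp to [-1, 1] so rounding error cannot push asin out of domain.
    ratio = max(-1.0, min(1.0, path_difference_cm / mic_spacing_cm))
    return math.degrees(math.asin(ratio))

# Equal arrival times mean the user faces the speaker; a 0.58 ms delay
# across a 20 cm baseline corresponds to a head turned roughly 90 degrees.
print(yaw_from_tdoa(0.0, 20.0), round(yaw_from_tdoa(0.00058, 20.0)))
```

A full triangulation would combine several such angle estimates from multiple speaker/microphone pairs; this sketch shows only the single-pair relationship.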
In one implementation, the audio signal includes a marker. The marker may be a change in the amplitude of the sine wave for one or more cycles. Accordingly, the time is determined from the time lapse between a transmitted marker and the received marker. In another implementation, the audio signal does not include a marker. Instead, the delay is determined from the delay between the received audio signals and a reference signal, or between pairs of received audio signals.
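As an illustrative sketch (an assumption, not the patent's implementation), the amplitude marker described above can be modeled as the sine wave's amplitude stepping up for a couple of cycles; the delay would then be the gap between the transmitted and received marker positions. The window size and threshold here are arbitrary choices.

```python
import math

def find_marker(samples, window=48, threshold=1.5):
    """Return the start index of the first window whose peak amplitude
    exceeds `threshold` times the first window's peak, else None."""
    baseline = max(abs(s) for s in samples[:window]) or 1e-12
    for start in range(window, len(samples) - window + 1, window):
        if max(abs(s) for s in samples[start:start + window]) > threshold * baseline:
            return start
    return None

# A 1 kHz tone sampled at 48 kHz whose amplitude doubles after sample 480,
# simulating a two-cycle amplitude marker.
tone = ([math.sin(2 * math.pi * i / 48) for i in range(480)]
        + [2 * math.sin(2 * math.pi * i / 48) for i in range(96)])
print(find_marker(tone))  # marker window detected at sample 480
```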
Referring now to
The computing device 210 comprises a sine wave generator 225, a bandpass filter 230, a delay comparison engine 235 and a position/orientation engine 240. The sine wave generator 225 produces a sinusoidal signal having a frequency above the audible range of the user. The sine wave generator 225 is communicatively coupled to the speaker 215. Accordingly, the speaker 215 transmits the sinusoidal signal. The sinusoidal signal may be combined with one or more additional audio output signals 245 of the computing device 210 by a mixer 250. The sine wave generator 225 could be implemented in hardware or in software.
The microphones 221, 222, 223 receive the sinusoidal signal transmitted by the speaker 215. Each microphone 221, 222, 223 receives the signal with a particular delay representing the length of a given path from the speaker 215 to each microphone 221, 222, 223. The length of each given path depends upon the position and/or orientation of each microphone 221, 222, 223 with respect to the speaker. In addition, the plurality of microphones 221, 222, 223 may provide for active noise cancellation.
Each microphone 221, 222, 223 is communicatively coupled to the bandpass filter 230. The bandpass filter has a pass band centered about the particular frequency of the sinusoidal signal utilized for determining position and/or orientation. Thus, the bandpass filter 230 recovers the sinusoidal signal from the signal received at the microphones 221, 222, 223, which may comprise the additional audio output signal that was mixed with the transmitted sinusoidal signal and any noise.
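A hypothetical sketch of this recovery step is shown below: a standard biquad band-pass section (the common audio-EQ-cookbook form) centered on the tracking tone. The center frequency, Q value, and function names are illustrative assumptions, not values given in the patent.

```python
import math

def bandpass(samples, center_hz, sample_rate, q=10.0):
    """Filter `samples`, passing a narrow band around `center_hz`
    (biquad band-pass, 0 dB peak gain at the center frequency)."""
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b0, b2 = alpha, -alpha                       # b1 is zero for this form
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = (b0 * x + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# An 18 kHz tracking tone passes; a 2 kHz "program audio" tone is rejected.
fs = 48_000
tone = [math.sin(2 * math.pi * 18_000 * i / fs) for i in range(4800)]
hum = [math.sin(2 * math.pi * 2_000 * i / fs) for i in range(4800)]
passed = rms(bandpass(tone, 18_000, fs)[1000:])     # skip the filter transient
rejected = rms(bandpass(hum, 18_000, fs)[1000:])
print(passed > 10 * rejected)  # True: the tracking tone dominates the output
```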
The bandpass filter 230 is communicatively coupled to the delay comparison engine 235. The delay comparison engine 235 determines the relative delay between the received sinusoidal signals for each pair of microphones in the array. In another implementation, the output of the sine wave generator 225 provides a reference signal 226 to the delay comparison engine 235. Accordingly, the delay of each recovered sinusoidal signal is determined with respect to the reference signal.
The delay comparison engine 235 is communicatively coupled to the position/orientation engine 240. The position/orientation engine 240 determines the relative position and/or orientation of the headset 220 (e.g., user's head) as a function of the relative delay determined for each received sinusoidal signal. The position may be determined utilizing any well-known triangulation algorithm.
In another embodiment, the position-tracking interface comprises a plurality of speakers. The sine wave produced by the sine wave generator 225 is transmitted from a first speaker 215 for a first period of time, from a second speaker 216 for a second period of time, and so on, in a round robin manner. The sine wave transmitted by each of the speakers 215, 216 is received by the array of microphones 221, 222, 223.
Each received signal is bandpass filtered 230 to recover the sinusoidal signal for each period of time. The recovered sinusoidal signals, for each period of time, are compared by the delay comparison engine 235. The delay comparison engine 235 determines a delay of each recovered signal. The position/orientation engine 240 determines the position and/or orientation of the headset 220 as a function of the delay of the received sinusoidal signals as received by each microphone 221, 222, 223, during each period of time.
In another embodiment, the sine wave generator 225 produces a sine wave having a different frequency for transmission by a corresponding speaker 215, 216. More specifically, a first signal having a first frequency is transmitted from a first speaker 215, a second signal having a second frequency is transmitted from a second speaker, and so on. The sine wave having a given frequency transmitted by each of the speakers 215, 216 is received by the array of microphones 221, 222, 223.
Each received signal is bandpass filtered 230 to recover the sinusoidal signal of the given frequency. Each recovered sinusoidal signal is compared to a reference signal 226, having a corresponding frequency, by the delay comparison engine 235. Accordingly, the delay comparison engine 235 determines the delay (e.g., time delay) of each sinusoidal signal at each microphone 221, 222, 223. The position/orientation engine 240 determines the position and/or orientation of the headset 220 as a function of the delay of the received sinusoidal signals as received by each microphone 221, 222, 223.
It is appreciated that use of a sine wave provides for readily determining the delay of a signal. The use of a sine wave also provides for readily determining the time delay utilizing an amplitude-type marker.
It is also appreciated that conventional computer speaker systems may introduce clipping of the high frequency signal utilized to determine position and/or orientation. Therefore, in one implementation, the sinusoidal signal is emitted from a dedicated sine wave transmitter instead of computer speakers. In another implementation, the sinusoidal signal and the additional audio output are attenuated in the mixer to prevent clipping.
Referring now to
At step 320, an audio signal is transmitted from one or more speakers. At step 330, the audio signal is received at each of a plurality of microphones. At step 340, a delay between receipt of the audio signal at each microphone is determined. At step 350, a relative position and/or orientation is determined as a function of the delay. The processes of steps 320, 330, 340 and 350 are repeated periodically to obtain an updated position and/or orientation.
In one implementation, the audio signal includes a marker. The marker may be a change in the amplitude of the sine wave for one or more cycles. Accordingly, the delay is determined from the time lapse between a transmitted marker and the received marker. In another implementation, the audio signal does not include a marker. Instead, the delay is determined from the delay between the received audio signals and a reference signal, or between pairs of received audio signals. For example, the zero crossing of the signals may be compared to determine the relative change per cycle. In another implementation, the audio signal includes a marker, and position is determined utilizing delay. The markers are utilized to periodically recalibrate the system if errors are introduced to the captured waveform.
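The zero-crossing comparison mentioned above can be sketched as follows. This is an assumed illustration, not the patent's implementation: the relative delay between two received tones is estimated from the offsets between their interpolated rising zero crossings, which is valid while the true delay stays under half the tone's period.

```python
import math

def rising_zero_crossings(samples):
    """Fractional sample indices where the waveform crosses zero upward."""
    crossings = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0 <= b:
            crossings.append((i - 1) + a / (a - b))  # linear interpolation
    return crossings

def crossing_delay(ref, sig):
    """Mean offset between each ref crossing and its nearest sig crossing.
    Valid when the true delay is under half the tone's period."""
    zs = rising_zero_crossings(sig)
    diffs = [min((abs(s - r), s - r) for s in zs)[1]
             for r in rising_zero_crossings(ref)]
    return sum(diffs) / len(diffs)

# A 1 kHz tone at 48 kHz; the second capture lags by 5.25 samples.
fs, f = 48_000, 1_000
ref = [math.sin(2 * math.pi * f * i / fs) for i in range(480)]
sig = [math.sin(2 * math.pi * f * (i - 5.25) / fs) for i in range(480)]
print(round(crossing_delay(ref, sig), 2))  # recovers the 5.25-sample delay
```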
In one embodiment, a sine wave having a frequency between 14-24 KHz is transmitted from a single speaker, at step 320. The sine wave is received by a first and second microphone, at step 330. The relative delay between receipt of the sine wave by the first microphone and receipt of the sine wave by the second microphone is determined, at step 340. The relative position and/or orientation of the microphone array, which is indicative of the position and/or orientation of a user's head, is determined as a function of the delay, at step 350.
In another embodiment, a sine wave having a frequency between 14-24 KHz is transmitted from a first speaker during a first period of time and a second speaker during a second period of time, at step 320. The sine wave transmitted by each of the first and second speakers is received by a first and second microphone at step 330. A plurality of relative delays between receipt of the sine wave by the first microphone and receipt of the sine wave by the second microphone is determined for each of the first and second periods of time, at step 340. The relative position and/or orientation of the microphone array is determined as a function of the plurality of delays, at step 350.
In another embodiment, a first sine wave is transmitted from a first speaker and a second sine wave is transmitted from a second speaker simultaneously, at step 320. The frequencies of the first and second sine waves are different from each other, but are each between 14-24 KHz. The first and second sine waves are both received at a first and second microphone, at step 330. A plurality of relative delays, corresponding to receipt of the first sine wave by the first and second microphone and receipt of the second sine wave by the first and second microphone, are determined, at step 340. The relative real-time position and/or orientation of the microphone array is determined as a function of the plurality of delays, at step 350, and may be stored in memory. When using two different sine waves simultaneously, it is advantageous to space the frequencies of the sine waves as far apart as possible. Spacing the sine waves as far apart as possible, in terms of frequency, readily enables isolation of the signals by the bandpass filters. Therefore, by going to a 96 KHz sample rate (14-28 KHz), the frequency spacing of the two or more sine wave signals may be increased.
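The dual-frequency scheme can be sketched as below. This is an illustrative assumption, not the patent's code: each tone's delay is recovered separately by correlating the mixed microphone signal against cosine/sine references at that tone's own frequency (a single-bin DFT), which works cleanly when the two frequencies are well separated. The frequencies, delays, and names are invented for the example.

```python
import math

def single_tone_delay(sig, freq_hz, sample_rate):
    """Delay, in samples, of a sin(2*pi*f*t) component within sig, relative
    to an undelayed reference tone; valid within +/- half the tone's period."""
    w = 2 * math.pi * freq_hz / sample_rate
    re = sum(v * math.cos(w * i) for i, v in enumerate(sig))
    im = sum(v * math.sin(w * i) for i, v in enumerate(sig))
    dphi = math.atan2(im, re) - math.pi / 2      # phase shift vs. the reference
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
    return dphi / w

# Mixed capture at 96 kHz: a 16 kHz tone arriving 2 samples late plus a
# 24 kHz tone arriving 1 sample late, as one microphone might receive them.
fs, f1, f2 = 96_000, 16_000, 24_000
mic = [math.sin(2 * math.pi * f1 * (i - 2) / fs)
       + math.sin(2 * math.pi * f2 * (i - 1) / fs) for i in range(960)]
print(round(single_tone_delay(mic, f1, fs)),
      round(single_tone_delay(mic, f2, fs)))  # per-tone delays: 2 and 1
```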
Referring now to
The high frequency audio signal 440 is a repetitive pattern wave (e.g., sine) selected such that it is above the audible range of a user. In one implementation, the audio signal 440 is a sine wave between 14-24 KHz, which can typically be produced by conventional television audio subsystems. Furthermore, the audio signal 440 may be transmitted simultaneously with other audio signals with minimal interference.
The array of microphones 430 is mounted upon a user. The microphones 430 are lightweight, require little power and are inexpensive. Thus, the microphone array 430 is readily adapted for mounting in a headset to be worn by the user. The low power requirement and lightweight features of the microphones 430 also readily enable wireless implementations.
In one embodiment, the microphone array 430 includes two microphones. As depicted in
In an exemplary implementation, when the user is facing the monitor (e.g., speaker) 420, the delay between each microphone 430 will be substantially equal. When the user pivots their head 90 degrees to the left, the right microphone 430 will be approximately 20 centimeters (cm) closer to the monitor 420 than the left microphone 430. The speed of sound is roughly 34,500 cm/sec. Thus, the sound will take 0.58 milliseconds longer to reach the left microphone 430 than the right microphone 430. Accordingly, at a 48 KHz sample rate, there will be approximately a 28 sample differential between the left and right microphones 430.
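The arithmetic of this example can be reproduced directly from the figures stated in the text (20 cm path difference, roughly 34,500 cm/s speed of sound, 48 KHz capture rate); the variable names are ours.

```python
# Worked numbers from the 90-degree head-turn example above.
path_difference_cm = 20.0
speed_of_sound_cm_per_s = 34_500.0
sample_rate_hz = 48_000

delay_s = path_difference_cm / speed_of_sound_cm_per_s   # time-of-flight gap
sample_differential = delay_s * sample_rate_hz           # gap in samples

print(round(delay_s * 1000, 2), round(sample_differential))  # 0.58 ms, 28 samples
```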
As depicted in
In another embodiment, the microphone array 430 includes three microphones. As depicted in
Hence, the position and/or orientation of the user's head can be determined and tracked in real-time by the system 400. Such position and/or orientation information may be provided to the game console 420 for real-time response to interactive games executing thereon.
The accuracy of the position and/or orientation calculations can be increased by increasing the number of output sources. In doing so, two points of reference are available, and one source may present a lower, more favorable angle to the microphones than another. The accuracy of the orientation calculation can also be increased by interpolating the delay between samples. Increasing the capture sample rate can also increase the accuracy of the position and/or orientation calculations. At 96 KHz, the same delay is represented by twice as many samples. In addition, a given high frequency waveform can be better represented at a higher sample rate. Furthermore, by increasing the distance between microphones 430, the delay will be increased for the same orientation.
The degrees of freedom of motion of the user's head can be increased by adding additional microphones to the array 430. The degrees of freedom can also be increased by adding additional speakers.
In accordance with embodiments of the present invention, the determined position and/or orientation may be utilized as an input of a computing device. For example, the determined position and/or orientation may be utilized for feedback in a simulator or virtual reality gaming, or to control an application executing on the computing device. In addition, the determined position and/or orientation may also be utilized to control the position of a cursor (e.g., pointing device or mouse) of the computing device. Accordingly, a headset containing an array of microphones may allow a user having a mobility impairment to operate the computing device.
Furthermore, embodiments of the present invention are advantageous in that the microphone array is lightweight, requires very little power, and is inexpensive. The low power requirements and light weight of the microphone array are also advantageous for wireless implementations. Furthermore, the high frequency of the sine wave advantageously provides sufficient resolution and reduces latency of the position and/or orientation calculations. The high frequency of the sine wave is also resistant to interference from other computer and environmental sounds.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.
Assignment: executed Oct 27, 2003, by Mark Pereira to Nvidia Corporation, assignment of assignors interest (Reel/Frame 014651/0707); application filed Oct 28, 2003 by Nvidia Corporation.