A noisy audio signal, with user input device noise, is received. Particular frames in the audio signal that are corrupted by user input device noise are identified and removed. The removed audio data is then reconstructed to obtain a clean audio signal.
11. A method of reconstructing an audio signal corrupted by user input device noise, comprising:
removing a corrupted spectral content of a set of frames in the audio signal corrupted by the user input device noise;
estimating clean values for the corrupted spectral content removed based on observed values in neighboring frames, neighboring the set of frames, wherein estimating comprises estimating the clean values based on a model of correlation between vector values in a sequence of vectors of log spectra from a training corpus;
combining the estimated clean values of the spectral content with a phase of the audio signal to obtain a combined audio signal; and
outputting the combined audio signal.
18. A system for reconstructing an audio signal corrupted by user input device noise, comprising:
a signal removal device that removes a corrupted spectral content of a set of frames in the audio signal corrupted by the user input device noise;
an estimation device that estimates clean values for the corrupted spectral content removed based on observed values in neighboring frames, neighboring the set of frames, wherein estimating comprises estimating the clean values based on a model of correlation between vector values in a sequence of vectors of log spectra from a training corpus;
an estimation combining device that combines the estimated clean values of the spectral content with a phase of the audio signal to obtain a combined audio signal; and
an output device that outputs the combined audio signal.
19. A method for removing user input device noise from an audio signal, comprising:
identifying a portion of the audio signal that includes user input device noise, wherein identifying comprises identifying the portion of the audio signal by calculating how likely a selected portion of the audio signal is, given surrounding portions of the audio signal, and wherein the user input device noise comprises noise generated during user inputs as a result of physical interactions with a user input device, and wherein identifying still further comprises receiving a time stamp indicative of a time of occurrence of one of the user interactions, in a computer system; and
removing magnitude values of a spectral content of the portion of the audio signal and estimating clean magnitude values based on values proximate the removed values in the audio signal.
13. A system for removing user input device noise from an audio signal, comprising:
a noise detection device configured to identify a portion of the audio signal that includes user input device noise, wherein the noise detection device is configured to identify the portion of the audio signal by calculating how likely a selected portion of the audio signal is, given surrounding portions of the audio signal, and
wherein the user input device noise comprises noise generated during user inputs as a result of physical interactions with a user input device, the noise detection device including an input detection device configured to receive a time stamp indicative of a time of occurrence of one of the user interactions in a computer system; and
a signal reconstruction device configured to remove magnitude values of a spectral content of the portion of the audio signal and to estimate clean magnitude values based on values proximate the removed values in the audio signal.
1. A method of removing user input device noise from an audio signal, comprising:
receiving a corrupted audio signal including user input device noise from user inputs on a user input device, wherein the user input device noise comprises noise generated during the user inputs as a result of physical interactions with the user input device;
dividing the corrupted audio signal into frames;
identifying a set of frames corrupted by the user input device noise, wherein identifying a set of frames comprises:
identifying a search space based on an operating system time stamp associated with a frame in the audio signal;
searching the search space for a first frame that is least similar to neighboring frames;
identifying a first set of frames as corrupted frames based on the first frame that is least similar; and
removing corrupted spectral content of the set of identified frames; and
reconstructing the corrupted spectral content of the set of identified frames, without the user input device noise, from neighboring frames proximate the set of identified frames.
17. A system for removing user input device noise from an audio signal, comprising:
a signal receiving device that receives a corrupted audio signal that includes user input device noise from user inputs on a user input device, wherein the user input device noise comprises noise generated during the user inputs as a result of physical interactions with the user input device;
a signal dividing device that divides the corrupted audio signal into frames;
a frame identification device that identifies a set of frames corrupted by the user input device noise, wherein identifying a set of frames comprises:
identifying a search space based on an operating system time stamp associated with a frame in the audio signal;
searching the search space for a first frame that is least similar to neighboring frames;
identifying a first set of frames as corrupted frames based on the first frame that is least similar; and
a content removal device that removes corrupted spectral content of the set of identified frames; and
a signal reconstruction device that reconstructs the corrupted spectral content of the set of identified frames, without the user input device noise, from neighboring frames proximate the set of identified frames.
2. The method of
removing an entire spectral content of the set of identified frames.
3. The method of
calculating how well a selected frame can be predicted based on surrounding frames, in the audio signal; and
identifying whether the selected frame is corrupted by user input device noise based on the step of calculating.
4. The method of
if the selected frame is corrupted by the user input device noise, identifying the set of frames as the selected frame and one or more additional frames, closely proximate the selected frame in the audio signal.
5. The method of
6. The method of
calculating a similarity of the selected frame to given other frames, closely proximate the selected frame in the audio signal.
7. The method of
determining that the selected frame is corrupted by user input device noise if the similarity fails to meet a predetermined threshold.
8. The method of
searching the search space for a second frame, not in the first set of frames, that is least similar to neighboring frames; and
identifying a second set of frames as corrupted frames based on the second frame.
9. The method of
identifying the search space as extending in the audio signal from the frame associated with the time stamp to a frame associated with an immediately preceding time stamp.
10. The method of
reconstructing the magnitude of the corrupted spectral content of the set of identified frames.
12. The method of
14. The system of
a vector sequence model trained to model clean sequences of spectral vectors and correlations between values in the spectral vectors.
15. The system of
16. The system of
Personal computers and laptop computers are increasingly being used as sound capture devices in a variety of recording and communication scenarios. Some of these scenarios include the recording of meetings and lectures for archival purposes, and the transmission of voice data for voice over IP (VoIP) telephony, video conferencing, and audio/video instant messaging. In these scenarios, recording is typically done using the local microphone of the particular computer being used. This recording configuration is highly vulnerable to environmental noise sources. In particular, it is vulnerable to a specific type of additive noise: that generated by a user simultaneously operating a user input device, such as typing on the keyboard of the computer being used for sound capture, clicking a mouse, or tapping a stylus.
There are many reasons a user may be using a keyboard or other input device during sound capture. For instance, while recording a meeting, the user may take notes on the same computer. Similarly, when video conferencing, users often multi-task while talking to another party, typing emails or notes, or navigating and browsing the web for information. In these situations, the keyboard or other user input device is often closer to the microphone than the speaker is. Therefore, the speech signal can be significantly corrupted by the sound of the user's input activity, such as keystrokes.
Continuous typing on a keyboard, mouse clicks, or stylus taps, for instance, produce a sequence of noise-like impulses in the audio stream. The presence of this nonstationary, impulsive noise in the captured speech can be very unpleasant for the listener.
In the past, some attempts have been made to deal with impulsive noise related to keystrokes. However, these approaches have typically attempted to model the keystroke noise explicitly. This presents significant problems, because keystroke noise (and other user input noise, for that matter) can be highly variable across different users and across different keyboard devices.
The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
A noisy audio signal, with user input device noise, is received. Particular frames in the audio signal that are corrupted by the user input device noise are identified and removed. The removed audio frames are then reconstructed to obtain a clean audio signal.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the background.
The present invention can be used to detect and remove, from an audio stream, noise associated with physical manipulation of many types of user input devices. Such user input devices include keyboards, computer mice, and touch screen devices that are used with a stylus, to name but a few examples. The invention is described herein in terms of keystroke noise, but that is exemplary only and is not intended to limit the invention in any way.
Keys on conventional keyboards are mechanical pushbutton switches. Therefore, a typed keystroke appears in an audio signal as two closely spaced noise-like impulses, one generated by the key-down action and the other by the key-up action. The duration of a keystroke is typically between 60 and 80 ms but may last up to 200 ms. Keystrokes can be broadly classified as spectrally flat. However, the inherent variety of typing styles, key sequences, and the mechanics of the keys themselves introduces a degree of randomness into the spectral content of a keystroke. This leads to significant variability across frequency and time, even for the same key. It has also been found empirically that keystroke noise primarily affects only the magnitude of an audio signal (e.g., a speech signal) and has virtually no perceptual effect on the phase of the signal.
Environment 100 includes a user that provides a speech signal to a microphone 104. The microphone also receives keystroke noise 106 from a keyboard 108 that is being used by the user. The microphone 104 therefore provides an audio speech signal 110, with noise, to keystroke removal system 102. Keystroke removal system 102 includes a keystroke detection component 112 and a frame reconstruction component 114 to detect audio frames that are corrupted by keystroke noise, to remove those frames, and to reconstruct the data in those frames to obtain a speech signal 116 without keystroke noise. That signal can then be provided to a speaker 118 to produce audio 120, or it can be provided to any other component (such as a speech recognizer, etc.).
Keystroke removal system 102 then uses keystroke detection component 112 to determine whether keystrokes are present in the speech signal. This is indicated by block 154 in
Keystroke removal system 102 receives the speech signal with noise 110 and the speech signal is segmented into a sequence of frames. In one embodiment, the sequence of frames comprises 20-millisecond frames with 10-millisecond overlap with adjacent frames. Segmenting the speech signal into a sequence of frames is indicated by block 170 in
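As a rough illustration, this segmentation step can be sketched as follows. This is a minimal sketch in Python with numpy, not the patent's implementation; the function name, sample rate, and window choice are assumptions.

```python
import numpy as np

def frame_signal(x, sr=16000, frame_ms=20, hop_ms=10):
    """Split waveform x into overlapping windowed frames and return the
    per-frame magnitude and phase spectra."""
    frame_len = int(sr * frame_ms / 1000)   # 320 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)           # 160 samples -> 10 ms hop
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)   # one DFT per frame
    return np.abs(spectra), np.angle(spectra)
```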
Next, keystroke detection component 112 selects a frame. This is indicated by block 172. Keystroke detection component 112 then determines whether the selected frame can be predicted well from surrounding frames. This is indicated by block 174. A particular way in which this is done is described in more detail below with respect to
The predictability of the selected frame is measured because speech, in general, evolves quite smoothly and slowly over time, so any given frame in a speech signal can be predicted relatively accurately from neighboring frames. If the selected frame can be predicted accurately from the surrounding frames, it is likely not corrupted by keystroke noise, and keystroke detection component 112 simply moves to the next frame and determines whether keystroke noise is present in that frame. Determining whether the selected frame can be predicted accurately from surrounding frames and determining whether there are more frames to process is indicated by blocks 176 and 178, respectively, in
However, if, at block 176, keystroke detection component 112 determines that the selected frame cannot be predicted accurately from the surrounding frames, then the frame is determined to be corrupted with keystroke noise. Because keystroke noise deleteriously affects many, if not all, frequency components of the corrupted frame, the corrupted frame is simply removed from the speech signal. This is indicated by block 180 in
Keystroke removal system 102 then uses frame reconstruction component 114 to reconstruct the speech signal for the frames that have been removed. This is indicated by block 182 in
Specifically, the kth spectral component of the current frame, S(t,k), is modeled as a linear combination of the corresponding components of neighboring frames:
S(t,k)=αk1S(t+τ1,k)+ . . . +αkMS(t+τM,k)+V(t,k) Eq. 1
where τ=[τ1, . . . ,τM] defines the frames used to predict the current frame, αk=[αk1, . . . ,αkM] are the weights applied to these frames, and V(t,k) is zero-mean Gaussian noise, i.e., V(t,k)˜N(0,σtk²), where σtk² is the variance and N(m,v) is a Gaussian distribution with mean m and variance v. Thus, the following equation can be written:
p(S(t,k)|S(t+τ1,k), . . . ,S(t+τM,k))=N(αk1S(t+τ1,k)+ . . . +αkMS(t+τM,k),σtk²) Eq. 2
It is assumed that the frequency components in a given frame are independent. Therefore, the joint probability of the frame can be written as:
p(S(t))=Πk p(S(t,k)) Eq. 3
Therefore, the conditional log-likelihood Ft of the current frame S(t) given the neighboring frames defined by τ can be written as follows:
Ft=log p(S(t)|S(t+τ1), . . . ,S(t+τM))=Σk log p(S(t,k)|S(t+τ1,k), . . . ,S(t+τM,k)) Eq. 4
In Eq. 4, Ft measures the likelihood that the signal at frame t can be predicted by the neighboring frames. A threshold value T is then set for Ft, and a frame is classified as one that is corrupted by keystroke data if Ft<T.
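This test can be sketched in code as follows. The sketch assumes uniform prediction weights (αkm=1/M, as suggested below) and a residual variance estimated from the neighboring frames themselves; `mag` is the (frames × bins) magnitude array from the segmentation sketch, and all names are illustrative.

```python
import numpy as np

def frame_log_likelihood(mag, t, taus=(-2, -1, 1, 2), eps=1e-10):
    """F_t of Eq. 4: how well frame t is predicted by its neighbors
    (higher means more predictable)."""
    neighbors = mag[[t + tau for tau in taus]]     # M x n_bins
    pred = neighbors.mean(axis=0)                  # uniform weights, 1/M
    var = neighbors.var(axis=0) + eps              # per-bin variance estimate
    resid = mag[t] - pred
    # Frequency bins assumed independent (Eq. 3): sum the per-bin Gaussian
    # log-likelihoods of the prediction residual.
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * resid**2 / var)

def is_corrupted(mag, t, threshold):
    return frame_log_likelihood(mag, t) < threshold    # F_t < T
```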
Therefore, referring again to
The value of Ft is then compared to the threshold value T to determine whether the likelihood that the current frame can be predicted from its neighbors meets the threshold value. This is indicated by block 204 in
However, if, at block 204, it is determined that the present frame cannot be predicted sufficiently accurately given its neighboring frames, then the present frame is marked as one that is corrupted by keystroke data. It has also been empirically noted that keystrokes typically last approximately three frames. Therefore, τ can be set equal to [−2,2] so that one frame ahead and one frame behind the current frame are also marked as being corrupted by keystroke noise. Marking the frames as being corrupted by keystroke data is indicated by block 210 in
If there are more frames to consider (at block 207) then component 112 selects the next frame for processing. This is indicated by block 209 in
In addition, the value for the mean can be estimated by setting αkm=1/M, and the variance in Eq. 1 can be estimated as follows:
Despite this,
After component 112 receives the time stamp indicating that a key down action was detected by OS event handler 122, component 112 identifies a time frame tp corresponding to the system clock time p indicated by the time stamp. This is indicated by block 402.
Component 112 then defines a search region Θp as all frames between the previously received time stamp and the current time stamp. In other words, during continuous typing, time stamps corresponding to key down events will be received by component 112. When a current time stamp is received, it is associated with a time frame. Component 112 then knows that the key down action occurred somewhere between the current time frame and the time frame associated with the last time stamp received (which was, itself, associated with a key down action). Therefore, the search region Θp corresponds to all frames between the previous time stamp tp−1 and the current time stamp tp. Defining the search region is indicated by block 404 in
Component 112 then searches through the search region to identify a key down frame as the frame that is least likely to be predicted from its neighbors. For instance, the function Ft defined above in Eq. 4 measures how well a given frame can be predicted from its neighbors. Within the search region defined at block 404, the frame which is least likely to be predicted from its neighbors will be the frame most strongly corrupted by the keystroke within that search region Θp. Because the key down action introduces more noise than the key up action, when component 112 finds the minimum value of Ft within the search region Θp, it is very likely that the frame corresponding to that value is the frame which has been corrupted by the key down action. In terms of the mathematical terminology already described, component 112 finds:
t̂D=arg min{t∈Θp} Ft Eq. 6
Identifying the key down frame in the search region is indicated by block 406 in
Then, because the key down action will corrupt more than one frame, component 112 classifies frames:
ΨD={t̂D−l, . . . ,t̂D+l} Eq. 7
as keystroke-corrupted frames corresponding to the key down action. Identifying this first set of corrupted frames based on the key down frame is indicated by block 408 in
Keystroke detection component 112 then finds, within the search region, the frame corresponding to the key up action as the least-predictable frame not already in ΨD:
t̂U=arg min{t∈Θp, t∉ΨD} Ft Eq. 8
Identifying the key up frame is indicated by block 410 in
Component 112 then identifies the set of frames that have been corrupted by the key up action by classifying frames:
ΨU={t̂U−l, . . . ,t̂U+l} Eq. 9
as keystroke-corrupted frames corresponding to the key up action. Identifying the second set of corrupted frames based on the key up frame is indicated by block 412 in
It has been empirically noted that, because keystrokes typically last on the order of three frames, setting l=1 provides good performance.
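A sketch of this timestamp-guided search (Eqs. 6-9) follows, assuming the per-frame likelihoods Ft have already been computed into an array F and that t_prev and t_curr are the frame indices associated with two consecutive key-down time stamps; the names are illustrative.

```python
import numpy as np

def find_keystroke_frames(F, t_prev, t_curr, l=1):
    """Return the frames marked as keystroke-corrupted in [t_prev, t_curr)."""
    region = np.arange(t_prev, t_curr)
    # Key-down frame: least predictable frame in the region (Eq. 6).
    t_down = region[np.argmin(F[region])]
    psi_down = set(range(t_down - l, t_down + l + 1))          # Eq. 7
    # Key-up frame: least predictable frame not already marked (Eq. 8).
    remaining = np.array([t for t in region if t not in psi_down])
    t_up = remaining[np.argmin(F[remaining])]
    psi_up = set(range(t_up - l, t_up + l + 1))                # Eq. 9
    return sorted(psi_down | psi_up)
```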
It can be seen that, because component 112 searches the entire search region for the key down and key up frames, it can accurately find those frames, even given significant variability in the lag between the physical occurrence of the keystrokes and the operating system time stamp associated with the keystrokes. It can also be seen that, by using the time stamps from the operating system, component 112 can detect keystrokes in the speech signal without using a threshold T for the likelihood Ft.
To reconstruct the keystroke-corrupted frames, a correlation-based reconstruction technique is employed in which a sequence of log-spectral vectors of a speech utterance is assumed to be generated by a stationary Gaussian random process. The statistical parameters of this process (its mean and covariance) are estimated from a clean training corpus in order to model the sequence of vectors. The vector sequence model is indicated by block 115 in
By modeling the sequence of vectors in this manner, co-variances are estimated not just across frequency, but across time as well. Because the process is assumed to be stationary, the estimated mean vector is independent of time and the covariance between any two components is only a function of the time difference between them.
In order for the data to better fit the Gaussian assumption of model 115, operations are performed on the log-magnitude spectra rather than on the magnitude directly.
Thus, frame reconstruction component 114 first receives the frames marked as corrupted (from component 112) and the neighboring frames of the corrupted frames. This is indicated by block 500 in
X(t)=log(S(t)) Eq. 10
where S(t) represents the magnitude spectrum as discussed above. The log magnitude vectors for the clean (observed) and the keystroke-corrupted (missing) speech are defined as Xo and Xm, respectively. Separating the magnitude and phase of the clean frames is indicated by block 512 in
Under the Gaussian process assumption, a MAP estimate of Xm can now be expressed as follows:
X̂m=μm+ΣmoΣoo⁻¹(Xo−μo) Eq. 11
where μm and μo are the corresponding partitions of the mean vector, and Σmo and Σoo are the appropriate partitions of the covariance matrix learned in training. Thus, for each keystroke-corrupted frame in:
Ψ={ΨD,ΨU}, Eq. 12
frame reconstruction component 114 sets the log magnitude vectors as follows:
Component 114 then estimates the magnitude spectrum for the missing frames using model 115 and the observed values in the neighboring frames according to Eq. 11, set out above. Estimating the magnitude spectrum for the missing frames is indicated by block 514 in
Finally, the estimated magnitude spectrum is recombined with the phase for the missing frames, to fully reconstruct the frames. This is indicated by block 516 in
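A minimal sketch of the MAP estimate of Eq. 11 follows. It assumes the model's mean vector mu and covariance Sigma over the stacked log-magnitude vectors were learned offline from clean training data, with obs and miss holding the indices of the observed and keystroke-corrupted entries; the index bookkeeping is an assumption of the sketch.

```python
import numpy as np

def map_reconstruct(x, obs, miss, mu, Sigma):
    """Fill the missing log-magnitude entries of x with their MAP estimate
    given the observed entries (Eq. 11)."""
    S_oo = Sigma[np.ix_(obs, obs)]
    S_mo = Sigma[np.ix_(miss, obs)]
    # X_m = mu_m + Sigma_mo Sigma_oo^{-1} (X_o - mu_o)
    x_hat = mu[miss] + S_mo @ np.linalg.solve(S_oo, x[obs] - mu[obs])
    out = x.copy()
    out[miss] = x_hat
    return out

# exp() of the reconstructed log magnitudes recovers the magnitude spectrum,
# which is then recombined with the original (unmodified) phase before the
# inverse DFT.
```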
In other words, in the log spectral domain, each frame consists of N components, where 2N is the DFT size. Consequently, the full covariance matrix is cN×cN, where c is the number of frames of observed speech used to estimate the missing frames. Typically, N≥128 and c≥2, making the matrix inversion required in Eq. 11 computationally expensive. To reduce the complexity of the operations, it is assumed that the covariance matrix has a block-diagonal structure, preserving only local correlations. If a block size B is used, then the inverse of N/B matrices of size cB×cB is computed, thus reducing the number of computations. In one embodiment, B was empirically set to 5, although other values of B can be used as well.
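The block-diagonal simplification can be sketched by running one independent MAP estimate per block of B frequency bins, reusing the map_reconstruct sketch above. The flat-index layout (entry i belongs to frequency bin i % n_bins) is an assumption of this sketch.

```python
def map_reconstruct_blocked(x, obs, miss, mu, Sigma, n_bins, B=5):
    """One MAP estimate per block of B bins instead of one cN x cN solve."""
    out = x.copy()
    for b0 in range(0, n_bins, B):
        bins = set(range(b0, min(b0 + B, n_bins)))
        # Keep only entries whose frequency bin falls in this block; cross-
        # block covariance entries are never touched (block-diagonal model).
        obs_b = [i for i in obs if i % n_bins in bins]
        miss_b = [i for i in miss if i % n_bins in bins]
        if miss_b and obs_b:
            out = map_reconstruct(out, obs_b, miss_b, mu, Sigma)
    return out
```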
Using a block-diagonal covariance structure also improves robustness in farfield conditions. There can be long-span correlations across time and frequency in close-talking speech; however, these correlations can be significantly weaker in farfield audio. This mismatch results in reconstruction errors, producing artifacts in the resulting audio. By using a block-diagonal structure, only short-span correlations are utilized, making the reconstruction more robust in unseen farfield conditions. To incorporate this change into the MAP estimation algorithm, the single MAP estimation for the keystroke-corrupted frames is simply replaced with multiple estimations, one for each block in the covariance matrix.
Also, in order to reduce the complexity of the computations performed, component 114 illustratively performs the estimation of the magnitude spectrum for the missing frames by estimating a locally adapted mean vector. This is indicated by block 520 in
In other words, the Gaussian model 115 described above with respect to Eq. 11 uses a single mean vector to represent all speech. Because the present system illustratively reconstructs the full magnitude spectrum of the missing frames, and because it operates on farfield audio, there is considerable variation in the observed features. When a single pre-trained mean vector is used in the MAP estimation process, this variation can result in reconstruction artifacts.
In one embodiment, a single mean vector is still used, but with a locally adapted value. To locally adapt the mean vector value, a linear predictive framework, similar to that discussed above in Eq. 4 for detecting corrupted frames, can be used. The mean vector is estimated as a linear combination of the neighboring clean frames surrounding the keystroke-corrupted segment of the signal. Assuming that μk is the kth spectral component of the mean vector μ, the adapted value of this component can be defined as follows:
μ̂k=Στ∈Γ βτX(t−τ,k) Eq. 14
where Γ defines the indices of the neighboring clean frames, and βτ is the weight applied to the observation at time t−τ. Because the mean is computed online, it can easily adapt to different environmental conditions. In one embodiment, the adapted mean value in Eq. 14 is estimated as the mean of the frames used for reconstruction, by setting Γ to the indices of frames in Xo and βτ=1/|Γ|.
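Under that uniform-weight choice, the adapted mean is simply the average of the neighboring clean log-magnitude frames; a one-line sketch, with clean_frames assumed to be a |Γ| × n_bins array:

```python
import numpy as np

def adapted_mean(clean_frames):
    # mu_hat_k = (1/|Gamma|) * sum over tau of X(t - tau, k)   (Eq. 14)
    return clean_frames.mean(axis=0)
```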
It should also be noted that the present discussion has proceeded by removing the entire spectral content of corrupted frames. However, where only specific portions of the spectral content of a frame are corrupted, only the corrupt spectral content needs to be removed. The uncorrupted portions can then be used, along with reliable surrounding frames, to estimate the corrupt portions. The estimation is the same as that described above, except that the definitions of Xm and Xo change slightly to reflect that only a portion of the spectral content is being estimated.
Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices.
With reference to
Computer 610 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 610 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 610. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 630 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 631 and random access memory (RAM) 632. A basic input/output system 633 (BIOS), containing the basic routines that help to transfer information between elements within computer 610, such as during start-up, is typically stored in ROM 631. RAM 632 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 620. By way of example, and not limitation,
The computer 610 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 610 through input devices such as a keyboard 662, a microphone 663, and a pointing device 661, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 620 through a user input interface 660 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 691 or other type of display device is also connected to the system bus 621 via an interface, such as a video interface 690. In addition to the monitor, computers may also include other peripheral output devices such as speakers 697 and printer 696, which may be connected through an output peripheral interface 695.
The computer 610 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 680. The remote computer 680 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 610. The logical connections depicted in
When used in a LAN networking environment, the computer 610 is connected to the LAN 671 through a network interface or adapter 670. When used in a WAN networking environment, the computer 610 typically includes a modem 672 or other means for establishing communications over the WAN 673, such as the Internet. The modem 672, which may be internal or external, may be connected to the system bus 621 via the user input interface 660, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 610, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.