A redundance reduction method used in coding multichannel signals is proposed, in which signals available in digitized form are predicted. A prediction error is computed, which is subsequently quantized and loaded for transmission over a transmission path. In the method, prediction is performed linearly in a backward-adaptive fashion for at least two channels simultaneously, and the statistical relationships of the signals within a channel and between at least two channels are taken into account. A device for decoding redundance-reduced multichannel signals is also proposed; it comprises a linear backward-adaptive predictor for at least two channels.
8. A device for decoding a multichannel signal, the multichannel signal having been redundance reduced by a linear, backward-adaptive technique, the device comprising:
a linear, backward-adaptive predictor for providing a prediction error signal, the linear, backward-adaptive predictor receiving a plurality of quantized prediction values for each of the channels of the multichannel signal, the linear, backward-adaptive predictor including a plurality of stages, each of the plurality of stages determining a component that is deducted from a digital value in a respective channel of the multichannel signal, each of the plurality of stages having a lattice-type structure, wherein at least one of the plurality of stages is capable of being switched off when a prediction gain cannot be achieved, wherein when the at least one of the plurality of stages is switched off the at least one of the plurality of stages operates as a backward-adaptive predictor in the forward-feed direction, and wherein each one of the plurality of stages adapts at least one respective predictor coefficient regardless of whether the at least one of the plurality of stages is switched off, wherein the prediction error signal does not depend on the at least one of the plurality of stages that is switched off and that continues to be adapted.
4. A method for coding a multichannel signal, the multichannel signal being in a digitized form, comprising the steps of:
performing a linear, backward-adaptive prediction on the multichannel signal to form a predicted signal, the linear, backward-adaptive prediction being performed in a plurality of stages, each of the plurality of stages having a lattice structure, at least one of the plurality of stages capable of being switched off, the performing step including the steps of in each of the plurality of stages, computing a respective component value, subtracting each respective component value from the multichannel signal to form the predicted signal, determining whether the at least one of the plurality of stages can achieve a performance gain, if the at least one of the plurality of stages cannot achieve a performance gain, switching off the at least one of the plurality of stages, for each of the plurality of stages, determining at least one respective predictor coefficient, and for each of the plurality of stages, adapting the at least one respective predictor coefficient regardless of whether the at least one of the plurality of stages is switched off; and forming a prediction error signal as a function of the multichannel signal and the predicted signal; quantizing the prediction error signal; and loading the quantized prediction error signal for transmission over a transmission path; wherein the prediction error signal does not depend on the at least one of the plurality of stages that is switched off and that continues to be adapted.
16. A method for coding a signal, the signal being in a digitized form, comprising the steps of:
performing a linear, backward-adaptive prediction on the signal to form a predicted signal, the linear, backward-adaptive prediction being performed in a plurality of stages, each of the plurality of stages having a lattice structure, at least one of the plurality of stages capable of being switched off, the performing step including the steps of in each of the plurality of stages, computing a respective component value, subtracting each respective component value from the signal to form the predicted signal, determining whether the at least one of the plurality of stages can achieve a performance gain, if the at least one of the plurality of stages cannot achieve a performance gain, switching off the at least one of the plurality of stages, for each of the plurality of stages determining at least one respective predictor coefficient, for each of the plurality of stages, adapting the at least one respective predictor coefficient regardless of whether the at least one of the plurality of stages is switched off, for each of the at least one of the plurality of stages that is switched off, operating as a backward-adaptive predictor in a forward-feed direction, and forming a prediction error signal as a function of the signal and the predicted signal; quantizing the prediction error signal; and loading the quantized prediction error signal for transmission over a transmission path; wherein the prediction error signal does not depend on the at least one of the plurality of stages that is switched off and that continues to be adapted.
1. A method for coding a multichannel signal, the multichannel signal being in a digitized form, comprising the steps of:
performing a linear, backward-adaptive prediction on the multichannel signal to form a predicted signal, the linear, backward-adaptive prediction being performed in a plurality of stages, each of the plurality of stages having a lattice structure, at least one of the plurality of stages capable of being switched off, the performing step including the steps of in each of the plurality of stages, computing a respective component value, subtracting each respective component value from the multichannel signal to form the predicted signal, determining whether the at least one of the plurality of stages can achieve a performance gain, if the at least one of the plurality of stages cannot achieve a performance gain, switching off the at least one of the plurality of stages, for each of the plurality of stages, determining at least one respective predictor coefficient, for each of the plurality of stages, adapting the at least one respective predictor coefficient regardless of whether the at least one of the plurality of stages is switched off, for each of the at least one of the plurality of stages that is switched off, operating as a backward-adaptive predictor in a forward-feed direction, and forming a prediction error signal as a function of the multichannel signal and the predicted signal; quantizing the prediction error signal; and loading the quantized prediction error signal for transmission over a transmission path; wherein the prediction error signal does not depend on the at least one of the plurality of stages that is switched off and that continues to be adapted.
2. The method according to
summing the plurality of error signal intensity values; and determining the minimum of the sum of the plurality of the error signal intensity values for determining each of the at least one respective predictor coefficient.
3. The method according to
summing the plurality of error signal intensity values; and determining the minimum of the sum of the plurality of the error signal intensity values for determining each of the at least one respective predictor coefficient.
5. The method according to claim 4, wherein the step of determining at least one respective predictor coefficient includes the steps of:
determining statistical linkages between simultaneous sampled values of at least two channels of the multichannel signal in a downstream zero-order inter-channel predictor; and determining the at least one predictor coefficient as a function of the statistical linkages.
6. The method according to
decomposing the multichannel signal into a plurality of spectral components; and performing the linear, backward-adaptive prediction on each of the plurality of spectral components separately.
7. The method according to
9. The device according to
10. The device according to
a plurality of linear, backward-adaptive predictors, each of the plurality of linear, backward-adaptive predictors corresponding to a different one of the plurality of subbands, wherein each of the plurality of linear, backward-adaptive predictors is capable of being switched off independent of the others of the plurality of linear, backward-adaptive predictors.
11. The method according to
12. The method according to
13. The method according to
14. The method according to
15. The method according to
The present invention relates to a redundance reduction method used in coding of signals and further relates to a device for decoding redundance-reduced signals.
German Patent 43 20 990 A1 describes a redundance reduction method used in coding multichannel signals; in particular, the coding of dual-channel audio signals is described. For the purpose of redundance reduction, the signals are sampled, quantized, and predictively coded in an encoder, and estimated values for the actual sampled values are obtained. The prediction error is determined and loaded for transmission over a data line. The predictive coding is an adaptive inter-channel prediction, i.e., use is made of the statistical relationship between the signals in the two channels (cross-correlation). The predictor coefficients must be transmitted to the receiver as side information.
U.S. Pat. No. 4,815,132 describes a redundance reduction method used in coding dual-channel signals that are available in digitized form. The digitized signals are predictively coded, and a prediction error between the digitized and predictively coded signals is determined. The prediction error values are quantized and loaded for transmission over a transmission path. Linear backward-adaptive prediction is performed for two channels simultaneously, with the statistical relationships within a channel and between two channels taken into account. Furthermore, U.S. Pat. No. 4,815,132 describes the use of a linear backward-adaptive predictor that receives the quantized prediction values of two channels.
WO A-9016136 discloses predictors with a lattice-type structure.
A backward-adaptive predictor for the case of a single channel is described in N. S. Jayant, P. Noll, "Digital Coding of Waveforms," Prentice-Hall, Englewood Cliffs, N.J., 1984.
In P. Cambridge, M. Todd, "Audio Data Compression Techniques," 94th AES Convention, Preprint 3584 (K1-9), Berlin, March 1993, the possibility of using auto-correlations and cross-correlations for a predictor for two stereo channels is mentioned. No specific implementation is suggested, however.
The method according to the present invention provides the advantage that the coding is not degraded by the prediction. Another advantage is that the linear, backward-adaptive prediction is carried out jointly for at least two channels; the statistical linkages are exploited not only between the channels but also within each individual channel. As a result, the quality of the prediction is improved and a higher prediction gain is made possible in many signal ranges. For example, in coding audio signals at a data rate of 64 kbit/s, the quality of the transmitted signals can be significantly improved.
The present invention relates to predictors in general for the case of N channels; i.e., more than one channel can be predicted at a time, making use of the statistical linkages both within a single channel (auto-correlations) and between the channels (cross-correlations). A clear prediction gain is thus achieved in comparison with the single-channel case, where only autocorrelation is used.
By performing backward-adaptive prediction, the predictor coefficients may be calculated from the values already transmitted, so that the predictor coefficients do not need to be transmitted. This is not possible in the case of forward-adaptive predictors, where the predictor coefficients would also have to be transmitted to the receiver, which would result in increased data transmission load.
The present invention provides the advantage that signal prediction may be performed in stages, with a lattice-type structure used for each predictor stage. This results in an orthogonal system being formed, at least in the case of steady-state signals, which allows the order of the predictors to be varied in a simple manner, since adding a stage to the existing stages does not affect the previous stages. Therefore, the optimum predictor in terms of computational load and prediction quality may be selected in each specific case.
In order to adaptively compute the predictor coefficients, the sum of the error signal intensities in the at least two channels is determined. For this purpose, the expectation values of the error signals in the at least two channels are needed. It is advantageous if the expectation values are replaced by mean values over a certain signal history period, as sketched below; the computational load is significantly reduced in this manner.
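As an illustration (not taken from the original text), one common way to realize such a mean over a limited signal history is an exponentially weighted running average,

$$\hat{E}[u](n) = (1-\beta)\,\hat{E}[u](n-1) + \beta\,u(n), \qquad 0 < \beta \ll 1,$$

where $u(n)$ stands for the quantity whose expectation value is required and $\beta$ is an adaption time constant; a recursion of this kind reappears in the adaptation equations discussed further below.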
It is also advantageous when individual predictor stages are designed so that they can be switched off. This allows the predictor to react flexibly if, for example, instability occurs in a prediction. In backward-adaptive systems, this may occur, for example, if the signal statistics are changed due to a signal change.
It is also advantageous when the stages that have been switched off continue to operate as backward-adaptive predictors in the forward-feed direction. In this way, the predictor coefficients of even the switched-off stages continue to be adapted, so that when those stages are switched on again, the corresponding predictor coefficients do not need to be adapted completely anew.
Furthermore, it is advantageous when use is also made of the statistical relationships between simultaneous sampled values in at least two channels (cross-correlations). For this purpose, a simple zero-order inter-channel predictor may be connected downstream. This results in a prediction gain, for example, in coding audio signals, in particular for mono-like signals, e.g., when the signals of the channels differ only in their amplitudes.
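As an illustration (with assumed notation; the corresponding relations of the original text appear later, around equation 30), such a downstream zero-order inter-channel predictor for two channels x and y additionally predicts the prediction error of the second channel from the simultaneous prediction error of the first channel,

$$\tilde{e}_y(n) = k_{xy,0}(n)\,e_x(n), \qquad k_{xy,0}(n) = \frac{E\!\left[e_x(n)\,e_y(n)\right]}{E\!\left[e_x^{2}(n)\right]},$$

so that only the residual $e_y(n) - \tilde{e}_y(n)$ has to be transmitted for the second channel.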
In addition, it is also advantageous when the multichannel signals are decomposed into their spectral components, for example, using a filter bank or a transformation, and the multichannel spectral components thus obtained are coded separately, the prediction for the spectral components of the at least two channels being performed separately. This has the advantage of allowing flexible and effective control of the predictors. If no signal components are present in a subband, or no prediction gain is achieved, the corresponding predictor can be switched off. Implementation with a plurality of lower-order predictors is often simpler than with one higher-order broadband predictor.
The following remarks concern the notation. The aforementioned signals are written as vector signals; the vectors correspond to an N-channel signal. The symbols $\vec{x}(n)$, $\vec{\tilde{x}}(n)$, and $\vec{\hat{x}}(n)$ represent the N-channel input signal, the N-channel predictor output signal, and the quantized N-channel signal, and $\vec{e}(n)$ and $\vec{\hat{e}}(n)$ represent the N-channel prediction error signal and the quantized prediction error signal, respectively.
The computation of the predictor output signal is explained in more detail below. N-channel predictor 43 has an order P and is described by the P matrices of the predictor coefficients

$$A_m(n), \qquad m = 1, \ldots, P,$$

where m and P are natural numbers. To calculate the predictor output signal $\vec{\tilde{x}}(n)$, i.e., the estimated values for the instantaneous sampled value in each channel,

$$\vec{\tilde{x}}(n) = \sum_{m=1}^{P} A_m(n)\,\vec{\hat{x}}(n-m)$$

applies. Then the N-channel prediction error signal $\vec{e}(n)$ is calculated according to

$$\vec{e}(n) = \vec{x}(n) - \vec{\tilde{x}}(n).$$

For the quantized prediction error signal the following applies:

$$\vec{\hat{e}}(n) = \vec{e}(n) + \vec{q}(n),$$

and therefore for the quantized input signal

$$\vec{\hat{x}}(n) = \vec{\tilde{x}}(n) + \vec{\hat{e}}(n) = \vec{x}(n) + \vec{q}(n),$$

where $\vec{q}(n)$ is a type of quantizing error signal, which, however, is not used later in the computation.
The predictor coefficients in the matrices $A_m(n)$ are calculated by minimizing the sum of the prediction error signal intensities

$$E\!\left[\vec{e}^{\,T}(n)\,\vec{e}(n)\right] = \sum_{i=1}^{N} E\!\left[e_i^{2}(n)\right] \;\rightarrow\; \min. \tag{8}$$

In equation 8, $\vec{e}^{\,T}(n)$ is the transposed vector of $\vec{e}(n)$. The prediction gain is then provided by the following formula:

$$G = \frac{\sum_{i=1}^{N} \sigma_{x_i}^{2}}{\sum_{i=1}^{N} \sigma_{e_i}^{2}}. \tag{9}$$

In equation 9, it has been assumed that the signals are zero-mean, so that $\sigma_{x_i}^{2} = E[x_i^{2}(n)]$ and $\sigma_{e_i}^{2} = E[e_i^{2}(n)]$ apply; $\sigma_{x_i}^{2}$ and $\sigma_{e_i}^{2}$ are the variances of the corresponding values $x_i$ and $e_i$.
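A small numeric sketch of this prediction gain, assuming the variance-ratio form of equation 9 as reconstructed above (a logarithmic value in dB is also returned for convenience):

```python
import numpy as np

def prediction_gain(x, e):
    """Prediction gain for N-channel signals (illustrative sketch).

    x, e: arrays of shape (N, n_samples) holding the input signal and the
    prediction error signal of each channel (assumed zero-mean).
    Returns the gain as a linear ratio and in dB.
    """
    g = np.sum(np.var(x, axis=1)) / np.sum(np.var(e, axis=1))
    return g, 10.0 * np.log10(g)
```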
The implementation of N-channel predictor 43 is explained below with reference to FIG. 2. Predictor 43 has a lattice-type structure, which is conventional in single-channel predictors. Reference is made in this respect to the book by P. Strohbach "Linear Prediction Theory," Springer Verlag, pp. 158-178, 1990. A substantial advantage of the lattice-type structure is that, with it, an orthogonal system is formed, which allows the predictor order to be varied in a simple manner by cascading the predictor basic stages, since adding another stage does not affect the previous stages. According to this invention, this structure also applies to the case of N channels and is explained in detail below.
The N-channel predictor illustrated in FIG. 2 is constructed as a cascade of P predictor basic stages, the mth of which is considered below.
Now we shall discuss the mathematical description of this N-channel predictor structure and the computation of the predictor coefficients, which are also referred to as reflection or Parcor coefficients. The mth stage of the N-channel predictor is described by recursion equations (cf. equation 10), together with the associated error vectors and the matrices of the reflection coefficients; a standard form of these relations is sketched at the end of this derivation.
In order to illustrate the relation with
The reflection or Parcor coefficients are calculated for steady-state signals by minimizing the sums of the N error signal intensities (equations 14 and 15).
The minimum is computed by setting the N×N partial derivatives with respect to the individual reflection coefficients to zero. N×N equations are obtained for the N×N elements of the reflection coefficient matrix. The following computations are limited to equation 14, but they also apply to equation 15.
The zero matrix is on the right-hand side of the equation.
By substituting equation 10 into equation 16, after transformation, equation 17 is obtained. The vector products within the expectation values represent dyadic products, so that matrices are obtained again; equation 17 is therefore also a matrix equation. Solving it for $K_m^{e}$ yields equation 18, and a similar expression is obtained for $K_m^{r}$.
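For orientation, a standard form of the multichannel lattice recursion (cf. equations 10 and 11) and of the resulting reflection-coefficient matrices (cf. equations 16 through 18) is sketched here; the exact notation and initialization of the original equations may differ:

$$\vec{e}_m(n) = \vec{e}_{m-1}(n) - K_m^{e}(n)\,\vec{r}_{m-1}(n-1), \qquad \vec{r}_m(n) = \vec{r}_{m-1}(n-1) - K_m^{r}(n)\,\vec{e}_{m-1}(n),$$

with $\vec{e}_0(n) = \vec{r}_0(n) = \vec{\hat{x}}(n)$ as the forward and backward prediction error vectors, and, from setting the partial derivatives of the criteria of equations 14 and 15 to zero,

$$K_m^{e} = E\!\left[\vec{e}_{m-1}(n)\,\vec{r}_{m-1}^{\,T}(n-1)\right]\Bigl(E\!\left[\vec{r}_{m-1}(n-1)\,\vec{r}_{m-1}^{\,T}(n-1)\right]\Bigr)^{-1},$$

$$K_m^{r} = E\!\left[\vec{r}_{m-1}(n-1)\,\vec{e}_{m-1}^{\,T}(n)\right]\Bigl(E\!\left[\vec{e}_{m-1}(n)\,\vec{e}_{m-1}^{\,T}(n)\right]\Bigr)^{-1}.$$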
The mth stage of N-channel predictor 43 is illustrated in more detail in FIG. 3. The same reference numbers denote the same components as in the preceding figures.
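A minimal sketch of such a basic stage, assuming the standard lattice recursion sketched above; class and attribute names are illustrative, and the adaptation of the reflection-coefficient matrices (the adaption networks 57) is treated separately further below:

```python
import numpy as np

class LatticeStage:
    """One basic stage of an N-channel lattice predictor (illustrative sketch,
    not the patent's reference implementation). It holds two N x N
    reflection-coefficient matrices and the delayed backward error vector."""

    def __init__(self, n_channels: int):
        self.K_e = np.zeros((n_channels, n_channels))  # forward reflection matrix
        self.K_r = np.zeros((n_channels, n_channels))  # backward reflection matrix
        self.r_delayed = np.zeros(n_channels)          # r_{m-1}(n-1), the z^-1 element
        self.enabled = True                            # stage can be switched off

    def step(self, e_in: np.ndarray, r_in: np.ndarray):
        """Process one sample: e_in = e_{m-1}(n), r_in = r_{m-1}(n)."""
        # Local forward/backward errors of this stage, kept even when the
        # stage is switched off so that the adaptation network can keep running.
        self.e_local = e_in - self.K_e @ self.r_delayed
        self.r_local = self.r_delayed - self.K_r @ e_in
        self.r_delayed = r_in
        if not self.enabled:
            # Switched-off stage: pass the error vectors through unchanged, so
            # the overall prediction error does not depend on this stage.
            return e_in, r_in
        return self.e_local, self.r_local
```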
The specific case of dual-channel stereo signal processing and the corresponding mathematical representation is explained in detail below.
For the case of stereo prediction (N = 2), the error vectors and the matrices of the reflection coefficients can be written out explicitly, and the recursion equations take simplified forms; for the individual matrix coefficients, corresponding scalar expressions are then obtained (cf. equations 21 through 23).
To adapt the coefficients in equation 23 to the instantaneous signal characteristics, the expectation values in equations 21 through 23 used for optimization are replaced by measured averages over a limited period of signal history. In the case of stereo signal processing, a limited period of signal history can be, for example, the signal segment of a few milliseconds up to 100 ms used for averaging. A compromise may be found between good convergence to the optimum predictor setting in signal segments with quasi-steady-state signal character and the capability to quickly adapt to changed signal characteristics in the sense of short-term statistics. In this connection, algorithms in which the estimates of the required parameters are improved iteratively, i.e., from one sampled value to the next, can be used. Such algorithms include the conventional LMS (least mean square) or RLS (recursive least squares) algorithms. These algorithms are also described in the book by P. Strohbach, "Linear Prediction Theory," Springer Verlag, 1990.
In the following, the adaption of the coefficients $k_{xy,m}^{e}$ and $k_{xy,m}^{r}$ is explained using the example of the LMS algorithm. The expectation values are replaced by plausible estimates, which can be computed recursively (equations 24 and 25), where β is an adaption time constant that determines the influence of the instantaneous sampled values on the expectation value estimates. It should be optimized in particular as a function of the sampling frequency of the respective predictor input signal, which may be done experimentally for a given coding algorithm. From these estimates, the equations for determining the above-mentioned reflection coefficients (equations 26 and 27) are obtained; the adaptation principle is sketched below.
$C_{xx,m}(n)$, $D_{xx,m}^{e}(n)$, and $D_{xx,m}^{r}(n)$ are obtained as in equations 24 and 25. For the remaining coefficients, similar formulas are used, which are not individually explained in detail. The resulting formulas are implemented in each predictor stage by adaption networks 57, as illustrated in FIG. 3. For this purpose, a suitably programmed microcomputer may be used. The optimum values of the attenuation constants a and b can also be determined experimentally; they allow the effects of transmission errors to decay more rapidly.
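Since equations 24 through 27 are not reproduced in this text, the following sketch only illustrates the adaptation principle described in this paragraph for a single scalar reflection coefficient: the expectation values are replaced by exponentially weighted running estimates with the time constant β, and the attenuation constants a and b act on the running estimates. The function name, the exact placement of a and b, and the threshold guard are assumptions; the default values are the ones quoted later for the layer 2 integration.

```python
def adapt_reflection_coefficient(num_est, den_est, e_prev, r_prev_delayed,
                                 beta=2**-6, a=1 - 2 * 2**-6, b=1 - 2**-6):
    """One update of a scalar reflection coefficient (illustrative sketch).

    num_est, den_est: running estimates that replace the expectation values
        E[e_{m-1}(n) * r_{m-1}(n-1)] and E[r_{m-1}(n-1)^2].
    e_prev, r_prev_delayed: the current samples e_{m-1}(n) and r_{m-1}(n-1).
    beta: adaption time constant (weight of the instantaneous samples).
    a, b: attenuation constants that let the estimates decay, so that the
        effects of transmission errors die out more rapidly (placement assumed).
    """
    num_est = a * ((1.0 - beta) * num_est + beta * e_prev * r_prev_delayed)
    den_est = b * ((1.0 - beta) * den_est + beta * r_prev_delayed ** 2)
    k = num_est / den_est if den_est > 1e-12 else 0.0
    return k, num_est, den_est
```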
In this case, the recursion equations of the lattice-type structure take the corresponding forms given in equations 28 and 29, where the reflection coefficients of the matrices $K_m^{e}$ and $K_m^{r}$ are calculated as explained with reference to equations 26 and 27.
In the structure illustrated in the figure, a zero-order inter-channel predictor of the kind described above is connected downstream of the P lattice stages; for its output the following applies:
$$\bar{e}_{y,P+1}(n) = \tilde{e}_{y,P}(n) - k_{xy,0}(n)\cdot\bar{e}_{x,P}(n). \tag{30}$$
Coefficients $k_{xy,0}(n)$ are calculated from the condition of minimum error intensity for the quantized prediction error signal $\bar{e}_{y,P+1}(n)$:

$$E\!\left[\bar{e}_{y,P+1}^{2}(n)\right] \;\rightarrow\; \min. \tag{31}$$

Minimization, as given in equation 31, is performed by setting the derivative of the error intensity with respect to $k_{xy,0}(n)$ to zero. This provides

$$E\!\left[\bar{e}_{y,P+1}(n)\,\bar{e}_{x,P}(n)\right] = 0, \tag{32}$$

i.e.,

$$k_{xy,0}(n) = \frac{E\!\left[\tilde{e}_{y,P}(n)\,\bar{e}_{x,P}(n)\right]}{E\!\left[\bar{e}_{x,P}^{2}(n)\right]}. \tag{33}$$
These equations can be derived similarly for the N-channel case.
In the following, a specific implementation of signal encoding with the N-channel predictor according to the present invention is explained in more detail. In FIG. 6, the signal to be coded is first decomposed into a plurality of subbands by a filter bank, and a separate N-channel predictor of the type described above is provided for each subband.
Instead of the filter bank, a computer unit performing a transformation, such as a discrete Fourier transformation or a discrete cosine transformation, may also be used in FIG. 6. The spectral components are then the transformation coefficients. For each series of transformation coefficients, a separate predictor must then be used.
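A sketch of this subband coding structure, under the assumption of a generic analysis filter bank feeding per-subband stereo predictors; the predictor interface (predict/update/enabled), the quantize callable, and the array shapes are placeholders, not the patent's reference implementation:

```python
import numpy as np

def encode_frame(subband_samples, predictors, quantize):
    """Encode one sample of each stereo subband (illustrative sketch).

    subband_samples: array of shape (n_subbands, 2) holding the current
        stereo sample of every subband.
    predictors: per-subband predictor objects with a hypothetical
        predict()/update() interface and an 'enabled' flag.
    quantize: callable mapping a prediction error to its quantized value.
    """
    coded = []
    for band, x in enumerate(subband_samples):
        pred = predictors[band]
        x_tilde = pred.predict()                  # estimate from past quantized values
        e = x - x_tilde if pred.enabled else x    # switched-off band: code the signal itself
        e_hat = quantize(e)
        x_hat = x_tilde + e_hat if pred.enabled else e_hat
        pred.update(x_hat)                        # backward adaptation uses quantized values only
        coded.append(e_hat)
    return np.asarray(coded)
```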
One problem when a predictor is switched off or switched over lies in the fact that, when it is switched on again, the entire predictor, or the higher stages that have been switched off, must be adapted anew, which results in reduced prediction gain. In order to eliminate this problem, the switched-off stages continue to be operated as backward-adaptive predictors in the forward-feed direction, so that their coefficients continue to be adapted.
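As a sketch of the switch-off behavior described here and in the claims (the decision rule, the margin, and the variance estimates are assumptions): a stage or subband predictor is disabled when no prediction gain is achieved, while its coefficient adaptation keeps running, so that re-enabling it does not require adapting the coefficients completely anew.

```python
def control_predictor(pred, var_input, var_error, margin=1.05):
    """Enable or disable a (stage or subband) predictor based on the achieved
    prediction gain; the decision rule and the margin are assumptions.

    Adaptation itself is NOT gated on pred.enabled: pred.update() keeps being
    called for every sample (see the encoding loop above), so the coefficients
    of a switched-off predictor stay current and need not be re-adapted from
    scratch when it is switched on again.
    """
    gain = var_input / max(var_error, 1e-12)
    pred.enabled = gain > margin   # switch off when no prediction gain is achieved
    return pred.enabled
```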
In the general case of N channels, the number of coefficients to be computed per stage increases rapidly with increasing N, since it is proportional to the square of N, so that it may prove to be advantageous from the practical point of view that the N channels be subdivided into subgroups N=N1+N2+ . . . and then N1-channel, N2-channel, etc. predictors be used. For the case that is relevant today, N=5, where there are three front channels (left, center, right) and two surround channels (left-surround, right-surround), for example, a subdivision into a 3-channel predictor for the front channels and a 2-channel predictor for the surround channels is possible. Thus the number of coefficients per stage is reduced from 2×25=50 to 2×4+2×9=26.
The linear backward-adaptive N-channel predictors described here may be used in many different ways. Thus, applications such as the transmission of video or audio signals over ISDN are conceivable. Other applications include, for example, the transmission of digital audio signals via Digital Audio Broadcasting (DAB). The predictors may also be used in data transmission over computer networks, in particular in multimedia applications.
Such predictors may be integrated in an audio codec, in particular in ISO MPEG layer 2 and ISO MPEG layer 3. The ISO MPEG layer 2 codec is based on a decomposition into 32 subbands of equal width (750 Hz each), with a subband sampling frequency of 1.5 kHz. A backward-adaptive stereo predictor that can be switched off or switched over to another predictor length may be used for each stereo subband. In this codec, each block of 36 subband sampled values is scaled, quantized, and coded in each channel. For backward-adaptive prediction, the quantization for a block must be known from the beginning of the block; since this, in turn, depends on the prediction error values, prediction, scaling, and quantization are performed iteratively. The bit allocation algorithm responsible for selecting the quantizer is adapted so that the prediction gains are taken into account when assigning bits. With the values β = 2⁻⁶ for the adaption time constant and a = 1 − 2β and b = 1 − β for the attenuation constants, an average coding gain in the range of 45 to 60 kbit/s compared to layer 2 without prediction was measured for the previously most critical test signals when coding at a data rate of 2 × 64 = 128 kbit/s. The subjective audio quality was considerably improved.
The ISO MPEG layer 3 codec is based on a subband decomposition with a subsequent transformation, whose output delivers 576 spectral coefficients. Successive coefficients of a series are approximately 12 ms apart, which corresponds to a "subband sampling frequency" of approximately 83 Hz. For encoding, the 576 spectral coefficients are combined into so-called scale factor bands, whose bandwidths approximately correspond to the critical bands (frequency groups) of hearing.
For each series of stereo coefficients, a backward-adaptive stereo predictor is used, i.e., a total of 576 predictors, which may be switched off per scale factor band. In the case of signal changes, the frequency resolution in the layer 3 codec is reduced to 192 spectral coefficients; prediction is then switched off and is only reactivated upon the transition back to 576 coefficients. The integration of stereo prediction in layer 3 is relatively uncomplicated, since quantization is performed coefficient by coefficient due to the processing structure. With the values β = 0.1 for the adaption time constant and a = b = 0.85 for the attenuation constants, an average coding gain in the range of 25 to 35 kbit/s compared to layer 3 without prediction was measured for the previously most critical test signals when coding at a data rate of 2 × 64 = 128 kbit/s. The lower coding gain compared to layer 2 is explained by the fact that layer 3 already has a greater coding efficiency than layer 2. The subjective audio quality was improved in this case as well.
Patent | Priority | Assignee | Title |
4815132, | Aug 30 1985 | Kabushiki Kaisha Toshiba | Stereophonic voice signal transmission system |
5249205, | Sep 03 1991 | BlackBerry Limited | Order recursive lattice decision feedback equalization for digital cellular radio |
5511093, | Jun 05 1993 | Robert Bosch GmbH | Method for reducing data in a multi-channel data transmission |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Jan 16 1998 | EDLER, BERND | Robert Bosch GmbH | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 009149 | /0135 | |
Jan 19 1998 | FUCHS, HENDRIK | Robert Bosch GmbH | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 009149 | /0135 | |
Apr 02 1998 | Robert Bosch GmbH | (assignment on the face of the patent) | / |