Embodiments of the present invention comprise systems and methods for processing of data related to video wherein reduced bit depth intermediate calculations are enabled.

Patent
   RE42745
Priority
Aug 09 2001
Filed
Jan 19 2010
Issued
Sep 27 2011
Expiry
May 02 2022
1. A method for dequantization and inverse transformation, said method comprising:
(a) receiving a matrix of quantized coefficient levels;
(b) receiving at least one quantization parameter (qp);
(c) determining a reconstructed transform coefficient (rtc) matrix wherein each value in said matrix of quantized coefficient levels is scaled by a value in a scaling matrix which is dependent on qp % p, where p is a constant value;
(d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said rtc matrix values; and
(e) computing reconstructed samples, by normalizing the SRS values.
9. A method for dequantization and inverse transformation, said method comprising:
(a) receiving a matrix of quantized coefficient levels (qcl matrix);
(b) receiving a quantization parameter (qp);
(c) calculating a scaling matrix using a weighting matrix scaled by a dequantization matrix selected using qp % p;
(d) determining a reconstructed transform coefficient (rtc) matrix wherein said qcl matrix is scaled by said scaling matrix;
(e) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said rtc matrix values; and
(f) computing reconstructed samples, by normalizing the SRS values with a constant shift operation.
13. A computer-readable medium encoded with computer executable instructions for dequantization and inverse transformation, said instructions comprising:
(a) receiving a matrix of quantized coefficient levels;
(b) receiving at least one quantization parameter (qp);
(c) determining a reconstructed transform coefficient (rtc) matrix wherein each value in said matrix of quantized coefficient levels is scaled by a value in a scaling matrix which is dependent on qp % p, where p is a constant value;
(d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said rtc matrix values; and
(e) computing reconstructed samples, by normalizing the SRS values.
11. A method for dequantization and inverse transformation, said method comprising:
(a) fixing a limited set of scaling matrices, wherein each of said scaling matrices in said limited set of scaling matrices is dependent on an associated quantization parameter qp and an associated constant parameter p according to the relation qp % p;
(b) receiving a quantized coefficient level (qcl) matrix;
(c) determining a reconstructed transform coefficient (rtc) matrix wherein each value in said quantized coefficient level matrix is scaled by a value in a scaling matrix that is selected from said limited set of scaling matrices;
(d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said rtc matrix values; and
(e) computing reconstructed samples, by normalizing the SRS values.
12. An apparatus for dequantization and inverse transformation, said apparatus comprising:
(a) a qcl receiver for receiving a matrix of quantized coefficient levels (QCLs);
(b) a qp receiver for receiving at least one quantization parameter (qp);
(c) a processor, wherein said processor is capable of determining a reconstructed transform coefficient (rtc) matrix wherein each value in said matrix of quantized coefficient levels is scaled by a value in a scaling matrix which is dependent on qp % p, where p is a constant value;
(d) said processor comprising a further capability of computing scaled reconstructed samples (SRS) by performing an inverse transformation on said rtc matrix values; and
(e) said processor comprising the capability of computing reconstructed samples, by normalizing said SRS values.
2. A method as described in claim 1 wherein p=6.
3. A method as described in claim 1 wherein said scaling matrix is a 4×4 matrix.
4. A method as described in claim 1 wherein said scaling matrix is an 8×8 matrix.
5. A method as described in claim 1 wherein said at least one quantization parameter (qp) comprises a chroma quantization parameter.
6. A method as described in claim 1 wherein said at least one quantization parameter (qp) comprises a luma quantization parameter.
7. A method as described in claim 1 wherein said at least one quantization parameter (qp) comprises a chroma quantization parameter for each chroma channel.
8. A method as described in claim 1 wherein said at least one quantization parameter (qp) comprises a chroma quantization parameter for each chroma channel and a luma quantization parameter.
10. A method as described in claim 9 further comprising shifting said rtc matrix values by a value dependent on qp/p before said computing scaled reconstructed samples.

Normalization and quantization are performed simultaneously using these integers and divisions by powers of 2. Transform coding in H.26L uses a 4×4 block size and an integer transform matrix T, Equation 2. For a 4×4 block X, the transform coefficients K are calculated as in Equation 3. From the transform coefficients, the quantization levels, L, are calculated by integer multiplication. At the decoder the levels are used to calculate a new set of coefficients, K′. Additional integer matrix transforms followed by a shift are used to calculate the reconstructed values X′. The encoder is allowed freedom in calculation and rounding of the forward transform. Both encoder and decoder must compute exactly the same answer for the inverse calculations.

Equation 2:

T = | 13  13  13  13 |
    | 17   7  -7 -17 |
    | 13 -13 -13  13 |
    |  7 -17  17  -7 |

Equation 3:

Y = T·X
K = Y·T^T
L = (A_TML(QP)·K)/2^20
K′ = B_TML(QP)·L
Y′ = T^T·K′
X′ = (Y′·T)/2^20

The dynamic range required during these calculations can be determined. For the primary application, which involves 9-bit input (8 bits plus sign), the dynamic range required by intermediate registers and memory accesses is presented in Table 2.

TABLE 2
Dynamic range of TML transform and inverse transform (bits)

9-bit input LUMA    Transform    Inverse Transform
Register                30               27
Memory                  21               26

To maintain bit-exact definitions and incorporate quantization, the dynamic range of intermediate results can be large because division operations are postponed. The present invention combines quantization and normalization to eliminate the growth in dynamic range of intermediate results. With the present invention the advantages of bit-exact inverse transform and quantization definitions are retained, while the bit depth required for these calculations is controlled. Reducing the required bit depth reduces the complexity of a hardware implementation and enables efficient use of single instruction multiple data (SIMD) operations, such as the Intel MMX instruction set.

Accordingly, a method is provided for the quantization of a coefficient. The method comprises: receiving a coefficient K; receiving a quantization parameter (QP); and forming a quantization value (L) from the coefficient K using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)). Typically, the value of x is 2.

In some aspects of the method, forming a quantization value (L) from the coefficient K includes:

L = K·A(QP) = K·Am(QP)·2^Ae(QP).

In other aspects, the method further comprises normalizing the quantization value by 2^N as follows:

Ln = L/2^N = K·Am(QP)/2^(N−Ae(QP)).

In some aspects, forming a quantization value includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/x. Therefore, forming a set of recursive quantization factors includes forming recursive mantissa factors, where Am(QP)=Am(QP mod P). Likewise, forming a set of recursive quantization factors includes forming recursive exponential factors, where Ae(QP)=Ae(QP mod P)−QP/P.
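
For illustration only, the scalar case above can be coded by storing one period of mantissa and exponent values and deriving the rest from the relations Am(QP) = Am(QP mod P) and Ae(QP) = Ae(QP mod P) − QP/P. In the C sketch below the function and table names are hypothetical; the numeric entries are the QP = 0 to 5 values that appear later in Table 3, N = 20 matches the TML normalization of Equation 5, and the rounding offset is omitted.

#include <stdint.h>

#define P 6    /* period of the quantization tables          */
#define N 20   /* TML normalization shift (division by 2^20) */

/* One period of mantissa/exponent values, QP = 0..5 (see Table 3). */
static const int32_t Am_tab[P] = { 5, 9, 127, 114, 25, 87 };
static const int32_t Ae_tab[P] = { 7, 6,   2,   2,  4,  2 };

/* Ln = L/2^N = K*Am(QP)/2^(N - Ae(QP)), with Am(QP) = Am(QP % P)
 * and Ae(QP) = Ae(QP % P) - QP/P.                                 */
static int32_t quantize_level(int32_t K, int QP)
{
    int32_t Am = Am_tab[QP % P];
    int32_t Ae = Ae_tab[QP % P] - QP / P;
    /* Only the mantissa product needs full precision; the exponent
     * is folded into the single normalizing shift.                 */
    return (int32_t)(((int64_t)K * Am) >> (N - Ae));
}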

More specifically, receiving a coefficient K includes receiving a coefficient matrix K[i][j]. Then, forming a quantization value (L) from the coefficient matrix K[i][j] includes forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]).

Likewise, forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. Every element in the exponential portion matrix is the same value for a period (P) of QP values, where Ae(QP)=Ae(P*(QP/P)).

Additional details of the above-described method, including a method for forming a dequantization value (X1), from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)), are provided below.

FIG. 1 is a flowchart illustrating the present invention method for the quantization of a coefficient.

FIG. 2 is a diagram showing embodiments of the present invention comprising systems and methods for video encoding wherein a quantization parameter may be established based on user inputs;

FIG. 3 is a diagram showing embodiments of the present invention comprising systems and methods for video decoding;

FIG. 4 is a diagram showing embodiments of the present invention comprising storing encoder output on computer-readable storage media;

FIG. 5 is a diagram showing embodiments of the present invention comprising sending encoder output over a network;

FIG. 6 is a diagram showing embodiments of quantization methods and apparatuses of the present invention comprising a first mantissa portion processing means and a first exponential portion processing means;

FIG. 7 is a diagram showing embodiments of quantization methods and apparatuses of the present invention comprising a first mantissa portion processing means and a first shifting portion processing means;

FIG. 8 is a diagram showing embodiments of dequantization methods and apparatuses of the present invention comprising a second mantissa portion processing means and a second exponential portion processing means;

FIG. 9 is a diagram showing embodiments of dequantization methods and apparatuses of the present invention comprising a second mantissa portion processing means and a second shifting portion processing means;

FIG. 10 is a diagram showing prior art methods comprising dequantization, inverse transformation, and normalization (Prior Art);

FIG. 11 is a diagram showing embodiments of the present invention comprising factorization of an equivalent of a dequantization scaling factor;

FIG. 12 is a diagram showing embodiments of the present invention comprising factorization, thereby achieving a reduction in bit depth for inverse transformation calculations and reduced memory requirements for dequantization parameter storage;

FIG. 13 is a diagram showing embodiments of the present invention comprising a normalization process independent of quantization parameter (QP);

FIG. 14 is a diagram showing embodiments of the present invention comprising frequency dependent quantization;

FIG. 15 is a diagram showing prior art methods comprising quantization (Prior Art); and

FIG. 16 is a diagram showing embodiments of the present invention comprising factorization of an equivalent of a quantization scaling factor.

The dynamic range requirements of the combined transform and quantization are reduced by factoring the quantization parameters A(QP) and B(QP) into mantissa and exponent terms, as shown in Equation 4. With this structure, only the precision due to the mantissa term needs to be preserved during calculation. The exponent term can be included in the final normalization shift. This is illustrated in the sample calculation of Equation 5.


A_proposed(QP) = A_mantissa(QP)·2^A_exponent(QP)
B_proposed(QP) = B_mantissa(QP)·2^B_exponent(QP)


Y = T·X
K = Y·T^T
L = (A_mantissa(QP)·K)/2^(20−A_exponent(QP))
K′ = T^T·L
Y′ = K′·T
X′ = (B_mantissa(QP)·Y′)/2^(20−B_exponent(QP))

To illustrate the present invention, a set of quantization parameters is presented that reduce the dynamic range requirement of an H.26L decoder to 16-bit memory access. The memory access of the inverse transform is reduced to 16 bits. Values for Amantissa, Aexponent, Bmantissa, Bexponent, Aproposed, Bproposed are defined for QP=0-5 as shown in Table 3. Additional values are determined by recursion, as shown in Equation 6. The structure of these values makes it possible to generate new quantization values in addition to those specified.

TABLE 3
Quantization values 0-5 for TML
QP Amantissa Aexponent Bmantissa Bexponent Aproposed Bproposed
0 5 7 235 4 640 3760
1 9 6 261 4 576 4176
2 127 2 37 7 508 4736
3 114 2 165 5 456 5280
4 25 4 47 7 400 6016
5 87 2 27 8 348 6912


A_mantissa(QP+6) = A_mantissa(QP)
B_mantissa(QP+6) = B_mantissa(QP)
A_exponent(QP+6) = A_exponent(QP)−1
B_exponent(QP+6) = B_exponent(QP)+1

Using the defined parameters, the transform calculations can be modified to reduce the dynamic range as shown in Equation 5. Note how only the mantissa values contribute to the growth of dynamic range. The exponent factors are incorporated into the final normalization and do not impact the dynamic range of intermediate results.
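
As an illustrative sketch only (the helper names are hypothetical and rounding is omitted), the decoder-side scaling of Equation 5 can be coded directly from Table 3 and the recursion of Equation 6: the stored values cover one period, the mantissa repeats, and the exponent grows by one each period so it can be folded into the final shift.

#include <stdint.h>

#define PERIOD 6

/* Dequantization values for QP = 0..5 (Table 3). */
static const int32_t Bm_tab[PERIOD] = { 235, 261, 37, 165, 47, 27 };
static const int32_t Be_tab[PERIOD] = {   4,   4,  7,   5,  7,  8 };

/* Equation 6: B_mantissa(QP+6) = B_mantissa(QP), B_exponent(QP+6) = B_exponent(QP)+1. */
static void dequant_params(int QP, int32_t *Bm, int32_t *Be)
{
    *Bm = Bm_tab[QP % PERIOD];
    *Be = Be_tab[QP % PERIOD] + QP / PERIOD;
}

/* Final step of Equation 5: X' = (B_mantissa(QP)*Y')/2^(20 - B_exponent(QP)). */
static int32_t dequant_normalize(int32_t Yp, int QP)
{
    int32_t Bm, Be;
    dequant_params(QP, &Bm, &Be);
    return (int32_t)(((int64_t)Yp * Bm) >> (20 - Be));
}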

With these values and computational method, the dynamic range at the decoder is reduced so only 16-bit memory access is needed as seen in Table 4.

TABLE 4
Dynamic range with low-bit depth quantization (QP > 6)

8-bit       LUMA Transform    LUMA Inverse Transform
Register          28                  24
Memory            21                  16

Several refinements can be applied to the joint quantization/normalization procedure described above. The general technique of factoring the parameters into a mantissa and exponent forms the basis of these refinements.

The discussion above assumes all basis functions of the transform have an equal norm and are quantized identically. Some integer transforms have the property that different basis functions have different norms. The present invention technique has been generalized to support transforms having different norms by replacing the scalars A(QP) and B(QP) above by matrices A(QP)[i][j] and B(QP)[i][j]. These parameters are linked by a normalization relation of the form shown below, Equation 7, which is more general than the single relation shown in Equation 1.


A(QP)[i][j]·B(QP)[i][j]=N[i][j]

Following the method previously described, each element of each matrix is factored into a mantissa and an exponent term as illustrated in the equations below, Equation 8.


A(QP)[i][j] = A_mantissa(QP)[i][j]·2^A_exponent(QP)[i][j]
B(QP)[i][j] = B_mantissa(QP)[i][j]·2^B_exponent(QP)[i][j]

A large number of values are required to describe these quantization and dequantization parameters. Several structural relations can be used to reduce the number of free parameters. The quantizer growth is designed so that the values of A are halved after each period P while the values of B are doubled, maintaining the normalization relation. Additionally, the values of Aexponent(QP)[i][j] and Bexponent(QP)[i][j] are independent of i, j and of QP in the range [0,P−1]. This structure is summarized by the structural equations of Equation 9. With this structure there are only two parameters, Aexponent[0] and Bexponent[0].


A_exponent(QP)[i][j] = A_exponent[0]−QP/P
B_exponent(QP)[i][j] = B_exponent[0]+QP/P

A structure is also defined for the mantissa values. For each index pair (i,j), the mantissa values are periodic with period P. This is summarized by the structural equation, Equation 10. With this structure, there are P independent matrices for Amantissa and P independent matrices for Bmantissa reducing memory requirements and adding structure to the calculations.


A_mantissa(QP)[i][j] = A_mantissa(QP % P)[i][j]
B_mantissa(QP)[i][j] = B_mantissa(QP % P)[i][j]
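
A brief sketch of the resulting storage layout may help: under the structure of Equations 9 and 10, only P mantissa matrices per direction plus the two base exponents Aexponent[0] and Bexponent[0] need to be stored, and all other parameters are derived. The identifiers and the 4×4 block size below are assumptions for illustration.

#include <stdint.h>

#define P  6
#define BS 4                        /* block size assumed for illustration */

/* Stored parameters: P mantissa matrices per direction and two base exponents. */
static int32_t Am[P][BS][BS];       /* A_mantissa(m)[i][j], m = QP % P */
static int32_t Bm[P][BS][BS];       /* B_mantissa(m)[i][j], m = QP % P */
static int32_t Ae0, Be0;            /* A_exponent[0], B_exponent[0]    */

/* Equation 9: exponents are independent of (i, j) and of QP within a period. */
static int32_t A_exponent(int QP) { return Ae0 - QP / P; }
static int32_t B_exponent(int QP) { return Be0 + QP / P; }

/* Equation 10: mantissas are periodic in QP with period P. */
static int32_t A_mantissa(int QP, int i, int j) { return Am[QP % P][i][j]; }
static int32_t B_mantissa(int QP, int i, int j) { return Bm[QP % P][i][j]; }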

The inverse transform may include integer division that requires rounding. In cases of interest, the division is by a power of 2. The rounding error is reduced by designing the dequantization factors to be multiples of the same power of 2, giving no remainder following division.

Dequantization using the mantissa values Bmantissa(QP) gives dequantized values that are normalized differently depending upon QP. This must be compensated for following the inverse transform. A form of this calculation is shown in Equation 11.


K[i][j] = B_mantissa(QP % P)[i][j]·Level[i][j]
X = (T^−1·K·T)/2^(N−QP/P)

To eliminate the need for the inverse transform to compensate for this normalization difference, the dequantization operation is defined so that all dequantized values have the same normalization. The form of this calculation is shown in Equation 12.


K[i][j] = B_mantissa(QP % P)[i][j]·Level[i][j]·2^(QP/P)
X = (T^−1·K·T)/2^N
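
The contrast between Equations 11 and 12 can be sketched in code: in the second form the QP-dependent factor 2^(QP/P) is applied during dequantization, so the shift after the inverse transform is the same constant N for every QP. The inverse transform is left abstract, and all names and the 4×4 block size are assumptions for illustration.

#include <stdint.h>

#define P  6
#define BS 4

/* Equation 12: dequantize so that every QP produces the same normalization. */
static void dequantize_uniform(const int32_t level[BS][BS],
                               const int32_t Bm[BS][BS],  /* B_mantissa(QP % P)[i][j] */
                               int QP,
                               int32_t K[BS][BS])
{
    for (int i = 0; i < BS; i++)
        for (int j = 0; j < BS; j++)
            K[i][j] = (Bm[i][j] * level[i][j]) << (QP / P);
}

/* After X = T^-1 * K * T, the result is normalized by the constant shift N
 * (Equation 12) rather than by the QP-dependent shift N - QP/P (Equation 11). */
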
An example follows that illustrates the present invention's use of quantization matrices. The forward and inverse transforms defined in Equation 13 need a quantization matrix rather than a single scalar quantization value. Sample quantization and dequantization parameters are given. Equations 14 and 16, together with related calculations, illustrate the use of this invention. This example uses a period P=6.

Equation 13 (transforms):

T_forward = | 1  1  1  1 |
            | 2  1 -1 -2 |
            | 1 -1 -1  1 |
            | 1 -2  2 -1 |

T_reverse = | 2  2  2  1 |
            | 2  1 -2 -2 |
            | 2 -2 -2  2 |
            | 2 -1  2 -1 |

Equation 14 (quantization parameters):

Q(m)[i][j] = M_m,0 for (i, j) in {(0,0), (0,2), (2,0), (2,2)}
Q(m)[i][j] = M_m,1 for (i, j) in {(1,1), (1,3), (3,1), (3,3)}
Q(m)[i][j] = M_m,2 otherwise

M = | 21844   8388  13108 |
    | 18724   7625  11650 |
    | 16384   6989  10486 |
    | 14564   5992   9532 |
    | 13107   5243   8066 |
    | 11916   4660   7490 |

Equation 16 (dequantization parameters):

R(m)[i][j] = S_m,0 for (i, j) in {(0,0), (0,2), (2,0), (2,2)}
R(m)[i][j] = S_m,1 for (i, j) in {(1,1), (1,3), (3,1), (3,3)}
R(m)[i][j] = S_m,2 otherwise

S = |  6  10   8 |
    |  7  11   9 |
    |  8  12  10 |
    |  9  14  11 |
    | 10  16  13 |
    | 11  18  14 |

The description of the forward transformation and forward quantization, Equation 18, is given below, assuming the input is X and the quantization parameter is QP.


K = T_forward·X·T_forward^T


period = QP/6
phase = QP−6·period
Level[i][j] = (Q(phase)[i][j]·K[i][j])/2^(17+period)
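
In C-like form, this forward quantization might be sketched as follows. The 6×3 table M and the frequency-class pattern are taken from Equation 14; the function names and the use of a plain arithmetic shift in place of the division (ignoring rounding of negative values) are assumptions.

#include <stdint.h>

/* M[m][n] from Equation 14: m = phase (QP % 6), n = frequency class. */
static const int32_t M_tab[6][3] = {
    { 21844, 8388, 13108 },
    { 18724, 7625, 11650 },
    { 16384, 6989, 10486 },
    { 14564, 5992,  9532 },
    { 13107, 5243,  8066 },
    { 11916, 4660,  7490 },
};

/* Frequency class of position (i, j): 0 when i and j are both even,
 * 1 when both are odd, 2 otherwise (the pattern of Equation 14).     */
static int freq_class(int i, int j)
{
    if ((i % 2 == 0) && (j % 2 == 0)) return 0;
    if ((i % 2 == 1) && (j % 2 == 1)) return 1;
    return 2;
}

/* Equation 18: Level[i][j] = (Q(phase)[i][j] * K[i][j]) / 2^(17 + period). */
static void quantize_block(const int32_t K[4][4], int QP, int32_t Level[4][4])
{
    int period = QP / 6;
    int phase  = QP - 6 * period;

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            Level[i][j] = (int32_t)(((int64_t)M_tab[phase][freq_class(i, j)]
                                     * K[i][j]) >> (17 + period));
}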

The description of dequantization, inverse transformation, and normalization for this example is given below in Equations 19 and 20.


period = QP/6
phase = QP−6·period
K[i][j] = R(phase)[i][j]·Level[i][j]·2^period


X′ = T_reverse·K·T_reverse^T
X″[i][j] = X′[i][j]/2^7
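
A sketch of the corresponding decoder path (Equations 19 and 20) follows, using the S table of Equation 16 and T_reverse from Equation 13. The helper names, integer widths, and the truncating shift used for the final division by 2^7 are assumptions for illustration.

#include <stdint.h>

/* S[m][n] from Equation 16: m = phase (QP % 6), n = frequency class. */
static const int32_t S_tab[6][3] = {
    {  6, 10,  8 },
    {  7, 11,  9 },
    {  8, 12, 10 },
    {  9, 14, 11 },
    { 10, 16, 13 },
    { 11, 18, 14 },
};

static const int32_t Trev[4][4] = {    /* T_reverse from Equation 13 */
    { 2,  2,  2,  1 },
    { 2,  1, -2, -2 },
    { 2, -2, -2,  2 },
    { 2, -1,  2, -1 },
};

/* Frequency class as in Equation 16: 0 when i and j are both even,
 * 1 when both are odd, 2 otherwise.                                 */
static int freq_class16(int i, int j)
{
    if ((i % 2 == 0) && (j % 2 == 0)) return 0;
    if ((i % 2 == 1) && (j % 2 == 1)) return 1;
    return 2;
}

/* C = A * B for 4x4 integer matrices. */
static void mat_mul4(const int32_t A[4][4], const int32_t B[4][4], int32_t C[4][4])
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            int32_t acc = 0;
            for (int k = 0; k < 4; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }
}

/* Equations 19-20: dequantize, inverse transform, normalize by 2^7. */
static void reconstruct_block(const int32_t Level[4][4], int QP, int32_t Xpp[4][4])
{
    int period = QP / 6;
    int phase  = QP - 6 * period;
    int32_t K[4][4], TrevT[4][4], tmp[4][4], Xp[4][4];

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            K[i][j]     = (S_tab[phase][freq_class16(i, j)] * Level[i][j]) << period;
            TrevT[i][j] = Trev[j][i];
        }

    mat_mul4(Trev, K, tmp);            /* X' = T_reverse * K * T_reverse^T */
    mat_mul4(tmp, TrevT, Xp);

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            Xpp[i][j] = Xp[i][j] >> 7; /* Equation 20 */
}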

FIG. 1 is a flowchart illustrating the present invention method for the quantization of a coefficient. Although this method is depicted as a sequence of numbered steps for clarity, no order should be inferred from the numbering unless explicitly stated. It should be understood that some of these steps may be skipped, performed in parallel, or performed without the requirement of maintaining a strict order of sequence. The method starts at Step 100. Step 102 supplies a coefficient K. Step 104 supplies a quantization parameter (QP). Step 106 forms a quantization value (L) from the coefficient K using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)). Typically, the exponential portion (x^Ae(QP)) includes x being the value 2.

In some aspects of the method, forming a quantization value (L) from the coefficient K using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)) in Step 106 includes:

L = K·A(QP) = K·Am(QP)·2^Ae(QP).

Some aspects of the method include a further step. Step 108 normalizes the quantization value by 2^N as follows:

Ln = L/2^N = K·Am(QP)/2^(N−Ae(QP)).

In other aspects, forming a quantization value in Step 106 includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/x. Likewise, forming a set of recursive quantization factors includes forming recursive mantissa factors, where Am(QP)=Am(QP mod P). Then, forming a set of recursive quantization factors includes forming recursive exponential factors, where Ae(QP)=Ae(QP mod P)−QP/P.

In some aspects, forming a quantization value includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/2. In other aspects, forming a set of recursive quantization factors includes forming recursive mantissa factors, where P=6. Likewise, forming a set of recursive quantization factors includes forming recursive exponential factors, where P=6.

In some aspects of the method, receiving a coefficient K in Step 102 includes receiving a coefficient matrix K[i][j]. Then, forming a quantization value (L) from the coefficient matrix K[i][j] using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)) in Step 106 includes forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]). Likewise, forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. Typically, every element in the exponential portion matrix is the same value for a period (P) of QP values, where Ae(QP)=Ae(P*(QP/P)).

Some aspects of the method include a further step. Step 110 forms a dequantization value (X1) from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)). Again, the exponential portion (x^Be(QP)) typically includes x being the value 2.

In some aspects of the method, forming a dequantization value (X1) from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (2^Be(QP)) includes:

X1 = L·B(QP) = L·Bm(QP)·2^Be(QP).

Other aspects of the method include a further step, Step 112, of denormalizing the quantization value by 2^N as follows:

X1d = X1/2^N = L·Bm(QP)/2^(N−Be(QP)).

In some aspects, forming a dequantization value in Step 110 includes forming a set of recursive dequantization factors with a period P, where B(QP+P)=x*B(QP). Then, forming a set of recursive dequantization factors includes forming recursive mantissa factors, where Bm(QP)=Bm(QP mod P). Further, forming a set of recursive dequantization factors includes forming recursive exponential factors, where Be(QP)=Be(QP mod P)+QP/P.

In some aspects, forming a set of recursive quantization factors with a period P includes the value of x being equal to 2, and forming recursive mantissa factors includes the value of P being equal to 6. Then, forming a set of recursive dequantization factors includes forming recursive exponential factors, where Be(QP)=Be(QP mod P)+QP/P.

In some aspects of the method, forming a dequantization value (X1), from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)) in Step 110 includes forming a dequantization value matrix (X1[i][j]) using a mantissa portion matrix (Bm(QP)[i][j]) and an exponential portion matrix (x^Be(QP)[i][j]). Likewise, forming a dequantization value matrix (X1[i][j]) using a mantissa portion matrix (Bm(QP)[i][j]) and an exponential portion matrix (x^Be(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. In some aspects, every element in the exponential portion matrix is the same value for a period (P) of QP values, where Be(QP)=Be(P*(QP/P)).

Another aspect of the invention includes a method for the dequantization of a coefficient. However, the process is essentially the same as Steps 110 and 112 above, and is not repeated in the interest of brevity.

A method for the quantization of a coefficient has been presented. An example is given illustrating a combined dequantization and normalization procedure applied to the H.26L video coding standard with a goal of reducing the bit-depth required at the decoder to 16 bits. The present invention concepts can also be used to meet other design goals within H.26L. In general, this invention has application to the combination of normalization and quantization calculations.

Embodiments of the present invention may be implemented as hardware, firmware, software and other implementations. Some embodiments may be implemented on general purpose computing devices or on computing devices specifically designed for implementation of these embodiments. Some embodiments may be stored in memory as a means of storing the embodiment or for the purpose of executing the embodiment on a computing device.

Some embodiments of the present invention comprise systems and methods for video encoding, as shown in FIG. 2. In these embodiments, data representing prior video frames 145 is subtracted 132 from image data 130, resulting in a differential image 133, which is sent to a transform module 134. Transform module 134 may use DCT or other transform methods to transform the image. Generally, the result of the transform process will be coefficients (K), which are then sent to a quantization module 136 for quantization.

Quantization module 136 may have other inputs, such as user inputs 131 for establishing quantization parameters (QPs) and for other purposes. Quantization module 136 may use the transformation coefficients and the quantization parameters to determine quantization levels (L) in the video image. Quantization module 136 may use methods employing a mantissa portion and an exponential portion; however, other quantization methods may also be employed in the quantization modules 136 of embodiments of the present invention. These quantization levels 135 and quantization parameters 133 are output to a coding module 138 as well as a dequantization module (DQ) 140.

Output to the coding module 138 is encoded and transmitted outside the encoder for immediate decoding or storage. Coding module 138 may use variable length coding (VLC) in its coding processes. Coding module 138 may use arithmetic coding in its coding process.

Output from quantization module 136 is also received at dequantization module 140 to begin reconstruction of the image. This is done to keep an accurate accounting of prior frames. Dequantization module 140 performs a process with essentially the reverse effect as quantization module 136. Quantization levels or values (L) are dequantized yielding transform coefficients. Dequantization modules 140 may use methods employing a mantissa portion and an exponential portion as described herein.

The transform coefficients output from dequantization module 140 are sent to an inverse transformation (IT) module 142 where they are inverse transformed to a differential image 141. This differential image 141 is then combined with data from prior image frames 145 to form a video frame 149 that may be input to a frame memory 146 for reference to succeeding frames.

Video frame 149 may also serve as input to a motion estimation module 147, which also receives input image data 130. These inputs may be used to predict image similarities and help compress image data. Output from motion estimation module 147 is sent to motion compensation module 148 and combined with output data from coding module 138, which is sent out for later decoding and eventual image viewing.

Motion compensation module 148 uses the predicted image data to reduce frame data requirements; its output is subtracted from input image data 130.

Some embodiments of the present invention comprise systems and methods for video decoding, as shown in FIG. 3. A decoder of embodiments of the present invention may receive encoded image data 150 at a decoder module 152. Encoded image data 150 may comprise data that has been encoded by an encoder 100 such as that described with reference to FIG. 2.

Decoder module 152 may employ variable length decoding methods if they were used in the encoding process. Other decoding methods may also be used as dictated by the type of encoded data 150. Decoding module 152 performs essentially the reverse process as coding module 138. Output from decoding module 152 may comprise quantization parameters 156 and quantization values 154. Other output may comprise motion estimation data and image prediction data that may be sent directly to a motion compensation module 166.

Typically, quantization parameters 156 and quantization values 154 are output to a dequantization module 158, where quantization values are converted back to transform coefficients. These coefficients are then sent to an inverse transformation module 160 for conversion back to spatial domain image data 161.

The motion compensation unit 166 uses motion vector data and the frame memory 164 to construct a reference image 165.

Image data 161 represents a differential image that must be combined 162 with prior image data 165 to form a video frame 163. This video frame 163 is output 168 for further processing, display or other purposes and may be stored in frame memory 164 and used for reference with subsequent frames.

In some embodiments of the present invention, as illustrated in FIG. 4, image data 102 may be sent to an encoder or encoding portion 104 for the various transformation, quantization, encoding and other procedures typical of video encoding as described above for some embodiments of the present invention. Output from the encoder may then be stored on any computer-readable storage media 106. Storage media 106 may act as a short-term buffer or as a long-term storage device.

When desired, encoded video data may be read from storage media 106 and decoded by a decoder or decoding portion 108 for output 110 to a display or other device.

In some embodiments of the present invention, as illustrated in FIG. 5, image data 112 may be sent to an encoder or encoding portion 114 for the various transformation, quantization, encoding and other procedures typical of video encoding as described above for some embodiments of the present invention. Output from the encoder may then be sent over a network, such as a LAN, WAN or the Internet 116. A storage device such as storage media 106 may be part of a network. Encoded video data may be received and decoded by a decoder or decoding portion 118, which also communicates with network 116. Decoder 118 may then decode the data for local consumption 120.

In some embodiments of the present invention, as illustrated in FIG. 6, a quantization method or apparatus comprises a mantissa portion 172 and an exponential portion 174. Quantization parameters 176 are input to both portions 172 & 174. A coefficient K 170 is input to the mantissa portion 172 where it is modified using the quantization parameter and other values as explained above. The result of this operation is combined with the result produced in the exponential portion using the quantization parameter thereby producing a quantization level or value L 178.

In some embodiments of the present invention, as illustrated in FIG. 7, a quantization method or apparatus comprises a mantissa portion 182 and a shifting portion 184. Quantization parameters 186 are input to both portions 182 & 184. A coefficient, K 180 is input to the mantissa portion 182 where it is modified using the quantization parameter and other values as explained above. The result of this operation is further processed in the shifting portion using the quantization parameter thereby producing a quantization level or value, L 188.

Some embodiments of the present invention, as illustrated in FIG. 8, comprise a dequantization method or apparatus with a mantissa portion 192 and an exponential portion 194. Quantization parameters 196 are input to both portions 192 & 194. A quantization value, L 190 is input to the mantissa portion 192 where it is modified using the quantization parameter and other values as explained above. The result of this operation is further processed in the exponential portion using the quantization parameter thereby producing a coefficient, X1 198.

Some embodiments of the present invention, as illustrated in FIG. 9, comprise a dequantization method or apparatus with a mantissa portion 202 and a shifting portion 204. Quantization parameters 206 are input to both portions 202 & 204. A quantization value, L 200 is input to the mantissa portion 202 where it is modified using the quantization parameter and other values as explained above. The result of this operation is further processed in the shifting portion using the quantization parameter thereby producing a coefficient, X1 208.

Some embodiments of the present invention may be stored on computer-readable media such as magnetic media, optical media, and other media as well as combinations of media. Some embodiments may also be transmitted as signals across networks and communication media. These transmissions and storage actions may take place as part of operation of embodiments of the present invention or as a way of transmitting the embodiment to a destination.

Typical methods of dequantization, inverse transformation, and normalization may be expressed mathematically in equation form. These methods, as illustrated in FIG. 10, may begin with input in the form of an array of quantized coefficient levels cα 220, and a quantization parameter QP 222. A dequantization scaling value SQP 224 is then calculated 221 using the quantization parameter QP 222. Quantized coefficient levels 220 are scaled 227 by SQP 224 to give transform coefficients wα 226 according to Equation 21. These transform coefficients 226 are then inverse transformed 228 to compute scaled samples x′α 230 as shown in Equation 22. The scaled samples 230 may then be normalized 232 to give reconstructed samples, x″α 234 according to Equation 23.

wα = cα·S_QP    (Equation 21)
x′α = Σβ T^−1_αβ·wβ    (Equation 22)
x″α = (x′α+f) >> M    (Equation 23)

In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved. The processes of these embodiments, illustrated in FIG. 11, begin with input in the form of an array of quantized coefficient levels cα 220, and a quantization parameter QP 222 similar to typical prior art methods. However, in these embodiments, the equivalent of a dequantization scaling factor SQP is factored 223 & 225 into a mantissa portion RQP 236 and an exponential portion EQP 238. The mantissa portion 236 is used during dequantization 240 to calculate the reconstructed transform coefficients (w̃α) 242, which are used in the inverse transformation process 228 to calculate reconstructed samples (x̃′α) 244. These reconstructed samples may then be normalized using the exponential portion 238 according to Equation 26, thereby yielding reconstructed samples (x″α) 234. Using these methods, the values of w̃α and x̃′α require EQP fewer bits for representation than the corresponding values wα and x′α. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as shown in Equations 24-26.

w̃α = cα·R_QP    (Equation 24)
x̃′α = Σβ T^−1_αβ·w̃β    (Equation 25)
x″α = [x̃′α+(f << E_QP)] >> (M−E_QP)    (Equation 26)
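
A compact sketch of Equations 24-26 in code follows. The inverse transform is abstracted behind a caller-supplied routine, and the names, the flattened block length, and the rounding constant f are placeholders; only the ordering of operations and the shift arithmetic are taken from the equations.

#include <stdint.h>

#define BLK 16                      /* 4x4 block flattened; size assumed */

/* Equations 24-26: scale by the mantissa R_QP, inverse transform, then
 * fold the exponent E_QP into a single rounding-and-shift normalization. */
static void dequant_itrans_normalize(const int32_t c[BLK],
                                     int32_t R_QP, int32_t E_QP,
                                     int32_t M, int32_t f,
                                     void (*inverse_transform)(const int32_t in[BLK],
                                                               int32_t out[BLK]),
                                     int32_t x_out[BLK])
{
    int32_t w[BLK], xt[BLK];

    for (int a = 0; a < BLK; a++)
        w[a] = c[a] * R_QP;                          /* Equation 24 */

    inverse_transform(w, xt);                        /* Equation 25 */

    for (int a = 0; a < BLK; a++)                    /* Equation 26 */
        x_out[a] = (xt[a] + (f << E_QP)) >> (M - E_QP);
}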

In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters. The processes of these embodiments, illustrated in FIG. 12, begin with input in the form of an array of quantized coefficient levels cα 220 and a quantization parameter QP 222, similar to typical prior art methods. However, in these embodiments, an additional parameter P is used and the equivalent of a dequantization scaling factor SQP is factored 227 & 229 into a mantissa portion RQP 236 and an exponential portion EQP 238. The mantissa portion, RQP 236, doubles with each increment of QP by P. The exponential portion EQP 238 is periodic with period P. The mantissa portion 236 is used during dequantization 240 to calculate the reconstructed transform coefficients (w̃α) 242, which are used in the inverse transformation process 228 to calculate reconstructed samples (x̃′α) 244. These reconstructed samples may then be normalized using the exponential portion 238 according to Equation 28, thereby yielding reconstructed samples, x″α 234. Using these methods, the values of w̃α and x̃′α require EQP fewer bits for representation. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as shown in Equations 25, 27 & 28. Values of R and E need only be stored for QP in one period [1, P], reducing the memory requirements.

w̃α = cα·R_(QP % P) << (QP/P)    (Equation 27)
x″α = [x̃′α+(f << E_(QP % P))] >> (M−E_(QP % P))    (Equation 28)
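
Relative to the previous sketch, the period-P variant only changes how the scaling is obtained: R and E are read from tables of length P indexed by QP % P, and the doubling of the effective mantissa with QP is realized as a left shift by QP/P (Equations 27 and 28). In the sketch below the table names are hypothetical; the R entries are borrowed from the first column of the S table in Equation 16 purely for illustration, and the E entries are arbitrary placeholders.

#include <stdint.h>

#define P 6

/* One period of dequantization parameters (contents illustrative only). */
static const int32_t R_tab[P] = { 6, 7, 8, 9, 10, 11 };
static const int32_t E_tab[P] = { 5, 5, 5, 5,  5,  5 };

/* Equation 27: w~ = c * R[QP % P] << (QP / P). */
static int32_t dequantize_sample(int32_t c, int QP)
{
    return (c * R_tab[QP % P]) << (QP / P);
}

/* Equation 28: x'' = [x~' + (f << E[QP % P])] >> (M - E[QP % P]). */
static int32_t normalize_sample(int32_t xt, int QP, int32_t f, int32_t M)
{
    int32_t E = E_tab[QP % P];
    return (xt + (f << E)) >> (M - E);
}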

In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters. Additionally, the normalization process is independent of QP. This eliminates the need to communicate an exponential value for use in the normalization process. In these embodiments, the exponential portion, previously described as EQP, is held constant and incorporated into normalization 248, thereby negating the need to transmit the value as is done in previously described embodiments. The processes of these embodiments, illustrated in FIG. 13, begin with input in the form of an array of quantized coefficient levels cα 220 and a quantization parameter QP 222, similar to typical prior art methods. Some of these embodiments implement the parameter P as described above. In these embodiments, the equivalent of a dequantization scaling factor SQP is factored 227 & 229 into a mantissa portion RQP 236 and a constant exponential portion EQP that is incorporated into normalization 248. The mantissa portion, RQP 236, may double with each increment of QP by P as previously described. The exponential portion EQP 238 is constant. The mantissa portion 236 is used during dequantization 240 to calculate the reconstructed transform coefficients (w̃α) 242, which are used in the inverse transformation process 228 to calculate reconstructed samples (x̃′α) 244. These reconstructed samples may then be normalized using the constant exponential portion that is incorporated into normalization 248 according to Equation 29, thereby yielding reconstructed samples, x″α 234. Using these methods, the values of w̃α and x̃′α require EQP fewer bits for representation. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as shown for other embodiments above in Equations 25, 27 & 29. For embodiments that employ periodic values related to the parameter P, values of R need only be stored for QP in one period [1, P], reducing the memory requirements. The constant value for E simplifies the process by eliminating the need to transmit E to the normalization process 248.
x″α = (x̃′α+2^z) >> M̃    (Equation 29)

In further embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters and the normalization process is independent of QP thereby eliminating the need to communicate an exponential value for use in the normalization process. These embodiments also express the quantization scaling factor mantissa portion as a matrix. This matrix format allows frequency dependent quantization, which allows the processes of these embodiments to be used in coding schemes that comprise frequency-dependent transformation.

In these embodiments, the exponential portion, previously described as EQP, may be held constant and incorporated into normalization 248 as previously explained. The processes of these embodiments, illustrated in FIG. 14, begin with input in the form of an array of quantized coefficient levels cα 220 and a quantization parameter QP 222, similar to other methods. Some of these embodiments may implement the parameter P as described above.

In these embodiments, the equivalent of a dequantization scaling factor SαQP is factored 254 into a mantissa portion RαQP 252 and a constant exponential portion EQP that is incorporated into normalization 248. The mantissa portion, RαQP 252, may double with each increment of QP by P as previously described. The exponential portion EQP is constant. The mantissa portion 252 is used during dequantization 250 to calculate the reconstructed transform coefficients (w̃α) 242, which are used in the inverse transformation process 228 to calculate reconstructed samples (x̃′α) 244. These reconstructed samples may then be normalized using the constant exponential portion that is incorporated into normalization 248 according to Equation 29, thereby yielding reconstructed samples, x″α 234. Using these methods, the values of w̃α and x̃′α require EQP fewer bits for representation. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as described above and in Equations 25, 27 & 29. In these embodiments the dequantization scaling factor portion is expressed as a matrix. This format is expressed in Equation 30 with the subscript α.
w̃α = cα·Rα_(QP % P) << (QP/P)    (Equation 30)

Typical methods of quantization may be expressed mathematically in equation form. These methods, as illustrated in FIG. 15, may begin with input in the form of a coefficient (k) 256 and a quantization parameter 222. The coefficient 256 is multiplied by a quantization factor 262 to give the value g 264 according to Equation 31. The value g 264 is normalized 266 to give the quantized coefficient level c 220 according to Equation 32.
g=k·SQP  Equation 31
c=g>>M  Equation 32
In embodiments of the present invention, a reduction in bit depth for quantization calculations is achieved together with a reduction in memory needed to store quantization parameters. The processes of these embodiments, illustrated in FIG. 16, may begin with input in the form of a coefficient (k) 256 and a quantization parameter QP 222. However, in these embodiments, an additional parameter P is used in processing. The equivalent of a quantization scaling factor SQP is factored into a mantissa portion RQP 274 and an exponential portion EQP 276. The mantissa portion RQP 274 is periodic with period P. The exponential portion 276 decreases by one for each increment of QP by P. The mantissa portion 274 is used during quantization 278 to calculate the scaled transform coefficient (g̃) 280 according to Equation 33. The scaled transform coefficient may then be normalized 282 using the exponential portion 276 according to Equation 34, thereby yielding the quantized coefficient level (c) 220. Using these methods, the value of g̃ 280 requires EQP fewer bits for representation than a corresponding value g 264 generated through known methods. Values of R 274 and E 276 need only be stored for QP in one period [1, P], reducing the memory requirements. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as shown in Equations 33 & 34.
g̃ = k·R_(QP % P)    (Equation 33)
c = g̃ >> (M−E_QP)    (Equation 34)
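
A matching encoder-side sketch of Equations 33 and 34 is given below; the table contents and the base exponent are placeholders (the R entries reuse the first column of M from Equation 14 purely for illustration), and only the periodic mantissa and the per-period drop of the exponent come from the description of FIG. 16.

#include <stdint.h>

#define P 6

/* One period of quantization parameters (contents illustrative only). */
static const int32_t Rq_tab[P] = { 21844, 18724, 16384, 14564, 13107, 11916 };
static const int32_t Eq0       = 3;   /* exponent for QP = 0..P-1; assumed */

/* Equation 33: g~ = k * R[QP % P];  Equation 34: c = g~ >> (M - E_QP),
 * where E_QP = Eq0 - QP/P, i.e. the exponent drops by one each period.   */
static int32_t quantize_coefficient(int32_t k, int QP, int32_t M)
{
    int64_t g = (int64_t)k * Rq_tab[QP % P];
    int32_t E = Eq0 - QP / P;
    return (int32_t)(g >> (M - E));
}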

Other variations and embodiments of the invention will occur to those skilled in the art.

Kerofsky, Louis J.

Patent Priority Assignee Title
5230038, Jan 27 1989 Low bit rate transform coder, decoder, and encoder/decoder for high-quality audio
5345408, Apr 19 1993 Google Technology Holdings LLC Inverse discrete cosine transform processor
5471412, Oct 27 1993 Winbond Electronic Corp. Recycling and parallel processing method and apparatus for performing discrete cosine transform and its inverse
5479364, Jun 26 1992 COASES INVESTMENTS BROS L L C Method and arrangement for transformation of signals from a frequency to a time domain
5590067, Jun 26 1992 COASES INVESTMENTS BROS L L C Method and arrangement for transformation of signals from a frequency to a time domain
5594678, Jun 26 1992 COASES INVESTMENTS BROS L L C Method and arrangement for transformation of signals from a frequency to a time domain
5596517, Jun 26 1992 COASES INVESTMENTS BROS L L C Method and arrangement for transformation of signals from a frequency to a time domain
5640159, Jan 03 1994 International Business Machines Corporation Quantization method for image data compression employing context modeling algorithm
5748793, Sep 28 1993 NEC Corporation Quick image processor of reduced circuit scale with high image quality and high efficiency
5754457, Mar 05 1996 Intel Corporation Method for performing an inverse cosine transfer function for use with multimedia information
5764553, Feb 28 1996 AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD Generalized data processing path for performing transformation and quantization functions for video encoder systems
5822003, Oct 31 1994 Intel Corporation Method and apparatus for performing fast reduced coefficient discrete cosine transforms
5845112, Mar 06 1997 SAMSUNG ELECTRONICS CO , LTD Method for performing dead-zone quantization in a single processor instruction
6081552, Jan 13 1998 Intel Corporation Video coding using a maximum a posteriori loop filter
6160920, Sep 15 1998 Novatek Microelectronics Corp Cosine transforming and quantizing device, method of reducing multiplication operations in a video compressing apparatus
6856262, Aug 12 2000 Robert Bosch GmbH Method for carrying out integer approximation of transform coefficients, and coder and decoder
6876703, May 11 2000 CISCO SYSTEMS CANADA CO Method and apparatus for video coding
20040046754,
CA2221181,
JP2004506990,
JP3270573,
JP4222121,
JP4503136,
JP4504192,
JP5095483,
JP5307467,
JP6046269,
JP6053839,
JP6077842,
JP7099578,
KR172902,
Assignments
Jan 19 2010 - Sharp Kabushiki Kaisha (assignment on the face of the patent)
Sep 29 2015 - Sharp Kabushiki Kaisha to Dolby Laboratories Licensing Corporation, ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS), 0367240111