A method and apparatus are provided for performing entropy encoding on a fine granular scalability layer. A method of entropy encoding a plurality of current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers includes determining a coding pass with respect to each of the current coefficients, selecting a context model with respect to each of the current coefficients using at least one lower coefficient corresponding to each of the current coefficients if the coding pass is a refinement pass, and performing arithmetic encoding on a group of coefficients having a same selected context model among the current coefficients by using the selected context model.
|
1. A method of entropy encoding a plurality of current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the method comprising:
determining a coding pass with respect to each of the current coefficients;
selecting a context model among a plurality of context models with respect to each of the current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the current coefficients, wherein the context model is determined independently of coefficients of lower layers other than the adjacent lower layer; and
encoding a group of coefficients having a same selected context model among the current coefficients using the selected context model,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
15. A method of entropy encoding a plurality of current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the method comprising:
determining a coding pass with respect to each of the current coefficients;
selecting a variable length coding (VLC) table among a plurality of VLC tables with respect to the current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the current coefficients, wherein the VLC table is determined independently of coefficients of lower layers other than the adjacent lower layer; and
performing a variable length encoding process on a group of coefficients having a same selected VLC table among the current coefficients using the selected VLC table,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
19. A method of entropy decoding a plurality of encoded current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the method comprising:
determining a coding pass with respect to each of the encoded current coefficients;
selecting a variable length coding (VLC) table among a plurality of VLC tables with respect to each of the encoded current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the encoded current coefficients, wherein the VLC table is determined independently of coefficients of lower layers other than the adjacent lower layer; and
performing variable length decoding on a group of coefficients having the same selected VLC table among the encoded current coefficients by using the selected VLC table,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
8. A method of entropy decoding a plurality of encoded current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the method comprising:
determining a coding pass with respect to each of the encoded current coefficients;
selecting a context model among a plurality of context models with respect to each of the encoded current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the encoded current coefficients if the coding pass is a refinement pass, wherein the context model is determined independently of coefficients of lower layers other than the adjacent lower layer; and
decoding a group of coefficients having a same selected context model among the encoded current coefficients using the same selected context model,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
23. An apparatus for entropy encoding a plurality of current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the apparatus comprising:
a hardware processor;
a unit which determines a coding pass with respect to each of the current coefficients;
a unit which selects a context model among a plurality of context models with respect to each of the current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the current coefficients, wherein the context model is determined independently of coefficients of lower layers other than the adjacent lower layer; and
a unit which performs arithmetic encoding on a group of coefficients having a same selected context model among the current coefficients using the selected context model,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
24. An apparatus for entropy decoding a plurality of encoded current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers, the apparatus comprising:
a hardware processor;
a unit which determines a coding pass with respect to each of the encoded current coefficients;
a unit which selects a context model among a plurality of context models with respect to each of the encoded current coefficients using an adjacent lower coefficient of an adjacent lower layer corresponding to a respective one of the encoded current coefficients, wherein the context model is determined independently of coefficients of lower layers other than the adjacent lower layer; and
a unit which performs arithmetic decoding on a group of coefficients having a same selected context model among the encoded current coefficients by using the selected context model,
wherein the plurality of quality layers are layered in succession on top of one another, and the plurality of quality layers comprise a plurality of lower layers which are lower than the quality layer, and
the adjacent lower layer is one of the plurality of lower layers and is adjacently below the quality layer.
|
This application claims priority from Korean Patent Application No. 10-2006-0087546 filed on Sep. 11, 2006 in the Korean Intellectual Property Office, and U.S. Provisional Patent Application No. 60/831,936 filed on Jul. 20, 2006 in the United States Patent and Trademark Office, the disclosures of which are incorporated herein by reference in their entirety.
1. Field of the Invention
Methods and apparatuses consistent with the present invention relate to video compression, and more particularly, to increasing encoding efficiency when performing entropy encoding on a fine granular scalability layer.
2. Description of the Related Art
With the development of information communication technology, including the Internet, video communication as well as text and voice communication has increased. Conventional text communication cannot satisfy users' various demands, and thus multimedia services that can provide various types of information such as text, pictures, and music have increased. However, multimedia data requires a large-capacity storage medium and a wide bandwidth for transmission because the amount of multimedia data is usually large. Accordingly, a compression coding method is essential for transmitting multimedia data including text, video, and audio.
A basic principle of data compression is to remove data redundancy. Data can be compressed by removing spatial redundancy, in which the same color or object is repeated in an image; temporal redundancy, in which there is little change between adjacent frames of a moving image or the same sound is repeated in audio; or perceptual (psychovisual) redundancy, which takes into account the limited sensitivity of human vision to high frequencies. In a general video coding method, the temporal redundancy is removed by temporal filtering based on motion compensation, and the spatial redundancy is removed by spatial transform.
Lossy encoding is performed on the result obtained by removing data redundancy according to predetermined quantization steps through a quantization process. Lossless encoding is finally performed on the quantized result through entropy encoding.
Currently, in the scalable video coding (SVC) standard being developed by the Joint Video Team (JVT), a joint group of video experts from the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) and the International Telecommunication Union (ITU), research on a multi-layer based scalable video coding technique built on the conventional H.264 standard has been actively conducted. In particular, a fine granular scalability (FGS) technique is used to improve the quality or bit rate of one frame.
In the SVC draft, the coding is performed using correlation between the respective FGS layers. That is, one FGS layer is coded using coefficients of another FGS layer according to separate coding passes (a significant pass and a refinement pass). When the corresponding coefficients in all of the lower layers are zero, a coefficient of the current layer is coded by the significant pass; when one or more of the corresponding lower-layer coefficients are nonzero, the coefficient of the current layer is coded by the refinement pass. Predetermined coefficients of the FGS layers are coded by the different passes because the probability distributions of the coefficients are clearly differentiated from each other according to the coefficients of the corresponding lower layers.
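For illustration only, the pass decision described above can be sketched as follows (hypothetical helper name; not the SVC reference implementation):

    def select_coding_pass(lower_layer_coeffs):
        # lower_layer_coeffs: the coefficients at the same spatial position in
        # every layer below the current FGS layer.
        if any(c != 0 for c in lower_layer_coeffs):
            return "refinement"   # at least one corresponding lower coefficient is nonzero
        return "significant"      # all corresponding lower coefficients are zero

    # select_coding_pass([0, 0, 1]) -> "refinement"
    # select_coding_pass([0, 0, 0]) -> "significant"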
The present invention provides a method and apparatus for entropy encoding/decoding that can improve entropy coding efficiency of video data that includes a plurality of quality layers.
The present invention also provides a method and apparatus for entropy encoding/decoding that can reduce computational complexity when performing entropy coding on video data that includes a plurality of quality layers.
According to an aspect of the present invention, there is provided a method of entropy encoding at least one current coefficient of a predetermined quality layer among a plurality of quality layers when an image block is divided into the plurality of quality layers, the method including determining a coding pass with respect to each of the current coefficients, selecting a context model with respect to each of the current coefficients using at least one lower coefficient corresponding to each of the current coefficients when the coding pass is a refinement pass, and performing arithmetic encoding on a group of coefficients having the same selected context model among the current coefficients by using the selected context model.
According to another aspect of the present invention, there is provided a method of entropy decoding at least one encoded current coefficient of a predetermined quality layer among a plurality of quality layers when an image block is divided into the plurality of quality layers, the method including determining a coding pass with respect to each of the encoded current coefficients, selecting a context model with respect to each of the encoded current coefficients using at least one lower coefficient corresponding to each of the encoded current coefficients when the coding pass is a refinement pass, and performing arithmetic decoding on a group of coefficients having the same selected context model among the encoded current coefficients by using the selected context model.
According to another aspect of the present invention, there is provided a method of entropy encoding at least one current coefficient of a predetermined quality layer among a plurality of quality layers when an image block is divided into the plurality of quality layers, the method including determining a coding pass with respect to each of the current coefficients, selecting a variable length coding (VLC) table with respect to each of the current coefficients using at least one lower coefficient corresponding to each of the current coefficients when the coding pass is a refinement pass, and performing variable length encoding on a group of coefficients having the same selected VLC table among the current coefficients by using the selected VLC table.
According to another aspect of the present invention, there is provided a method of entropy decoding at least one encoded current coefficient of a predetermined quality layer among a plurality of quality layers when an image block is divided into the plurality of quality layers, the method including determining a coding pass with respect to each of the encoded current coefficients, selecting a VLC table with respect to each of the encoded current coefficients using at least one lower coefficient corresponding to each of the encoded current coefficients when the coding pass is a refinement pass, and performing variable length decoding on a group of coefficients having the same selected VLC table among the encoded current coefficients by using the selected VLC table.
The above and other aspects of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of exemplary embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.
The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
The quantized result 25 is inversely quantized (S5) and the inversely quantized result 26 is added to the inversely quantized slice 23 by an adder 27 (S6) and then supplied to a subtracter 28. The subtracter 28 subtracts the added result from the original slice (S7). The subtracted result is quantized by a third quantization parameter QP3 (S8). The quantized result 29 forms a second FGS layer. Through this process, a plurality of quality layers shown in
As described above, in the current SVC draft, coding passes of coefficients in a predetermined FGS layer are determined by using corresponding coefficients of all layers that are located below the predetermined FGS layer. Here, the "corresponding coefficients" indicate coefficients that have the same spatial position across the plurality of quality layers. For example, as shown in
In
Therefore, the coefficients cn, cn+1 and cn+2 that belong to the same refinement pass have the same context model Ctx1.
As compared with
In the SVC draft according to the related art, after the refinement pass and the significant pass are determined as shown in
Therefore, in the exemplary embodiment of the invention, in order to reduce the computational load of entropy coding according to the coding passes, it is proposed that the coefficients not be divided into groups on the basis of the coding passes as in the related-art SVC draft, but that the entropy encoding be performed in a single loop in scan order. That is, the entropy coding is performed on the coefficients in scan order regardless of whether a given coefficient belongs to the refinement pass or the significant pass.
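A rough sketch of this single-loop coding (illustrative function names, not the reference encoder) is:

    def encode_block_single_loop(current_coeffs, pass_of, refinement_coder, significant_coder):
        # pass_of(idx) returns "refinement" or "significant" for coefficient idx,
        # e.g. based on the base_luma_level() derivation discussed below.
        for idx, coeff in enumerate(current_coeffs):   # one pass over the block in scan order
            if pass_of(idx) == "refinement":
                refinement_coder(coeff)
            else:
                significant_coder(coeff)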
In order that the scheme of only using the adjacent lower coefficients as shown in
In the JSVM-5, when the value of the function base_luma_level( ) is zero, it is determined that the current coefficient is coded by the significant pass, and when the value is not zero, it is determined that the current coefficient is coded by the refinement pass. The definition of this function in the JSVM-5 is given by the pseudocode of Table 1.
TABLE 1
base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is specified as follows.
A variable i is set equal to 0.
A variable result is set equal to 0.
The return value of base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is derived as follows.
a. The derivation process for base quality slices in subclause F.6.1 is invoked with mbAddr and baseQualityLevel equal to i as input, and the output is assigned to baseQualitySlice[i].
b. When the transform coefficient level LumaLevel[4 * i8x8 + i4x4][scanIdx] of the macroblock mbAddr is not equal to 0, the following applies: the variable result is set equal to LumaLevel[4 * i8x8 + i4x4][scanIdx], and the derivation process is continued with step d.
c. When the variable i is less than (quality_level − 1), the following applies: the variable i is incremented by 1 (i = i + 1), and the derivation process is continued with step a.
d. The return value of base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is set equal to the value of the variable result.
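Read as ordinary code, the derivation in Table 1 scans the lower quality layers from the bottom up and returns the first nonzero corresponding luma level, or 0 if every lower level is zero. A non-normative sketch (the data layout and function signature are assumptions for illustration):

    def base_luma_level(luma_levels_per_layer, i8x8, i4x4, scan_idx, quality_level):
        # luma_levels_per_layer[q][4*i8x8 + i4x4][scan_idx] holds the coefficient
        # level at the same position in lower quality layer q (0 = lowest layer).
        for q in range(quality_level):           # steps a-c: walk the lower layers upward
            level = luma_levels_per_layer[q][4 * i8x8 + i4x4][scan_idx]
            if level != 0:
                return level                     # step b: first nonzero level found
        return 0                                 # step d: all corresponding lower levels are zero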
According to the first exemplary embodiment of the invention as shown in
TABLE 2
base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is specified as follows.
The return value of base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is derived as follows.
a. The derivation process for base quality slices in subclause F.6.1 is invoked with mbAddr and baseQualityLevel equal to i as input, and the output is assigned to baseQualitySlice[quality_level − 1].
b. The return value of base_luma_level(mbAddr, i8x8, i4x4, scanIdx) is set equal to LumaLevel[4 * i8x8 + i4x4][scanIdx].
In Table 2, the function base_luma_level( ) is newly defined to be equal to the value of the adjacent lower coefficient corresponding to the coefficient of the current FGS layer, that is, "LumaLevel[4 * i8x8 + i4x4][scanIdx]". Here, the parameter "4 * i8x8 + i4x4" is an index formula chosen so that the indexes of the coefficients do not overlap. As such, in the first exemplary embodiment of the invention, by using the newly defined function base_luma_level( ), it is determined that when the value of this function is zero, the coding is performed using the significant pass, and otherwise, using the refinement pass.
The method of determining the coding passes with respect to luminance components has been described with reference to Table 2. However, the same method may be applied with respect to chrominance components.
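In this first exemplary embodiment the derivation therefore collapses to a single lookup in the adjacent lower layer. A non-normative sketch, using the same assumed data layout as above:

    def base_luma_level_adjacent(luma_levels_per_layer, i8x8, i4x4, scan_idx, quality_level):
        # Only the layer directly below the current FGS layer is consulted.
        adjacent_layer = quality_level - 1
        return luma_levels_per_layer[adjacent_layer][4 * i8x8 + i4x4][scan_idx]

    # Coding-pass decision: significant pass if the returned value is zero,
    # refinement pass otherwise.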
However, as described above, taking into account the fact that the current coefficient has high correlation with the adjacent lower coefficient corresponding to the current coefficient, and thus has a similar context characteristic to the adjacent lower coefficient, the context model of the current coefficient is determined based on the value of the adjacent lower coefficient. For example, a context model Ctx1 is applied to a coefficient cn on the basis of an adjacent lower coefficient 1, a context model Ctx2 is applied to a coefficient cn+1 on the basis of an adjacent lower coefficient −1, and a context model Ctx3 is applied to a coefficient cn+2 on the basis of an adjacent lower coefficient 0. Herein, "0/1/−1" indicates 0, 1, or −1.
That is, when determining the coding passes, all of the lower coefficients in all of the lower layers corresponding to the current FGS layer are used. However, when determining the context model of each of the current coefficients included in each refinement pass, only the adjacent lower coefficient is used. In
That is, the third exemplary embodiment is the same as the first exemplary embodiment in that the coding passes are determined according to whether the adjacent lower coefficients are zero or not. However, the third exemplary embodiment differs from the first exemplary embodiment in that two context models Ctx1 and Ctx2 are applied to the current coefficients having the refinement pass, according to whether the adjacent lower coefficients corresponding to those current coefficients are 1 or −1. The current coefficients whose adjacent lower coefficients are 1 and those whose adjacent lower coefficients are −1 may have different context characteristics, so applying different context models can increase the efficiency of FGS coding.
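As an illustration only (the model identifiers follow the example above; which of Ctx1/Ctx2 maps to which sign in the third embodiment is an assumption), the context selection of the second and third exemplary embodiments could be expressed as:

    def select_context_second_embodiment(adjacent_lower):
        # The pass was decided from all lower layers, so a refinement-pass
        # coefficient may have an adjacent lower coefficient of 1, -1, or 0.
        return {1: "Ctx1", -1: "Ctx2", 0: "Ctx3"}[adjacent_lower]

    def select_context_third_embodiment(adjacent_lower):
        # The pass was decided from the adjacent lower coefficient alone,
        # so here it is nonzero (1 or -1).
        return "Ctx1" if adjacent_lower == 1 else "Ctx2"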
In the above-described second and third exemplary embodiments, the description is made of the case where the plurality of context models are applied on the assumption that the refinement pass is coded using CABAC. However, when the refinement pass is coded using Context-adaptive Variable Length Coding (CAVLC), a plurality of VLC tables may be applied instead of using the plurality of context models.
As described above, in the exemplary embodiments of the invention, the description is made of the example in which one block includes one discrete layer and two FGS layers, and the second FGS layer is the current layer (i.e., a layer to be coded). However, the number of FGS layers may be increased, and a third FGS layer or upper layers may be the current layer.
The frame encoding device 110 generates, from an input video frame, at least one quality layer related to the video frame.
To this end, the frame encoding device 110 may include a prediction unit 111, a transform unit 112, a quantization unit 113, and a quality layer generating unit 114.
The prediction unit 111 obtains a residual signal by subtracting an image, which is predicted according to a predetermined prediction method, from a current macroblock. The prediction method may include prediction schemes disclosed in the SVC draft, that is, inter prediction, directional intra prediction, intra base prediction, and the like. The inter prediction may include a motion estimation process of obtaining a motion vector that expresses the relative motion between the current frame and a frame having the same resolution as and/or a different temporal position from the current frame. The current frame may also be predicted using a frame of a lower layer (a base layer) that exists in the same temporal position as the current frame and has a different resolution from the current frame; this is called intra base prediction. The intra base prediction does not include the motion estimation process.
The transform unit 112 transforms the obtained residual signal by using a spatial transform method, such as discrete cosine transform (DCT), wavelet transform, or the like so as to generate a transform coefficient. As a result of the spatial transform, the transform coefficient is obtained. When the DCT is used as the spatial transform method, a DCT coefficient is obtained, and when the wavelet transform is used, a wavelet coefficient is obtained.
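As a generic illustration of such a spatial transform (a standard 2-D DCT here, not the particular integer transform used in H.264/SVC):

    import numpy as np
    from scipy.fft import dctn

    block = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 residual block
    coeffs = dctn(block, norm='ortho')                  # 2-D DCT transform coefficients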
The quantization unit 113 quantizes the transform coefficient obtained by the transform unit 112 so as to generate a quantization coefficient. Quantization is a process in which the transform coefficient, expressed as an arbitrary real number, is mapped onto predetermined intervals so as to be expressed as discrete values. The quantization process includes scalar quantization, vector quantization, and the like.
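A minimal sketch of scalar quantization (the step size is made up for illustration and is not the SVC quantizer):

    def quantize(coeff, step):
        # Map a real-valued transform coefficient onto a discrete level.
        return int(round(coeff / step))

    def dequantize(level, step):
        # Approximate reconstruction used on the decoder side.
        return level * step

    # quantize(13.7, 4) -> 3 ; dequantize(3, 4) -> 12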
The quality layer generating unit 114 generates a plurality of quality layers by the process described in
The entropy encoding device 120 performs an independent lossless encoding process according to the exemplary embodiment of the invention. A detailed structure of the entropy encoding device 120 is shown in
The coding pass selecting unit 121 determines a coding pass of each of the coefficients included in the current block (4×4 block, 8×8 block, or 16×16 block) that belongs to the quality layer. Further, the coding pass selecting unit 121 may determine context models or VLC tables that may be used for the refinement pass coding.
The coding passes and the context models are determined according to the first, second, and third exemplary embodiments of the invention described above.
According to the first exemplary embodiment of the invention, the coding pass selecting unit 121 determines the coding pass (refinement pass or significant pass) of each of the current coefficients using only the adjacent lower coefficient corresponding to each of the current coefficients. That is, the coding pass of a current coefficient is determined as the significant pass when the adjacent lower coefficient is zero, and otherwise as the refinement pass. At this time, one context model (or one VLC table) is applied to all of the current coefficients whose coding passes are determined as the refinement pass.
According to the second exemplary embodiment, the coding pass selecting unit 121 selects the coding pass by using the coefficients of all lower layers corresponding to each of the current coefficients of the current layer. When at least one of the coefficients being used has a nonzero value, the coding pass of the current coefficient is determined as the refinement pass, and otherwise as the significant pass. At this time, different context models (or VLC tables) are applied to the current coefficients whose coding passes are determined as the refinement pass, according to the value (1, 0, or −1) of the adjacent lower coefficient corresponding to each of the current coefficients.
According to the third exemplary embodiment, like the first exemplary embodiment, the coding pass selecting unit 121 refers only to the adjacent lower coefficient corresponding to each of the current coefficients and determines the coding pass according to whether the adjacent lower coefficient has a nonzero value. Here, different context models (or VLC tables) are applied to the current coefficients whose coding passes are determined as the refinement pass, according to the value (1 or −1) of the adjacent lower coefficient being used.
A pass coding unit 125 performs a lossless encoding (entropy encoding) process on the coefficients of the current block according to the selected coding passes. To this end, the pass coding unit 125 includes the refinement pass coding unit 122 that performs a lossless encoding process on the current coefficients whose coding passes are determined as the refinement passes by the coding pass selecting unit 121, by using a refinement pass coding scheme, and the significant pass coding unit 123 that performs a lossless encoding process on the current coefficients whose coding passes are determined as the significant passes by the coding pass selecting unit 121, by using a significant pass coding scheme.
A detailed method of entropy coding according to the actual refinement pass or significant pass may be performed using the same method used in the conventional SVC draft.
In particular, JVT-P056, which is a proposal document for the SVC, proposes the following coding scheme for the significant pass. A characteristic of a codeword, which is an encoded result, is determined by a cut-off parameter "m". When a symbol "C" to be coded is equal to or smaller than the parameter m, the symbol C is encoded using an Exp-Golomb code. When the symbol C is larger than the parameter m, the symbol C is divided into two parts, a length and a suffix, according to Equation 1, and then encoded.
The P indicates the encoded codeword, which includes the length and the suffix (which has a value of 00, 01, or 10).
Meanwhile, the refinement pass coding scheme may include CABAC or CAVLC. The refinement pass coding unit 122 collects the current coefficients that have the same context model determined by the coding pass selecting unit 121, and performs binary arithmetic encoding on the collected coefficients by using the context model (CABAC). Alternatively, the refinement pass coding unit 122 collects the current coefficients that have the same VLC table determined by the coding pass selecting unit 121, and performs variable length encoding on the collected current coefficients by using the VLC table (CAVLC).
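A rough sketch of this grouping step (illustrative interfaces; the actual CABAC/CAVLC engines are not shown):

    from collections import defaultdict

    def code_refinement_pass(refinement_coeffs, selected_models, encode_group):
        # refinement_coeffs[i] was assigned the context model (or VLC table)
        # selected_models[i]; coefficients sharing a model are coded as one group.
        groups = defaultdict(list)
        for coeff, model in zip(refinement_coeffs, selected_models):
            groups[model].append(coeff)
        for model, coeffs in groups.items():
            encode_group(coeffs, model)   # e.g. binary arithmetic coding with this model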
The CABAC is a process in which a probability model is selected with respect to predetermined coefficients to be coded and arithmetic coding is performed on the coefficients. In general, the CABAC process includes binarization, context model selection, arithmetic coding, and probability updating.
The CAVLC is a scheme in which coefficients having predetermined lengths that are to be coded are converted into codewords having different lengths by using the VLC table.
The MUX 124 multiplexes an output of the refinement pass coding unit 122 and an output of the significant pass coding unit 123 so as to output one bit stream.
The entropy decoding device 220 performs entropy decoding on the coefficients of the current block that belong to at least one quality layer included in the input bit stream according to the exemplary embodiment of the invention. A detailed structure of the entropy decoding device 220 will be described below with reference to
The frame decoding device 210 restores an image of the current block from the coefficients of the current block that are losslessly decoded by the entropy decoding device 220. To this end, the frame decoding device 210 includes a quality layer assembling unit 211, an inverse quantization unit 212, an inverse transform unit 213, and an inverse prediction unit 214.
The quality layer assembling unit 211 adds a plurality of quality layers as shown in
The inverse quantization unit 212 inversely quantizes the data supplied from the quality layer assembling unit 211.
The inverse transform unit 213 inversely transforms the inversely quantized result. The inverse transform is performed inversely to the transform process performed in the transform unit 112 of
The inverse prediction unit 214 adds the restored residual signal supplied from the inverse transform unit 213 and a prediction signal so as to restore the video frame. At this time, the prediction signal may be obtained using inter prediction or intra base prediction like the video encoder stage.
The coding pass selecting unit 221 determines a coding pass of each of the encoded coefficients (encoded current coefficients) included in the current block (4×4 block, 8×8 block, or 16×16 block) that belongs to the quality layer. Further, the coding pass selecting unit 221 may determine context models or VLC tables to be applied to the refinement pass coding. The method of determining the coding passes and the method of determining the context models are performed according to the first, second, and third exemplary embodiments of the invention as described above.
A pass decoding unit 225 losslessly decodes the current coefficients according to the selected coding passes. To this end, the pass decoding unit 225 includes the refinement pass decoding unit 222 that losslessly decodes the current coefficients whose coding passes are determined as the refinement passes by the coding pass selecting unit 221, by using a refinement pass decoding scheme, and the significant pass decoding unit 223 that losslessly decodes the current coefficients whose coding passes are determined as the significant passes by the coding pass selecting unit 221, according to a significant pass decoding scheme.
The refinement pass decoding unit 222 collects the current coefficients having the same context model determined by the coding pass selecting unit 221, and performs binary arithmetic decoding on the collected coefficients by using the context model (CABAC). Alternatively, the refinement pass decoding unit 222 collects the current coefficients having the same VLC table determined by the coding pass selecting unit 221, and performs variable length decoding on the collected coefficients by using the VLC table.
The MUX 224 multiplexes an output of the refinement pass decoding unit 222 and an output of the significant pass decoding unit 223 so as to generate data (slice or frame) with respect to one quality layer.
The respective components shown in
Although the present invention has been described in connection with the exemplary embodiments thereof, it will be apparent to those skilled in the art that various modifications and changes may be made thereto without departing from the scope and spirit of the invention. Therefore, it should be understood that the above exemplary embodiments are not limitative, but illustrative in all aspects.
According to the exemplary embodiments of the invention, it is possible to improve entropy coding efficiency of video data that includes a plurality of quality layers.
Further, according to the exemplary embodiments of the invention, it is possible to reduce computational complexity when performing entropy encoding on video data that includes a plurality of quality layers.
Lee, Kyo-hyuk, Han, Woo-jin, Lee, Bae-keun