Disclosed are a video encoding method and apparatus and a video decoding method and apparatus. The method of encoding video includes: producing a first predicted coding unit of a current coding unit, which is to be encoded; determining whether the current coding unit includes a portion located outside a boundary of a current picture; and producing a second predicted coding unit by changing values of pixels of the first predicted coding unit, by using the pixels of the first predicted coding unit and neighboring pixels of the pixels, when the current coding unit does not include a portion located outside the boundary of the current picture. Accordingly, a residual block, which is the difference between the current coding unit and the second predicted coding unit, can be encoded, thereby improving video prediction efficiency.
8. An apparatus for restoring an encoded block, the apparatus comprising:
a processor; and
a memory storing a program which causes the processor to:
split an image into a plurality of maximum coding units based on information about a size of a maximum coding unit, and determine at least one coding unit included in the maximum coding unit among the plurality of maximum coding units by splitting the maximum coding unit based on split information,
extract information regarding a prediction mode of a current block included in the at least one coding unit, from a received bitstream,
determine neighboring pixels of the current block used for intra prediction by using available neighboring pixels of the current block when the extracted information indicates the prediction mode of the current block is intra prediction,
produce a first prediction value of the current block including a first pixel located on a top border in the current block and a second pixel located on a left border in the current block, by calculating an average value of at least one of the neighboring pixels adjacent to the current block,
produce a second prediction value of the first pixel by using a weighted average value of the first prediction value and a pixel value of one neighboring pixel adjacent to the first pixel and located in the same column as the first pixel,
produce a second prediction value of the second pixel by using a weighted average value of the first prediction value and a pixel value of one neighboring pixel adjacent to the second pixel and located in the same row as the second pixel,
obtain, from the received bitstream, a residual of the first pixel and a residual of the second pixel included in the current block,
obtain a restored current block including a restored pixel value of the first pixel and a restored pixel value of the second pixel, wherein the restored pixel value of the first pixel is obtained by adding the residual of the first pixel and the second prediction value of the first pixel, and the restored pixel value of the second pixel is obtained by adding the residual of the second pixel and the second prediction value of the second pixel, and
output the restored current block including the restored pixel value of the first pixel and the restored pixel value of the second pixel,
wherein, when the neighboring pixels of the current block are located within a boundary of a current picture, the neighboring pixels of the current block located within the boundary of the current picture are determined as available.
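The restoration steps recited in claim 8 can be sketched as follows. This is a hedged illustration, not the patented method itself: the function name, the 3:1 weights in the border filtering, and the treatment of the top-left corner as a top-border pixel are my own assumptions, and a real decoder would operate on clipped fixed-point samples.

```python
# Illustrative sketch of the claim-8 restoration flow: DC-style first
# prediction, weighted second prediction on the block borders, then
# residual addition. Names and weights are assumptions, not the claim text.

def restore_block(neighbors_top, neighbors_left, residual):
    """neighbors_top: reconstructed pixels above the block (one per column);
    neighbors_left: reconstructed pixels left of the block (one per row);
    residual: 2-D list of residual values parsed from the bitstream."""
    h, w = len(residual), len(residual[0])

    # First prediction value: rounded average of the available neighbors.
    avail = list(neighbors_top) + list(neighbors_left)
    dc = (sum(avail) + len(avail) // 2) // len(avail)

    restored = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y == 0:
                # Top border: weighted average of the first prediction value
                # and the neighbor in the same column (3:1 weights assumed).
                pred = (3 * dc + neighbors_top[x] + 2) >> 2
            elif x == 0:
                # Left border: weighted average with the same-row neighbor.
                pred = (3 * dc + neighbors_left[y] + 2) >> 2
            else:
                pred = dc
            # Restored pixel = residual + second (or first) prediction value.
            restored[y][x] = pred + residual[y][x]
    return restored
```

With flat neighbors the filtering is transparent, so the restored block is simply the residual shifted by the DC value.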
1. A method of decoding video, the method comprising:
extracting information regarding a prediction mode for a current decoding unit, which is to be decoded, from a received bitstream;
producing a first predicted decoding unit of the current decoding unit, based on the extracted information;
determining whether the current decoding unit includes a portion located outside a boundary of a current picture; and
producing a second predicted decoding unit by changing values of pixels of the first predicted decoding unit by using pixels of the first predicted decoding unit and neighboring pixels of the pixels when the current decoding unit does not include the portion located outside the boundary of the current picture, and skipping the producing the second predicted decoding unit when the current decoding unit includes the portion located outside the boundary of the current picture.
2. The method of
3. The method of
if the index information has a first predetermined value, the index information indicates that the producing the second predicted decoding unit is not to be performed; and
if the index information has a second predetermined value, the index information indicates that the producing the second predicted decoding unit is to be performed.
4. An apparatus for decoding video, the apparatus comprising:
an entropy decoder which extracts information regarding a prediction mode for a current decoding unit, which is to be decoded, from a received bitstream;
a predictor which produces a first predicted decoding unit of the current decoding unit, based on the extracted information;
a determiner which determines whether the current decoding unit includes a portion located outside a boundary of a current picture; and
a post-processor which produces a second predicted decoding unit by changing values of pixels of the first predicted decoding unit by using the pixels of the first predicted decoding unit and neighboring pixels of the pixels when the current decoding unit does not include the portion located outside the boundary of the current picture, and which skips the producing the second predicted decoding unit when the current decoding unit includes the portion located outside the boundary of the current picture.
5. The apparatus of
6. The apparatus of
if the index information has a first predetermined value, the index information indicates that the process of producing the second predicted decoding unit is not to be performed; and
if the index information has a second predetermined value, the index information indicates that the process of producing the second predicted decoding unit is to be performed.
7. A non-transitory computer readable recording medium having recorded thereon a program code for executing the method of
Compared to the current minimum transform unit size CurrMinTuSize that can be determined in the current coding unit, a transform unit size RootTuSize when the TU size flag is 0 may denote a maximum transform unit size that can be selected in the system. In Equation (1), RootTuSize/(2^MaxTransformSizeIndex) denotes a transform unit size when the transform unit size RootTuSize, when the TU size flag is 0, is split a number of times corresponding to the maximum TU size flag, and MinTransformSize denotes a minimum transformation size. Thus, a smaller value from among RootTuSize/(2^MaxTransformSizeIndex) and MinTransformSize may be the current minimum transform unit size CurrMinTuSize that can be determined in the current coding unit.
According to an exemplary embodiment, the maximum transform unit size RootTuSize may vary according to the type of a prediction mode.
For example, if a current prediction mode is an inter mode, then RootTuSize may be determined by using Equation (2) below. In Equation (2), MaxTransformSize denotes a maximum transform unit size, and PUSize denotes a current prediction unit size.
RootTuSize=min(MaxTransformSize,PUSize) (2)
That is, if the current prediction mode is the inter mode, the transform unit size RootTuSize when the TU size flag is 0, may be a smaller value from among the maximum transform unit size and the current prediction unit size.
If a prediction mode of a current partition unit is an intra mode, RootTuSize may be determined by using Equation (3) below. In Equation (3), PartitionSize denotes the size of the current partition unit.
RootTuSize=min(MaxTransformSize,PartitionSize) (3)
That is, if the current prediction mode is the intra mode, the transform unit size RootTuSize when the TU size flag is 0 may be a smaller value from among the maximum transform unit size and the size of the current partition unit.
However, the current maximum transform unit size RootTuSize that varies according to the type of a prediction mode in a partition unit is just an example and is not limited thereto.
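Equations (1) through (3) reduce to simple minimum selections. The sketch below is illustrative only: the function names are mine, the variable names mirror the text, and the division by 2^MaxTransformSizeIndex is written as a shift on the assumption that transform sizes are powers of 2.

```python
# Illustrative sketch of Equations (1)-(3); not part of the disclosure.

def root_tu_size(max_transform_size, pu_size, partition_size, is_inter):
    """RootTuSize when the TU size flag is 0, per prediction mode."""
    if is_inter:
        return min(max_transform_size, pu_size)        # Equation (2)
    return min(max_transform_size, partition_size)     # Equation (3)

def curr_min_tu_size(root_size, max_transform_size_index, min_transform_size):
    # Equation (1) as described in the text: the smaller of
    # RootTuSize/(2^MaxTransformSizeIndex) and MinTransformSize,
    # with the division performed as a right shift.
    return min(root_size >> max_transform_size_index, min_transform_size)
```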
Intra prediction performed by the intra predictor 410 of the image encoder 400 illustrated in
The index MPI_PredMode indicates what kind of Multi-Parameter Intra prediction (MPI), which will be described in detail later, is to be performed. Referring to Table 2, if the index MPI_PredMode is 0, the MPI is not performed to produce a second predicted coding unit, and if the index MPI_PredMode is greater than 0, the MPI is performed so as to produce the second predicted coding unit.
TABLE 2

MPI_PredMode      MPI Mode Name    Meaning
0                 MPI_Mode0        Do not perform MPI
1                 MPI_Mode1        Perform MPI
. . .             . . .            . . .
MPI_PredModeMAX   MPI_ModeMAX      Perform MPI
According to Table 2, the index MPI_PredMode is 0 or 1 depending on whether the MPI is to be performed. However, in a case where N modes are present as MPI modes, the MPI_PredMode may have an integer value ranging from 0 to N so as to express both the case where the MPI will not be performed and the N modes.
If the determiner 1415 determines that the current coding unit does not include any portion located outside a boundary of the picture, that is, when the index MPI_PredMode is not 0, then the post-processor 1420 produces the second predicted coding unit by performing the MPI, using neighboring pixels of the pixels that constitute the first predicted coding unit so as to change the pixel values of the pixels of the first predicted coding unit.
Thus, according to an exemplary embodiment, coding unit sizes may be largely classified into at least three groups: N1×N1 (2≤N1≤4, where N1 is an integer), N2×N2 (8≤N2≤32, where N2 is an integer), and N3×N3 (64≤N3, where N3 is an integer). If the number of intra prediction modes performed on each coding unit of size N1×N1 is A1 (a positive integer), the number performed on each coding unit of size N2×N2 is A2 (a positive integer), and the number performed on each coding unit of size N3×N3 is A3 (a positive integer), then the numbers of intra prediction modes performed according to the size of a coding unit may be determined to satisfy A3≤A1≤A2. That is, if a current picture is divided into small-sized, medium-sized, and large-sized coding units, the number of prediction modes performed on the medium-sized coding unit may be greater than the numbers of prediction modes performed on the small-sized and large-sized coding units. However, another exemplary embodiment is not limited thereto, and a large number of prediction modes may also be set to be performed on the small-sized and medium-sized coding units. The numbers of prediction modes according to the size of each coding unit illustrated in
As illustrated in
TABLE 3

mode #     dx    dy
mode 4      1    −1
mode 5      1     1
mode 6      1     2
mode 7      2     1
mode 8      1    −2
mode 9      2    −1
mode 10     2   −11
mode 11     5    −7
mode 12    10    −7
mode 13    11     3
mode 14     4     3
mode 15     1    11
mode 16     1    −1
mode 17    12    −3
mode 18     1   −11
mode 19     1    −7
mode 20     3   −10
mode 21     5    −6
mode 22     7    −6
mode 23     7    −4
mode 24    11     1
mode 25     6     1
mode 26     8     3
mode 27     5     3
mode 28     5     7
mode 29     2     7
mode 30     5    −7
mode 31     4    −3
Mode 0, mode 1, mode 2, mode 3, and mode 32 denote a vertical mode, a horizontal mode, a DC mode, a plane mode, and a Bi-linear mode, respectively.
Mode 32 may be set as a bi-linear mode that uses bi-linear interpolation as will be described later with reference to
Referring to
Referring to
Meanwhile, if the extended line 180 having the angle of tan−1(dy/dx) determined according to (dx, dy) of each mode passes between the neighboring pixel A 181 and the neighboring pixel B 182 at integer locations, the section between the neighboring pixel A 181 and the neighboring pixel B 182 may be divided into a predetermined number of areas, and a weighted average value considering the distances between the intersection and the neighboring pixels A 181 and B 182 in each divided area may be used as a prediction value. For example, referring to
Also, if two neighboring pixels, that is, the neighboring pixel A on the up side and the neighboring pixel B on the left side meet the extended line 180 as shown in
The intra prediction modes having various directionalities shown in Table 3 may be predetermined by an encoding side and a decoding side, and only an index of an intra prediction mode of each coding unit may be transmitted.
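The fractional-position weighting described above can be sketched as follows for the up-neighbor case. The function name, the edge clamping, and the distance convention are my own assumptions; dy is fixed to a power of 2 (here 32) so that, as the text later describes, every division reduces to a shift.

```python
# Sketch of interpolating a prediction value on the top neighbor row for the
# pixel at row i, column j, along a line of direction (dx, dy) as in Table 3.
# The up-neighbor location follows the text's (j + i*dx/dy, 0) form.

def predict_pixel_up(top_row, i, j, dx, dy_shift=5):
    dy = 1 << dy_shift
    pos = (j << dy_shift) + i * dx        # (j + i*dx/dy) scaled by dy
    base = pos >> dy_shift                # integer-position neighbor A
    frac = pos - (base << dy_shift)       # offset of the crossing toward B
    last = len(top_row) - 1
    a = top_row[max(0, min(base, last))]
    b = top_row[max(0, min(base + 1, last))]
    # Weighted average by the distances from the crossing point to A and B.
    return (a * (dy - frac) + b * frac + (dy >> 1)) >> dy_shift
```

When the crossing lands exactly on an integer position (frac = 0), the prediction degenerates to the single neighbor A, matching the integer-location case described earlier.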
Specifically, first, a value of a virtual pixel C 193 on a lower rightmost point of the current coding unit is calculated by calculating an average of values of a neighboring pixel (right-up pixel) 194 on an upper rightmost point of the current coding unit and a neighboring pixel (left-down pixel) 195 on a lower leftmost point of the current coding unit, as expressed in the following equation:
C=0.5(LeftDownPixel+RightUpPixel) (4)
The value of the virtual pixel C 193 in Equation (4) may also be obtained, without division, by the shift operation C=(LeftDownPixel+RightUpPixel+1)>>1.
Next, a value of the virtual pixel A 191 located on a lowermost boundary of the current coding unit when the current pixel P 190 is extended downward by considering the distance W1 between the current pixel P 190 and the left boundary of the current coding unit and the distance W2 between the current pixel P 190 and the right boundary of the current coding unit, is calculated by using the following equation:
A=(C*W1+LeftDownPixel*W2)/(W1+W2)
A=(C*W1+LeftDownPixel*W2+((W1+W2)/2))/(W1+W2) (5)
When a value of W1+W2 in Equation 5 is a power of 2, like 2^n, A=(C*W1+LeftDownPixel*W2+((W1+W2)/2))/(W1+W2) may be calculated by shift operation as A=(C*W1+LeftDownPixel*W2+2^(n−1))>>n without division.
Similarly, a value of the virtual pixel B 192 located on a rightmost boundary of the current coding unit when the current pixel P 190 is extended in the right direction by considering the distance h1 between the current pixel P 190 and the upper boundary of the current coding unit and the distance h2 between the current pixel P 190 and the lower boundary of the current coding unit, is calculated by using the following equation:
B=(C*h1+RightUpPixel*h2)/(h1+h2)
B=(C*h1+RightUpPixel*h2+((h1+h2)/2))/(h1+h2) (6)
When a value of h1+h2 in Equation 6 is a power of 2, like 2^m, B=(C*h1+RightUpPixel*h2+((h1+h2)/2))/(h1+h2) may be calculated by shift operation as B=(C*h1+RightUpPixel*h2+2^(m−1))>>m without division.
Once the values of the virtual pixel B 192 on the right border and the virtual pixel A 191 on the down border of the current pixel P 190 are determined by using Equations (4) through (6), a predictor for the current pixel P 190 may be determined by using the average of A, B, D, and E. In detail, a weighted average value considering the distances between the current pixel P 190 and each of the virtual pixel A 191, the virtual pixel B 192, the pixel D 196, and the pixel E 197, or the simple average of A, B, D, and E, may be used as a predictor for the current pixel P 190. For example, if a weighted average value is used and the size of the block is 16×16, a predictor for the current pixel P may be obtained as (h1*A+h2*D+W1*B+W2*E+16)>>5. Such bilinear prediction is applied to all pixels in the current coding unit, and a prediction coding unit of the current coding unit in a bilinear prediction mode is generated.
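Equations (4) through (6) and the final weighted combination can be sketched as below for a square block whose side is a power of 2. The distance conventions (W1 = x+1, h1 = y+1), the identification of D and E with the up and left neighbors, and the function name are assumptions consistent with the shift forms given in the text.

```python
# Illustrative bilinear prediction per Equations (4)-(6). top[x] are the
# reconstructed pixels above the block (top[size-1] is the right-up pixel);
# left[y] are the pixels to the left (left[size-1] is the left-down pixel).

def bilinear_predict(top, left, size):
    shift = size.bit_length() - 1                       # log2(size)
    right_up, left_down = top[size - 1], left[size - 1]
    c = (left_down + right_up + 1) >> 1                 # Equation (4), shift form
    pred = [[0] * size for _ in range(size)]
    for y in range(size):
        h1, h2 = y + 1, size - 1 - y                    # distances to top/bottom
        for x in range(size):
            w1, w2 = x + 1, size - 1 - x                # distances to left/right
            # Equation (5): virtual pixel A on the bottom border (w1+w2 = size).
            a = (c * w1 + left_down * w2 + (size >> 1)) >> shift
            # Equation (6): virtual pixel B on the right border (h1+h2 = size).
            b = (c * h1 + right_up * h2 + (size >> 1)) >> shift
            # Final weighted combination; for size 16 this is the text's
            # (h1*A + h2*D + W1*B + W2*E + 16) >> 5 with D = top[x], E = left[y].
            pred[y][x] = (h1 * a + h2 * top[x] + w1 * b
                          + w2 * left[y] + size) >> (shift + 1)
    return pred
```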
According to an exemplary embodiment, prediction encoding is performed according to various intra prediction modes determined according to the size of a coding unit, thereby allowing efficient video compression based on characteristics of an image.
Meanwhile, as described with reference to
Referring to
Accordingly, a value of any one of dx and dy representing a directivity of a prediction mode for determining neighboring pixels may be determined to be a power of 2. That is, when n and m are integers, dx and dy may be 2^n and 2^m, respectively.
Referring to
Likewise, if the up neighboring pixel A is used as a predictor for the current pixel P and dy has a value of 2^m, i*dx/dy necessary to determine (j+i*dx/dy,0) that is a location of the up neighboring pixel A becomes (i*dx)/(2^m), and division using such a power of 2 is easily obtained through shift operation as (i*dx)>>m.
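The shift trick above is easy to verify; the helper below is purely illustrative. Note that for non-negative products a right shift equals integer division by the power of 2; in Python both `>>` and `//` floor for negative values as well, whereas C's integer division truncates toward zero, which a C implementation would need to account for.

```python
# Illustrative check that (i*dx)/(2^m) can be computed as (i*dx) >> m,
# matching the up-neighbor location (j + i*dx/dy, 0) when dy = 2^m.

def div_pow2(value, m):
    return value >> m   # floor division by 2**m

checks = [(i * dx, 5) for i in range(8) for dx in (1, 2, 5, 11)]
ok = all(div_pow2(v, m) == v // (1 << m) for v, m in checks)
```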
As a neighboring pixel necessary for prediction according to a location of a current pixel, any one of an up neighboring pixel and a left neighboring pixel is selected.
Referring to
If only a dy component of a y-axis direction from among (dx, dy) representing a prediction direction has a power of 2 like 2^m, while the up pixel A in
In general, there are many cases where linear patterns shown in an image or a video signal are vertical or horizontal. Accordingly, when intra prediction modes having various directivities are defined by using parameters dx and dy, image coding efficiency may be improved by defining values dx and dy as follows.
In detail, if dy has a fixed value of 2^m, the absolute value of dx may be set so that the spacing between prediction directions close to the vertical direction is narrow and the spacing between prediction directions closer to the horizontal direction is wider. For example, referring to
Likewise, if dx has a fixed value of 2^n, the absolute value of dy may be set so that the spacing between prediction directions close to the horizontal direction is narrow and the spacing between prediction directions closer to the vertical direction is wider. For example, referring to
Also, when one of the values dx and dy is fixed, the other may be set to increase according to the prediction mode. For example, if dy is fixed, the interval between dx values may be set to increase by a predetermined amount. Also, the angles between the horizontal direction and the vertical direction may be divided in predetermined units, and such an increased amount may be set for each of the divided angles. For example, if dy is fixed, a value of dx may be set to have an increased amount of a in a section less than 15 degrees, an increased amount of b in a section between 15 degrees and 30 degrees, and an increased amount of c in a section greater than 30 degrees. In this case, in order to have such a shape as shown in
For example, prediction modes described with reference to
TABLE 4

dx     dy     dx     dy     dx     dy
−32    32     21     32     32     13
−26    32     26     32     32     17
−21    32     32     32     32     21
−17    32     32    −26     32     26
−13    32     32    −21     32     32
−9     32     32    −17
−5     32     32    −13
−2     32     32     −9
0      32     32     −5
2      32     32     −2
5      32     32      0
9      32     32      2
13     32     32      5
17     32     32      9
TABLE 5

dx     dy     dx     dy     dx     dy
−32    32     19     32     32     10
−25    32     25     32     32     14
−19    32     32     32     32     19
−14    32     32    −25     32     25
−10    32     32    −19     32     32
−6     32     32    −14
−3     32     32    −10
−1     32     32     −6
0      32     32     −3
1      32     32     −1
3      32     32      0
6      32     32      1
10     32     32      3
14     32     32      6
TABLE 6

dx     dy     dx     dy     dx     dy
−32    32     23     32     32     15
−27    32     27     32     32     19
−23    32     32     32     32     23
−19    32     32    −27     32     27
−15    32     32    −23     32     32
−11    32     32    −19
−7     32     32    −15
−3     32     32    −11
0      32     32     −7
3      32     32     −3
7      32     32      0
11     32     32      3
15     32     32      7
19     32     32     11
As described above, a predicted coding unit produced using an intra prediction mode determined according to the size of a current coding unit by the predictor 1410 of the intra prediction apparatus 1400 of
The reason why post-processing is not performed when a current predicted coding unit has a portion located outside a boundary of the current picture is that post-processing uses the neighboring pixels of each pixel, and the pixels in such a predicted coding unit lack neighboring pixels. Even if post-processing were performed by producing neighboring pixels through padding or extrapolation, prediction efficiency would not be high because the produced neighboring pixels are pixels that did not originally exist.
A method of post-processing a predicted coding unit by the post-processor 1420 of
If the determiner 1415 of
Referring to
As illustrated in
In the current exemplary embodiment (first exemplary embodiment), neighboring pixels of the first pixel 2110 are not limited to those located above and to the left side of the first predicted coding unit, unlike as illustrated in
The post-processor 1420 produces a second predicted coding unit by changing values of all pixels included in the first predicted coding unit 2100 by using Equation (8). In Equation (8), three neighboring pixels are used, but another exemplary embodiment is not limited thereto and the post-processor 1420 may perform post-processing by using four or more neighboring pixels.
According to a second exemplary embodiment, the post-processor 1420 produces a second predicted coding unit by changing the value of each pixel of the first predicted coding unit 2100 by using a weighted harmonic average of the values of a pixel of the first predicted coding unit 2100, which is to be changed, and neighboring pixels of the pixel.
For example, the post-processor 1420 changes the value of a pixel at the ith column and the jth row of the first predicted coding unit 2100 from f[i][j] to f′[i][j] by using neighboring pixels located above and to the left side of the pixel, as shown in the following equation:
wherein α, β, and γ denote positive integers, and for example, α=2, β=2, and γ=1.
According to a third exemplary embodiment, the post-processor 1420 produces a second predicted coding unit by changing the value of each pixel of the first predicted coding unit 2100 by using a weighted geometric average of values of a pixel of the first predicted coding unit 2100, which is to be changed, and neighboring pixels of the pixel.
For example, the post-processor 1420 changes the value of a pixel at the ith column and the jth row of the first predicted coding unit 2100 from f[i][j] to f′[i][j] by using neighboring pixels located above and to the left side of the pixel, as shown in the following equation:
wherein α, β, and γ denote positive integers, and for example, α=1, β=1, and γ=2. In Equations (8) through (10), a relatively large weight is assigned to the value f[i][j] of the pixel that is to be changed.
As described above, in the first to third exemplary embodiments, the post-processor 1420 may perform post-processing by using not only neighboring pixels located above and to the left side of a pixel that is to be changed, but also a predetermined number of neighboring pixels selected from among the neighboring pixels 2111 to 2118 as illustrated in
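The bodies of Equations (8) through (10) do not survive in this text, so the sketch below reconstructs plausible weighted arithmetic, harmonic, and geometric means of a pixel with its upper and left neighbors. The exact placement of the weights is an assumption; the text only states that the largest weight falls on the pixel being changed.

```python
# Assumed reconstructions of the three post-processing averages; wp weights
# the pixel itself, wu and wl its upper and left neighbors. Illustrative only.

def weighted_arithmetic(p, up, left, wp=2, wu=1, wl=1):
    # First embodiment: weighted arithmetic average (Equation (8) style).
    return (wp * p + wu * up + wl * left) / (wp + wu + wl)

def weighted_harmonic(p, up, left, wp=2, wu=1, wl=1):
    # Second embodiment: weighted harmonic average (Equation (9) style).
    return (wp + wu + wl) / (wp / p + wu / up + wl / left)

def weighted_geometric(p, up, left, wp=2, wu=1, wl=1):
    # Third embodiment: weighted geometric average (Equation (10) style).
    return (p ** wp * up ** wu * left ** wl) ** (1.0 / (wp + wu + wl))
```

All three collapse to the input value on flat content, so the filters only alter pixels whose neighbors differ from them.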
According to a fourth exemplary embodiment, the post-processor 1420 produces a second predicted coding unit by changing the value of each pixel in the first predicted coding unit by using an average of the values of a pixel in the first predicted coding unit, which is to be changed, and one selected from among neighboring pixels of the pixel.
For example, the post-processor 1420 changes the value of a pixel at the ith column and the jth row of the first predicted coding unit 2100 from f[i][j] to f′[i][j] by using neighboring pixels located above the pixel, as shown in the following equation:
f′[i][j]=(f[i−1][j]+f[i][j−1]+1)>>1 (11)
Similarly, according to a fifth exemplary embodiment, the post-processor 1420 produces a second predicted coding unit by changing the value of each pixel in the first predicted coding unit by using an average of the values of a pixel in the first predicted coding unit, which is to be changed, and neighboring pixels located to the left side of the pixel.
In other words, the post-processor 1420 changes the value of a pixel at the ith column and the jth row of the first predicted coding unit 2100 from f[i][j] to f′[i][j], as shown in the following equation:
f′[i][j]=(f[i−1][j]+f[i][j]+1)>>1 (12)
According to a sixth exemplary embodiment, the post-processor 1420 produces a second predicted coding unit by changing the value of each pixel in the first predicted coding unit by using a median between the values of a pixel of the first predicted coding unit, which is to be changed, and neighboring pixels of the pixel. Referring back to
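Equations (11) and (12) are rounded two-sample averages computed with a shift, and the sixth embodiment replaces the average with a median. A minimal sketch follows; the choice of a three-sample median over the pixel and its upper and left neighbors is an assumption.

```python
# Illustrative helpers for the fourth to sixth embodiments.

def avg_shift(a, b):
    # Rounded average by shift, as in Equations (11) and (12).
    return (a + b + 1) >> 1

def median3(a, b, c):
    # Sixth embodiment: median of the pixel and two neighboring values.
    return sorted((a, b, c))[1]
```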
In seventh to ninth exemplary embodiments, the post-processor 1420 produces a second predicted coding unit by using previous coding units adjacent to a current coding unit, which have been encoded and restored, rather than by using neighboring pixels of a pixel that is to be changed.
Referring back to
f′[i][j]=(f[i][j]+f[−1][j]+1)>>1 (13),
wherein f[−1][j] denotes the value of the pixel 2121.
Similarly, in the eighth exemplary embodiment, the post-processor 1420 changes the value of the first pixel 2110 to f′[i][j] by calculating an average of the value of the first pixel 2110 at the ith column and the jth row of the first predicted coding unit 2100 and the value of the pixel 2122 that is located at the same row as the first pixel 2110 and included in a coding unit adjacent to the left side of the current coding unit, as shown in the following equation:
f′[i][j]=(f[i][j]+f[i][−1]+1)>>1 (14),
wherein f[i][−1] denotes the value of the pixel 2122.
In the ninth exemplary embodiment, the post-processor 1420 changes the value of the first pixel 2110 to f′[i][j] by calculating a weighted average of the values of the first pixel 2110 at the ith column and the jth row of the first predicted coding unit 2100, the pixel 2121 located at the same column as the first pixel 2110 and included in a coding unit adjacent to the top of the current coding unit, and the pixel 2122 located at the same row as the first pixel 2110 and included in a coding unit adjacent to the left side of the current coding unit, as shown in the following equation:
f′[i][j]=((f[i][j]<<1)+f[−1][j]+f[i][−1]+2)>>2 (15)
In a tenth exemplary embodiment, the post-processor 1420 changes the value of the first pixel 2110 of the first predicted coding unit 2100, which is to be changed, from f[i][j] to f′[i][j] by using one of the following equations:
f′[i][j]=min(f[i][j]+i,255) (16)
f′[i][j]=min(f[i][j]+j,255) (17)
f′[i][j]=max(f[i][j]−i,0) (18)
f′[i][j]=max(f[i][j]−j,0) (19)
In Equation (16), the pixel values of the first predicted coding unit 2100 are changed to gradually increase from top to bottom, in column units of the first predicted coding unit 2100. In Equation (17), the pixel values of the first predicted coding unit 2100 are changed to gradually increase in a right direction, in row units of the first predicted coding unit 2100. In Equation (18), the pixel values of the first predicted coding unit 2100 are changed to gradually decrease from top to bottom, in column units of the first predicted coding unit 2100. In Equation (19), the pixel values of the first predicted coding unit 2100 are changed to gradually decrease in the right direction, in row units of the first predicted coding unit 2100.
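Equations (16) through (19) add or subtract the row or column index and clamp the result to the 8-bit sample range; a one-line sketch (the function name is mine):

```python
def ramp(value, step, increase=True):
    # Equations (16)/(17) clamp upward at 255; (18)/(19) clamp downward at 0.
    return min(value + step, 255) if increase else max(value - step, 0)
```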
In an eleventh exemplary embodiment, if the value of the first pixel 2110, which is located at the ith column and the jth row of the first predicted coding unit 2100 and is to be changed, is f[i][j], the value of the pixel located at the upper leftmost point of the first predicted coding unit 2100 is f[0][0], the value of the pixel located at the uppermost point of the same column as the first pixel 2110 is f[0][j], the value of the pixel located at the leftmost point of the same row as the first pixel 2110 is f[i][0], and
G[i][j]=f[i][0]+f[0][j]−f[0][0],
then the post-processor 1420 changes the value of the first pixel 2110 to f′[i][j], as shown in the following equation:
f′[i][j]=(f[i][j]+G[i][j])/2 (20)
Equation (20) is based on a wave equation: the values of the pixels on the uppermost row and the leftmost column of the first predicted coding unit 2100 are set as boundary conditions to calculate the term G[i][j], which smooths the value of each pixel in the first predicted coding unit 2100, and the average of G[i][j] and f[i][j] is then taken as the changed pixel value.
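Equation (20) can be sketched directly: G[i][j] is formed from the leftmost pixel of the row, the uppermost pixel of the column, and the corner pixel, then averaged with the pixel itself. Floating-point division is used here for clarity, though a real codec would use integer rounding; the function name is illustrative.

```python
def wave_smooth(f):
    # f: 2-D list of first-prediction values; returns the smoothed block.
    h, w = len(f), len(f[0])
    out = [row[:] for row in f]
    for i in range(h):
        for j in range(w):
            g = f[i][0] + f[0][j] - f[0][0]   # boundary-based term G[i][j]
            out[i][j] = (f[i][j] + g) / 2     # Equation (20)
    return out
```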
Also, if the value of a first pixel at the xth column and the yth row of the first predicted coding unit, which is to be changed, is f[x][y] and the values of neighboring pixels located above, below, and to the left and right sides of the first pixel are f[x−1][y], f[x+1][y], f[x][y−1], and f[x][y+1], respectively, then the post-processor 1420 may change the value of the first pixel to f′[x][y] by using one of the following shift operations:
f′[x,y]=(f[x,y]+f[x−1,y]+f[x,y−1]+f[x,y+1]+2)>>2
f′[x,y]=(f[x,y]+f[x−1,y]+f[x,y−1]+f[x−1,y−1]+2)>>2
f′[x,y]=(2*f[x,y]+f[x+1,y]+f[x,y−1]+2)>>2
f′[x,y]=(2*f[x,y]+f[x−1,y]+f[x,y−1]+2)>>2
f′[x,y]=(f[x,y]+f[x+1,y]+f[x,y+1]+f[x,y−1]+2)>>2
f′[x,y]=(f[x,y]+f[x−1,y]+f[x,y+1]+f[x,y−1]+2)>>2
Also, the post-processor 1420 may produce a median by using the first pixel and neighboring pixels of the first pixel, and change the value of the first pixel to that median. For example, the value of the first pixel may be changed by setting a value t[x,y] using the equation t[x,y]=(2*f[x,y]+f[x−1,y]+f[x,y−1]+2)>>2 and then setting f′[x,y]=t[x,y]. Similarly, the median t[x,y] among the first pixel and the neighboring pixels may be calculated using the equation t[x,y]=median(f[x,y],f[x−1,y],f[x,y−1]) and determined as the changed value of the first pixel.
Also, the post-processor 1420 may change the value of the first pixel by using the following operation:
{
    t[x,y] = f[x,y]
    for (Int iter = 0; iter < iterMax; iter++)
    {
        laplacian[x,y] = (t[x,y]<<2) − t[x−1,y] − t[x+1,y] − t[x,y−1] − t[x,y+1]
        t[x,y] = (α*t[x,y] + laplacian[x,y]) / α
    }
    f[x,y] = t[x,y]
}
Here, iterMax may be 5, and α may be 16.
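A direct Python transliteration of the loop above for a single interior pixel follows. Neighbor values are read from the unmodified block, since the printed loop only iterates t[x,y] itself, and the update is reproduced exactly as printed, including the +laplacian/α term; treat it as a sketch of the pseudocode rather than a normative implementation.

```python
def iterative_update(f, x, y, iter_max=5, alpha=16):
    # f: 2-D list; (x, y) must be an interior position so that all four
    # neighbors exist. Only t (the pixel at (x, y)) changes per iteration.
    t = f[x][y]
    for _ in range(iter_max):
        laplacian = (t << 2) - f[x - 1][y] - f[x + 1][y] \
                    - f[x][y - 1] - f[x][y + 1]
        t = (alpha * t + laplacian) // alpha   # update as printed
    return t
```

On flat content the Laplacian is zero and the pixel is unchanged; a pixel that deviates from its neighbors is pushed further from them by this update, which follows from the +laplacian/α form as printed.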
Costs of bitstreams containing the results of encoding second predicted coding units produced using the various post-processing modes of the first through eleventh embodiments are compared to one another, and the post-processing mode having the minimum cost is added to a header of a bitstream. When the post-processing mode is added to the bitstream, the different post-processing modes may be differentiated from one another by using variable-length coding, in which a small number of bits is assigned to the post-processing mode that is most frequently used, based on a distribution of the post-processing modes determined after encoding of a predetermined number of coding units is completed. For example, if the post-processing mode according to the first exemplary embodiment is the optimum operation leading to the minimum cost for most coding units, a minimum number of bits is assigned to the index indicating this post-processing mode so that it may be differentiated from the other post-processing modes.
When a coding unit is split to sub coding units and prediction is performed in the sub coding units, a second predicted coding unit may be produced by applying different post-processing modes to the sub coding units, respectively, or by applying the same post-processing mode to sub coding units belonging to the same coding unit so as to simplify calculation and decrease an overhead rate.
A rate-distortion optimization method may be used to calculate the cost for determining an optimum post-processing mode. Since a video encoding method according to an exemplary embodiment is performed on an intra predicted coding unit that is used as reference data for another coding unit, the cost may be calculated by allocating a higher weight to distortion than in the usual rate-distortion optimization method. In the rate-distortion optimization method, a cost is calculated based on the distortion, which is the difference between an encoded image and the original image, and the generated bitrate, as shown in the following equation:
Cost=distortion+bit-rate (21)
In contrast, in a video encoding method according to an exemplary embodiment, an optimum post-processing mode is determined by allocating a high weight to a distortion, compared to the rate-distortion optimization method, as shown in the following equation:
Cost = α×distortion + bit-rate (α denotes a real number equal to or greater than 2) (22)
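Equations (21) and (22) can be compared directly in a short sketch. The function names and candidate values below are illustrative assumptions; the point is that weighting distortion by α ≥ 2 can change which post-processing mode is selected as optimal.

```python
def rd_cost(distortion, bit_rate):
    # Conventional rate-distortion cost, Equation (21).
    return distortion + bit_rate

def weighted_cost(distortion, bit_rate, alpha=2):
    # Equation (22): distortion weighted by alpha >= 2, since the
    # intra-predicted coding unit serves as reference data for other units.
    assert alpha >= 2
    return alpha * distortion + bit_rate

def best_post_processing_mode(candidates, alpha=2):
    """candidates: {mode_index: (distortion, bit_rate)} for each
    post-processing mode tried. Returns the minimum weighted-cost mode."""
    return min(candidates, key=lambda m: weighted_cost(*candidates[m], alpha))

# Hypothetical candidates: mode 1 has lower plain RD cost (15 vs 18),
# but mode 2 wins once distortion is weighted (24 vs 25).
candidates = {1: (10, 5), 2: (6, 12)}
best = best_post_processing_mode(candidates)
```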
In operation 2320, it is determined whether the current coding unit has a portion located outside a boundary of a current picture. A predetermined index MPI_PredMode may be produced according to the determination result, in such a manner that post-processing for producing a second predicted coding unit will not be performed when the predetermined index MPI_PredMode is 0 and will be performed when the predetermined index MPI_PredMode is 1.
If it is determined in operation 2320 that the current coding unit does not have a portion located outside a boundary of the current picture, then a second predicted coding unit is produced by changing a value of each pixel of the first predicted coding unit by using each pixel of the first predicted coding unit and at least one neighboring pixel, in operation 2330. As described above in the first through eleventh exemplary embodiments regarding an operation of the post-processor 1420, a second predicted coding unit may be produced by changing the value of each pixel in the first predicted coding unit by performing one of the various post-processing modes on a pixel of the first predicted coding unit, which is to be changed, and neighboring pixels thereof. Then, a residual block, which is the difference between the second predicted coding unit and the current coding unit, is transformed, quantized, and entropy encoded so as to generate a bitstream. Information regarding the post-processing mode used to produce the second predicted coding unit may be added to a predetermined region of the bitstream, so that a decoding apparatus may reproduce the second predicted coding unit of the current coding unit.
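One of the simplest post-processing operations of the kind described, replacing each pixel with an average of itself and its top and left neighbors, can be sketched as follows. This is a hedged illustration only: the raster-order update, the handling of the top row and left column (left unchanged here, where the patent's modes would use the block's neighboring reference pixels), and the plain integer average are assumptions, not the specific weighted formulas of the eleven embodiments.

```python
def post_process(first_pred):
    """Produce a second predicted block from a first predicted block
    (list of lists of ints) by replacing each interior pixel with the
    integer average of itself and its top and left neighbors.
    Pixels are updated in raster order, so already-changed neighbors
    feed into later pixels."""
    second = [row[:] for row in first_pred]
    for y in range(1, len(second)):
        for x in range(1, len(second[0])):
            second[y][x] = (second[y - 1][x] + second[y][x - 1] + second[y][x]) // 3
    return second

# A flat block is unchanged; an outlier pixel is pulled toward its neighbors.
smoothed = post_process([[10, 10], [10, 40]])
```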
If it is determined in operation 2320 that the current coding unit has a portion located outside a boundary of the current picture, then a second predicted coding unit is not produced, and the first predicted coding unit is directly output as prediction information regarding the current coding unit, in operation 2340. Then, a residual block, which is the difference between the first predicted coding unit and the current coding unit, is transformed, quantized, and entropy encoded so as to generate a bitstream.
In operation 2620, a first predicted decoding unit of the current decoding unit is produced according to the extracted information.
In operation 2630, it is determined whether the current decoding unit has a portion located outside a boundary of a current picture. A predetermined index MPI_PredMode may be produced according to the determination result, in such a manner that post-processing for producing a second predicted decoding unit will not be performed when the predetermined index MPI_PredMode is 0 and will be performed when the predetermined index MPI_PredMode is 1.
If it is determined in operation 2630 that the current decoding unit does not have a portion located outside a boundary of the current picture, a second predicted decoding unit is produced by changing a value of each pixel of the first predicted decoding unit by using each pixel of the first predicted decoding unit and neighboring pixels of each pixel, in operation 2640. As described above in the first through eleventh exemplary embodiments regarding an operation of the post-processor 1420, a second predicted coding unit may be produced by changing the value of each pixel of the first predicted coding unit by performing one of the various post-processing modes on a pixel of the first predicted coding unit, which is to be changed, and neighboring pixels thereof.
If it is determined in operation 2630 that the current decoding unit has a portion located outside a boundary of the current picture, post-processing for producing a second predicted decoding unit is not performed and the first predicted decoding unit is directly output as prediction information regarding the current decoding unit, in operation 2650. The first predicted decoding unit is combined with a residual block of the current decoding unit, which is extracted from the bitstream, so as to reproduce the current decoding unit.
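The decoder-side branch of operations 2630 through 2650 can be summarized in a short sketch. The function name and list-of-lists block representation are assumptions for illustration; only the control flow (skip post-processing when the block crosses the picture boundary, otherwise apply it, then add the residual) comes from the text.

```python
def reconstruct_block(first_pred, residual, crosses_boundary, post_process):
    """Decoder-side sketch: MPI_PredMode is 0 (no post-processing) when the
    decoding unit has a portion outside the picture boundary and 1 otherwise.
    The chosen prediction is combined with the residual extracted from the
    bitstream to reproduce the current decoding unit."""
    mpi_pred_mode = 0 if crosses_boundary else 1
    pred = post_process(first_pred) if mpi_pred_mode == 1 else first_pred
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

# With a block crossing the boundary, the first prediction is used directly.
out = reconstruct_block([[1, 2]], [[3, 4]], True, lambda b: b)
```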
An exemplary embodiment can also be embodied as computer readable code on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Exemplary embodiments can also be implemented as computer processors and hardware devices.
While exemplary embodiments have been particularly shown and described above, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the inventive concept is defined not by the detailed description of exemplary embodiments but by the following claims, and all differences within the scope will be construed as being included in the present inventive concept.
Alshina, Elena, Alshin, Alexander, Seregin, Vadim, Shlyakhov, Nikolay
Assignee: Samsung Electronics Co., Ltd. (assignment on the face of the patent, executed Oct 29 2015)