Devices, systems and methods for digital video processing, which includes intra mode coding based on history information, are described. In a representative aspect, a method for video processing includes selecting, for a conversion between a current block of visual media data and a bitstream representation of the current block, a first intra prediction mode based on at least a first set of history intra coding information that includes statistical information of a set of intra prediction modes, and performing the conversion based on the first intra prediction mode.
1. A method for video processing, comprising:
selecting, for a conversion between a current block of a video and a bitstream of the video, a first intra prediction mode based on at least a first set of history intra coding information that includes statistical information of a set of intra prediction modes of previously coded blocks, wherein the current block comprises a luma component and a chroma component, the first set of history intra coding information is used for the luma component,
wherein the statistical information comprises a number of occurrences of each of the set of intra prediction modes of the previously coded blocks over a time duration; or
the statistical information comprises a number of occurrences of a part of the set of intra prediction modes of the previously coded blocks over a time duration; and
performing the conversion based on the first intra prediction mode;
further selecting the first intra prediction mode based on at least intra prediction modes associated with adjacent neighbors of the current block and intra prediction modes associated with non-adjacent neighbors of the current block;
constructing a most probable mode (mpm) list based on the intra prediction modes associated with adjacent neighbors of the current block and the intra prediction modes associated with non-adjacent neighbors of the current block, wherein the conversion is performed further based on the mpm list;
wherein constructing the mpm list comprises:
adding, before pruning the mpm list, the intra prediction modes associated with adjacent neighbors of the current block and derived intra prediction modes to the mpm list; and
adding, after the pruning, the intra prediction modes associated with non-adjacent neighbors of the current block to the mpm list.
16. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises:
selecting a first intra prediction mode based on at least a first set of history intra coding information that includes statistical information of a set of intra prediction modes of previously coded blocks, wherein a current block of the video comprises a luma component and a chroma component, the first set of history intra coding information is used for the luma component,
wherein the statistical information comprises a number of occurrences of each of the set of intra prediction modes of the previously coded blocks over a time duration; or
the statistical information comprises a number of occurrences of a part of the set of intra prediction modes of the previously coded blocks over a time duration; and
generating the bitstream based on the first intra prediction mode,
further selecting the first intra prediction mode based on at least intra prediction modes associated with adjacent neighbors of the current block and intra prediction modes associated with non-adjacent neighbors of the current block;
constructing a most probable mode (mpm) list based on the intra prediction modes associated with adjacent neighbors of the current block and the intra prediction modes associated with non-adjacent neighbors of the current block, wherein the bitstream is generated further based on the mpm list;
wherein constructing the mpm list comprises:
adding, before pruning the mpm list, the intra prediction modes associated with adjacent neighbors of the current block and derived intra prediction modes to the mpm list; and
adding, after the pruning, the intra prediction modes associated with non-adjacent neighbors of the current block to the mpm list.
15. An apparatus for video processing, comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to:
select, for a conversion between a current block of a video and a bitstream of the video, a first intra prediction mode based on at least a first set of history intra coding information that includes statistical information of a set of intra prediction modes of previously coded blocks, wherein the current block comprises a luma component and a chroma component, the first set of history intra coding information is used for the luma component,
wherein the statistical information comprises a number of occurrences of each of the set of intra prediction modes of the previously coded blocks over a time duration; or
the statistical information comprises a number of occurrences of a part of the set of intra prediction modes of the previously coded blocks over a time duration; and
perform the conversion based on the first intra prediction mode,
further selecting the first intra prediction mode based on at least intra prediction modes associated with adjacent neighbors of the current block and intra prediction modes associated with non-adjacent neighbors of the current block;
constructing a most probable mode (mpm) list based on the intra prediction modes associated with adjacent neighbors of the current block and the intra prediction modes associated with non-adjacent neighbors of the current block, wherein the conversion is performed further based on the mpm list;
wherein constructing the mpm list comprises:
adding, before pruning the mpm list, the intra prediction modes associated with adjacent neighbors of the current block and derived intra prediction modes to the mpm list; and
adding, after the pruning, the intra prediction modes associated with non-adjacent neighbors of the current block to the mpm list.
2. The method of
3. The method of
4. The method of
updating the set of history intra coding information,
wherein a subsequent block of the current block is processed based on the updated history intra coding information.
5. The method of
accumulating the number of occurrences of the first intra prediction mode by k in the first set of history intra coding information, k being a positive integer.
6. The method of
7. The method of
dividing the number of occurrences by a first predefined value, or
subtracting a second predefined value from the number of occurrences.
8. The method of
constructing a most probable mode (mpm) list based on at least the set of intra prediction modes.
9. The method of
adding intra prediction modes associated with spatial neighbors of the current block to the mpm list;
adding a subset of the set of intra prediction modes to the mpm list;
pruning the mpm list; and
adding derived intra prediction modes to the mpm list.
10. The method of
reordering the mpm list based on the first set of history intra coding information, wherein the mpm list is reordered based on a descending order of the statistical information of the set of intra prediction modes.
11. The method of
12. The method of
13. The method of
14. The method of
reordering intra prediction modes that are not in the mpm list according to the intra prediction modes associated with the non-adjacent neighbors of the current block, wherein the intra prediction modes associated with the non-adjacent neighbors of the current block are associated with a higher priority for the reordering.
17. The apparatus of
19. The method of
20. The apparatus of
dividing the number of occurrences by 2, or
subtracting the minimum occurrence number in the first set of history intra coding information from the number of occurrences.
This application is a continuation of International Application No. PCT/IB2019/057906, filed on Sep. 19, 2019, which claims priority to and the benefit of International Patent Application No. PCT/CN2018/106518, filed on Sep. 19, 2018. All the aforementioned patent applications are hereby incorporated by reference in their entireties.
This patent document relates to video coding techniques, devices and systems.
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Devices, systems and methods related to digital video coding, and specifically, intra mode coding for images and video based on past (e.g., historical or statistical) information are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This example method includes selecting, for a conversion between a current block of visual media data and a bitstream representation of the current block, a first intra prediction mode based on at least a first set of history intra coding information that includes statistical information of a set of intra prediction modes, and performing the conversion based on the first intra prediction mode.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This example method includes, selecting, for a conversion between a current block of visual media data and a bitstream representation of the current block, a first intra prediction mode based on at least intra prediction modes associated with non-adjacent neighbors of the current block. The method also includes performing the conversion based on the first intra prediction mode.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This example method includes selecting, for a conversion between a current block of visual media data and a bitstream representation of the current block, an intra prediction mode based on at least one of spatial neighboring blocks to the current block, and performing the conversion based on the intra prediction mode. The at least one of the spatial neighboring blocks is different from a first block that is located to a left of a first row of the current block and a second block that is located above a first column of the current block.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This example method includes generating a final prediction block for a conversion between a current block of visual media data and a bitstream representation of the current block and performing the conversion based on the final prediction block. At least a portion of the final prediction block is generated based on a combination of a first prediction block and a second prediction block that are based on reconstructed samples from an image segment that comprises the current block.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This example method includes maintaining a first table of motion candidates during a conversion between a current block of visual media data and a bitstream representation of the current block and determining, based on at least the first table of motion candidates, motion information for the current block that is coded using an intra block copy (IBC) mode in which at least one motion vector is directed at an image segment that comprises the current block. The method also includes performing the conversion based on the motion information.
In another representative aspect, the disclosed technology may be used to provide a method for video coding. This example method for video coding includes selecting, for a bitstream representation of a current block of visual media data, a first intra prediction mode from a set of intra prediction modes based on a first set of past intra coding information, and processing, based on the first intra prediction mode, the bitstream representation to generate the current block.
In another representative aspect, the disclosed technology may be used to provide a method for video coding. This example method for video coding includes selecting, for a bitstream representation of a current block of visual media data, an intra prediction mode from at least one of adjacent or non-adjacent spatial neighboring blocks to the current block, and processing, based on the intra prediction mode, the bitstream representation to generate the current block.
In yet another representative aspect, the disclosed technology may be used to provide a method for video coding. This example method for video coding includes generating, for a bitstream representation of a current block of visual media data, a first prediction block and a second prediction block that are based on reconstructed samples from an image segment that comprises the current block, generating at least a portion of a final prediction block based on a linear function of the first and second prediction blocks, and processing, based on the final prediction block, the bitstream representation to generate the current block.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
Due to the increasing demand of higher resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video to a compressed format or vice versa. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
1 Examples of Intra Prediction in VVC
1.1 Intra Mode Coding with 67 Intra Prediction Modes
To capture the arbitrary edge directions presented in natural video, the number of directional intra modes is extended from 33, as used in HEVC, to 65. The additional directional modes are depicted as red dotted arrows in
Conventional angular intra prediction directions are defined from 45 degrees to −135 degrees in clockwise direction as shown in
In HEVC, every intra-coded block has a square shape and the length of each of its sides is a power of 2. Thus, no division operations are required to generate an intra predictor using the DC mode. In VTM2, blocks can have a rectangular shape, which necessitates the use of a division operation per block in the general case. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks.
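The shift-based DC averaging described above can be sketched as follows. The function name and sample layout are illustrative, not from the specification; the key point is that the divisor is always a power of two, so the division reduces to a right shift.

```python
def dc_predictor(top, left):
    """Compute the DC prediction value of a block from its reference samples.

    `top` is the row of reconstructed samples above the block and `left`
    the column to its left. For non-square blocks, only the longer side
    is averaged, keeping the divisor a power of two (illustrative sketch).
    """
    w, h = len(top), len(left)
    if w == h:                      # square block: average both sides
        total, n = sum(top) + sum(left), w + h
    elif w > h:                     # wider than tall: use the top row only
        total, n = sum(top), w
    else:                           # taller than wide: use the left column only
        total, n = sum(left), h
    shift = n.bit_length() - 1      # n is a power of two, so divide by shifting
    return (total + (n >> 1)) >> shift
```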
1.2 Examples of Intra Mode Coding
In some embodiments, and to keep the complexity of the MPM list generation low, an intra mode coding method with 3 Most Probable Modes (MPMs) is used. The following three aspects are considered when constructing the MPM list:
For the neighbor intra modes (A and B), two neighboring blocks, located to the left of and above the current block, are considered. The left and above blocks are those connected to the top-left sample of the current block, as shown in
If the two neighboring candidate modes are the same (i.e., A == B),
Otherwise,
An additional pruning process is used to remove duplicated modes so that only unique modes can be included into the MPM list. For entropy coding of the 64 non-MPM modes, a 6-bit Fixed Length Code (FLC) is used.
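A minimal sketch of a 3-MPM construction along these lines is given below. It assumes the HEVC-style rule adapted to the 67-mode scheme (non-angular duplicates yield Planar/DC/Vertical; angular duplicates yield the mode and its two closest angular neighbors); the exact conditions elided above may differ.

```python
PLANAR, DC, VER = 0, 1, 50  # mode indices in the 67-mode scheme

def build_mpm_list(a, b):
    """Build a 3-entry MPM list from the left (a) and above (b) neighbor
    modes. Illustrative sketch of one common construction, not the
    exact specification text."""
    if a == b:
        if a < 2:  # both neighbors non-angular
            return [PLANAR, DC, VER]
        # both angular: the mode plus its two closest angular neighbors,
        # wrapping within the angular range 2..66 (65 angular modes)
        return [a, 2 + ((a - 2 + 64) % 65), 2 + ((a - 2 + 1) % 65)]
    mpm = [a, b]
    # fill the third entry with the first default mode not already present
    if PLANAR not in mpm:
        mpm.append(PLANAR)
    elif DC not in mpm:
        mpm.append(DC)
    else:
        mpm.append(VER)
    return mpm
```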
1.3 Wide-Angle Intra Prediction for Non-Square Blocks
In some embodiments, conventional angular intra prediction directions are defined from 45 degrees to −135 degrees in clockwise direction. In VTM2, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks. The replaced modes are signaled using the original method and remapped to the indexes of wide angular modes after parsing. The total number of intra prediction modes for a certain block is unchanged, e.g., 67, and the intra mode coding is unchanged.
To support these prediction directions, the top reference with length 2W+1 and the left reference with length 2H+1 are defined, as shown in the examples in
In some embodiments, the mode numbers of the replaced modes in the wide-angular direction mode depend on the aspect ratio of the block. The replaced intra prediction modes are illustrated in Table 1.
TABLE 1
Intra prediction modes replaced by wide-angle modes

  Condition     Replaced intra prediction modes
  W/H == 2      Modes 2, 3, 4, 5, 6, 7
  W/H > 2       Modes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
  W/H == 1      None
  H/W == 1/2    Modes 61, 62, 63, 64, 65, 66
  H/W < 1/2     Modes 57, 58, 59, 60, 61, 62, 63, 64, 65, 66
As shown in
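The remapping conditions of Table 1 can be sketched as follows. The wide-angle index offsets (+65 for wide blocks, −67 for tall blocks, as used in VTM, where negative indices denote wide angles below the angular range) are assumptions for illustration.

```python
def remap_wide_angle(mode, w, h):
    """Remap a signaled conventional angular mode to a wide-angle mode
    for non-square blocks, following the conditions of Table 1
    (illustrative sketch using VTM-style index offsets)."""
    if w == h or mode in (0, 1):       # square block or non-angular mode
        return mode
    if w > h:                          # wide block: low angular modes are replaced
        n = 6 if w == 2 * h else 10    # Table 1: W/H == 2 vs. W/H > 2
        if 2 <= mode < 2 + n:
            return mode + 65           # remapped above the conventional range
    else:                              # tall block: high angular modes are replaced
        n = 6 if h == 2 * w else 10    # Table 1: H/W == 1/2 vs. H/W < 1/2
        if mode > 66 - n:
            return mode - 67           # remapped below the conventional range
    return mode
```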
1.4 Examples of Position Dependent Intra Prediction Combination (PDPC)
In the VTM2, the results of intra prediction of planar mode are further modified by a position dependent intra prediction combination (PDPC) method. PDPC is an intra prediction method which invokes a combination of the un-filtered boundary reference samples and HEVC style intra prediction with filtered boundary reference samples. PDPC is applied to the following intra modes without signaling: planar, DC, horizontal, vertical, bottom-left angular mode and its eight adjacent angular modes, and top-right angular mode and its eight adjacent angular modes.
The prediction sample pred(x,y) is predicted using an intra prediction mode (DC, planar, angular) and a linear combination of reference samples according to the Equation as follows:
pred(x,y)=(wL×R(−1,y)+wT×R(x,−1)−wTL×R(−1,−1)+(64−wL−wT+wTL)×pred(x,y)+32)>>shift
Herein, R(x,−1) and R(−1,y) represent the reference samples located at the top and left of the current sample (x, y), respectively, and R(−1,−1) represents the reference sample located at the top-left corner of the current block.
In some embodiments, and if PDPC is applied to DC, planar, horizontal, and vertical intra modes, additional boundary filters are not needed, as required in the case of HEVC DC mode boundary filter or horizontal/vertical mode edge filters.
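The per-sample combination above can be sketched for the planar case as follows. The weight-decay rule shown is the common planar/DC derivation (a sketch; angular modes take their weights from Table 2 instead), and the final shift of 6 matches the 64-weight normalization in the equation.

```python
def pdpc_sample(x, y, pred, r_left, r_top, r_tl, width, height):
    """Combine an intra-predicted sample `pred` at (x, y) with the
    unfiltered reference samples r_left = R(-1,y), r_top = R(x,-1) and
    r_tl = R(-1,-1), per the PDPC equation (planar-mode sketch)."""
    # scale controls how quickly the reference weights decay with the
    # distance from the block boundary; equals (log2(W)+log2(H)-2)>>2
    scale = (width.bit_length() + height.bit_length() - 4) >> 2
    wT = 32 >> min(31, (y << 1) >> scale)   # top reference weight
    wL = 32 >> min(31, (x << 1) >> scale)   # left reference weight
    wTL = 0                                  # planar mode: no top-left term
    return (wL * r_left + wT * r_top - wTL * r_tl
            + (64 - wL - wT + wTL) * pred + 32) >> 6
```

Near the top-left corner the reference samples dominate; far from the boundary the weights decay to zero and the original prediction passes through unchanged.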
In some embodiments, the PDPC weights are dependent on prediction modes and are shown in Table 2, where S=shift.
TABLE 2
Examples of PDPC weights according to prediction modes

  Prediction modes             wT                      wL                      wTL
  Diagonal top-right           16 >> ((y′<<1) >> S)    16 >> ((x′<<1) >> S)    0
  Diagonal bottom-left         16 >> ((y′<<1) >> S)    16 >> ((x′<<1) >> S)    0
  Adjacent diag. top-right     32 >> ((y′<<1) >> S)    0                       0
  Adjacent diag. bottom-left   0                       32 >> ((x′<<1) >> S)    0
1.5 Examples of Chroma Coding
In HEVC chroma coding, five modes (including one direct mode (DM) which is the intra prediction mode from the top-left corresponding luma block and four default modes) are allowed for a chroma block. The two color components share the same intra prediction mode.
In some embodiments, and different from the design in HEVC, two new methods have been proposed, including: cross-component linear model (CCLM) prediction mode and multiple DMs.
1.5.1 Examples of the Cross-Component Linear Model (CCLM)
In some embodiments, and to reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode (also referred to as LM), is used in the JEM, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
predC(i,j)=α·recL′(i,j)+β (1)
Here, predC(i,j) represents the predicted chroma samples in a CU and recL′(i,j) represents the downsampled reconstructed luma samples of the same CU for color formats 4:2:0 and 4:2:2, or the reconstructed luma samples of the same CU for color format 4:4:4. The CCLM parameters α and β are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block as follows:
α=(N·Σ(L(n)·C(n))−ΣL(n)·ΣC(n))/(N·Σ(L(n)·L(n))−ΣL(n)·ΣL(n)) (2)
β=(ΣC(n)−α·ΣL(n))/N (3)
Here, L(n) represents the down-sampled (for color formats 4:2:0 or 4:2:2) or original (for color format 4:4:4) top and left neighboring reconstructed luma samples, C(n) represents the top and left neighboring reconstructed chroma samples, and the value of N is equal to twice the minimum of the width and height of the current chroma coding block.
In some embodiments, and for a coding block with a square shape, the above two equations are applied directly. In other embodiments, and for a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary.
In some embodiments, this regression error minimization computation is performed as part of the decoding process, not just as an encoder search operation, so no syntax is used to convey the α and β values.
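The α/β derivation above amounts to ordinary least squares over the neighboring sample pairs. A floating-point sketch follows; real codecs use fixed-point arithmetic, and the flat-neighborhood fallback shown is an illustrative choice.

```python
def derive_cclm_params(L, C):
    """Derive the CCLM linear-model parameters (alpha, beta) by
    least-squares regression over the neighboring reconstructed luma
    samples L(n) and chroma samples C(n). Floating-point sketch."""
    N = len(L)
    sum_L, sum_C = sum(L), sum(C)
    sum_LC = sum(l * c for l, c in zip(L, C))
    sum_LL = sum(l * l for l in L)
    denom = N * sum_LL - sum_L * sum_L
    if denom == 0:                      # flat luma neighborhood: fall back
        return 0.0, sum_C / N           # to predicting the mean chroma value
    alpha = (N * sum_LC - sum_L * sum_C) / denom
    beta = (sum_C - alpha * sum_L) / N
    return alpha, beta
```

Because both the encoder and the decoder run this computation on already reconstructed neighbors, no syntax is needed to transmit α and β.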
In some embodiments, the CCLM prediction mode also includes prediction between the two chroma components, e.g., the Cr (red-difference) component is predicted from the Cb (blue-difference) component. Instead of using the reconstructed sample signal, the CCLM Cb-to-Cr prediction is applied in residual domain. This is implemented by adding a weighted reconstructed Cb residual to the original Cr intra prediction to form the final Cr prediction:
predCr*(i,j)=predCr(i,j)+α·resiCb′(i,j) (4)
Here, resiCb′(i,j) represents the reconstructed Cb residue sample at position (i,j).
In some embodiments, the scaling factor α may be derived in a similar way as in the CCLM luma-to-chroma prediction. The only difference is the addition of a regression cost relative to a default α value in the error function, so that the derived scaling factor is biased towards the default value of −0.5 as follows:
α=(N·Σ(Cb(n)·Cr(n))−ΣCb(n)·ΣCr(n)+λ·(−0.5))/(N·Σ(Cb(n)·Cb(n))−ΣCb(n)·ΣCb(n)+λ) (5)
Here, Cb(n) represents the neighboring reconstructed Cb samples, Cr(n) represents the neighboring reconstructed Cr samples, and λ is equal to Σ(Cb(n)·Cb(n))>>9.
In some embodiments, the CCLM luma-to-chroma prediction mode is added as one additional chroma intra prediction mode. At the encoder side, one more RD cost check for the chroma components is added for selecting the chroma intra prediction mode. When intra prediction modes other than the CCLM luma-to-chroma prediction mode are used for the chroma components of a CU, CCLM Cb-to-Cr prediction is used for Cr component prediction.
1.5.2 Examples of a Multiple Model CCLM
In the JEM, there are two CCLM modes: the single model CCLM mode and the multiple model CCLM mode (MMLM). As indicated by the name, the single model CCLM mode employs one linear model for predicting the chroma samples from the luma samples for the whole CU, while in MMLM, there can be two models.
In MMLM, neighboring luma samples and neighboring chroma samples of the current block are classified into two groups, each group is used as a training set to derive a linear model (i.e., a particular α and β are derived for a particular group). Furthermore, the samples of the current luma block are also classified based on the same rule for the classification of neighboring luma samples.
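The two-group classification can be sketched as follows, assuming the common choice of thresholding on the mean neighboring luma value (the exact classification rule is not spelled out above). Each returned group would then feed the single-model derivation to produce its own α and β.

```python
def mmlm_classify(neigh_luma, neigh_chroma):
    """Split neighboring (luma, chroma) sample pairs into two groups by
    thresholding on the mean neighboring luma value. Illustrative
    sketch of the MMLM classification step."""
    threshold = sum(neigh_luma) // len(neigh_luma)
    group1 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l <= threshold]
    group2 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l > threshold]
    # the same threshold classifies the current block's luma samples,
    # deciding which of the two linear models predicts each chroma sample
    return threshold, group1, group2
```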
1.5.3 Examples of Chroma Coding in VVC
With regard to existing implementations, CCLM as in JEM is adopted in VTM-2.0, whereas MM-CCLM in JEM has not been adopted in VTM-2.0.
In some embodiments, for chroma intra coding, a candidate list of chroma intra prediction modes may be derived first, in which three parts may be included:
(1) One direct mode (DM) which is set to the one intra luma prediction mode associated with luma CB covering the co-located top-left position of a chroma block. An example is shown in
(2) One cross-component linear model (CCLM) mode
(3) Four default modes (DC, Planar, Horizontal, and Vertical modes, as highlighted in
1.6 Examples of Syntax, Semantics and Decoding in VVC2
In some embodiments, the syntax and semantics of intra mode related information may be specified as follows:
TABLE 3
Example of a coding unit (CU) syntax
  coding_unit( x0, y0, cbWidth, cbHeight, treeType ) {                          Descriptor
    if( slice_type != I ) {
      pred_mode_flag                                                            ae(v)
    }
    if( CuPredMode[ x0 ][ y0 ] == MODE_INTRA ) {
      if( treeType == SINGLE_TREE || treeType == DUAL_TREE_LUMA ) {
        intra_luma_mpm_flag[ x0 ][ y0 ]                                         ae(v)
        if( intra_luma_mpm_flag[ x0 ][ y0 ] )
          intra_luma_mpm_idx[ x0 ][ y0 ]                                        ae(v)
        else
          intra_luma_mpm_remainder[ x0 ][ y0 ]                                  ae(v)
      }
      if( treeType == SINGLE_TREE || treeType == DUAL_TREE_CHROMA )
        intra_chroma_pred_mode[ x0 ][ y0 ]                                      ae(v)
    } else {
      [Ed. (BB): Inter prediction yet to be added, pending further specification development.]
    }
    if( CuPredMode[ x0 ][ y0 ] != MODE_INTRA )
      cu_cbf                                                                    ae(v)
    if( cu_cbf ) {
      transform_tree( x0, y0, cbWidth, cbHeight, treeType )
    }
  }
Examples of coding unit semantics. In some embodiments, the syntax elements intra_luma_mpm_flag[x0][y0], intra_luma_mpm_idx[x0][y0] and intra_luma_mpm_remainder[x0][y0] specify the intra prediction mode for luma samples. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered prediction block relative to the top-left luma sample of the picture. When intra_luma_mpm_flag[x0][y0] is equal to 1, the intra prediction mode is inferred from a neighbouring intra-predicted prediction unit according to the specification (e.g., clause 8.2.2).
In some embodiments, intra_chroma_pred_mode[x0][y0] specifies the intra prediction mode for chroma samples. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered prediction block relative to the top-left luma sample of the picture.
Examples of deriving the luma intra prediction mode. In some embodiments, an input to this process is a luma location (xPb, yPb) specifying the top-left sample of the current luma prediction block relative to the top-left luma sample of the current picture. In this process, the luma intra prediction mode IntraPredModeY[xPb][yPb] is derived. Table 4 specifies the value for the intra prediction mode and the associated names.
TABLE 4
Example specification of intra prediction modes and associated names
  Intra prediction mode    Associated name
  0                        INTRA_PLANAR
  1                        INTRA_DC
  2..66                    INTRA_ANGULAR2..INTRA_ANGULAR66
  77                       INTRA_CCLM
In the context of Table 4, the intra prediction mode INTRA_CCLM is only applicable to chroma components, and IntraPredModeY[xPb][yPb] labelled 0 . . . 66 represents directions of predictions as shown in
The variable IntraPredModeY[x][y] with x=xPb . . . xPb+cbWidth−1 and y=yPb . . . yPb+cbHeight−1 is set to be equal to IntraPredModeY[xPb][yPb].
Examples of deriving the chroma intra prediction mode. In some embodiments, an input to this process is a luma location (xPb, yPb) specifying the top-left sample of the current chroma prediction block relative to the top-left luma sample of the current picture. In this process, the chroma intra prediction mode IntraPredModeC[xPb][yPb] is derived.
In some embodiments, the chroma intra prediction mode IntraPredModeC[xPb][yPb] is derived using intra_chroma_pred_mode[xPb][yPb] and IntraPredModeY[xPb][yPb] as specified in Table 5 and Table 6.
TABLE 5
Example specification of IntraPredModeC[ xPb ][ yPb ]
for sps_cclm_enabled_flag = 0
  intra_chroma_pred_mode    IntraPredModeY[ xPb ][ yPb ]
  [ xPb ][ yPb ]            0     50    18    1     X ( 0 <= X <= 66 )
  0                         66    0     0     0     0
  1                         50    66    50    50    50
  2                         18    18    66    18    18
  3                         1     1     1     66    1
  4                         0     50    18    1     X
TABLE 6
Example specification of IntraPredModeC[ xPb ][ yPb ]
for sps_cclm_enabled_flag = 1
  intra_chroma_pred_mode    IntraPredModeY[ xPb ][ yPb ]
  [ xPb ][ yPb ]            0     50    18    1     X ( 0 <= X <= 66 )
  0                         66    0     0     0     0
  1                         50    66    50    50    50
  2                         18    18    66    18    18
  3                         1     1     1     66    1
  4                         77    77    77    77    77
  5                         0     50    18    1     X
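The logic of Tables 5 and 6 can be folded into a small lookup. The function below is an illustrative sketch of that mapping: default modes that collide with the co-located luma mode are replaced by angular mode 66, index 4 selects CCLM (77) when enabled, and the last index is the direct mode.

```python
CANDIDATES = [0, 50, 18, 1]  # default modes: Planar, Vertical, Horizontal, DC

def derive_chroma_mode(idx, luma_mode, cclm_enabled):
    """Derive IntraPredModeC from intra_chroma_pred_mode (idx) and the
    co-located luma intra mode, mirroring Tables 5 and 6 (sketch)."""
    if cclm_enabled and idx == 4:
        return 77                       # INTRA_CCLM
    dm_idx = 5 if cclm_enabled else 4   # last index is the direct mode
    if idx == dm_idx:
        return luma_mode                # direct mode (DM)
    mode = CANDIDATES[idx]
    # a default mode equal to the luma mode is replaced by angular mode 66
    return 66 if mode == luma_mode else mode
```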
1.7 Examples of Multiple Direct Modes (DMs)
An existing implementation (e.g., JVET-D0111) was recently adopted by JEM. Due to the separate block partitioning structure for luma and chroma components in I slices, one chroma CB may correspond to several luma CBs. Therefore, to further improve the coding efficiency, a Multiple Direct Modes (MDM) method for chroma intra coding is proposed. In MDM, the list of chroma mode candidates includes the following three parts:
In some embodiments, the pruning process is applied whenever a new chroma intra mode is added to the candidate list. In this contribution, the chroma intra mode candidate list size is always set to 10.
1.8 Examples of Intra Block Copy
In some embodiments, the HEVC screen content coding (SCC) extensions employ a coding tool, intra block copy (IBC), also named intra picture block compensation or current picture referencing (CPR), which is very efficient in terms of coding performance improvement. IBC is a block matching technique in which a current prediction block is predicted from a reference block located in the already reconstructed regions of the same picture.
In IBC, a displacement vector (referred to as a block vector or BV, or motion vector) is used to signal the relative displacement from the position of the current block to that of the reference block. Further, the previously reconstructed reference block within the same picture is added to the prediction errors to form the reconstructed current block. In this technique, the reference samples correspond to the reconstructed samples of the current picture prior to in-loop filter operations. In HEVC, the in-loop filters refer to both deblocking and sample adaptive offset (SAO) filters. An example of the intra block compensation is shown in
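The block-copy step can be sketched as follows. The array layout and names are illustrative, and the availability and range checks a conformant codec performs on the block vector are omitted.

```python
def ibc_predict(recon, x0, y0, bv_x, bv_y, w, h):
    """Form a w-by-h IBC prediction block by copying the reference block
    displaced by the block vector (bv_x, bv_y) from the already
    reconstructed area of the same picture. `recon` is a 2-D list of
    reconstructed samples prior to in-loop filtering (sketch)."""
    rx, ry = x0 + bv_x, y0 + bv_y
    assert rx >= 0 and ry >= 0, "reference block must lie in the reconstructed area"
    return [[recon[ry + j][rx + i] for i in range(w)] for j in range(h)]
```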
In some embodiments, the use of the IBC mode is signaled at both sequence and picture level. When the IBC mode is enabled at sequence parameter set (SPS), it can be enabled at picture level. When the IBC mode is enabled at picture level, the current reconstructed picture is treated as a reference picture. Therefore, no syntax change on block level is needed in HEVC SCC on top of the existing HEVC inter mode to signal the use of the IBC mode.
In some embodiments, the following is a list of features for the IBC mode in HEVC SCC:
In addition to the features enumerated above, some embodiments comprise additional features (or treatments) for IBC modes, including:
The current intra mode coding implementations in either JEM or VTM exhibit at least the following issues:
Embodiments of the presently disclosed technology overcome drawbacks of existing implementations, thereby providing video coding with higher coding efficiency but lower computational complexity. Intra mode coding based on past information, as described in the present document, may enhance both existing and future video coding standards, and is elucidated in the following examples described for various implementations. The examples of the disclosed technology provided below explain general concepts and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
Examples of Intra Mode Coding Based on Historical Information
Example 1. In one example, a table with statistical information of intra prediction modes (IPMs) may be maintained in the encoding/decoding process to help intra mode coding.
Example 2. Different tables may be used for different color components.
Example 3. The frequency or occurrence of one mode M in the table is incremented by K (e.g., K=1) after decoding a block with mode M.
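The history table of Examples 1 through 3 can be sketched as follows. This is an illustrative sketch only: the class name, the table structure (a mode-to-count mapping), and the `most_frequent` helper are assumptions, not part of the disclosed design.

```python
from collections import defaultdict

class IntraModeHistory:
    """Hypothetical sketch of a per-mode occurrence table (Examples 1-3)."""

    def __init__(self, increment=1):
        self.counts = defaultdict(int)  # mode index -> accumulated occurrences
        self.increment = increment      # the constant K from Example 3

    def update(self, mode):
        """Increment the frequency of mode M by K after decoding a block."""
        self.counts[mode] += self.increment

    def most_frequent(self, n):
        """Return the n modes with the highest accumulated occurrence."""
        return sorted(self.counts, key=self.counts.get, reverse=True)[:n]

hist = IntraModeHistory()
for m in [0, 1, 1, 50, 1, 0]:   # modes of previously decoded blocks
    hist.update(m)
print(hist.most_frequent(2))    # [1, 0]: mode 1 occurs most, then mode 0
```

Per Example 2, an encoder or decoder would keep one such table per color component rather than a single shared table.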
Example 4. Instead of utilizing IPMs of adjacent neighboring blocks for coding a block, intra prediction modes of non-adjacent neighboring blocks may be utilized.
Example 5. Coding of an IPM associated with the current block may depend on the table, on IPMs from adjacent spatial neighboring blocks, and/or on IPMs from non-adjacent spatial neighboring blocks.
Examples of Selection of Adjacent Neighboring Blocks
Example 6. Instead of using the above and left neighboring blocks relative to the top-left position of the current block for intra mode coding (such as using the intra prediction modes of the above and left neighboring blocks to construct an MPM list), different positions may be utilized.
Example 7. Instead of utilizing the fixed M relative neighboring blocks for all kinds of blocks, selection of M neighboring blocks may depend on block shape and/or block size.
Examples of Multiple Prediction Blocks for One Intra-Coded Block
Example 8. Multiple prediction blocks may be generated with reconstructed samples from the same tile/slice/picture containing current block.
Example 9. The above methods may be applied under certain conditions.
Example 10. When IBC is treated as inter mode (i.e., with at least one motion vector to derive the prediction block), History-based Motion Vector Prediction (HMVP) methods, which utilize previously coded motion information for motion information prediction, may be used. The HMVP methods allow the encoding/decoding process to be performed based on historical data (e.g., the blocks that have been processed). One or multiple tables with motion information from previously coded blocks are maintained during the encoding/decoding process. Each table can be associated with a counter to track the number of motion candidates stored in the table. During the encoding/decoding of one block, the associated motion information in the tables may be used (e.g., by being added to the motion candidate lists) depending on the coding characteristics of the block (e.g., whether the block is IBC coded). After encoding/decoding the block, the tables may be updated based on the coding characteristics of the block. That is, the updating of the tables is based on the encoding/decoding order. In some embodiments, the tables can be reset (e.g., removing the stored candidates and setting the associated counter to 0) before processing a new slice, a new largest coding unit (LCU) row, or a new tile.
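The HMVP table maintenance of Example 10 can be sketched as below. The table size, FIFO replacement policy, and duplicate-removal pruning shown here are illustrative assumptions consistent with the description, not a definitive implementation.

```python
class HmvpTable:
    """Sketch of a history-based motion vector prediction table (Example 10)."""

    def __init__(self, max_size=6):
        self.candidates = []        # motion info of previously coded blocks
        self.max_size = max_size    # assumed table capacity

    @property
    def counter(self):
        # counter tracking the number of stored motion candidates
        return len(self.candidates)

    def update(self, motion_info):
        """Update after encoding/decoding a block, with pruning."""
        if motion_info in self.candidates:
            self.candidates.remove(motion_info)  # avoid duplicated entries
        self.candidates.append(motion_info)
        if self.counter > self.max_size:
            self.candidates.pop(0)               # drop the oldest candidate

    def reset(self):
        """Reset before a new slice, LCU row, or tile."""
        self.candidates.clear()

table = HmvpTable(max_size=2)
for mv in [(1, 0), (0, 2), (1, 0), (3, 3)]:  # block vectors in coding order
    table.update(mv)
print(table.candidates)  # [(1, 0), (3, 3)]: most recent, de-duplicated
```

Separate tables of this kind could be kept for IBC-coded and inter-coded blocks, matching the later discussion of a first and second table of motion candidates.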
In some embodiments, before updating look-up tables as described in Example 10 by adding a motion candidate obtained from a coded block, pruning may be applied.
In some embodiments, the look-up tables as described in Example 10 may be used when a block is coded with merge or AMVP mode. That is, the current block can be coded using an IBC merge mode or an IBC AMVP mode.
In some embodiments, besides adjacent candidates, non-adjacent candidates (e.g., merge candidates or AMVP candidates) can be combined in the look-up tables.
In some embodiments, similar to the usage of look-up tables with motion candidates for motion vector prediction, it is proposed that one or multiple look-up tables may be constructed, and/or updated to store intra prediction modes from previously coded blocks and look-up tables may be used for coding/decoding an intra-coded block.
In some embodiments, intra prediction modes of non-adjacent blocks may be used as intra prediction mode predictors for coding an intra-coded block.
In some embodiments, look-up tables and non-adjacent based methods may be jointly utilized. In one example, the intra prediction modes in either look-up tables or non-adjacent blocks may be used in the MPM list construction process. Alternatively, the intra prediction modes in either look-up tables or non-adjacent blocks may be used to re-order non-MPM intra prediction modes.
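The joint MPM construction described above can be sketched as follows. The candidate ordering (adjacent neighbors first, then non-adjacent neighbors, then look-up-table modes) and the list size are illustrative assumptions; the patent leaves both open.

```python
def build_mpm_list(adjacent_modes, non_adjacent_modes, history_modes, size=6):
    """Sketch of joint MPM construction with pruning of duplicate modes."""
    mpm = []
    for mode in adjacent_modes + non_adjacent_modes + history_modes:
        if mode not in mpm:          # pruning: skip duplicated modes
            mpm.append(mode)
        if len(mpm) == size:
            break
    return mpm

# modes 0 and 1 from adjacent neighbors, 50 from a non-adjacent neighbor,
# 18 and 1 from the history table (values are illustrative)
print(build_mpm_list([0, 1], [50, 0], [18, 1], size=4))  # [0, 1, 50, 18]
```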
In some embodiments, a motion candidate list for an IBC-coded block can be constructed as follows:
The examples described above may be incorporated in the context of the methods described below, e.g., methods 1400, 1500 and 1600, which may be implemented at a video encoder and/or decoder.
In some embodiments, and in the context of Example 1, the first set of past intra coding information comprises statistical intra coding information. In one example, the statistical intra coding information comprises a number of occurrences of each of the set of intra prediction modes over a time duration, and the method 1400 further includes the steps of constructing a most probable mode (MPM) list based on neighboring blocks of the current block, and reordering the MPM list based on the number of occurrences.
The method 1400 includes, at step 1420, processing, based on the first intra prediction mode, the bitstream representation to generate the current block.
In some embodiments, and in the context of Example 2, the current block comprises a luma component and a chroma component, the first set of past intra coding information is used for intra-mode coding the luma component, and a second set of past intra coding information is used for intra-mode coding the chroma component. In an example, the luma component is based on a first coding tree, and the chroma component is based on a different second coding tree.
In some embodiments, and in the context of Example 3, the number of occurrences is incremented for an intra prediction mode of the set of intra prediction modes that corresponds to the first intra prediction mode. Then, the method 1400 further includes the step of determining that at least one of the number of occurrences is equal to a maximum number of occurrences. In one example, the method further includes right shifting each of the number of occurrences by a predefined number. In another example, the method further includes subtracting a minimum number of occurrences from each of the number of occurrences.
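Both count-reduction rules mentioned here (right-shifting every count, or subtracting the minimum count) can be sketched together. The limit of 255 and the shift amount of 2 are illustrative assumptions standing in for the "predefined" values.

```python
def normalize_counts(counts, max_count=255, shift=2, rule="shift"):
    """Sketch of reducing occurrence counts once any count reaches a limit.
    Either rule preserves the relative ordering of the modes."""
    if max(counts.values()) < max_count:
        return dict(counts)          # limit not reached: leave counts as-is
    if rule == "shift":
        # Option 1: right shift each count by a predefined number
        return {m: c >> shift for m, c in counts.items()}
    # Option 2: subtract the minimum count from each count
    floor = min(counts.values())
    return {m: c - floor for m, c in counts.items()}

counts = {0: 255, 1: 12, 50: 4}
print(normalize_counts(counts))                    # {0: 63, 1: 3, 50: 1}
print(normalize_counts(counts, rule="subtract"))   # {0: 251, 1: 8, 50: 0}
```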
In some embodiments, and in the context of Example 4, the method 1400 further includes the step of adding a first subset of the set of intra prediction modes to a most probable mode (MPM) list. In other embodiments, the method 1400 may further include increasing a size of an MPM list, and adding a first subset of the set of intra prediction modes to the MPM list, wherein the first subset of intra prediction modes correspond to blocks that are spatial neighbors of the current block. In these cases, the method 1400 may further include pruning the MPM list, and adding a second subset of the set of intra prediction modes to the MPM list, wherein an ordering of intra prediction modes in the second subset is maintained in the MPM list.
In some embodiments, and in the context of Example 6, the at least one of the adjacent spatial neighboring blocks is an above neighboring block relative to a top-left position of the current block, or an above neighboring block relative to a center position of a first row of the current block, or an above neighboring block relative to a top-right position of the current block, or a left neighboring block relative to a top-left position of the current block, or a left neighboring block relative to a center position of a first column of the current block, or a left neighboring block relative to a bottom-left position of the current block.
The method 1500 includes, at step 1520, processing, based on the intra prediction mode, the bitstream representation to generate the current block.
The method 1600 includes, at step 1620, generating at least a portion of a final prediction block based on a linear function of the first and second prediction blocks. In some embodiments, the final prediction block is an average of the first and second prediction blocks.
In some embodiments, and in the context of Example 8, one portion of the final prediction block is based on the linear function of the first and second prediction blocks, and the remaining portion is copied directly from the first or second prediction blocks. In some embodiments, the weights of the linear function are signaled in a sequence parameter set (SPS), a picture parameter set (PPS), a video parameter set (VPS), a slice header, a tile header, a group of coding tree units (CTUs), a coding unit (CU), a prediction unit (PU) or a transform unit (TU).
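The weighted linear combination of two prediction blocks can be sketched as below. Equal weights reduce to the simple average mentioned at step 1620; position- or mode-dependent weighting would vary `weight_a` per sample rather than keeping it constant, as assumed here for simplicity.

```python
def blend_predictions(pred_a, pred_b, weight_a=0.5):
    """Sketch of a weighted linear blend of two prediction blocks."""
    weight_b = 1.0 - weight_a
    return [[weight_a * a + weight_b * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(pred_a, pred_b)]

intra_pred = [[100, 104], [96, 92]]   # e.g. from an intra prediction mode
copy_pred = [[80, 84], [88, 100]]     # e.g. from intra block copy
print(blend_predictions(intra_pred, copy_pred))  # [[90.0, 94.0], [92.0, 96.0]]
```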
The method 1600 includes, at step 1630, processing, based on the final prediction block, the bitstream representation to generate the current block.
5 Example Implementations of the Disclosed Technology
In some embodiments, the video coding methods may be implemented using an apparatus deployed on a hardware platform as described with respect to
The system 1800 may include a coding component 1804 that may implement the various coding or encoding methods described in the present document. The coding component 1804 may reduce the average bitrate of video from the input 1802 to the output of the coding component 1804 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 1804 may be either stored, or transmitted via a communication connection, as represented by the component 1806. The stored or communicated bitstream (or coded) representation of the video received at the input 1802 may be used by the component 1808 for generating pixel values or displayable video that is sent to a display interface 1810. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder, and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.
In some embodiments, the conversion includes encoding the current block to generate the bitstream representation. In some embodiments, the conversion includes decoding the bitstream representation to generate the current block. In some embodiments, the statistical intra coding information comprises a number of occurrences of each of the set of intra prediction modes over a time duration. In some embodiments, the statistical intra coding information comprises a number of occurrences of a part of the set of intra prediction modes over a time duration.
In some embodiments, the method further includes, after performing the conversion, updating the set of history intra coding information. In some embodiments, the updating comprises accumulating the number of occurrences of the first intra prediction mode by k in the first set of history intra coding information, k being a positive integer. In some embodiments, a subsequent block of the current block is processed based on the updated history intra coding information.
In some embodiments, each intra prediction mode in the first set of history intra coding information is associated with a limit for the number of occurrences. In some embodiments, the method includes, after the number of occurrences reaches the limit, reducing the number of occurrences based on a predefined rule. In some embodiments, the predefined rule comprises dividing the number of occurrences by a first predefined value, or subtracting a second predefined value from the number of occurrences.
In some embodiments, the method includes constructing a most probable mode (MPM) list based on at least the set of intra prediction modes. In some embodiments, the method includes adding a subset of the set of intra prediction modes to the MPM list. In some embodiments, the method includes adding intra prediction modes associated with spatial neighbors of the current block to the MPM list, adding a subset of the set of intra prediction modes to the MPM list, and pruning the MPM list. In some embodiments, the method includes adding derived intra prediction modes to the MPM list. In some embodiments, the method includes increasing a size of the MPM list. In some embodiments, the method includes reordering the MPM list based on the set of history intra coding information. In some embodiments, the MPM list is reordered based on a descending order of the statistical information of the set of intra prediction modes.
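The MPM reordering by descending statistical information can be sketched as below. Treating modes absent from the history table as having count zero is an assumption made here to keep the sketch total.

```python
def reorder_mpm(mpm_list, counts):
    """Sketch of reordering an MPM list by descending occurrence counts
    from the history intra coding information."""
    return sorted(mpm_list, key=lambda m: counts.get(m, 0), reverse=True)

counts = {1: 9, 50: 4, 0: 2}          # mode -> occurrences (illustrative)
print(reorder_mpm([0, 50, 1, 18], counts))  # [1, 50, 0, 18]
```

Because frequently observed modes move to the front of the list, they receive shorter MPM indices and thus cheaper signaling.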
In some embodiments, the current block comprises a luma component and a chroma component. The first set of history intra coding information is used for intra-mode coding the luma component. In some embodiments, a second set of history intra coding information is used for intra-mode coding the chroma component. In some embodiments, the luma component is based on a first coding tree and the chroma component is based on a second coding tree different from the first coding tree.
In some embodiments, selecting the first intra prediction mode is further based on intra prediction modes of spatial neighbors of the current block or intra prediction modes of non-adjacent neighbors of the current block. In some embodiments, the history intra coding information is stored in a table.
In some embodiments, the conversion includes encoding the current block to generate the bitstream representation. In some embodiments, the conversion includes decoding the bitstream representation to generate the current block.
In some embodiments, the method further includes constructing, for the conversion between the current block and the bitstream representation of the current block, a most probable mode (MPM) list based on the intra prediction modes associated with the non-adjacent neighbors of the current block. The conversion is performed further based on the MPM list. In some embodiments, constructing the MPM list includes adding intra prediction modes associated with spatial neighbors of the current block to the MPM list and pruning the MPM list. In some embodiments, constructing the MPM list includes adding derived intra prediction modes to the MPM list. In some embodiments, the method includes increasing a size of the MPM list.
In some embodiments, the method includes reordering intra prediction modes that are not in the MPM list according to the intra prediction modes associated with the non-adjacent neighbors of the current block. In some embodiments, the intra prediction modes associated with the non-adjacent neighbors of the current block are associated with a higher priority for the reordering.
In some embodiments, the conversion comprises encoding the current block to generate the bitstream representation. In some embodiments, the conversion comprises decoding the bitstream representation to generate the current block.
In some embodiments, the at least one of the spatial neighboring blocks includes a block adjacent to the current block. In some embodiments, the at least one of the spatial neighboring blocks includes a block non-adjacent to the current block. In some embodiments, the at least one of the adjacent spatial neighboring blocks includes a block adjacent to a top-left position of the current block. In some embodiments, the at least one of the adjacent spatial neighboring blocks includes a block adjacent to a center position of a first row of the current block. In some embodiments, the at least one of the adjacent spatial neighboring blocks includes a block adjacent to a center position of a first column of the current block.
In some embodiments, the at least one of the spatial neighboring blocks is selected based on one or more dimensions of the current block. In some embodiments, the current block has a square shape, and the at least one of the adjacent spatial neighboring blocks includes a block adjacent to a top-left position of the current block. The selected intra prediction mode can be added to a most-probable-mode list of the current block.
In some embodiments, the current block has a non-square shape, and the at least one of the spatial neighboring blocks includes a block adjacent to a center position of a first row of the current block or a block adjacent to a center position of a first column of the current block.
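The shape-dependent neighbor selection of the preceding paragraphs can be sketched as follows. The `(side, offset)` position encoding and the exact center offsets are assumptions introduced for illustration.

```python
def select_neighbor_positions(width, height):
    """Sketch of shape-dependent neighbor selection: a square block checks
    neighbors at its top-left corner, while a non-square block checks
    neighbors at the centers of its first row and first column."""
    if width == height:
        return [("above", 0), ("left", 0)]   # relative to the top-left sample
    return [("above", width // 2), ("left", height // 2)]  # row/column centers

print(select_neighbor_positions(8, 8))    # [('above', 0), ('left', 0)]
print(select_neighbor_positions(16, 4))   # [('above', 8), ('left', 2)]
```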
In some embodiments, the first prediction block is generated according to an intra prediction mode. In some embodiments, the second prediction block is generated based on a motion vector pointing to the image segment. In some embodiments, the second prediction block is generated based on an intra block copy technology in which the reconstructed samples pointed by the motion vector are copied. In some embodiments, the second prediction block is generated based on an intra block copy technology in which motion compensation is applied to the reconstructed samples pointed by the motion vector before applying an in-loop filtering operation.
In some embodiments, the reconstructed samples are generated before an in-loop filtering operation is applied. In some embodiments, a remaining portion of the final prediction block is copied directly from the first or second prediction block. In some embodiments, the combination of the first and the second prediction blocks comprises a linear function of the first and second prediction blocks. In some embodiments, the linear function comprises an average of the first and second prediction blocks. In some embodiments, the linear function includes a weighted linear function of the first and the second prediction blocks. In some embodiments, different weights are applied for different positions relative to the first or the second prediction block. In some embodiments, different weights are applied for different intra prediction modes. In some embodiments, weights of the weighted linear function are signaled in a sequence parameter set (SPS), a picture parameter set (PPS), a video parameter set (VPS), a slice header, a tile header, a group of coding tree units (CTUs), a coding unit (CU), a prediction unit (PU) or a transform unit (TU). In some embodiments, weights of the weighted linear function are predefined.
In some embodiments, the image segment includes a picture, a slice or a tile. In some embodiments, the first prediction block and the second prediction block are generated for a selected color component of the current block. In some embodiments, the selected color component includes a luma color component. In some embodiments, the first prediction block and the second prediction block are generated upon determining that a size or a shape of the current block satisfies a predefined criterion.
In some embodiments, the method includes selectively updating, based on a rule, the first table of motion candidates using the motion information for the conversion. In some embodiments, the rule specifies adding the motion information to the first table of motion candidates. In some embodiments, the rule specifies excluding the motion information from the first table of motion candidates.
In some embodiments, the method includes updating, based on the motion information for the current block, the first table of motion candidates after the conversion. In some embodiments, the method includes performing a second conversion between a second block of visual media data and the bitstream representation of the second block using the updated first table of motion candidates. The second block can be coded using the intra block copy mode. In some embodiments, the first table of motion candidates includes only motion information from blocks that are coded using the intra block copy mode. In some embodiments, the first table of motion candidates is only used for processing blocks that are coded using the intra block copy mode. In some embodiments, the first table of motion candidates includes motion information from blocks that are coded using the intra block copy mode and excludes motion information from blocks that are coded using other techniques. In some embodiments, the first table of motion candidates includes motion information from blocks that are coded using the intra block copy mode and motion information from blocks that are coded using other techniques. In some embodiments, the second conversion is performed using motion candidates associated with IBC coded blocks in the updated first table of motion candidates.
In some embodiments, the method includes maintaining, for a third conversion between a third block of visual media data and the bitstream representation of the third block, a second table of motion candidates, performing the third conversion between the third block and the bitstream representation of the third block based on the second table of motion candidates. In some embodiments, the third conversion is performed without using the updated first table of motion candidates. In some embodiments, the second table includes motion information from blocks that are encoded using other techniques that are different than the IBC mode. In some embodiments, the third block is coded using a technique that is different than the IBC mode. In some embodiments, the technique includes an inter mode. In some embodiments, the method includes updating the second table of motion candidates using motion information for the third conversion. In some embodiments, the method includes performing a fourth conversion between an IBC coded block based on the updated first table and performing a fifth conversion between an inter coded block based on the updated second table.
In some embodiments, the method includes comparing a motion candidate associated with the current block with a number of entries in the first table. The first table is updated based on the comparing. In some embodiments, the number of entries corresponds to all entries in the first table. In some embodiments, the number of entries is m, m being an integer, and the m entries are the last m entries in the first table. In some embodiments, updating the first table comprises adding the motion candidate to the first table. In some embodiments, the first table includes no duplicated entries.
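The partial-pruning variant, which compares a new candidate against only the last m entries rather than the full table, can be sketched as below; the value m=2 is an illustrative assumption.

```python
def update_with_partial_pruning(table, candidate, m=2):
    """Sketch of pruning a new motion candidate against only the last m
    table entries before appending it."""
    if candidate not in table[-m:]:
        table.append(candidate)
    return table

table = [(0, 1), (2, 2), (3, 0)]
update_with_partial_pruning(table, (0, 1))  # (0, 1) not in last 2: appended
update_with_partial_pruning(table, (3, 0))  # duplicate within last 2: skipped
print(table)  # [(0, 1), (2, 2), (3, 0), (0, 1)]
```

Checking only the last m entries trades a small chance of duplicates for fewer comparisons per update.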
In some embodiments, the method includes determining that the current block is further coded using an IBC merge mode or an IBC Advanced Motion Vector Prediction (AMVP) mode. In some embodiments, the method includes determining a list of motion candidates for the current block by combining the first table of motion candidates and a set of adjacent or non-adjacent candidates of the current block and performing the conversion based on the list. In some embodiments, the combining comprises checking m candidates from the set of adjacent or non-adjacent candidates, and checking n motion candidates from the first table of motion candidates, wherein m and n are positive integers. In some embodiments, the method includes adding at least one of the m candidates from the set of adjacent or non-adjacent candidates to the list of motion candidates upon determining that the list is not full. In some embodiments, the method includes adding at least one of the n motion candidates from the first table of motion candidates to the list of motion candidates upon determining that the list is not full.
In some embodiments, the combining comprises checking candidates from the set of adjacent or non-adjacent candidates and the first table of motion candidates in an interleaved manner. In some embodiments, the method includes checking a candidate from the set of adjacent or non-adjacent candidates prior to checking a motion candidate from the first table. In some embodiments, the checking comprises checking a candidate from the set of adjacent or non-adjacent candidates and replacing, based on a coding characteristic of an adjacent or non-adjacent block associated with the adjacent or non-adjacent candidate, the candidate with a motion candidate from the first table of motion candidates. In some embodiments, the coding characteristic of the adjacent or non-adjacent block indicates that the adjacent or non-adjacent block is located outside a predefined range. In some embodiments, the coding characteristic of the adjacent or non-adjacent block indicates that the adjacent or non-adjacent block is intra coded. In some embodiments, adding a motion candidate further comprises adding the motion candidate to the list upon determining that the motion candidate from the first table of motion candidates is different from at least one candidate from the set of adjacent or non-adjacent candidates. In some embodiments, the set of adjacent or non-adjacent candidates has a higher priority than the first table of motion candidates.
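The two combining strategies described above (sequential with adjacent candidates first, or interleaved) can be sketched together. The list size and the one-for-one interleaving pattern are illustrative assumptions.

```python
def combine_candidates(adjacent, table, list_size=5, interleave=True):
    """Sketch of building an IBC candidate list from adjacent/non-adjacent
    candidates and the history table, with duplicate pruning."""
    merged = []

    def try_add(cand):
        if cand is not None and cand not in merged and len(merged) < list_size:
            merged.append(cand)

    if interleave:
        # check one adjacent candidate, then one table candidate, and so on
        for i in range(max(len(adjacent), len(table))):
            try_add(adjacent[i] if i < len(adjacent) else None)
            try_add(table[i] if i < len(table) else None)
    else:
        # adjacent candidates have higher priority than the table
        for cand in adjacent + table:
            try_add(cand)
    return merged

adj = [(1, 0), (0, 2)]            # candidates from neighboring blocks
tab = [(0, 2), (4, 4), (5, 1)]    # candidates from the history table
print(combine_candidates(adj, tab))  # [(1, 0), (0, 2), (4, 4), (5, 1)]
```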
In some embodiments, the list of motion candidates comprises multiple merge candidates used for IBC merge-coded blocks. In some embodiments, the list of motion candidates comprises multiple AMVP candidates used for IBC AMVP-coded blocks. In some embodiments, the list of motion candidates comprises intra prediction modes used for intra-coded blocks. In some embodiments, a size of the first table is pre-defined or signaled in the bitstream representation. In some embodiments, the first table is associated with a counter that indicates a number of available motion candidates in the first table.
In some embodiments, the method includes resetting the first table before processing a new slice, a new LCU row, or a new tile. In some embodiments, the counter is set to zero before processing a new slice, a new LCU row, or a new tile.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered example only. As used herein, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
Inventors: Zhang, Kai; Wang, Yue; Zhang, Li; Liu, Hongbin
Patent | Priority | Assignee | Title |
11765345, | Sep 19 2018 | BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD; BYTEDANCE INC. | Multiple prediction blocks for one intra-coded block |
11770557, | Jan 08 2019 | LG Electronics Inc. | Intra prediction-based video coding method and device using MPM list |
Patent | Priority | Assignee | Title |
10212435, | Oct 14 2013 | Qualcomm Incorporated | Device and method for scalable coding of video information |
10244242, | Jun 25 2014 | Qualcomm Incorporated | Multi-layer video coding |
10284874, | Jun 19 2014 | VID SCALE, Inc. | Methods and systems for intra block copy coding with block vector derivation |
10368091, | Mar 04 2014 | Microsoft Technology Licensing, LLC | Block flipping and skip mode in intra block copy prediction |
10368092, | Mar 04 2014 | Microsoft Technology Licensing, LLC | Encoder-side decisions for block flipping and skip mode in intra block copy prediction |
10412387, | Aug 22 2014 | Qualcomm Incorporated | Unified intra-block copy and inter-prediction |
10469863, | Jan 03 2014 | Microsoft Technology Licensing, LLC | Block vector prediction in video and image coding/decoding |
10582213, | Oct 14 2013 | Microsoft Technology Licensing, LLC | Features of intra block copy prediction mode for video and image coding and decoding |
10812817, | Sep 30 2014 | Microsoft Technology Licensing, LLC | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
9154796, | Nov 04 2011 | Qualcomm Incorporated | Intra-mode video coding |
9591325, | Jan 27 2015 | Microsoft Technology Licensing, LLC | Special case handling for merged chroma blocks in intra block copy prediction mode |
9699457, | Oct 11 2011 | Qualcomm Incorporated | Most probable transform for intra prediction coding |
9877043, | Jun 19 2014 | VID SCALE INC | Methods and systems for intra block copy coding with block vector derivation |
9883197, | Jan 09 2014 | Qualcomm Incorporated | Intra prediction of chroma blocks using the same vector |
9918105, | Oct 07 2014 | Qualcomm Incorporated | Intra BC and inter unification |
U.S. Patent Application Publications: 20080240245; 20110194609; 20130301944; 20140226912; 20150350682; 20160080751; 20160182905; 20160227214; 20160241858; 20160316201; 20160330471; 20170332084; 20180063553; 20180199061; 20180213261; 20190141319; 20190200038; 20190208201; 20190238842; 20190238864; 20190327466; 20200092544; 20200092579; 20200177910; 20200195960; 20200244956; 20200275124; 20200296417; 20200413069; 20210021811; 20210211655; 20210211709; 20210281838; 20210352321; 20220030226; 20220030228; 20220191512
Foreign Patent Documents: CN102860006; CN103959789; CN105917650; CN106131548; CN107079161; CN107454403; CN1674680; EP2770739; EP3253061; KR20170058837; WO2015052273; WO2018037896; WO2018116925
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Aug 15 2019 | ZHANG, LI | BYTEDANCE INC. | Assignment of assignors interest (see document for details) | 055643/0413
Aug 15 2019 | ZHANG, KAI | BYTEDANCE INC. | Assignment of assignors interest (see document for details) | 055643/0413
Aug 20 2019 | LIU, HONGBIN | BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD. | Assignment of assignors interest (see document for details) | 055643/0441
Aug 21 2019 | WANG, YUE | BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD. | Assignment of assignors interest (see document for details) | 055643/0441
Mar 18 2021 | BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD. | (assignment on the face of the patent) |
Mar 18 2021 | BYTEDANCE INC. | (assignment on the face of the patent) |
Date | Maintenance Fee Events |
Mar 18 2021 | BIG: Entity status set to Undiscounted (note the period is included in the code). |
Date | Maintenance Schedule |
Nov 15 2025 | 4 years fee payment window open |
May 15 2026 | 6 months grace period start (w surcharge) |
Nov 15 2026 | patent expiry (for year 4) |
Nov 15 2028 | 2 years to revive unintentionally abandoned end. (for year 4) |
Nov 15 2029 | 8 years fee payment window open |
May 15 2030 | 6 months grace period start (w surcharge) |
Nov 15 2030 | patent expiry (for year 8) |
Nov 15 2032 | 2 years to revive unintentionally abandoned end. (for year 8) |
Nov 15 2033 | 12 years fee payment window open |
May 15 2034 | 6 months grace period start (w surcharge) |
Nov 15 2034 | patent expiry (for year 12) |
Nov 15 2036 | 2 years to revive unintentionally abandoned end. (for year 12) |