New intra angular prediction modes and decoding methods are offered for providing greater accuracy when predicting blocks of digital video data. One new method predicts a current prediction sample by taking a linear interpolation of two previously reconstructed reference samples that lie along a common angular line. Another new method makes previously unavailable samples from a neighboring block available as reference samples when predicting a current prediction sample. Another new method introduces a combined intra prediction mode that utilizes a local mean to predict a current prediction sample. And a new decoding method rearranges the order in which video data blocks are predicted based on the intra prediction mode used for predicting the video data blocks.
1. A method for processing a video signal using intra prediction, the method comprising:
receiving the video signal, the video signal including a current prediction unit and intra prediction mode information corresponding to the current prediction unit;
obtaining the intra prediction mode information from the video signal;
when an angular line represented by the intra prediction mode information passes through a point between reference samples of a first neighboring block of the current prediction unit and a reference sample of a second neighboring block of the current prediction unit, the first neighboring block being different from the second neighboring block:
estimating a virtual reference sample corresponding to the point using a first linear interpolation between the reference samples of the first neighboring block; and
predicting a current sample of the current prediction unit using a second linear interpolation between the estimated virtual reference sample of the first neighboring block and the reference sample of the second neighboring block,
wherein the virtual reference sample of the first neighboring block and the reference sample of the second neighboring block are located on the angular line represented by the intra prediction mode information, and
wherein the first neighboring block including the virtual reference sample corresponds to a top neighboring block of the current prediction unit and the second neighboring block corresponds to a left neighboring block of the current prediction unit, or the first neighboring block including the virtual reference sample corresponds to a left neighboring block of the current prediction unit and the second neighboring block corresponds to a top neighboring block of the current prediction unit.
2. The method of
3. The method of
wherein the first weight is determined to be in inverse proportion to a first distance from the current sample to the reference sample of the second neighboring block and the second weight is determined to be in inverse proportion to a second distance from the current sample to the virtual reference sample of the first neighboring block.
4. The method of
when the angular line represented by the intra prediction mode information passes through a first point between reference samples of a first neighboring block of the current prediction unit and a second point between reference samples of a second neighboring block of the current prediction unit:
estimating a first virtual reference sample corresponding to the first point using a third linear interpolation of the reference samples of the first neighboring block, and estimating a second virtual reference sample corresponding to the second point using a fourth linear interpolation of the reference samples of the second neighboring block; and
predicting a current sample of the current prediction unit using a fifth linear interpolation of the first virtual reference sample of the first neighboring block and the second virtual reference sample of the second neighboring block,
wherein the first virtual reference sample and the second virtual reference sample are located on the angular line represented by the intra prediction mode information.
5. The method of
when the angular line represented by the intra prediction mode information passes through a reference sample of the first neighboring block of the current prediction unit and the reference sample of the second neighboring block of the current prediction unit, predicting the current sample of the current prediction unit using a sixth linear interpolation of the reference sample of the first neighboring block and the reference sample of the second neighboring block.
This application claims the benefit of U.S. Provisional Patent Application No. 61/345,583 filed on May 17, 2010; U.S. Provisional Patent Application No. 61/348,232 filed on May 25, 2010; U.S. Provisional Patent Application No. 61/348,243 filed on May 26, 2010; and U.S. Provisional Patent Application No. 61/349,197 filed on May 27, 2010, which are hereby incorporated by reference as if fully set forth herein.
1. Field of the Invention
The present invention relates to a method and apparatus for performing intra prediction type decoding on digital video data that has been encoded using an intra prediction type prediction mode. The present invention also relates to a method and apparatus for providing the proper signaling to a decoding unit for informing the decoding unit as to the proper intra prediction mode to apply.
2. Discussion of the Related Art
Generally, there are two prediction methods for accomplishing video data compression by eliminating the temporal and spatial redundancy found among video data. Eliminating temporal and spatial redundancy is an important requirement for increasing the compression ratio of the video data, which in turn decreases the overall size of the video data for later storage or transmission.
An inter prediction encoding method is able to predict a current video data block based on similar regions found on a previously encoded picture of video data that precedes a current picture that includes the current video data block. And an intra prediction encoding method is able to predict a current video data block based on previously encoded blocks that are adjacent to the current video data block and within a same picture. The inter prediction method is referred to as a temporal prediction method, and the intra prediction method is referred to as a spatial prediction method.
An encoding unit is able to take an original RGB video signal and encode it into digital video data that serves as a digital representation of the original RGB video signal. By processing both inter and intra predictions on the original RGB video signal, the encoding unit is able to create an accurate digital video representation of the original RGB video signal. Each block of digital video data that is prediction processed is referred to as a prediction unit. Depending on whether a prediction unit was processed according to an intra prediction mode or an inter prediction mode, the prediction unit may come in a variety of available block sizes. Once the encoding unit has encoded all of the original RGB video signal into corresponding prediction units of digital video data, the resulting digital video data may be transmitted to a decoding unit for decoding and reproduction of the original RGB video signal. In order for the receiving decoding unit to produce an accurate reproduction of the original RGB video signal, the decoding unit must perform the same prediction mode processing on each prediction unit as was used at the encoding unit.
Pertaining specifically to the intra prediction method for prediction processing a prediction unit of digital video data, there exist various intra prediction modes known today for accomplishing the spatial prediction that defines the intra prediction method. Yet even with the various intra prediction modes currently available, there is always a need to update existing intra prediction modes and to offer new intra prediction modes in order to accomplish more accurate predictions.
When increasing the total number of intra prediction modes available for intra predicting a prediction unit, there is the often overlooked consequence of increasing the maximum binary codeword length that is needed to identify each of the available intra prediction modes. As noted above, when the encoding unit performs prediction processing on a prediction unit according to a specific intra prediction mode, the decoding unit must then perform prediction processing on the prediction unit according to the same specific intra prediction mode to ensure accurate reproduction of the original RGB video signal. The only way to notify a decoding unit as to which specific intra prediction mode was used to predict a particular prediction unit at the encoding unit is to attach intra prediction mode identifying information to each prediction unit. This way, the decoding unit can parse the intra prediction mode identifying information and determine the proper intra prediction mode to process on a particular prediction unit. However, the additional identifying information is an undesirable consequence of offering new intra prediction modes.
This being said, each piece of intra prediction mode identifying information will be a binary codeword comprised of '0's and '1's in terms of digital data. And as the number of new intra prediction modes that need to be uniquely identified increases, so too will the maximum length of the binary codewords that correspond to the intra prediction mode identifying information. As a simple example, it may only require codewords with a maximum length of 3 bits to uniquely identify four unique intra prediction modes: '01' can identify the first intra prediction mode, '10' can identify the second intra prediction mode, '100' can identify the third intra prediction mode, and so on. However, by adding just two new intra prediction modes, the maximum bit length for the codewords identifying each of the intra prediction modes may grow to 4 bits. To identify the new fifth intra prediction mode the codeword '1001' may be assigned, and to identify the new sixth intra prediction mode the codeword '1101' may be assigned. Therefore the real cost of increasing the total number of available intra prediction modes is the amount of additional digital information that must be transmitted to identify all of the new intra prediction modes. This in turn results in more and more information bits needing to be transmitted along with the actual video data bits, which decreases the efficiency of the overall video signal compression.
Therefore there also exists a need to conserve the total number of informational bits transmitted with the video data by reducing the maximum bit length of the codewords assigned to identify each new intra prediction mode.
Accordingly, it is an object of the present invention to offer new intra prediction modes that provide more accurate predictions, when compared to previous intra prediction modes, of prediction units that are processed by a decoding unit.
Another object of the present invention is to provide a method for signaling the new intra prediction modes so that the decoding unit may properly identify the new intra prediction modes when predicting a current prediction unit.
Another object of the present invention is to minimize the maximum binary codeword length that is required to be transmitted along with digital video data for signaling each of the available intra prediction modes.
Additional advantages, objects and features of the invention will be set forth in part in the description and figures which follow, and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
To achieve these objects and other advantages according to the present invention, as embodied and broadly described herein, a number of new angular prediction modes are offered. One of the new angular prediction modes according to the present invention is able to reference two reconstructed samples from neighboring blocks. The two reference samples that are referenced from the neighboring blocks are obtained along one of a plurality of available predetermined angles that pass through the current prediction unit and the two reference samples. Each of the two reference samples used to predict a current prediction sample are weighted according to a proximity to the current prediction sample. This is an improvement over the previous angular prediction modes that referenced only a single reconstructed sample from a neighboring block.
According to another aspect of the present invention, a new enhanced angular intra prediction mode is offered that allows referencing previously unavailable reference samples. Previously, reference samples were made to be unavailable for a variety of reasons, such as for belonging to a separate slice from a current prediction unit or for not being previously reconstructed. However, regardless of the reason such samples could not previously be referenced, the new enhanced angular intra prediction mode of the present invention aims to offer methods for allowing such previously unavailable samples to be referenced as reference samples when predicting samples of a current prediction unit. This is a more flexible and accurate approach over the previous angular intra prediction modes.
According to another aspect of the present invention, a new combined intra prediction mode is offered that combines a weighted local mean of three neighboring reference samples with a weighted angular prediction to process a prediction of a current prediction sample. The new combined intra prediction mode according to the present invention will first obtain a local mean from three reference samples that neighbor a current prediction sample and then obtain an angular prediction for the current prediction sample. The new combined intra prediction mode then processes a prediction of the current prediction sample by combining weighted values of these two components. This provides a more accurate prediction of the current sample than seen in the prior art.
According to another aspect of the present invention, a new method for ordering the sequence in which samples within a current prediction unit will be prediction processed is offered. According to this new method, the ordering of current prediction samples that will be predicted will depend on the specific direction of a current intra prediction mode identified for predicting the current prediction unit. This new method provides a more efficient method for performing the prediction processing on the current prediction unit over the generalized raster scanning prediction sequence known in the prior art.
According to another aspect of the present invention, a reduction in overall codeword bits that need to be transmitted is accomplished. The present invention is able to accomplish this reduction by reducing the number of overall informational bits that need to be transmitted from an encoding unit to a decoding unit. This is generally accomplished by making information transmitted later in time dependent on information transmitted prior in time when possible. A more detailed explanation is provided in the details and figures described within this disclosure.
It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the present invention as claimed.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Terminologies or words used in this specification and claims are not to be construed as limited to their general or dictionary meanings, but should be construed as the meanings and concepts matching the technical idea of the present invention, based on the principle that an inventor is able to appropriately define the concepts of the terminologies to describe the inventor's invention in an intended way. The embodiments disclosed in this disclosure and the configurations shown in the accompanying drawings are exemplary in nature and are not intended to be exhaustive; the preferred embodiments do not represent all possible technical variations of the present invention. Therefore, it is understood that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents at the time of filing this application.
It is noted that for the purposes of the detailed explanation that follows, all mention of a neighboring block is understood to be in reference to a block that neighbors a current prediction unit. A current prediction unit is understood to include the current prediction samples that are being prediction processed according to the new intra prediction modes of the present invention. Also, the terms intra angular prediction mode and intra directional prediction mode are considered to be one and the same. The intra angular and intra directional prediction modes include the horizontal and vertical prediction modes.
And
Previously reconstructed samples that neighbor the current prediction unit 301 are represented by the filled in gray dots. In particular,
In addition, according to the preferred embodiment of the present invention, weighted values for the reference samples A and B will be used when obtaining the linear interpolation to predict each of the current prediction samples. The weighted values will be taken such that the reconstructed sample that is proximately closer to a current prediction sample will be weighted greater than the reconstructed sample that is proximately further away from the current prediction sample. This relationship is graphically represented by
So looking back at
This weighting principle is able to provide more accurate predictions for each of the current prediction samples within the current prediction unit 301 because reference samples that lie proximately closer to a current prediction sample have a higher probability of sharing similar graphical characteristics than reference samples that lie proximately further away.
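As an illustration of this weighting principle, the following sketch interpolates two previously reconstructed reference samples that lie on a common angular line, weighting each in inverse proportion to its distance from the current prediction sample. The function name, sample values and integer rounding convention are assumptions made for illustration only, not the normative decoder operation.

    def interpolate_two_references(sample_a, sample_b, dist_a, dist_b):
        # The reference that lies closer to the current sample receives the larger weight.
        total = dist_a + dist_b
        return (dist_b * sample_a + dist_a * sample_b + total // 2) // total

    # Reference A lies 1 sample away and reference B lies 3 samples away, so A
    # contributes 3/4 of the prediction and B contributes 1/4.
    pred = interpolate_two_references(sample_a=100, sample_b=60, dist_a=1, dist_b=3)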
Another aspect of the present invention allows for the intra angular prediction to be processed as illustrated in
D=(B+C+1)>>1
The >> denotes a right shift by one bit, which essentially averages the values of reconstructed samples B and C by dividing their sum by 2. The plus 1 added to the sum of the sample values of reconstructed samples B and C accounts for rounding in the above calculation. In an alternative embodiment of the present invention the plus 1 may be removed when estimating the sample value for reference sample D.
After obtaining the reference sample D based on the reconstructed samples B and C, a linear interpolation of reference sample A and reference sample D may be used to process the intra angular prediction of current prediction samples a and b. Like the example given with reference to
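This two-step prediction may be sketched as follows. Only the formula D = (B + C + 1) >> 1 and the optional removal of the plus 1 come from the text above; the function names, sample values and distances are illustrative assumptions.

    def estimate_virtual_reference(b, c, rounding=True):
        # D = (B + C + 1) >> 1; the +1 may be removed in the alternative embodiment.
        return (b + c + (1 if rounding else 0)) >> 1

    def predict_from_a_and_d(a, d, dist_a, dist_d):
        # Distance-weighted linear interpolation of reference A and virtual reference D.
        total = dist_a + dist_d
        return (dist_d * a + dist_a * d + total // 2) // total

    d = estimate_virtual_reference(b=80, c=90)                   # (80 + 90 + 1) >> 1 = 85
    pred = predict_from_a_and_d(a=120, d=d, dist_a=2, dist_d=2)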
Although
According to another aspect of the current invention, the new intra angular prediction mode of the present invention may still be processed even when only a single neighboring block is available.
However it is an aspect of the present invention to utilize a padding function in order to compensate for unavailable neighboring blocks and samples. Because the linear interpolation according to the new intra angular prediction mode of the present invention requires two reference samples from two separate neighboring blocks, the padding function will be valuable in the cases where only one neighboring block is found to be available. A detailed description of the padding function will be given with reference to the example illustrated in
Although
It is a further aspect of the present invention to make neighboring blocks that belong to separate slices from the slice including the current prediction unit available when processing intra angular prediction modes. This aspect of the present invention is applicable to all intra prediction modes mentioned in this application. Therefore any mention of neighboring blocks within this disclosure may refer to a neighboring block that belongs to a separate slice from the slice including the current prediction unit.
It is also a further aspect of the present invention to not only make samples belonging to neighboring blocks to the immediate left and top of the current prediction unit available for referencing for intra prediction, but also samples belonging to neighboring blocks adjacent to the current prediction unit in all directions as illustrated in
In the scenario depicted by
It is noted that although
It is noted that although
As an alternative, instead of using the interpolation of reference samples A and B to pad-in the reference sample values for samples 0 to 8 in the neighboring top block, the sample value of either one of reference samples A and B may be used to directly pad samples 0 to 8 of the neighboring top block. While this alternative is not directly illustrated by
As another alternative, the reference sample located at index 0 may not actually be included as part of the neighboring top block. In such a scenario where the reference sample located at index 0 actually belongs to a neighboring top-left block, the padding of the unavailable neighboring top block will begin with the sample located at index 1 instead of index 0. This is true for the case where the unavailable samples from the neighboring top block are padded with a value obtained from the interpolation of reference samples A and B, or where the unavailable samples are padded simply by copying the sample value from either one of reference samples A or B.
As an alternative, instead of padding the unavailable samples of the neighboring left block with the interpolation of reference samples A and B, the sample value corresponding to either one of reference samples A or B may be used. So according to this alternative, the sample value of either reference sample A or B may simply be copied to pad-in the unavailable samples of the left neighboring block.
As another alternative, the reference sample located at index 0 may not actually be included as part of the neighboring left block. In such a scenario where the reference sample located at index 0 actually belongs to a neighboring top-left block, the padding of the unavailable neighboring left block will begin with the sample located at index 1 instead of index 0. This is true for the case where the unavailable samples from the neighboring left block are padded with a value obtained from the interpolation of reference samples A and B, or where the unavailable samples are padded simply by copying the sample value from either one of reference samples A or B.
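The padding alternatives described above may be sketched as follows; the list-based layout, the number of padded samples and the parameter names are assumptions made for illustration only.

    def pad_unavailable_samples(ref_a, ref_b, num_samples=9, mode="interpolate",
                                skip_index_zero=False):
        if mode == "interpolate":
            fill = (ref_a + ref_b + 1) >> 1      # pad with an interpolation of A and B
        elif mode == "copy_a":
            fill = ref_a                         # pad by copying reference sample A
        else:
            fill = ref_b                         # pad by copying reference sample B
        # Index 0 may belong to a neighboring top-left block, in which case padding
        # of the unavailable block begins at index 1 instead of index 0.
        start = 1 if skip_index_zero else 0
        return [None] * start + [fill] * (num_samples - start)

    padded_top = pad_unavailable_samples(ref_a=70, ref_b=90)   # nine samples padded with 80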
Another aspect of the present invention introduces a solution for dealing with redundancies among intra prediction modes that cause the same prediction results to be made due to the padding function applied according to the present invention. For example, when the reconstruction value PA is horizontally padded all along the neighboring samples that belong to the top block and top-right block in relation to the current prediction unit 1701 as illustrated in
Therefore when the padding function results in a plurality of intra prediction modes that will all result in the same prediction of a current prediction sample, the present invention will be able to recognize that it is only necessary to identify one of the intra prediction modes from among the plurality of redundant intra prediction modes. So in the scenario depicted in
The benefit of only making a single intra prediction mode available in such a scenario, where there are redundant intra prediction modes that all result in the same prediction value for any one current prediction sample, becomes apparent when considering the information that must be transmitted from the encoding unit side. The encoding unit is responsible for first taking original RGB video data and encoding it into prediction units for video data compression. Each prediction unit has a specific intra prediction mode applied to it during the encoding process. Then, in order to ensure a receiving decoding unit re-applies the same intra prediction mode prediction process to each received prediction unit, the encoding unit additionally assigns identifying information to each prediction unit that identifies which intra prediction mode should be applied to each prediction unit of digital video data by the decoding unit. Each prediction unit received by the decoding unit is decoded by re-applying the proper intra prediction mode processing as identified from the received identifying information. Now, depending on the number of available intra prediction modes that may be applied to a given prediction unit, the length of the binary codeword identifying each intra prediction mode will vary. For example, if there are five intra prediction modes available to prediction process a particular prediction unit, the maximum binary codeword length for identifying each of the five intra prediction modes may be 3 bits (e.g., 01, 10, 110, 101, 011). By removing just one of the available five intra prediction modes so that there are now four available intra prediction modes that need to be identified, the maximum binary codeword length can be shortened to 2 bits (e.g., 0, 1, 01, 10).
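The signaling cost argument above can be illustrated with a small sketch; the particular codeword tables and the assumption of equally likely modes are illustrative assumptions rather than the codewords actually used by an encoder.

    import math

    def max_codeword_length(num_modes):
        # Minimum achievable maximum length of a prefix code over equally likely modes.
        return math.ceil(math.log2(num_modes))

    five_mode_table = ["00", "01", "10", "110", "111"]   # one possible table for five modes
    four_mode_table = ["00", "01", "10", "11"]           # one possible table for four modes
    assert max_codeword_length(5) == 3 and max_codeword_length(4) == 2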
Going back to the scenario depicted in
According to yet another aspect of the present invention, a new combined intra prediction (CIP) mode is offered. This new CIP mode offers a new method for predicting a current prediction unit by combining a weighted intra angular prediction with a weighted local mean prediction of previously reconstructed samples.
The first local mean 1801 is an average sample value of the three previously reconstructed samples as grouped in
The weighted value applied to the intra angular prediction 1803 becomes greater as the current prediction sample being predicted is proximately closer to the reference sample P, and becomes smaller as the current prediction sample that is being predicted is proximately further away from the reference sample P. The CIP mode prediction on each of the current prediction samples p1 through p4 seen in
p1=[w1*(intra angular prediction)]+[(1−w1)*(first local mean)]
p2=[w2*(intra angular prediction)]+[(1−w2)*(second local mean)]
p3=[w3*(intra angular prediction)]+[(1−w3)*(third local mean)]
p4=[w4*(intra angular prediction)]+[(1−w4)*(fourth local mean)]
And according to the present invention, the weighted values, w1-w4, may take on the following values:
       Example 1    Example 2
w1     1            4/5
w2     2/3          3/5
w3     1/3          2/5
w4     0            1/5
As can be determined from above, as the current prediction sample gets farther away from the reference sample P from which the intra angular prediction component is obtained, the weight of the local mean component becomes stronger for the CIP prediction of the current prediction sample. And as the current prediction sample gets farther away from the reference sample P from which the intra angular prediction component is obtained, the weight of the intra angular prediction component becomes weaker for the CIP prediction of the current prediction sample. This is due to the assumption that the intra angular prediction component provides a more accurate prediction of the current prediction sample as the current prediction sample is proximately closer to the reference sample P.
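A minimal sketch of the CIP combination described above follows, using the weights of Example 1 and a local mean taken over the left, top-left and top neighbors of each sample. The variable names and the numeric sample values are assumptions for illustration.

    def local_mean_of_three(left, top_left, top):
        # Local mean of three previously reconstructed samples neighboring the current sample.
        return (left + top_left + top) / 3.0

    def cip_predict(angular_prediction, local_mean, weight):
        # Weighted intra angular prediction combined with a weighted local mean.
        return weight * angular_prediction + (1.0 - weight) * local_mean

    example1_weights = [1.0, 2 / 3, 1 / 3, 0.0]       # w1..w4 of Example 1
    example2_weights = [4 / 5, 3 / 5, 2 / 5, 1 / 5]   # w1..w4 of Example 2

    angular = 100.0                            # angular prediction referenced from sample P
    local_means = [90.0, 88.0, 86.0, 84.0]     # assumed local means for p1..p4
    p1_to_p4 = [cip_predict(angular, m, w)
                for m, w in zip(local_means, example1_weights)]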
According to a first embodiment of the CIP prediction mode of the present invention, the first current prediction sample of the current prediction unit 1905 to be predicted will be the top left-most current prediction sample p1 as illustrated by
p1=[w1*(intra angular prediction)]+[(1−w1)*(local mean)]
The remaining current prediction samples within the current prediction unit 1905 that have not been reconstructed may be predicted according to the CIP prediction mode in a raster scan motion sequence.
As an alternative, instead of predicting p1 according to the CIP mode, p1 may first be predicted according to any other available intra prediction mode. Then after p1 has been predicted and reconstructed, p1 may be used as part of the local mean calculated for the first CIP prediction starting with current prediction sample p2.
According to a second embodiment of the CIP mode prediction of the present invention, the scenario seen in step 0 in
After performing the prediction processing on the four selected current prediction samples in step 1, the four samples are immediately reconstructed. These reconstructed samples within the current prediction unit 2001 are represented by the four filled in black dots in step 2. It is during step two that the first prediction according to the CIP mode prediction will be processed. Four samples within the current prediction unit 2001, as represented by the filled in gray dots in
According to this second embodiment of the CIP prediction mode, a weighted value from a local mean will still be combined with a weighted value from an intra angular prediction. However, according to this second embodiment of the CIP prediction mode the local mean may be comprised of the average sample values from at least three reconstructed samples adjacent to the current prediction sample. And the intra angular prediction may be referenced from at least one previously reconstructed sample. This entails that more than three reference samples may be referenced when calculating the local mean, and more than one intra directional prediction may be included as the intra directional prediction component of the CIP prediction.
When looking at the top left current prediction sample (TL) that is being predicted according to the CIP mode in
As another example, looking at the top-right sample (TR) among the four selected current prediction samples within the current prediction unit 2001 in step 2, the local mean may be calculated from the values of the previously reconstructed samples to the bottom-left, top-left, top and top-right of TR. Then the intra directional prediction is left to come from the reconstructed sample to the bottom-right of TR. In this example the values from four previously reconstructed samples are used to calculate the local means.
Step 3 then illustrates the four selected samples from within the current prediction unit 2001 that were selected for CIP prediction in step 2 being fully reconstructed, as represented by the filled in black dots in step 3. Now, with all of the reconstructed samples from the neighboring blocks and within the current prediction unit 2001 itself available to be referenced for CIP prediction, the remaining current prediction samples can be predicted according to the CIP mode. Therefore, as long as there are at least three reconstructed samples that are adjacent to a current prediction sample and at least one reconstructed sample from which to process an intra directional prediction on the current prediction sample, the current prediction sample can be processed according to this third embodiment of the CIP mode of the present invention.
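For this embodiment, the local mean may be formed from whichever adjacent samples have already been reconstructed, as sketched below under the assumption that unreconstructed neighbors are marked with None; the function name and sample values are illustrative assumptions.

    def local_mean_from_available(adjacent_samples):
        # adjacent_samples holds the values of the neighbors of the current prediction
        # sample; None marks a neighbor that has not yet been reconstructed.
        available = [s for s in adjacent_samples if s is not None]
        if len(available) < 3:
            return None       # at least three reconstructed neighbors are required
        return sum(available) / len(available)

    # Example for the top-right selected sample (TR): the bottom-left, top-left, top
    # and top-right neighbors are reconstructed, so four values enter the local mean.
    mean_tr = local_mean_from_available([92, 95, 97, 99])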
It is also within the scope of the present invention to utilize more than just the reconstructed samples to the immediate left, top-left and top of a current prediction sample when obtaining the local mean for the CIP prediction mode according to all embodiments of the present invention. It is within the scope of the present invention to make available all reconstructed samples that are adjacent to the current prediction sample when calculating the local mean for use in all the embodiments of the CIP mode of the present invention.
According to another aspect of the present invention, a new method for rearranging the order for predicting transform units (TUs) within a given prediction unit is offered. In addition, this current aspect of the present invention introduces a new method for decoding a prediction unit that calls for the immediate reconstruction of a TU after it has been predicted so that the samples within the reconstructed TU can be referenced for performing intra prediction of samples within other TUs in the same prediction unit. Because this current aspect of the present invention is only applicable when there is a plurality of TUs within a single prediction unit, the current aspect of the present invention is only concerned with the case where the TU size is less than the prediction unit size. Two such examples, which are not to be taken as being exhaustive, are illustrated in
Assuming that a prediction unit is to be intra predicted, a unique characteristic of the prediction unit is that the entire prediction unit will be predicted according to a same intra prediction mode. So when there is a plurality of smaller TUs within a current prediction unit, all of the TUs within the current prediction unit will be predicted according to the same intra prediction mode. In previous decoding methods, each TU within a prediction unit would be predicted according to a raster scan sequence order. An example of this previous raster scan order for predicting TUs can be seen in
For
The depiction on the right also has the neighboring reference samples that have been previously reconstructed, as seen by the filled in gray dots. According to the current aspect, TU 1 will be predicted first using only the reference samples from the neighboring blocks. After predicting the samples of TU 1, the next TU to be predicted is TU 2. For TU 2, reference samples from the top neighboring block are used to predict the top-left, top-right and bottom-right samples of TU 2. However the bottom-left sample in TU 2 is seen to be predicted by referencing the reference sample located at the top-right of TU 1. This is possible because TU 1 has already been predicted and reconstructed and therefore samples of TU 1 are now available to be referenced when predicting the remaining TUs. After the samples within TU 2 are predicted and reconstructed, TU 3 will begin to be predicted. For TU 3, reference samples from the left neighboring block are used to predict the bottom-right, bottom-left and top-left samples in TU 3. However the top-right sample in TU 3 is seen to be predicted by referencing the reference sample located at the bottom-left of TU 1 that was previously predicted and reconstructed. After the samples within TU 3 are predicted and reconstructed, TU 4 will begin to be predicted. TU 4 is unique because none of the reference samples used to predict TU 4 are referenced from the blocks that neighbor the current prediction unit 2202. All of the reference samples used to predict the samples within TU 4 are referenced from previously predicted and reconstructed TUs within the same prediction unit 2202. So the top-right sample in TU 4 is predicted from the bottom-left reference sample in TU 2, the top-left sample and bottom-right sample in TU 4 are predicted from the bottom-right reference sample in TU 1, and the bottom-left sample in TU 4 is predicted from the top-right reference sample in TU 3.
Although the order of prediction for the TUs in the current prediction unit 2202 may not have changed from what it would have been under the raster scan order, by immediately reconstructing each TU after its prediction processing there is still the realized benefit of more efficient and accurate predictions. This is because previously (as depicted on the left of
According to the current aspect, TU 1 will be predicted first using only the reference samples from the neighboring blocks to the left and bottom-left. After predicting and reconstructing the samples of TU 1, the next TU to be predicted is TU 2. For TU 2, reference samples from the bottom-left neighboring block are used to predict the bottom-left, bottom-right and top-right samples in TU 2. However the top-left sample in TU 2 is seen to be predicted by referencing the reference sample located at the bottom-right in TU 1. After predicting and reconstructing the samples of TU 2, the next TU to be predicted is TU 3 located at the top-left corner of the current prediction unit 2302. For TU 3, reference samples from the neighboring block to the left of the current prediction unit 2302 are used to predict the top-left, top-right and bottom-left samples of TU 3. However
According to the current aspect, TU 1 will be predicted first by referencing only the reference samples from the neighboring blocks to the top and top-right. After predicting the samples of TU 1, the next TU to be predicted is TU 2. For TU 2, reference samples from the top neighboring block are referenced to predict the bottom-left, top-right and top-left samples in TU 2. However the bottom-right sample in TU 2 is seen to be predicted by referencing the previously reconstructed top-left reference sample in TU 1. After predicting and reconstructing the samples of TU 2, the next TU to be predicted is TU 3 located at the bottom-right corner of the current prediction unit 2402. For TU 3, reference samples from the neighboring block to the top-right of the current prediction unit 2402 are referenced to predict the top-right, bottom-left and bottom-right samples in TU 3. However
According to the current aspect, both the top-left and top-right TUs of the current prediction unit 2502 are labeled as TU 1 and will be predicted first using only the reference samples from the neighboring block to the top as seen in
According to the current aspect, both the top-left and bottom-left TUs of the current prediction unit 2602 are labeled as TU 1 and will be predicted first using only the reference samples from the neighboring block to the left as seen in
The exemplary illustration of the intra DC prediction mode according to the current aspect of the present invention is made in
Referring to the depiction on the right side of
By rearranging the order of predicting transform units within a common current prediction unit and immediately reconstructing the transform units as they are predicted, the current aspect of the present invention makes reconstructed samples within a TU available to be used as reference samples when predicting remaining TUs that have not yet been prediction processed. By making reference samples available from a fellow transform unit within a common current prediction unit, the present invention also offers a new method of decoding that results in more accurate predictions of remaining samples in transform units that have not yet been predicted. The result of more accurate predictions is achieved by decreasing the distance between a reference sample in relation to the current prediction sample. Whereas the previous decoding method only made reconstructed samples from neighboring blocks available as reference samples, the present invention makes reconstructed samples from fellow transform units within a common current prediction unit available as reference samples when prediction processing another fellow transform unit.
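The decode loop implied by this aspect can be sketched at a toy scale as follows, with a simple vertical copy standing in for the actual angular prediction. The 8x8 prediction unit with four 4x4 TUs, the prediction order and all names are assumptions used only to show immediate reconstruction and cross-TU referencing, not the actual decoder.

    TU = 4   # each transform unit is 4x4 inside an 8x8 prediction unit

    def decode_prediction_unit(top_reference, residuals, tu_order):
        recon = [[None] * 8 for _ in range(8)]          # reconstructed prediction unit
        for (ty, tx) in tu_order:                       # TU origins, e.g. (0, 0) is top-left
            # Reference row directly above this TU: the top neighboring block for the
            # first row of TUs, otherwise the already reconstructed TU above it.
            ref_row = top_reference if ty == 0 else recon[ty - 1]
            for y in range(ty, ty + TU):
                for x in range(tx, tx + TU):
                    recon[y][x] = ref_row[x] + residuals[y][x]   # predict, then reconstruct
        return recon

    top_ref = [100] * 8
    residuals = [[1] * 8 for _ in range(8)]
    # A vertical-type mode predicts the top TUs before the bottom TUs, so the bottom
    # TUs can reference the freshly reconstructed bottom rows of the TUs above them.
    recon = decode_prediction_unit(top_ref, residuals, [(0, 0), (0, 4), (4, 0), (4, 4)])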
Referring to
The entropy decoding unit 2810 extracts a transform coefficient of each block of video data, a motion vector, a reference picture index and the like by performing entropy decoding on a video signal bitstream that is encoded by an encoding unit (not pictured). The inverse quantizing unit 2820 inverse-quantizes the entropy decoded transform coefficient, and the inverse transforming unit 2825 then restores an original sample value using the inverse-quantized transform coefficient. The deblocking filtering unit 2830 is applied to each coded block of video data to reduce block distortion. A picture that has passed through filtering is stored in the decoded picture storing unit 2840 to be outputted or used as a reference picture. The inter predicting unit 2850 predicts a current picture using the reference picture stored in the decoded picture storing unit 2840 and inter prediction information (e.g., reference picture index, motion vector, etc.) delivered from the entropy decoding unit 2810. In particular, motion vectors of blocks adjacent to a current block (i.e., neighboring blocks) are extracted from a video signal. A predicted motion vector of the current block may be obtained from the neighboring blocks. A neighboring block may include a block located at a left, top or top-right side of the current block. For instance, a predicted motion vector of a current block may be obtained using the median value of the horizontal and vertical components of the motion vectors of the neighboring blocks. Alternatively, in the case that a left block of a current block has at least one prediction block coded in an inter mode, a predicted motion vector of the current block may be obtained using a motion vector of a prediction block located at a top side of the current block. In the case that a top block of a current block has at least one prediction block coded in an inter mode, a predicted motion vector of the current block may be obtained using a motion vector of a prediction block located at the left-most side. In the case that blocks located at the top and right sides of a current block among the neighboring blocks are located outside a boundary of a picture or slice, a predicted motion vector of the current block may be set to a motion vector of a left block. If there exists one block having the same reference picture index as the current block among the neighboring blocks, a motion vector of that block may be used for motion prediction.
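The median motion vector predictor mentioned above may be sketched as follows; the tuple-based motion vector representation and the helper names are assumptions for illustration.

    def median_of_three(a, b, c):
        return sorted([a, b, c])[1]

    def predict_motion_vector(mv_left, mv_top, mv_top_right):
        # Component-wise median of the left, top and top-right neighbors' motion vectors.
        return (median_of_three(mv_left[0], mv_top[0], mv_top_right[0]),
                median_of_three(mv_left[1], mv_top[1], mv_top_right[1]))

    mv_pred = predict_motion_vector((2, -1), (4, 0), (3, 3))   # -> (3, 0)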
The intra predicting unit 2860 performs intra prediction by referencing previously reconstructed samples from within a current picture. The reconstructed sample within the current picture may include a sample to which deblocking filtering is not applied. An original picture is then reconstructed by adding the predicted current picture and a residual outputted from the inverse transforming unit 2825 together. For each prediction unit of video data, each current prediction sample of a current prediction unit will be processed according to the new intra planar mode prediction of the present invention by the intra prediction unit 2860. Then the predicted current prediction samples will be reconstructed by combining the predicted samples with a residual outputted from the inverse transforming unit 2825.
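The reconstruction step described above amounts to adding each predicted sample to the corresponding residual from the inverse transform; the sketch below additionally clips to an 8-bit sample range, which is an assumption not stated in the text.

    def reconstruct_samples(predicted, residual, max_value=255):
        # Reconstructed sample = predicted sample + residual, clipped to the valid range.
        return [[max(0, min(max_value, p + r)) for p, r in zip(pred_row, res_row)]
                for pred_row, res_row in zip(predicted, residual)]

    recon = reconstruct_samples([[100, 102], [101, 103]], [[-3, 4], [2, -1]])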
The prediction mode obtaining unit 2962 is tasked with parsing identifying information that is included in a video signal to determine the proper intra prediction mode to apply to each current prediction unit that is being predicted by the intra prediction unit 2960. So according to the present invention, the prediction mode obtaining unit 2962 will process signaling information from the identifying information included in a video signal and determine from the signaling information that the new intra planar mode for prediction should be applied to a current prediction unit.
And once the current prediction unit is properly predicted by the intra prediction unit 2960 according to the proper intra prediction mode identified by the prediction mode obtaining unit 2962, the predicted samples of the current prediction unit will be reconstructed by the reconstructing unit 2970. The reconstructing unit 2970 is able to reconstruct the predicted samples by combining them with residual values obtained from the inverse transforming unit 2925.
While the present invention has been described and illustrated herein with reference to the preferred embodiments thereof, it will be apparent to those skilled in the art that various modifications and variations can be made therein without departing from the spirit and scope of the invention. Thus, it is intended that the present invention covers the modifications and variations of this invention that come within the scope of the appended claims and their equivalents.
Park, Joonyoung, Park, Seungwook, Lim, Jaehyun, Kim, Jungsun, Choi, Younghee, Sung, Jaewon, Jeon, Byeongmoon, Jeon, Yongjoon