Video coding tree sub-block splitting. A parser and treeblock splitter is implemented for adaptive partitioning of treeblocks (TBs) into various respective sub-treeblocks (stbs). Such adaptation may be tailored to effectuate parallel processing in accordance with video decoding. Different respective decoding engines may each also perform further partitioning of the stbs into sub-stbs. Such adaptation of respective stb size and/or sub-stb size may be made based on local and/or remote consideration(s). For example, adaptation may be made based upon communication link and/or channel conditions, a remote characteristic (e.g., associated with a source device and/or destination device), a local characteristic (e.g., associated with operations and/or processing performed locally within a given device), and/or any other type of consideration. Different respective portions of a video signal may be processed using different respective sized stbs/sub-stbs (and different respective engines may also employ different respective sized stbs/sub-stbs).
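By way of illustration only (this sketch is not part of the claim language), the splitting-and-dispatch idea may be expressed as follows in Python; the engine functions, the 64×64 treeblock, and the fixed 32×32 stb size are hypothetical stand-ins:

    # Illustrative sketch only: adaptive treeblock (TB) splitting with two
    # parallel decoding "engines". Names and sizes are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    def split_treeblock(tb, stb_size=32):
        """Partition a square TB (2-D list of samples) into stb_size x stb_size
        sub-treeblocks (stbs), e.g., a 64x64 TB into four 32x32 stbs."""
        n = len(tb)
        stbs = []
        for r in range(0, n, stb_size):
            for c in range(0, n, stb_size):
                stbs.append([row[c:c + stb_size] for row in tb[r:r + stb_size]])
        return stbs

    def engine_a(stb):   # e.g., inverse scan/dequantization/inverse transform
        return ("A", len(stb))

    def engine_b(stb):   # e.g., de-blocking
        return ("B", len(stb))

    def decode_parallel(tb):
        stbs = split_treeblock(tb)
        # Each engine receives the stbs via its own pathway; both run in parallel.
        with ThreadPoolExecutor(max_workers=2) as pool:
            res_a = pool.map(engine_a, stbs)
            res_b = pool.map(engine_b, stbs)
            # Combine the intermediate decoded resultants.
            return list(zip(res_a, res_b))

    tb = [[0] * 64 for _ in range(64)]      # a dummy 64x64 treeblock
    print(len(decode_parallel(tb)))         # -> 4 combined resultants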
18. A method for execution by a video processing device, the method comprising:
receiving a bitstream, corresponding to a video signal, and including a plurality of treeblocks;
adaptively partitioning the plurality of treeblocks (TBs) into a plurality of sub-treeblocks (stbs) such that each of the plurality of TBs corresponds to a respective plurality of stbs;
outputting the plurality of stbs, in parallel, via a first pathway and a second pathway of a plurality of pathways;
operating a first decoding engine of a video decoder for receiving the plurality of stbs via the first pathway and for performing a first at least one video decoding operation to generate a first at least one intermediate decoded resultant;
operating a second decoding engine of the video decoder for receiving the plurality of stbs via the second pathway and for performing a second at least one video decoding operation that is different than and is independent from the first at least one video decoding operation, in parallel with the first decoding engine performing the first at least one video decoding operation, to generate a second at least one intermediate decoded resultant; and
combining the first at least one intermediate decoded resultant and the second at least one intermediate decoded resultant to generate at least one decoded resultant.
6. An apparatus comprising:
an input configured to receive a bitstream, corresponding to a video signal, and including a plurality of treeblocks;
a parser and treeblock splitter configured to:
partition adaptively the plurality of treeblocks (TBs) into a plurality of sub-treeblocks (stbs) such that each of the plurality of TBs corresponds to a respective plurality of stbs; and
output the plurality of stbs, in parallel, via a first pathway and a second pathway of a plurality of pathways;
a video decoder including:
a first decoding engine configured to receive the plurality of stbs via the first pathway and to perform a first at least one video decoding operation to generate a first at least one intermediate decoded resultant; and
a second decoding engine configured to receive the plurality of stbs via the second pathway and to perform a second at least one video decoding operation that is different than and is independent from the first at least one video decoding operation, in parallel with the first decoding engine performing the first at least one video decoding operation, to generate a second at least one intermediate decoded resultant; and
a combiner configured to combine the first at least one intermediate decoded resultant and the second at least one intermediate decoded resultant to generate at least one decoded resultant.
1. An apparatus comprising:
an input configured to receive a bitstream, corresponding to a video signal, and including a plurality of treeblocks;
a parser and treeblock splitter configured to:
partition adaptively the plurality of treeblocks (TBs) into a plurality of sub-treeblocks (stbs) such that each of the plurality of TBs corresponds to a respective plurality of stbs; and
output the plurality of stbs, in parallel, via a first pathway and a second pathway of a plurality of pathways;
a video decoder including:
a first decoding engine configured to receive the plurality of stbs via the first pathway and to perform a first at least one video decoding operation to generate a first at least one intermediate decoded resultant; and
a second decoding engine configured to receive the plurality of stbs via the second pathway and to perform a second at least one video decoding operation that is different than and is independent from the first at least one video decoding operation, in parallel with the first decoding engine performing the first at least one video decoding operation, to generate a second at least one intermediate decoded resultant; and
a combiner configured to combine the first at least one intermediate decoded resultant and the second at least one intermediate decoded resultant to generate at least one decoded resultant; and wherein:
the first decoding engine configured to partition adaptively the plurality of stbs further into a plurality of sub-stbs and to perform the first at least one video decoding operation on the plurality of sub-stbs to generate the first at least one intermediate decoded resultant;
the parser and treeblock splitter configured to partition adaptively the plurality of TBs into a plurality of stbs based on at least one of at least one characteristic associated with at least one local processing condition of the apparatus and at least one characteristic associated with at least one source device to provide the bitstream via at least one communication network; and
each of the plurality of stbs having a size of 32 pixels by 32 pixels.
2. The apparatus of
3. The apparatus of
4. The apparatus of
the first decoding engine being an inverse scan, dequantization, and inverse transform engine, a spatial predictor engine, a motion compensator engine, or a reconstructor engine; and
the second decoding engine being a de-blocker engine.
5. The apparatus of
a communication device that is operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, a fiber-optic communication system, or a mobile communication system.
7. The apparatus of
the first decoding engine configured to partition adaptively the plurality of stbs further into a plurality of sub-stbs and to perform the first at least one video decoding operation on the plurality of sub-stbs to generate the first at least one intermediate decoded resultant.
8. The apparatus of
the first decoding engine includes a plurality of sub-engines such that a first of the plurality of sub-engines is configured to process a first of the plurality of sub-stbs in parallel with a second of the plurality of sub-engines that is configured to process a second of the plurality of sub-stbs.
9. The apparatus of
the first decoding engine configured to partition adaptively the plurality of stbs further into a first plurality of sub-stbs and to perform the first at least one video decoding operation on the first plurality of sub-stbs to generate the first at least one intermediate decoded resultant; and
the second decoding engine configured to partition adaptively the plurality of stbs further into a second plurality of sub-stbs and to perform the second at least one video decoding operation on the second plurality of sub-stbs to generate the second at least one intermediate decoded resultant.
10. The apparatus of
the first decoding engine includes a first plurality of sub-engines such that a first of the first plurality of sub-engines is configured to process a first of the first plurality of sub-stbs in parallel with a second of the first plurality of sub-engines that is configured to process a second of the first plurality of sub-stbs; and
the second decoding engine includes a second plurality of sub-engines such that a first of the second plurality of sub-engines is configured to process a first of the second plurality of sub-stbs in parallel with a second of the second plurality of sub-engines that is configured to process a second of the second plurality of sub-stbs.
11. The apparatus of
the parser and treeblock splitter configured to partition adaptively the plurality of TBs into a plurality of stbs based on at least one characteristic associated with at least one local processing condition of the apparatus.
12. The apparatus of
the parser and treeblock splitter configured to partition adaptively the plurality of TBs into a plurality of stbs based on at least one characteristic associated with the at least one source device to provide the bitstream via at least one communication network.
13. The apparatus of
14. The apparatus of
15. The apparatus of
the first decoding engine being an inverse scan, dequantization, and inverse transform engine, a spatial predictor engine, a motion compensator engine, or a reconstructor engine; and
the second decoding engine being a de-blocker engine.
16. The apparatus of
17. The apparatus of
a communication device that is operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, a fiber-optic communication system, or a mobile communication system.
19. The method of
operating the first decoding engine adaptively to partition the plurality of stbs further into a plurality of sub-stbs and for performing the first at least one video decoding operation on the plurality of sub-stbs to generate the first at least one intermediate decoded resultant.
20. The method of
operating the first decoding engine that includes a plurality of sub-engines and operating a first of the plurality of sub-engines that is configured to process a first of the plurality of sub-stbs in parallel with a second of the plurality of sub-engines that is configured to process a second of the plurality of sub-stbs.
21. The method of
operating the first decoding engine adaptively to partition the plurality of stbs further into a first plurality of sub-stbs and for performing the first at least one video decoding operation on the first plurality of sub-stbs to generate the first at least one intermediate decoded resultant; and
operating the second decoding engine adaptively to partition the plurality of stbs further into a second plurality of sub-stbs and for performing the second at least one video decoding operation on the second plurality of sub-stbs to generate the second at least one intermediate decoded resultant.
22. The method of
operating the first decoding engine that includes a first plurality of sub-engines and operating a first of the first plurality of sub-engines that is configured to process a first of the first plurality of sub-stbs in parallel with a second of the first plurality of sub-engines that is configured to process a second of the first plurality of sub-stbs; and
operating the second decoding engine that includes a second plurality of sub-engines and operating a first of the second plurality of sub-engines that is configured to process a first of the second plurality of sub-stbs in parallel with a second of the second plurality of sub-engines that is configured to process a second of the second plurality of sub-stbs.
23. The method of
adaptively partitioning the plurality of TBs into a plurality of stbs based on at least one characteristic associated with at least one local processing condition of the video processing device.
24. The method of
adaptively partitioning the plurality of TBs into a plurality of stbs based on at least one characteristic associated with the at least one source device for providing the bitstream via at least one communication network.
25. The method of
26. The method of
27. The method of
28. The method of
The present U.S. Utility Patent Application claims priority pursuant to 35 U.S.C. § 119(e) to the following U.S. Provisional Patent Application which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility Patent Application for all purposes:
1. U.S. Provisional Patent Application Ser. No. 61/541,938, entitled “Coding, communications, and signaling of video content within communication systems,” filed Sep. 30, 2011.
The following standards/draft standards are hereby incorporated herein by reference in their entirety and are made part of the present U.S. Utility Patent Application for all purposes:
1. “WD4: Working Draft 4 of High-Efficiency Video Coding, Joint Collaborative Team on Video Coding (JCT-VC),” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 6th Meeting: Torino, IT, 14-22 Jul. 2011, Document: JCTVC-F803 d4, 230 pages.
2. International Telecommunication Union, ITU-T, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, H.264 (March 2010), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Recommendation ITU-T H.264, also alternatively referred to as International Telecomm ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding), ITU H.264/MPEG4-AVC, or equivalent.
Technical Field of the Invention
The invention relates generally to digital video processing; and, more particularly, it relates to performing video coding using blocks and/or sub-blocks in accordance with such digital video processing.
Description of Related Art
Communication systems that operate to communicate digital media (e.g., images, video, data, etc.) have been under continual development for many years. With respect to such communication systems employing some form of video data, a number of digital images are output or displayed at some frame rate (e.g., frames per second) to effectuate a video signal suitable for output and consumption. Within many such communication systems operating using video data, there can be a trade-off between throughput (e.g., number of image frames that may be transmitted from a first location to a second location) and video and/or image quality of the signal eventually to be output or displayed. The present art does not adequately or acceptably provide a means by which video data may be transmitted from a first location to a second location in accordance with providing an adequate or acceptable video and/or image quality, ensuring a relatively low amount of overhead associated with the communications, relatively low complexity of the communication devices at respective ends of communication links, etc.
Within many devices that use digital media such as digital video, respective images thereof, being digital in nature, are represented using pixels. Within certain communication systems, digital media can be transmitted from a first location to a second location at which such media can be output or displayed. The goal of digital communications systems, including those that operate to communicate digital video, is to transmit digital data from one location, or subsystem, to another either error free or with an acceptably low error rate. As shown in
Referring to
It is noted that such communication devices 110 and/or 120 may be stationary or mobile without departing from the scope and spirit of the invention. For example, either one or both of the communication devices 110 and 120 may be implemented in a fixed location or may be a mobile communication device with capability to associate with and/or communicate with more than one network access point (e.g., different respective access points (APs) in the context of a mobile communication system including one or more wireless local area networks (WLANs), different respective satellites in the context of a mobile communication system including one or more satellites, or, generally, different respective network access points in the context of a mobile communication system including one or more network access points by which communications may be effectuated with communication devices 110 and/or 120).
To reduce transmission errors that may undesirably be incurred within a communication system, error correction and channel coding schemes are often employed. Generally, these error correction and channel coding schemes involve the use of an encoder at the transmitter end of the communication channel 199 and a decoder at the receiver end of the communication channel 199.
Any of various types of error correction code (ECC) described herein can be employed within any such desired communication system (e.g., including those variations described with respect to
Generally speaking, when considering a communication system in which video data is communicated from one location, or subsystem, to another, video data encoding may generally be viewed as being performed at a transmitting end of the communication channel 199, and video data decoding may generally be viewed as being performed at a receiving end of the communication channel 199.
Also, while the embodiment of this diagram shows bi-directional communication being capable between the communication devices 110 and 120, it is of course noted that, in some embodiments, the communication device 110 may include only video data encoding capability, and the communication device 120 may include only video data decoding capability, or vice versa (e.g., in a uni-directional communication embodiment such as in accordance with a video broadcast embodiment).
Referring to the communication system 200 of
Within each of the transmitter 297 and the receiver 298, any desired integration of various components, blocks, functional blocks, circuitries, etc. therein may be implemented. For example, this diagram shows a processing module 280a as including the encoder and symbol mapper 220 and all associated, corresponding components therein, and a processing module 280b is shown as including the metric generator 270 and the decoder 280 and all associated, corresponding components therein. Such processing modules 280a and 280b may be respective integrated circuits. Of course, other boundaries and groupings may alternatively be performed without departing from the scope and spirit of the invention. For example, all components within the transmitter 297 may be included within a first processing module or integrated circuit, and all components within the receiver 298 may be included within a second processing module or integrated circuit. Alternatively, any other combination of components within each of the transmitter 297 and the receiver 298 may be made in other embodiments.
As with the previous embodiment, such a communication system 200 may be employed for the communication of video data from one location, or subsystem, to another (e.g., from transmitter 297 to the receiver 298 via the communication channel 299).
Digital image and/or video processing of digital images and/or media (including the respective images within a digital video signal) may be performed by any of the various devices depicted below in
Referring to embodiment 400 of
The input video signal may generally be referred to as corresponding to raw frame (or picture) image data. For example, raw frame (or picture) image data may undergo processing to generate luma and chroma samples. In some embodiments, the set of luma samples in a macro-block is of one particular arrangement (e.g., 16×16), and the set of chroma samples is of a different particular arrangement (e.g., 8×8). In accordance with the embodiment depicted herein, a video encoder processes such samples on a block by block basis.
The input video signal then undergoes mode selection by which the input video signal selectively undergoes intra and/or inter-prediction processing. Generally speaking, the input video signal undergoes compression along a compression pathway. When operating with no feedback (e.g., in accordance with neither inter-prediction nor intra-prediction), the input video signal is provided via the compression pathway to undergo transform operations (e.g., in accordance with discrete cosine transform (DCT)). Of course, other transforms may be employed in alternative embodiments. In this mode of operation, the input video signal itself is that which is compressed. The compression pathway may take advantage of the lack of high frequency sensitivity of human eyes in performing the compression.
However, feedback may be employed along the compression pathway by selectively using inter- or intra-prediction video encoding. In accordance with a feedback or predictive mode of operation, the compression pathway operates on a (relatively low energy) residual (e.g., a difference) resulting from subtraction of a predicted value of a current macro-block from the current macro-block. Depending upon which form of prediction is employed in a given instance, a residual or difference between a current macro-block and a predicted value of that macro-block based on at least a portion of that same frame (or picture) or on at least a portion of at least one other frame (or picture) is generated.
The resulting modified video signal then undergoes transform operations along the compression pathway. In one embodiment, a discrete cosine transform (DCT) operates on a set of video samples (e.g., luma, chroma, residual, etc.) to compute respective coefficient values for each of a predetermined number of basis patterns. For example, one embodiment includes 64 basis functions (e.g., such as for an 8×8 sample). Generally speaking, different embodiments may employ different numbers of basis functions (e.g., different transforms). Any combination of those respective basis functions, including appropriate and selective weighting thereof, may be used to represent a given set of video samples. Additional details related to various ways of performing transform operations are described in the technical literature associated with video encoding including those standards/draft standards that have been incorporated by reference as indicated above. The output from the transform processing includes such respective coefficient values. This output is provided to a quantizer.
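As a non-normative illustration of the transform stage just described, the following Python sketch computes a direct (unoptimized) 2-D DCT-II for an 8×8 block; practical codecs use fast, integer-approximated transforms:

    # Illustrative sketch: direct 8x8 2-D DCT-II, mapping a block of samples
    # onto 64 basis-pattern coefficients. Unoptimized on purpose.
    import math

    def dct_2d(block):
        n = len(block)                     # assume a square n x n block
        def alpha(k):
            return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        coeffs = [[0.0] * n for _ in range(n)]
        for u in range(n):
            for v in range(n):
                s = 0.0
                for x in range(n):
                    for y in range(n):
                        s += (block[x][y]
                              * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                              * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
                coeffs[u][v] = alpha(u) * alpha(v) * s
        return coeffs

    flat = [[128] * 8 for _ in range(8)]   # a flat block of samples
    print(round(dct_2d(flat)[0][0]))       # all energy in the DC term: 1024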
Generally, most image blocks will typically yield coefficients (e.g., DCT coefficients in an embodiment operating in accordance with discrete cosine transform (DCT)) such that the most relevant DCT coefficients are of lower frequencies. Because of this and the human eyes' relatively poor sensitivity to high frequency visual effects, a quantizer may be operable to convert most of the less relevant coefficients to a value of zero. That is to say, those coefficients whose relative contribution is below some predetermined value (e.g., some threshold) may be eliminated in accordance with the quantization process. A quantizer may also be operable to convert the significant coefficients into values that can be coded more efficiently than those that result from the transform process. For example, the quantization process may operate by dividing each respective coefficient by an integer value and discarding any remainder. Such a process, when operating on typical coding units or macro-blocks, typically yields a relatively low number of non-zero coefficients which are then delivered to an entropy encoder for lossless encoding and for use in accordance with a feedback path which may select intra-prediction and/or inter-prediction processing in accordance with video encoding.
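A minimal sketch of the divide-and-discard quantization just described, using a hypothetical quantization step of 16:

    # Illustrative sketch: quantization by integer division; the remainder is
    # discarded, so small (less relevant) coefficients collapse to zero.
    def quantize(coeffs, qstep=16):
        # int() truncates toward zero, discarding the remainder.
        return [[int(c / qstep) for c in row] for row in coeffs]

    row = [[1024.0, 37.0, -25.0, 9.0, 3.0, -2.0, 1.0, 0.0]]
    print(quantize(row))   # -> [[64, 2, -1, 0, 0, 0, 0, 0]]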
An entropy encoder operates in accordance with a lossless compression encoding process. In comparison, the quantization operations are generally lossy. The entropy encoding process operates on the coefficients provided from the quantization process. Those coefficients may represent various characteristics (e.g., luma, chroma, residual, etc.). Various types of encoding may be employed by an entropy encoder. For example, context-adaptive binary arithmetic coding (CABAC) and/or context-adaptive variable-length coding (CAVLC) may be performed by the entropy encoder. For example, in accordance with at least one part of an entropy coding scheme, the data is converted to a (run, level) pairing (e.g., data 14, 3, 0, 4, 0, 0, −3 would be converted to the respective (run, level) pairs of (0, 14), (0, 3), (1, 4), (2,−3)). In advance, a table may be prepared that assigns variable length codes for value pairs, such that relatively shorter length codes are assigned to relatively common value pairs, and relatively longer length codes are assigned for relatively less common value pairs.
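The (run, level) conversion described above is easily illustrated; this sketch reproduces the example data from the text:

    # Illustrative sketch of (run, level) pairing: each nonzero coefficient is
    # emitted with the count of zeros that immediately precede it.
    def run_level(coeffs):
        pairs, run = [], 0
        for c in coeffs:
            if c == 0:
                run += 1
            else:
                pairs.append((run, c))
                run = 0
        return pairs

    print(run_level([14, 3, 0, 4, 0, 0, -3]))
    # -> [(0, 14), (0, 3), (1, 4), (2, -3)]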
As the reader will understand, the operations of inverse quantization and inverse transform correspond to those of quantization and transform, respectively. For example, in an embodiment in which a DCT is employed within the transform operations, then an inverse DCT (IDCT) is that employed within the inverse transform operations.
A picture buffer, alternatively referred to as a digital picture buffer or a DPB, receives the signal from the IDCT module; the picture buffer is operative to store the current frame (or picture) and/or one or more other frames (or pictures) such as may be used in accordance with intra-prediction and/or inter-prediction operations as may be performed in accordance with video encoding. It is noted that in accordance with intra-prediction, a relatively small amount of storage may be sufficient, in that it may not be necessary to store the current frame (or picture) or any other frame (or picture) within the frame (or picture) sequence. Such stored information may be employed for performing motion compensation and/or motion estimation in the case of performing inter-prediction in accordance with video encoding.
In one possible embodiment, for motion estimation, a respective set of luma samples (e.g., 16×16) from a current frame (or picture) are compared to respective buffered counterparts in other frames (or pictures) within the frame (or picture) sequence (e.g., in accordance with inter-prediction). In one possible implementation, a closest matching area is located (e.g., prediction reference) and a vector offset (e.g., motion vector) is produced. In a single frame (or picture), a number of motion vectors may be found and not all will necessarily point in the same direction. One or more operations as performed in accordance with motion estimation are operative to generate one or more motion vectors.
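As one hedged illustration of such block matching (not necessarily the method contemplated here), the following sketch performs an exhaustive search over a small window using a sum-of-absolute-differences (SAD) cost; the frame content and search range are hypothetical:

    # Illustrative sketch: exhaustive block-matching motion estimation with a
    # SAD cost. Real encoders use fast, hierarchical searches instead.
    def sad(cur, ref, cx, cy, rx, ry, bsize):
        return sum(abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
                   for j in range(bsize) for i in range(bsize))

    def motion_estimate(cur, ref, cx, cy, bsize=16, search=8):
        h, w = len(ref), len(ref[0])
        best = (None, float("inf"))
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                rx, ry = cx + dx, cy + dy
                if 0 <= rx <= w - bsize and 0 <= ry <= h - bsize:
                    cost = sad(cur, ref, cx, cy, rx, ry, bsize)
                    if cost < best[1]:
                        best = ((dx, dy), cost)   # motion vector, match cost
        return best

    ref = [[(7 * x + 13 * y) % 256 for x in range(64)] for y in range(64)]
    cur = [row[2:] + row[:2] for row in ref]      # content shifted left by 2
    print(motion_estimate(cur, ref, 16, 16)[0])   # -> (2, 0)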
Motion compensation is operative to employ one or more motion vectors as may be generated in accordance with motion estimation. A prediction reference set of samples is identified and delivered for subtraction from the original input video signal in an effort hopefully to yield a relatively (e.g., ideally, much) lower energy residual. If such operations do not result in a yielded lower energy residual, motion compensation need not necessarily be performed and the transform operations may merely operate on the original input video signal instead of on a residual (e.g., in accordance with an operational mode in which the input video signal is provided straight through to the transform operation, such that neither intra-prediction nor inter-prediction are performed), or intra-prediction may be utilized and transform operations performed on the residual resulting from intra-prediction. Also, if the motion estimation and/or motion compensation operations are successful, the motion vector may also be sent to the entropy encoder along with the corresponding residual's coefficients for use in undergoing lossless entropy encoding.
The output from the overall video encoding operation is an output bit stream. It is noted that such an output bit stream may of course undergo certain processing in accordance with generating a continuous time signal which may be transmitted via a communication channel. For example, certain embodiments operate within wireless communication systems. In such an instance, an output bitstream may undergo appropriate digital to analog conversion, frequency conversion, scaling, filtering, modulation, symbol mapping, and/or any other operations within a wireless communication device that operate to generate a continuous time signal capable of being transmitted via a communication channel, etc.
Referring to embodiment 500 of
The input video signal may generally be referred to as corresponding to raw frame (or picture) image data. For example, raw frame (or picture) image data may undergo processing to generate luma and chroma samples. In some embodiments, the set of luma samples in a macro-block is of one particular arrangement (e.g., 16×16), and the set of chroma samples is of a different particular arrangement (e.g., 8×8). In accordance with the embodiment depicted herein, a video encoder processes such samples on a block by block basis.
The input video signal then undergoes mode selection by which the input video signal selectively undergoes intra and/or inter-prediction processing. Generally speaking, the input video signal undergoes compression along a compression pathway. When operating with no feedback (e.g., in accordance with neither inter-prediction nor intra-prediction), the input video signal is provided via the compression pathway to undergo transform operations (e.g., in accordance with discrete cosine transform (DCT)). Of course, other transforms may be employed in alternative embodiments. In this mode of operation, the input video signal itself is that which is compressed. The compression pathway may take advantage of the lack of high frequency sensitivity of human eyes in performing the compression.
However, feedback may be employed along the compression pathway by selectively using inter- or intra-prediction video encoding. In accordance with a feedback or predictive mode of operation, the compression pathway operates on a (relatively low energy) residual (e.g., a difference) resulting from subtraction of a predicted value of a current macro-block from the current macro-block. Depending upon which form of prediction is employed in a given instance, a residual or difference between a current macro-block and a predicted value of that macro-block based on at least a portion of that same frame (or picture) or on at least a portion of at least one other frame (or picture) is generated.
The resulting modified video signal then undergoes transform operations along the compression pathway. In one embodiment, a discrete cosine transform (DCT) operates on a set of video samples (e.g., luma, chroma, residual, etc.) to compute respective coefficient values for each of a predetermined number of basis patterns. For example, one embodiment includes 64 basis functions (e.g., such as for an 8×8 sample). Generally speaking, different embodiments may employ different numbers of basis functions (e.g., different transforms). Any combination of those respective basis functions, including appropriate and selective weighting thereof, may be used to represent a given set of video samples. Additional details related to various ways of performing transform operations are described in the technical literature associated with video encoding including those standards/draft standards that have been incorporated by reference as indicated above. The output from the transform processing includes such respective coefficient values. This output is provided to a quantizer.
Generally, most image blocks will typically yield coefficients (e.g., DCT coefficients in an embodiment operating in accordance with discrete cosine transform (DCT)) such that the most relevant DCT coefficients are of lower frequencies. Because of this and the human eyes' relatively poor sensitivity to high frequency visual effects, a quantizer may be operable to convert most of the less relevant coefficients to a value of zero. That is to say, those coefficients whose relative contribution is below some predetermined value (e.g., some threshold) may be eliminated in accordance with the quantization process. A quantizer may also be operable to convert the significant coefficients into values that can be coded more efficiently than those that result from the transform process. For example, the quantization process may operate by dividing each respective coefficient by an integer value and discarding any remainder. Such a process, when operating on typical coding units or macro-blocks, typically yields a relatively low number of non-zero coefficients which are then delivered to an entropy encoder for lossless encoding and for use in accordance with a feedback path which may select intra-prediction and/or inter-prediction processing in accordance with video encoding.
An entropy encoder operates in accordance with a lossless compression encoding process. In comparison, the quantization operations are generally lossy. The entropy encoding process operates on the coefficients provided from the quantization process. Those coefficients may represent various characteristics (e.g., luma, chroma, residual, etc.). Various types of encoding may be employed by an entropy encoder. For example, context-adaptive binary arithmetic coding (CABAC) and/or context-adaptive variable-length coding (CAVLC) may be performed by the entropy encoder. For example, in accordance with at least one part of an entropy coding scheme, the data is converted to a (run, level) pairing (e.g., data 14, 3, 0, 4, 0, 0, −3 would be converted to the respective (run, level) pairs of (0, 14), (0, 3), (1, 4), (2,−3)). In advance, a table may be prepared that assigns variable length codes for value pairs, such that relatively shorter length codes are assigned to relatively common value pairs, and relatively longer length codes are assigned for relatively less common value pairs.
As the reader will understand, the operations of inverse quantization and inverse transform correspond to those of quantization and transform, respectively. For example, in an embodiment in which a DCT is employed within the transform operations, then an inverse DCT (IDCT) is that employed within the inverse transform operations.
An adaptive loop filter (ALF) is implemented to process the output from the inverse transform block. Such an adaptive loop filter (ALF) is applied to the decoded picture before it is stored in a picture buffer (sometimes referred to as a DPB, digital picture buffer). The adaptive loop filter (ALF) is implemented to reduce coding noise of the decoded picture, and its filtering may be selectively applied on a slice by slice basis, respectively, for luminance and chrominance, whether the adaptive loop filter (ALF) is applied at slice level or at block level. Two-dimensional (2-D) finite impulse response (FIR) filtering may be used in application of the adaptive loop filter (ALF). The coefficients of the filters may be designed slice by slice at the encoder, and such information is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]).
One embodiment operates by generating the coefficients in accordance with Wiener filtering design. In addition, whether the filtering is performed may be decided on a block by block basis at the encoder, and such a decision is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]) based on a quadtree structure, where the block size is decided according to rate-distortion optimization. It is noted that the implementation of such 2-D filtering may introduce a degree of complexity in accordance with both encoding and decoding. For example, by using 2-D filtering in accordance with an implementation of an adaptive loop filter (ALF), there may be some increase in complexity within an encoder implemented within the transmitter communication device as well as within a decoder implemented within a receiver communication device.
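For illustration, a 2-D FIR filtering pass in the spirit of such an ALF might look as follows; the 3×3 taps stand in for slice-by-slice (e.g., Wiener-designed) coefficients that would be signaled to the decoder:

    # Illustrative sketch: applying a small 2-D FIR filter to a decoded
    # picture. The taps below are hypothetical noise-reducing coefficients.
    def fir_2d(pic, taps):
        h, w, k = len(pic), len(pic[0]), len(taps) // 2
        out = [row[:] for row in pic]              # borders left unfiltered
        for y in range(k, h - k):
            for x in range(k, w - k):
                out[y][x] = sum(taps[j + k][i + k] * pic[y + j][x + i]
                                for j in range(-k, k + 1)
                                for i in range(-k, k + 1))
        return out

    smooth = [[1 / 16, 2 / 16, 1 / 16],
              [2 / 16, 4 / 16, 2 / 16],
              [1 / 16, 2 / 16, 1 / 16]]
    pic = [[100] * 5 for _ in range(5)]
    pic[2][2] = 116                            # an isolated coding-noise spike
    print(round(fir_2d(pic, smooth)[2][2]))    # spike attenuated: -> 104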
In certain optional embodiments, the output from the de-blocking filter is provided to one or more other in-loop filters (e.g., implemented in accordance with adaptive loop filter (ALF), sample adaptive offset (SAO) filter, and/or any other filter type) implemented to process the output from the inverse transform block. For example, such an ALF is applied to the decoded picture before it is stored in a picture buffer (again, sometimes alternatively referred to as a DPB, digital picture buffer). Such an ALF is implemented to reduce coding noise of the decoded picture, and its filtering may be selectively applied on a slice by slice basis, respectively, for luminance and chrominance, whether such an ALF is applied at slice level or at block level. Two-dimensional (2-D) finite impulse response (FIR) filtering may be used in application of such an ALF. The coefficients of the filters may be designed slice by slice at the encoder, and such information is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]).
One embodiment is operative to generate the coefficients in accordance with Wiener filtering design. In addition, whether the filtering is performed may be decided on a block by block basis at the encoder, and such a decision is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]) based on a quadtree structure, where the block size is decided according to rate-distortion optimization. It is noted that the implementation of such 2-D filtering may introduce a degree of complexity in accordance with both encoding and decoding. For example, by using 2-D filtering in accordance with an implementation of an ALF, there may be some increase in complexity within an encoder implemented within the transmitter communication device as well as within a decoder implemented within a receiver communication device.
As mentioned with respect to other embodiments, the use of an ALF can provide any of a number of improvements in accordance with such video processing, including an improvement of the objective quality measured by the peak signal to noise ratio (PSNR) that comes from performing random quantization noise removal. In addition, an improvement in the subjective quality of a subsequently encoded video signal may be achieved from illumination compensation, which may be introduced in accordance with performing offset processing and scaling processing (e.g., in accordance with applying a gain) in accordance with ALF processing.
Receiving the signal output from the ALF is a picture buffer, alternatively referred to as a digital picture buffer or a DPB; the picture buffer is operative to store the current frame (or picture) and/or one or more other frames (or pictures) such as may be used in accordance with intra-prediction and/or inter-prediction operations as may be performed in accordance with video encoding. It is noted that in accordance with intra-prediction, a relatively small amount of storage may be sufficient, in that it may not be necessary to store the current frame (or picture) or any other frame (or picture) within the frame (or picture) sequence. Such stored information may be employed for performing motion compensation and/or motion estimation in the case of performing inter-prediction in accordance with video encoding.
In one possible embodiment, for motion estimation, a respective set of luma samples (e.g., 16×16) from a current frame (or picture) are compared to respective buffered counterparts in other frames (or pictures) within the frame (or picture) sequence (e.g., in accordance with inter-prediction). In one possible implementation, a closest matching area is located (e.g., prediction reference) and a vector offset (e.g., motion vector) is produced. In a single frame (or picture), a number of motion vectors may be found and not all will necessarily point in the same direction. One or more operations as performed in accordance with motion estimation are operative to generate one or more motion vectors.
Motion compensation is operative to employ one or more motion vectors as may be generated in accordance with motion estimation. A prediction reference set of samples is identified and delivered for subtraction from the original input video signal in an effort hopefully to yield a relatively (e.g., ideally, much) lower energy residual. If such operations do not result in a yielded lower energy residual, motion compensation need not necessarily be performed and the transform operations may merely operate on the original input video signal instead of on a residual (e.g., in accordance with an operational mode in which the input video signal is provided straight through to the transform operation, such that neither intra-prediction nor inter-prediction are performed), or intra-prediction may be utilized and transform operations performed on the residual resulting from intra-prediction. Also, if the motion estimation and/or motion compensation operations are successful, the motion vector may also be sent to the entropy encoder along with the corresponding residual's coefficients for use in undergoing lossless entropy encoding.
The output from the overall video encoding operation is an output bit stream. It is noted that such an output bit stream may of course undergo certain processing in accordance with generating a continuous time signal which may be transmitted via a communication channel. For example, certain embodiments operate within wireless communication systems. In such an instance, an output bitstream may undergo appropriate digital to analog conversion, frequency conversion, scaling, filtering, modulation, symbol mapping, and/or any other operations within a wireless communication device that operate to generate a continuous time signal capable of being transmitted via a communication channel, etc.
Referring to embodiment 600 of
It is noted that a corresponding video decoder, such as located within a device at another end of a communication channel, is operative to perform the complementary processes of decoding, inverse transform, and reconstruction to produce a respective decoded video sequence that is (ideally) representative of the input video signal.
As may be seen with respect to this diagram, alternative arrangements and architectures may be employed for effectuating video encoding. Generally speaking, an encoder processes an input video signal (e.g., typically composed in units of coding units or macro-blocks, often times being square in shape and including N×N pixels therein). The video encoding determines a prediction of the current macro-block based on previously coded data. That previously coded data may come from the current frame (or picture) itself (e.g., such as in accordance with intra-prediction) or from one or more other frames (or pictures) that have already been coded (e.g., such as in accordance with inter-prediction). The video encoder subtracts the prediction from the current macro-block to form a residual.
Generally speaking, intra-prediction is operative to employ block sizes of one or more particular sizes (e.g., 16×16, 8×8, or 4×4) to predict a current macro-block from surrounding, previously coded pixels within the same frame (or picture). Generally speaking, inter-prediction is operative to employ a range of block sizes (e.g., 16×16 down to 4×4) to predict pixels in the current frame (or picture) from regions that are selected from within one or more previously coded frames (or pictures).
With respect to the transform and quantization operations, a block of residual samples may undergo transformation using a particular transform (e.g., 4×4 or 8×8). One possible embodiment of such a transform operates in accordance with discrete cosine transform (DCT). The transform operation outputs a group of coefficients such that each respective coefficient corresponds to a respective weighting value of one or more basis functions associated with a transform. After undergoing transformation, a block of transform coefficients is quantized (e.g., each respective coefficient may be divided by an integer value and any associated remainder may be discarded, or they may be multiplied by an integer value). The quantization process is generally inherently lossy, and it can reduce the precision of the transform coefficients according to a quantization parameter (QP). Typically, many of the coefficients associated with a given macro-block are zero, and only some nonzero coefficients remain. Generally, a relatively high QP setting is operative to result in a greater proportion of zero-valued coefficients and smaller magnitudes of non-zero coefficients, resulting in relatively high compression (e.g., relatively lower coded bit rate) at the expense of relatively poorly decoded image quality; a relatively low QP setting is operative to allow more nonzero coefficients to remain after quantization and larger magnitudes of non-zero coefficients, resulting in relatively lower compression (e.g., relatively higher coded bit rate) with relatively better decoded image quality.
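A simple sketch of the QP trade-off just described; the QP-to-step-size mapping here is an assumed stand-in (H.264-style codecs use a step size that roughly doubles for every 6 QP increments):

    # Illustrative sketch: a higher QP yields a larger step size, more
    # zero-valued coefficients, and smaller non-zero magnitudes.
    def quantize_with_qp(coeffs, qp):
        qstep = 2 ** (qp / 6.0)            # assumed QP -> step-size mapping
        return [int(c / qstep) for c in coeffs]

    coeffs = [310.0, -48.0, 21.0, -9.0, 4.0, 2.0]
    print(quantize_with_qp(coeffs, 12))    # low QP:  [77, -12, 5, -2, 1, 0]
    print(quantize_with_qp(coeffs, 36))    # high QP: [4, 0, 0, 0, 0, 0]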
The video encoding process produces a number of values that are encoded to form the compressed bit stream. Examples of such values include the quantized transform coefficients, information to be employed by a decoder to re-create the appropriate prediction, information regarding the structure of the compressed data and compression tools employed during encoding, information regarding a complete video sequence, etc. Such values and/or parameters (e.g., syntax elements) may undergo encoding within an entropy encoder operating in accordance with CABAC, CAVLC, or some other entropy coding scheme, to produce an output bit stream that may be stored, transmitted (e.g., after undergoing appropriate processing to generate a continuous time signal that comports with a communication channel), etc.
In an embodiment operating using a feedback path, the output of the transform and quantization undergoes inverse quantization and inverse transform. One or both of intra-prediction and inter-prediction may be performed in accordance with video encoding. Also, motion compensation and/or motion estimation may be performed in accordance with such video encoding.
The signal path output from the inverse quantization and inverse transform (e.g., IDCT) block, which is provided to the intra-prediction block, is also provided to a de-blocking filter. The output from the de-blocking filter is provided to one or more other in-loop filters (e.g., implemented in accordance with adaptive loop filter (ALF), sample adaptive offset (SAO) filter, and/or any other filter type) implemented to process the output from the inverse transform block. For example, in one possible embodiment, an ALF is applied to the decoded picture before it is stored in a picture buffer (again, sometimes alternatively referred to as a DPB, digital picture buffer). The ALF is implemented to reduce coding noise of the decoded picture, and its filtering may be selectively applied on a slice by slice basis, respectively, for luminance and chrominance, whether the ALF is applied at slice level or at block level. Two-dimensional (2-D) finite impulse response (FIR) filtering may be used in application of the ALF. The coefficients of the filters may be designed slice by slice at the encoder, and such information is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]).
One embodiment generates the coefficients in accordance with Wiener filtering design. In addition, whether the filtering is performed may be decided on a block by block basis at the encoder, and such a decision is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]) based on a quadtree structure, where the block size is decided according to rate-distortion optimization. It is noted that the implementation of such 2-D filtering may introduce a degree of complexity in accordance with both encoding and decoding. For example, by using 2-D filtering in accordance with an implementation of an ALF, there may be some increase in complexity within an encoder implemented within the transmitter communication device as well as within a decoder implemented within a receiver communication device.
As mentioned with respect to other embodiments, the use of an ALF can provide any of a number of improvements in accordance with such video processing, including an improvement of the objective quality measured by the peak signal to noise ratio (PSNR) that comes from performing random quantization noise removal. In addition, an improvement in the subjective quality of a subsequently encoded video signal may be achieved from illumination compensation, which may be introduced in accordance with performing offset processing and scaling processing (e.g., in accordance with applying a gain) in accordance with ALF processing.
With respect to any video encoder architecture implemented to generate an output bitstream, it is noted that such architectures may be implemented within any of a variety of communication devices. The output bitstream may undergo additional processing including error correction code (ECC), forward error correction (FEC), etc. thereby generating a modified output bitstream having additional redundancy therein. Also, as may be understood with respect to such a digital signal, it may undergo any appropriate processing in accordance with generating a continuous time signal suitable for or appropriate for transmission via a communication channel. That is to say, such a video encoder architecture may be implemented within a communication device operative to perform transmission of one or more signals via one or more communication channels. Additional processing may be made on an output bitstream generated by such a video encoder architecture thereby generating a continuous time signal that may be launched into a communication channel.
The residual, which is the difference between the current pixel and the reference or prediction pixel, is that which gets encoded. As can be seen with respect to this diagram, intra-prediction operates using pixels within a common frame (or picture). It is of course noted that a given pixel may have different respective components associated therewith, and there may be different respective sets of samples for each respective component.
A residual may be calculated in accordance with inter-prediction processing, though such a residual is different from the residual calculated in accordance with intra-prediction processing. In accordance with inter-prediction processing, the residual at each pixel again corresponds to the difference between a current pixel and a predicted pixel value. However, in accordance with inter-prediction processing, the current pixel and the reference or prediction pixel are not located within the same frame (or picture). While this diagram shows inter-prediction as being employed with respect to one or more previous frames or pictures, it is also noted that alternative embodiments may operate using references corresponding to frames before and/or after a current frame. For example, in accordance with appropriate buffering and/or memory management, a number of frames may be stored. When operating on a given frame, references may be generated from other frames that precede and/or follow that given frame.
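Whether the prediction comes from the same picture (intra) or from another picture (inter), the residual itself is a per-pixel subtraction, as this minimal sketch illustrates:

    # Illustrative sketch: residual = current block minus predicted block.
    def residual(current, prediction):
        return [[c - p for c, p in zip(crow, prow)]
                for crow, prow in zip(current, prediction)]

    cur = [[52, 55], [61, 59]]
    pred = [[50, 54], [60, 60]]    # from intra or motion-compensated reference
    print(residual(cur, pred))     # -> [[2, 1], [1, -1]]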
Coupled with the CU (coding unit), a basic unit may be employed for the prediction partition mode, namely, the prediction unit (PU). It is also noted that the PU is defined only for the last depth CU, and its respective size is limited to that of the CU.
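For context, such CU partitioning follows a quadtree; a toy recursive split, with a hypothetical per-block split decision standing in for the encoder's rate-distortion choice, might look like:

    # Illustrative sketch: recursive quadtree splitting of a treeblock into
    # coding units (CUs). The split rule below is hypothetical.
    def quadtree_partition(x, y, size, min_size, should_split):
        if size <= min_size or not should_split(x, y, size):
            return [(x, y, size)]              # a leaf CU
        half, leaves = size // 2, []
        for nx, ny in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
            leaves += quadtree_partition(nx, ny, half, min_size, should_split)
        return leaves

    # Example: split a 64x64 treeblock once, then split only its top-left
    # 32x32 quadrant one level deeper.
    rule = lambda x, y, size: size == 64 or (x < 32 and y < 32 and size == 32)
    print(quadtree_partition(0, 0, 64, 8, rule))   # 4 16x16 CUs + 3 32x32 CUs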
Generally speaking, such video decoding architectures operate on an input bitstream. It is of course noted that such an input bitstream may be generated from a signal that is received by a communication device from a communication channel. Various operations may be performed on a continuous time signal received from the communication channel, including digital sampling, demodulation, scaling, filtering, etc. such as may be appropriate in accordance with generating the input bitstream. Moreover, certain embodiments, in which one or more types of error correction code (ECC), forward error correction (FEC), etc. may be implemented, may perform appropriate decoding in accordance with such ECC, FEC, etc. thereby generating the input bitstream. That is to say, in certain embodiments in which additional redundancy may have been made in accordance with generating a corresponding output bitstream (e.g., such as may be launched from a transmitter communication device or from the transmitter portion of a transceiver communication device), appropriate processing may be performed in accordance with generating the input bitstream. Overall, such a video decoding architecture is implemented to process the input bitstream thereby generating an output video signal corresponding to the original input video signal, as closely as possible and perfectly in an ideal case, for use in being output to one or more video display capable devices.
Referring to the embodiment 900 of
Referring to the embodiment 1000 of
Operation of any one or more modules, circuitries, processes, steps, etc. within the transcoder may be adaptively made based upon consideration associated with local operational parameters and/or remote operational parameters. Examples of local operational parameters may be viewed as corresponding to provisioned and/or currently available hardware, processing resources, memory, etc. Examples of remote operational parameters may be viewed as corresponding to characteristics associated with respective streaming media flows, including delivery flows and/or source flows, corresponding to signaling which is received from and/or transmitted to one or more other devices, including source devices and/or destination devices. For example, characteristics associated with any media flow may be related to any one or more of latency, delay, noise, distortion, crosstalk, attenuation, signal to noise ratio (SNR), capacity, bandwidth, frequency spectrum, bit rate, symbol rate associated with the at least one streaming media source flow, and/or any other characteristic, etc. Considering another example, characteristics associated with any media flow may be related more particularly to a given device from which or through which such a media flow may pass, including any one or more of user usage information, processing history, queuing, an energy constraint, a display size, a display resolution, a display history associated with the device, and/or any other characteristic, etc. Moreover, various signaling may be provided between respective devices in addition to signaling of media flows. That is to say, various feedback or control signals may be provided between respective devices within such a communication system.
In at least one embodiment, such a transcoder is implemented for selectively transcoding at least one streaming media source flow thereby generating at least one transcoded streaming media delivery flow based upon one or more characteristics associated with the at least one streaming media source flow and/or the at least one transcoded streaming media delivery flow. That is to say, consideration may be performed by considering characteristics associated with flows from an upstream perspective, a downstream perspective, and/or both an upstream and downstream perspective. Based upon these characteristics, including historical information related thereto, current information related thereto, and/or predicted future information related thereto, adaptation of the respective transcoding as performed within the transcoder may be made. Again, consideration may also be made with respect to global operating conditions and/or the current status of operations being performed within the transcoder itself. That is to say, consideration with respect to local operating conditions (e.g., available processing resources, available memory, source flow(s) being received, delivery flow(s) being transmitted, etc.) may also be used to effectuate adaptation of respective transcoding as performed within the transcoder.
In certain embodiments, adaptation is performed by selecting one particular video coding protocol or standard from among a number of available video coding protocols or standards. If desired, such adaptation may be with respect to selecting one particular profile of a given video coding protocol or standard from among a number of available profiles corresponding to one or more video coding protocols or standards. Alternatively, such adaptation may be made with respect to modifying one or more operational parameters associated with a video coding protocol or standard, a profile thereof, or a subset of operational parameters associated with the video coding protocol or standard.
In other embodiments, adaptation is performed by selecting different respective manners by which video coding may be performed. That is to say, certain video coding, particularly operative in accordance with entropy coding, may be context adaptive, non-context adaptive, operative in accordance with syntax, or operative in accordance with no syntax. Adaptive selection between such operational modes, specifically between context adaptive and non-context adaptive, and with or without syntax, may be made based upon such considerations as described herein.
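The following is a minimal, hypothetical sketch of such a selection among entropy-coding operational modes; the policy shown (falling back to simpler, non-context-adaptive modes under resource or link pressure) is an assumption for illustration only.

```python
# Hypothetical selection among the entropy-coding operational modes named
# above; the policy shown is an assumption, not mandated by any standard.
from enum import Enum, auto

class EntropyMode(Enum):
    CONTEXT_ADAPTIVE_SYNTAX = auto()        # e.g., CABAC-like operation
    CONTEXT_ADAPTIVE_NO_SYNTAX = auto()
    NON_CONTEXT_ADAPTIVE_SYNTAX = auto()    # e.g., CAVLC-like operation
    NON_CONTEXT_ADAPTIVE_NO_SYNTAX = auto()

def select_mode(cycles_available: int, link_error_rate: float) -> EntropyMode:
    # Context adaptation buys compression at the cost of serial, state-heavy
    # decoding; back off to simpler modes when resources are scarce.
    if cycles_available < 1_000_000:
        return EntropyMode.NON_CONTEXT_ADAPTIVE_NO_SYNTAX
    if link_error_rate > 1e-3:
        return EntropyMode.NON_CONTEXT_ADAPTIVE_SYNTAX
    return EntropyMode.CONTEXT_ADAPTIVE_SYNTAX

print(select_mode(5_000_000, 1e-5).name)  # CONTEXT_ADAPTIVE_SYNTAX
```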
Generally speaking, a real time transcoding environment may be implemented wherein scalable video coding (SVC) operates both upstream and downstream of the transcoder and wherein the transcoder acts to coordinate upstream SVC with downstream SVC. Such coordination involves internally sharing real time awareness of activities wholly within each of the transcoding decoder and the transcoding encoder. This awareness extends to external knowledge gleaned by the transcoding encoder and decoder when evaluating their respective communication PHY/channel performance. Further, such awareness exchange extends to actual feedback received from a downstream media presentation device's decoder and PHY, as well as from an upstream media source encoder and PHY. To fully carry out SVC plus overall flow management, control signaling via industry or proprietary standard channels flows between all three nodes.
As may be analogously understood with respect to the context of transcoding, the corresponding encoding operations performed therein may be applied to a device that does not necessarily perform decoding of received streaming source flows, but is operative to generate streaming delivery flows that may be delivered via one or more delivery flows to one or more destination devices via one or more communication networks, links, etc.
Referring to the embodiment 1500 of
Alternatively, there may be some instances in which the two or more streaming source flows may be provided from a singular source device. That is to say, a given video input signal may undergo encoding in accordance with two or more different respective video encoding operational modes, thereby generating different respective streaming source flows commonly generated from the same original input video signal. In some instances, one of the streaming source flows may be provided via a first communication pathway, and another of the streaming source flows may be provided via a second communication pathway. Alternatively, these different respective streaming source flows may be provided via a common communication pathway. There may be instances in which one particular streaming source flow may be more deleteriously affected during transmission than another streaming source flow. That is to say, depending upon the particular manner of coding by which a given streaming source flow has been generated, it may be more susceptible or more resilient to certain deleterious effects (e.g., noise, interference, etc.) during respective transmission via a given communication pathway. In certain embodiments, if sufficient resources are available, it may be desirable to generate different respective streaming flows and to provide them via different respective communication pathways.
Referring to the embodiment 1600 of
For example, in some instances, each of the respective decoding engines is implemented to perform different respective operations on the very same sub-treeblocks, in that certain operations allow for complete independence such that there is no interdependency between the respective operations. However, there are some situations in which there is dependency between operations performed by one or more of the decoding engines. For example, in some instances in which there is dependency between different respective operations that may be performed on the sub-treeblocks, there may be one or more preprocessors and one or more post-processors implemented before and/or after the respective decoding engines. The diagram generally depicts that a preprocessor and post-processor may optionally be implemented respectively before and after any desired decoding engine. Of course, it is noted that certain operations performed by the decoding engines do not require any pre- or post-processing to be performed before or after.
As the reader may understand, there are certain video processing operations that allow for different respective sub-treeblocks to be processed completely in parallel with respect to one another such that there is no interdependency between the operations associated with different sub-treeblocks. Considering a given frame or picture of a video signal, there are certain video processing operations that allow for the picture frame to be partitioned or split into a number of different portions (e.g., generally, coding units or macro-blocks, etc.) such that operations may be performed in parallel or simultaneously with respect to each of those different respective portions, and the resulting processed portions may be combined and/or stitched together without a great deal of difficulty, given that there is relatively little or no dependency between the respective portions of the picture frame. However, other video processing operations do in fact incur a certain degree of interdependency between the respective portions of the frame or picture being processed, and appropriate preprocessing and/or post-processing may need to be performed to ensure appropriate combination and/or stitching of those respective processed portions together to create or regenerate the appropriately processed frame or picture.
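By way of a hedged illustration, the following toy sketch processes four portions of a small frame fully independently and then applies a post-processing pass across one internal seam, standing in for whatever boundary handling a real dependent operation (e.g., deblocking) would require. All names are hypothetical, and the per-pixel "operation" is a placeholder, not actual video decoding.

```python
# Toy illustration: four portions processed independently, then a small
# post-processing pass smooths across one internal seam, standing in for the
# boundary handling a real dependent operation (e.g., deblocking) would need.

def split_quadrants(frame):
    h, w = len(frame), len(frame[0])
    return [[row[:w // 2] for row in frame[:h // 2]],   # top-left
            [row[w // 2:] for row in frame[:h // 2]],   # top-right
            [row[:w // 2] for row in frame[h // 2:]],   # bottom-left
            [row[w // 2:] for row in frame[h // 2:]]]   # bottom-right

def process(portion):
    # Fully independent per-portion work (placeholder per-pixel operation).
    return [[min(255, p + 16) for p in row] for row in portion]

def stitch(q):
    # Recombine the four processed portions into one frame.
    top = [a + b for a, b in zip(q[0], q[1])]
    bottom = [a + b for a, b in zip(q[2], q[3])]
    return top + bottom

def post_process(frame):
    # Toy cross-boundary smoothing across the horizontal seam only.
    mid = len(frame) // 2
    for x in range(len(frame[0])):
        avg = (frame[mid - 1][x] + frame[mid][x]) // 2
        frame[mid - 1][x] = frame[mid][x] = avg
    return frame

frame = [[y * 10 + x for x in range(4)] for y in range(4)]
print(post_process(stitch([process(q) for q in split_quadrants(frame)])))
```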
For example, considering the coding unit de-blocking operations as performed with respect to
Generally speaking, it is noted that adaptation of the respective sizes of treeblocks, sub-treeblocks, etc. within such a treeblock splitting architecture may be performed at the parser and TB splitter level, at the decoding engine level, or at both. For example, the respective size of the sub-treeblocks output from the parser and TB splitter may be adapted based upon any of a number of considerations (e.g., local characteristics, respective upstream source device and/or downstream destination device characteristics, respective upstream and/or downstream flow characteristics, etc.).
It is also noted that while one particular decoding engine is shown along each respective pathway output from the parser and TB splitter, further subdivision and parallelism may be effectuated along any one or more of the desired pathways. For example, in an alternative embodiment, a number of respective decoding engines may be employed to effectuate the operations of decoding engine 1 (e.g., decoding engine 1A, decoding engine 1B, and so on in a parallelized implementation of decoding engine 1), such that a number of sub-pathways may be implemented along any one or more of the respective pathways output from the parser and TB splitter.
Such independent processing of each sub-SCTB may occur wholly within a single decoder module or may span multiple decoder modules before the sub-SCTBs are finally recombined. Examples of such decoder modules include the inverse scan, dequantization and inverse transform engine, spatial predictor, motion compensator, deblocker, etc.
The respective sub-SCTBs can be processed in a flow architecture such as (i) parallel pipelines, (ii) repeated use of a single pipeline (one use for each sub-SCTB), (iii) any combination of (i)-(ii) in combined serial/parallel flows, or (iv) several pipelines that each serially process multiple sub-SCTBs, etc.
Moreover, with respect to an adaptive implementation in which given respective modules, circuitries, operations, steps, etc. may operate using different respective sizes of blocks, a decision regarding which such structure to use, or whether even to employ SCTB splitting, can be made as a fixed design choice or may involve adaptive selection made as part of a setup routine at the decoder. Such adaptation may also be extended to cover real time adaptation, including mid-stream adaptation as circumstances change. For example, within a robust processing device with unlimited (AC) power, a decoder circuitry may be adaptively configured for either full block size processing of SCTBs or fully parallelized pipeline flows. Considering applications targeted toward devices that may be limited in terms of processing capability and/or power (e.g., with respect to mobile, handheld, battery powered devices, etc.), such decoder circuitry may initially be configured for fully parallelized pipeline flows, yet, when confronted with a significant variance in either processing resource sharing or (battery) power availability (e.g., AC power docking or a low-battery condition), may reconfigure on-the-fly to adopt a fully serial or other type of flow architecture, for example.
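A hedged sketch of such a setup-time or mid-stream decision follows; the architecture names, inputs, and thresholds are all invented for illustration, and a real decoder would key off its own telemetry.

```python
# Hypothetical setup-time / mid-stream flow-architecture decision; the names
# and thresholds are invented for illustration.
from enum import Enum, auto

class FlowArch(Enum):
    FULL_BLOCK = auto()              # process whole treeblocks, no splitting
    PARALLEL_PIPELINES = auto()      # one pipeline per sub-treeblock
    SINGLE_PIPELINE_SERIAL = auto()  # reuse one pipeline for each sub-treeblock

def choose_flow_arch(on_ac_power: bool, battery_pct: float,
                     free_pipelines: int) -> FlowArch:
    if on_ac_power and free_pipelines >= 4:
        return FlowArch.PARALLEL_PIPELINES      # ample power and hardware
    if battery_pct < 20.0 or free_pipelines < 2:
        return FlowArch.SINGLE_PIPELINE_SERIAL  # conserve energy/resources
    return FlowArch.FULL_BLOCK

# Re-evaluated mid-stream, e.g., on an AC-dock or low-battery event:
print(choose_flow_arch(False, 15.0, 4).name)  # SINGLE_PIPELINE_SERIAL
```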
Macro-block pipelined architectures are widely used in state-of-the-art hardware video decoders. However, in the currently developing high efficiency video coding (HEVC) standard, flexible coding unit (CU) sizes may be employed, and one possible CU size can be as large as N×N (e.g., 64×64), which costs 6 kbytes (kB) of memory for a single coding unit buffer, 16 times that of a 16×16 macro-block. An efficient pipelined architecture may require a great deal of on-chip coding unit (CU) buffering to store intermediate results. Such a large coding unit (CU) size can unfortunately become a greater concern particularly when operating in accordance with a highly efficient hardware implementation of an HEVC decoder in which a great deal of hardware, resources, energy, etc. may not be available (e.g., such as in accordance with battery operated and/or mobile applications).
As such, a treeblock splitting approach may be implemented in which a video decoder can be implemented using an M×M (e.g., 32×32) sub-treeblock pipeline, which requires only approximately ¼ of the original 6 kB buffer size mentioned above.
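As a worked check of these figures, assuming 4:2:0 chroma subsampling at 8 bits per sample (1.5 bytes per pixel), which reproduces the 6 kB figure quoted above:

```python
# Buffer-size check assuming 4:2:0 chroma at 8 bits per sample
# (1.5 bytes per pixel), which reproduces the 6 kB figure quoted above.

def cu_buffer_bytes(n: int, bytes_per_pixel: float = 1.5) -> int:
    return int(n * n * bytes_per_pixel)

print(cu_buffer_bytes(64))  # 6144 bytes = 6 kB for a 64x64 treeblock
print(cu_buffer_bytes(32))  # 1536 bytes: 1/4 of 6 kB per 32x32 sub-treeblock
print(cu_buffer_bytes(64) // cu_buffer_bytes(16))  # 16x a 16x16 macro-block
```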
Generally speaking, a treeblock is the largest CU employed in accordance with video coding compliant with HEVC. As can be seen within this diagram, the bitstream is first parsed in the order of treeblocks. Then an N×N (e.g., 64×64) treeblock is split into four M×M (e.g., 32×32) sub-treeblock tasks for each respective decoding engine (e.g., as illustrated within the diagram, the inverse scan, dequantization and inverse transform engine, spatial predictor, motion compensator, reconstructor, and deblocker). The processing of these engines is pipelined in units of sub-treeblocks.
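A minimal sketch of this splitting and per-sub-treeblock pipelining follows; the stage names mirror the engines listed above, while their bodies are placeholders rather than real decoding operations.

```python
# Sketch of splitting an NxN treeblock into four MxM sub-treeblock tasks and
# pushing them through the engine stages in sub-treeblock units. Stage names
# mirror the text; their bodies are placeholders, not real decoding.

def split_treeblock(tb, m=32):
    """Yield (index, MxM sub-treeblock) quadrants in raster order."""
    for idx, (y0, x0) in enumerate([(0, 0), (0, m), (m, 0), (m, m)]):
        yield idx, [row[x0:x0 + m] for row in tb[y0:y0 + m]]

STAGES = ["inv_scan_dequant_inv_transform", "spatial_or_motion_prediction",
          "reconstruct", "deblock"]

def run_pipeline(tb):
    for idx, sub in split_treeblock(tb):
        # Each stage buffers one MxM sub-treeblock rather than the full NxN.
        trace = " -> ".join(f"{stage}[stb {idx}]" for stage in STAGES)
        print(f"{len(sub)}x{len(sub[0])} sub-treeblock: {trace}")

run_pipeline([[0] * 64 for _ in range(64)])
```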
Treeblock Splitting when Split_Coding_Unit_Flag Equals 1
When split_coding_unit_flag is equal to 1, a treeblock is composed of four sub coding trees, and hence a treeblock can be split into sub-treeblocks at the syntax level.
Treeblock Splitting when Split_Coding_Unit_Flag Equals 0
However, split_coding_unit_flag is not always equal to 1; otherwise, the effective treeblock size would be 32×32 instead of 64×64. When split_coding_unit_flag is equal to 0, the prediction unit and transform unit are further split at either the syntax level or the decoding level.
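The two cases might be sketched as follows; the task-list representation is an assumption made purely for illustration.

```python
# Hedged sketch of the two cases: with split_coding_unit_flag equal to 1 the
# split falls out of the syntax (four sub coding trees); with 0 the decoder
# itself regroups PU/TU work into sub-treeblock tasks. The task-list
# representation is an assumption for illustration.

def sub_treeblock_tasks(split_coding_unit_flag: int):
    if split_coding_unit_flag == 1:
        # Syntax level: the bitstream already carries four sub coding trees.
        return [f"parse_sub_coding_tree_{i}" for i in range(4)]
    # Decoding level: one 64x64 CU whose PU/TU work is regrouped by quadrant.
    return [f"regroup_pu_tu_quadrant_{i}" for i in range(4)]

print(sub_treeblock_tasks(1))
print(sub_treeblock_tasks(0))
```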
Inverse Coefficient Scan, Dequantization and Inverse Transform Engine
A practical maximum transform size is M×M (e.g., 32×32), so the parsed transform coefficients of a 64×64 coding unit (CU) can be regrouped into four M×M (e.g., 32×32) sub-treeblock transform coefficient groups. Each group of coefficients is first inverse scanned and dequantized. The dequantized coefficients are then inverse transformed. The M×M (e.g., 32×32) inverse transformed results are sent to the reconstructor.
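A hedged sketch of this regrouping follows; the flat single-scale "dequantization" is a stand-in for the real HEVC scaling process, and the inverse transform step is elided.

```python
# Sketch of regrouping a parsed 64x64 coefficient array into four 32x32
# sub-treeblock coefficient groups; the flat single-scale "dequantization" is
# a stand-in for real HEVC scaling, and the inverse transform is elided.

def regroup_coefficients(coeffs_64, m=32):
    return [[row[x0:x0 + m] for row in coeffs_64[y0:y0 + m]]
            for y0, x0 in [(0, 0), (0, m), (m, 0), (m, m)]]

def dequantize(group, scale=2):
    return [[c * scale for c in row] for row in group]

coeffs = [[1] * 64 for _ in range(64)]
for i, group in enumerate(regroup_coefficients(coeffs)):
    group = dequantize(group)
    # The 32x32 inverse transform of each group would follow here.
    print(f"group {i}: {len(group)}x{len(group[0])} coefficients dequantized")
```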
Spatial Prediction Engine
For an intra-coded coding unit (CU), HEVC uses intra_split_flag to further indicate whether or not to split the prediction unit. When intra_split_flag is equal to 1, the intra-prediction of the N×N or N×M (e.g., 64×64, 64×32, etc.) coding unit (CU) is further split into four respective (e.g., 32×32) standard intra-prediction tasks. The spatial prediction engine processes each respective (e.g., 32×32) standard intra-prediction task and sends the results to the reconstructor.
When intra_split_flag is equal to 0, N×N (e.g., 64×64) intra-prediction is used. To split it into four 32×32 intra-prediction tasks, some non-standard prediction modes should be introduced and supported by the spatial prediction engine. In one embodiment as shown in
Inside the spatial predictor, these new modes should be supported in addition to the standard intra-prediction modes. If the split flag is equal to 0, standard intra-prediction is conducted according to the intra-prediction mode. Otherwise, if the split flag is equal to 1, which means that the current intra-prediction is one-fourth of the N×N (e.g., 64×64) intra-prediction, the intra-prediction is conducted according to both the intra_prediction_mode and the sub-treeblock index.
The splitting approaches described herein may be implemented and operative to support an extra 20 modes in the spatial predictor. To remove the extra hardware requirement for splitting, we may modify the definition of the N×N (e.g., 64×64) intra-prediction mode to be four M×M (e.g., 32×32) intra-predictions with the same prediction mode. In that case, the N×N (e.g., 64×64) intra-prediction can be broken down into the M×M (e.g., 32×32) intra-predictions at the syntax level, and the spatial prediction engine does not need to add any extra hardware to support splitting.
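A sketch of that redefinition, under the assumption of a simple dict-based task encoding, might look as follows (mode 26 is the vertical angular mode in HEVC):

```python
# Sketch of the redefinition: one 64x64 intra-prediction mode becomes four
# 32x32 predictions reusing the same mode, so the split happens at the syntax
# level. The dict-based task encoding is illustrative only.

def split_intra_64(intra_prediction_mode: int):
    """Return four 32x32 intra tasks carrying the same prediction mode."""
    return [{"sub_tb_index": i, "size": 32, "mode": intra_prediction_mode}
            for i in range(4)]  # same mode for all four quadrants

for task in split_intra_64(intra_prediction_mode=26):  # 26: vertical in HEVC
    print(task)
```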
Motion Compensator Engine
The motion vectors and reference indexes can be regrouped into four M×M (e.g., 32×32) sub-treeblock tasks. The motion compensator processes each task one by one and sends the M×M (e.g., 32×32) prediction to the reconstructor. If one sub-treeblock does not have coded motion vectors and reference indexes, it simply copies those parameters from the left or above sub-treeblocks according to the prediction unit partition.
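A hedged sketch of this regrouping-with-fallback follows; the raster quadrant indexing and the exact copy rule shown are assumptions for illustration, since the real rule depends on the prediction unit partition.

```python
# Hedged sketch: each 32x32 sub-treeblock task takes its own coded motion
# parameters when present and otherwise copies them from the left or above
# sub-treeblock. Quadrant indexing (raster order 0 1 / 2 3) and the exact
# copy rule are assumptions; the real rule follows the PU partition.

def regroup_motion(coded):
    """coded: dict quadrant_index -> (motion_vector, ref_idx)."""
    tasks = {}
    for i in range(4):
        if i in coded:
            tasks[i] = coded[i]
        elif i % 2 == 1 and (i - 1) in tasks:
            tasks[i] = tasks[i - 1]   # copy from left sub-treeblock
        elif (i - 2) in tasks:
            tasks[i] = tasks[i - 2]   # copy from above sub-treeblock
    return tasks

# A 2NxN-style partition: only quadrants 0 and 2 carry coded parameters.
print(regroup_motion({0: ((3, -1), 0), 2: ((0, 2), 1)}))
```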
Reconstructor
The reconstructor combines the M×M (e.g., 32×32) output from the inverse transform engine, motion compensator and spatial predictor and sends results to the deblocking engine.
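A minimal reconstructor sketch, combining a residual block with a prediction block and clipping to the 8-bit sample range (toy 2×2 blocks stand in for 32×32):

```python
# Minimal reconstructor sketch: residual plus prediction, clipped to the
# 8-bit sample range, before hand-off to the deblocking engine.

def reconstruct(residual, prediction):
    return [[max(0, min(255, r + p)) for r, p in zip(rrow, prow)]
            for rrow, prow in zip(residual, prediction)]

residual = [[-20, 5], [300, -7]]   # toy 2x2 stand-in for a 32x32 block
prediction = [[10, 10], [10, 10]]
print(reconstruct(residual, prediction))  # [[0, 15], [255, 3]]
```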
Deblocking Engine
To split a 64×64 coding unit (CU) deblocking process into four M×M (e.g., 32×32) deblocking tasks, some non-standard deblocking order should be introduced and supported by the deblocking engine. As shown in
Referring to the embodiment 2100 of
With respect to
Due to the current deblocking order defined in HEVC, the deblocking process of the 64×64 coding unit (CU) cannot be evenly split into four M×M (e.g., 32×32) sub-treeblock deblocking processes. We may also modify the current N×N (e.g., 64×64) CU deblocking order to make it splitting-friendly.
Referring to the embodiment 2300 of
As may be seen with respect to various of the previous diagrams and/or embodiments, a video decoding system may be implemented employing an M×M (e.g., 32×32) sub-treeblock pipeline. In certain situations, a splitter may be implemented to split a 64×64 treeblock into four M×M (e.g., 32×32) sub-treeblock tasks. Moreover, such an architecture may be provisioned to provide for the splitting of N×N (e.g., 64×64) intra-prediction into four respective M×M (e.g., 32×32) new intra-prediction blocks. An M×M (e.g., 32×32) spatial predictor may be implemented to support such new M×M (e.g., 32×32) intra-prediction. Also, splitting may be effectuated with respect to the deblocking of an N×N (e.g., 64×64) coding unit (CU) into four M×M (e.g., 32×32) deblocking tasks.
Referring to the embodiment 2500 of
On the other hand, after the wide deployment of 1920×1080 high definition (HD) video, people have begun to consider broadcasting higher resolution video such as 4k×2k or even 8k×4k. MPEG and ITU have already launched a collaborative project to design the next generation video coding standard, HEVC, to support 4k×2k and even 8k×4k applications. Such high resolutions will double or quadruple the current on-chip memory requirement for the top (row) context, which becomes a concern for highly efficient decoder design.
An alternative vertical macro-block scan order is proposed. With the proposed vertical macro-block scan order, the required context memory buffer size is halved, and hence the chip size is largely reduced, as is the cost.
Referring to the embodiment 2600 of
In a macro-block pipelined decoder implementation with this vertical scan order, the left column context is stored on chip instead of the top row context. Since the maximum vertical picture size is roughly half of the horizontal size, the on-chip memory requirement for the context buffer is roughly halved. As may be seen, such intra-prediction and/or motion vector prediction may be performed from the left bottom portion of the macro-block.
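As a worked check of the halving claim, the following sketch compares the context storage needed for one full picture row (horizontal scan) against one full picture column (vertical scan); the bytes-per-4×4-unit constant is hypothetical and chosen only to make the comparison concrete.

```python
# Worked check of the halving claim: a horizontal (row) scan keeps context for
# one full picture row; the proposed vertical (column) scan keeps one full
# column. The bytes-per-4x4-unit constant is hypothetical, chosen only to
# make the comparison concrete.

def context_buffer_bytes(picture_w, picture_h, vertical_scan,
                         bytes_per_4x4=16):
    extent = picture_h if vertical_scan else picture_w
    return (extent // 4) * bytes_per_4x4

w, h = 1920, 1080
print(context_buffer_bytes(w, h, vertical_scan=False))  # 7680 B (top row)
print(context_buffer_bytes(w, h, vertical_scan=True))   # 4320 B (left column)
```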
Referring to method 2700 of
The method 2700 also operates by operating a first decoding engine of a video decoder for receiving the plurality of STBs via a first of the plurality of pathways and for performing a first at least one video decoding operation thereby generating a first at least one intermediate decoded resultant, as shown in a block 2740. The method 2700 also operates by operating a second decoding engine of the video decoder for receiving the plurality of STBs via a second of the plurality of pathways and for performing a second at least one video decoding operation, in parallel with the first decoding engine performing the first at least one video decoding operation, thereby generating a second at least one intermediate decoded resultant, as shown in a block 2750. The method 2700 also operates by combining the first at least one intermediate decoded resultant and the second at least one intermediate decoded resultant thereby generating at least one decoded resultant, as shown in a block 2760.
It is also noted that the various operations and functions as described with respect to various methods herein may be performed within a communication device, such as using a baseband processing module and/or a processing module implemented therein and/or other component(s) therein.
As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for the corresponding term and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “operable to” or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
As may also be used herein, the terms “processing module”, “processing circuit”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
The present invention may have also been described, at least in part, in terms of one or more embodiments. An embodiment of the present invention is used herein to illustrate the present invention, an aspect thereof, a feature thereof, a concept thereof, and/or an example thereof. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process that embodies the present invention may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
While such circuitries in the above described figure(s) may include transistors, such as field effect transistors (FETs), as one of ordinary skill in the art will appreciate, such transistors may be implemented using any type of transistor structure including, but not limited to, bipolar, metal oxide semiconductor field effect transistors (MOSFET), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors.
Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
The term “module” is used in the description of the various embodiments of the present invention. A module includes a processing module, a functional block, hardware, and/or software stored on memory for performing one or more functions as may be described herein. Note that, if the module is implemented via hardware, the hardware may operate independently and/or in conjunction with software and/or firmware. As used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
While particular combinations of various functions and features of the present invention have been expressly described herein, other combinations of these features and functions are likewise possible. The present invention is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.