A data processing apparatus capable of performing, in a short time, processing in units of block data that uses processing results of other block data is provided, wherein a deblocking filter performs horizontal filtering processing and vertical filtering processing in parallel on first block data and second block data, obtained by dividing restructured picture data into two, by adjusting their dependency relation.
|
7. An image processing method, comprising:
decoding a bit stream to generate decoded image data;
performing horizontal filtering, in parallel, on vertical block boundaries of the decoded image data to generate filtered image data;
controlling the horizontal filtering so as not to process pixel data of pixels located on a vertical edge of an adjacent block of the image data;
performing vertical filtering, in parallel, on horizontal block boundaries of the filtered image data; and
controlling the vertical filtering so as not to process pixel data of pixels located on a horizontal edge of an adjacent block of the filtered image data,
wherein the horizontal filtering and the vertical filtering are performed in parallel to process pixel data of a plurality of data blocks within a fixed-size block as a processing unit in decoding.
6. An image processing apparatus, comprising:
circuitry configured to
decode a bit stream to generate decoded image data;
perform horizontal filtering, in parallel, on vertical block boundaries of the decoded image data to generate filtered image data;
control the horizontal filtering so as not to process pixel data of pixels located on a vertical edge of an adjacent block of the image data;
perform vertical filtering, in parallel, on horizontal block boundaries of the filtered image data; and
control the vertical filtering so as not to process pixel data of pixels located on a horizontal edge of an adjacent block of the filtered image data,
wherein the horizontal filtering and the vertical filtering are performed in parallel to process pixel data of a plurality of data blocks within a fixed-size block as a processing unit in decoding.
1. An image processing apparatus for performing processing on a first block for defining a first two-dimensional image and on a second block for defining a second two-dimensional image adjacent to the first two-dimensional image, comprising:
a first processing means for performing:
horizontal filtering processing on first pixel data in the first block, and
vertical filtering processing on second pixel data in the first block;
a second processing means for performing:
horizontal filtering processing on third pixel data in the first block,
horizontal filtering processing on fourth pixel data in the second block,
vertical filtering processing on fifth pixel data in the second block using the third pixel data in the first block after the horizontal filtering processing on the third pixel data in the first block, and
vertical filtering processing on sixth pixel data in the second block by using the fourth pixel data in the second block after the horizontal filtering processing on the fourth pixel data in the second block,
wherein the second processing means and the first processing means process in parallel; and
a third processing means for performing:
horizontal filtering processing on the fifth pixel data in the second block, and
vertical filtering processing on the first pixel data of the first block using the fifth pixel data in the second block after the horizontal filtering processing on the fifth pixel data in the second block and the vertical filtering processing by the first processing means.
2. An image processing apparatus as set forth in
a memory for storing the third pixel data in the first block before the horizontal filtering processing on the third pixel data in the first block and for storing the fifth pixel data in the second block before the horizontal filtering processing on the fifth pixel data in the second block;
wherein the second processing means performs the vertical filtering processing by reading the third pixel data in the first block from the memory; and
wherein the third processing means performs the horizontal filtering processing by reading the fifth pixel data in the second block from the memory.
3. An image processing apparatus as set forth in
a motion prediction compensation means for generating prediction image data of image data to be encoded by performing motion vector searching in reference image data;
an orthogonal transformation quantization means for performing orthogonal transformation processing and quantization processing on a difference of the prediction image data and the image data to be encoded and generating orthogonal image data;
an inverse orthogonal transformation quantization means for performing inverse quantization processing and inverse orthogonal transformation processing on the orthogonal image data and generating inverse orthogonal image data; and
a restructuring means for generating restructured image data by using prediction image data and the inverse orthogonal image data;
wherein the first processing means, the second processing means, and the third processing means perform horizontal filtering processing and vertical filtering processing on the first block and the second block in the restructured image data and generate the reference image data based on the results.
4. An image processing method performed by a data processing apparatus, comprising a memory, a first processing means, and a second processing means, for performing processing on a first block for defining a first two-dimensional image and a second block for defining a second two-dimensional image adjacent to the first two-dimensional image, the method comprising:
performing, by the first processing means, horizontal filtering processing on first pixel data in the first block and vertical filtering processing on second pixel data in the first block; and
performing, by the second processing means, horizontal filtering processing on third pixel data in the first block, horizontal filtering processing on fourth pixel data in the second block, vertical filtering processing on fifth pixel data in the second block by using the third pixel data in the first block after the horizontal filtering processing on the third pixel data in the first block, and vertical filtering processing on sixth pixel data in the second block using the fourth pixel data in the second block after the horizontal filtering processing on the fourth pixel data in the second block,
wherein the first processing means and the second processing means process in parallel; and
performing horizontal filtering processing on the fifth pixel data in the second block, and vertical filtering processing on the first pixel data of the first block using the fifth pixel data in the second block after the horizontal filtering processing on the fifth pixel data in the second block and processing results of the vertical filtering processing on the fifth pixel data in the second block.
5. A method comprising:
a first step for performing horizontal filtering processing on first pixel data in a first block and vertical filtering processing on second pixel data in the first block; and
a second step for performing horizontal filtering processing on third pixel data in the first block, horizontal filtering processing on fourth pixel data in a second block, vertical filtering processing on fifth pixel data in the second block by using the third pixel data in the first block after the horizontal filtering processing on the third pixel data in the first block, and vertical filtering processing on sixth pixel data in the second block using the fourth pixel data in the second block after the horizontal filtering processing on the fourth pixel data in the second block,
wherein the first step and the second step process in parallel; and
a third step for performing horizontal filtering processing on the fifth pixel data in the second block, and vertical filtering processing on the first pixel data of the first block using the fifth pixel data in the second block after the horizontal filtering processing on the fifth pixel data in the second block and processing results of the vertical filtering processing on the fifth pixel data in the second block.
|
This is a divisional reissue of application Ser. No. 13/468,853, filed May 10, 2012, which is a reissue of U.S. Pat. No. 7,715,647, which issued from application Ser. No. 11/300,310, filed Dec. 15, 2005, and contains subject matter related to Japanese Patent Application JP 2004-364437 filed in the Japanese Patent Office on Dec. 16, 2004, the entire contents of which are incorporated herein by reference. More than one reissue application has been filed for the reissue of U.S. Pat. No. 7,715,647. The reissue applications are application Ser. Nos. 13/934,532 (the present application) filed Jul. 3, 2013 and 13/468,853 filed May 10, 2012, wherein reissue application Ser. No. 13/934,532 is a divisional reissue of application Ser. No. 13/468,853.
1. Field of the Invention
The present invention relates to a data processing apparatus and an image processing apparatus for performing processing in units of block data by using a processing result of other block data, and their methods and programs.
2. Description of the Related Art
In recent years, apparatuses based on the MPEG (Moving Picture Experts Group) and other methods, which handle image data digitally and utilize redundancy peculiar to image information for highly efficient transmission and accumulation of information by compression based on discrete cosine transformation or other orthogonal transformation and motion compensation, have become widespread both in distribution of information by broadcasting stations, etc. and in reception of information in general households.
The MPEG2 and MPEG4 methods were followed by a proposal for an encoding method called MPEG4/AVC (Advanced Video Coding).
In an encoding apparatus of the MPEG4/AVC method, deblock filtering processing is performed on restructured image data in prediction encoding to generate reference image data to be used in the next prediction encoding.
In the deblock filtering processing, filtering processing is performed on the reconfigured image data in units of macro blocks MB in the horizontal direction and in the vertical direction in turn.
In that case, filtering processing on each macro block MB depends on a filtering processing result of the macro block MB at a corresponding position above the macro block MB in the vertical direction.
The above explained deblock filtering processing involves an enormous amount of calculation and therefore requires a long time in encoding processing.
However, parallel processing is difficult in the deblock filtering processing due to the dependency relation between macro blocks MB as explained above, so that it is difficult to shorten the processing time.
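The macro-block dependency described above can be modeled with a small sketch (the filters here are hypothetical, simplified placeholders, not the actual MPEG4/AVC filter equations): vertical filtering of each macro block consumes the already-filtered macro block directly above it, which forces a naive implementation to process rows strictly top to bottom.

```python
# Hypothetical sketch of the macro-block dependency that hinders
# naive parallelism in deblock filtering.

def horizontal_filter(mb):
    # Placeholder: smoothing across vertical block boundaries.
    return [p + 1 for p in mb]

def vertical_filter(mb, mb_above):
    # Placeholder: smoothing across the horizontal boundary needs the
    # already-filtered macro block directly above (None for the top row).
    offset = mb_above[0] if mb_above else 0
    return [p + offset for p in mb]

def deblock_serial(grid):
    """grid[r][c] is one macro block; rows must run top to bottom."""
    done = [[None] * len(grid[0]) for _ in grid]
    for r, row in enumerate(grid):
        for c, mb in enumerate(row):
            hf = horizontal_filter(mb)
            above = done[r - 1][c] if r > 0 else None
            done[r][c] = vertical_filter(hf, above)
    return done
```

Because `vertical_filter` at row `r` reads `done[r - 1][c]`, the rows cannot simply be handed to independent processors without reordering the work, which is the problem the embodiment addresses.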
Systems for performing processing in units of block data using a processing result of other block data have a similar disadvantage.
It is desired to provide a data processing apparatus and an image processing apparatus capable of reducing processing time of processing performed in units of block data by using a processing result of other block data, and their methods and programs.
To solve the above disadvantages of the related art and attain the above object, according to a first invention, there is provided a data processing apparatus for performing predetermined processing on element data of a first block and a second block comprising: when processing on element data in the first block includes first processing not using the element data after subjected to the predetermined processing in the second block and second processing using first element data as a part of element data after subjected to the predetermined processing in the second block, and processing on the element data in the second block includes third processing not using the element data after subjected to the predetermined processing in the first block and fourth processing using second element data as a part of element data after subjected to the predetermined processing in the first block; a first processing means for performing the first processing; a second processing means for performing the third processing, fifth processing for generating the second element data in the first processing, and the fourth processing using the second element data obtained by the fifth processing in parallel with processing of the first processing means; a third processing means for performing the second processing by using the first element data obtained at least by one of the third processing and the fourth processing in the second processing means; and a fourth processing means for combining results of the first processing by the first processing means, results of the third processing and the fourth processing in the second processing means, and results of the second processing by the third processing means and outputting processing results of the first block and the second block.
According to a second invention, there is provided an image processing apparatus for performing filtering processing on a first block for defining a predetermined two-dimensional image and a second block for defining a two-dimensional image adjacent to the two-dimensional image, comprising a first processing means, a second processing means and a third processing means for each of the first block and the second block; when pixel data in the block is subjected to horizontal filtering processing using other pixel data at a different position in the horizontal direction but at the same position in the vertical direction and vertical filtering processing using other pixel data at a different position in the vertical direction but at the same position in the horizontal direction in turn, the horizontal filtering processing of the first block does not use pixel data of the second block, the vertical filtering processing of the first block uses second pixel data after subjected to the horizontal filtering processing at the same position in the horizontal direction in the second block for first pixel data as a part of the first block and pixel data in the second block is not used for other third pixel data, the horizontal filtering processing of the second block does not use pixel data of the first block, and the vertical filtering processing of the second block uses fifth pixel data after subjected to the horizontal filtering processing at the same first position in the horizontal direction in the first block for fourth pixel data as a part of the second block, and pixel data in the first block is not used for other sixth pixel data; wherein the first processing means performs the horizontal filtering processing on pixel data in the first block and the vertical filtering processing on the third pixel data in the first block in turn; the second processing means performs the horizontal filtering processing on the fifth pixel data in the first block, the horizontal filtering 
processing on pixel data in the second block, the vertical filtering processing on the fourth pixel data in the second block using the fifth pixel data after subjected to the horizontal filtering processing, and the vertical filtering processing on the sixth pixel data in the second block by using pixel data after subjected to the horizontal filtering processing in the second block in parallel with processing by the first processing means; and the third processing means performs the horizontal filtering processing on the second pixel data, and the vertical filtering processing on the first pixel data of the first block using the second pixel data after subjected to the horizontal filtering processing and processing results of the vertical filtering processing by the first processing means.
An effect of the image processing apparatus of the second invention is as below.
First, the first processing means performs the horizontal filtering processing on pixel data in the first block and the vertical filtering processing on the third pixel data in the first block in turn.
Also, the second processing means performs the horizontal filtering processing on the fifth pixel data in the first block, the horizontal filtering processing on pixel data in the second block, the vertical filtering processing on the fourth pixel data in the second block using the pixel data after subjected to the horizontal filtering processing, and the vertical filtering processing on the sixth pixel data in the second block using pixel data after subjected to the horizontal filtering processing in the second block in parallel with processing by the first processing means.
Next, the third processing means performs the horizontal filtering processing on the second pixel data and the vertical filtering processing on the first pixel data of the first block by using the second pixel data after subjected to the horizontal filtering processing and processing results of the vertical filtering processing by the first processing means.
According to a third invention, there is provided a data processing method performed by a data processing apparatus for performing predetermined processing on element data of a first block and a second block, including when processing on element data in the first block includes first processing not using the element data after subjected to the predetermined processing in the second block and second processing using first element data as a part of element data after subjected to the predetermined processing in the second block, and processing on the element data in the second block includes third processing not using the element data after subjected to the predetermined processing in the first block and fourth processing using second element data as a part of element data after subjected to the predetermined processing in the first block; a first step for performing the first processing, the third processing, fifth processing for generating the second element data in the first processing, and the fourth processing using the second element data obtained by the fifth processing in parallel; a second step for performing the second processing by using the first element data obtained at least by the third processing and the fourth processing obtained in the first step; and a third step for combining results of the first processing obtained in the first step, results of the third processing and the fourth processing, and results of the second processing obtained in the second step and outputting processing results of the first block and the second block.
According to a fourth invention, there is provided an image processing method performed by a data processing apparatus for performing filtering processing on a first block for defining a predetermined two-dimensional image and a second block for defining a two-dimensional image adjacent to the two-dimensional image, including when, for each of the first block and the second block, pixel data in the block is subjected to horizontal filtering processing using other pixel data at a different position in the horizontal direction but at the same position in the vertical direction and vertical filtering processing using other pixel data at a different position in the vertical direction but at the same position in the horizontal direction in turn, the horizontal filtering processing of the first block does not use pixel data of the second block, the vertical filtering processing of the first block uses second pixel data after subjected to the horizontal filtering processing at the same position in the horizontal direction in the second block for first pixel data as a part of the first block and pixel data in the second block is not used for other third pixel data, the horizontal filtering processing of the second block does not use pixel data of the first block, and the vertical filtering processing of the second block uses fifth pixel data after subjected to the horizontal filtering processing at the same first position in the horizontal direction in the first block for fourth pixel data as a part of the second block, and pixel data in the first block is not used for other sixth pixel data; a first step for performing the horizontal filtering processing on pixel data in the first block and the vertical filtering processing on the third pixel data in the first block in turn; a second step for performing the horizontal filtering processing on the fifth pixel data in the first block, the horizontal filtering processing on pixel data in the second block, the vertical 
filtering processing on the fourth pixel data in the second block by using the fifth pixel data after subjected to the horizontal filtering processing, and the vertical filtering processing on the sixth pixel data in the second block using pixel data after subjected to the horizontal filtering processing in the second block in parallel with processing in the first step; and a third step for performing the horizontal filtering processing on the second pixel data, and the vertical filtering processing on the first pixel data of the first block using the second pixel data after subjected to the horizontal filtering processing and processing results of the vertical filtering processing obtained in the first step.
According to a fifth invention, there is provided a program executed by a data processing apparatus for performing predetermined processing on element data of a first block and a second block, by which the data processing apparatus performs: when processing on element data in the first block includes first processing not using the element data after subjected to the predetermined processing in the second block and second processing using first element data as a part of element data after subjected to the predetermined processing in the second block, and processing on the element data in the second block includes third processing not using the element data after subjected to the predetermined processing in the first block and fourth processing using second element data as a part of element data after subjected to the predetermined processing in the first block; a first step for performing the first processing, the third processing, fifth processing for generating the second element data in the first processing, and the fourth processing using the second element data obtained by the fifth processing in parallel; a second step for performing the second processing by using the first element data obtained at least by the third processing and the fourth processing obtained in the first step; and a third step for combining results of the first processing obtained in the first step, results of the third processing and the fourth processing, and results of the second processing obtained in the second step and outputting processing results of the first block and the second block.
According to the present invention, a data processing apparatus and an image processing apparatus capable of reducing processing time of processing performed in units of block data by using a processing result of other block data, and their methods and programs can be provided.
According to a sixth invention, there is provided a data processing apparatus for performing predetermined processing on element data of a first block and a second block, comprising: when processing on element data in said first block includes first processing not using said element data after subjected to the predetermined processing in said second block and second processing using first element data as a part of element data after subjected to said predetermined processing in said second block, and processing on said element data in said second block includes third processing not using said element data after subjected to said predetermined processing in said first block and fourth processing using second element data as a part of element data after subjected to said predetermined processing in said first block, a first processing circuit for performing said first processing; a second processing circuit for performing said third processing, fifth processing for generating said second element data in said first processing, and said fourth processing using said second element data obtained by said fifth processing in parallel with processing of said first processing circuit; a third processing circuit for performing said second processing by using said first element data obtained at least by one of said third processing and said fourth processing in said second processing circuit; and a fourth processing circuit for combining results of said first processing by said first processing circuit, results of said third processing and said fourth processing in said second processing circuit, and results of said second processing by said third processing circuit and outputting processing results of said first block and said second block.
These and other objects and features of the present invention will become clearer from the following description of the preferred embodiments given with reference to the attached drawings, in which:
Below, a communication system 1 of the present embodiment will be explained.
First, correspondence between components of the present embodiment and those of the present invention will be explained.
As shown in
The encoding apparatus 2 corresponds to the data processing apparatus and the encoding apparatus of the present invention.
In the communication system 1, the encoding apparatus 2 on the transmission side generates frame image data (a bit stream) compressed by discrete cosine transformation, Karhunen-Loeve transformation, or other orthogonal transformation and motion compensation, modulates the frame image data, and then transmits it via a transmission medium, such as satellite broadcasting, a cable TV network, a telephone line network, or a cellular phone network.
On the receiving side, after demodulating the image signal received by the decoding apparatus 3, frame image data expanded by inverse transformation of the orthogonal transformation used at encoding and by motion compensation is generated and used.
Note that the transmission medium may be a recording medium, such as an optical disk, magnetic disk and semiconductor memory.
The decoding apparatus 3 shown in
Below, the encoding apparatus 2 shown in
As shown in
In the encoding apparatus 2, the deblocking filter 34 adjusts dependency relations of horizontal filtering processing HF and vertical filtering processing VF for first block data BPIC1 and second block data BPIC2, obtained by dividing restructured picture data RPIC from the adding circuit 33 into two, so that the horizontal filtering processing HF and the vertical filtering processing VF of the first block data BPIC1 and those of the second block data BPIC2 are performed in parallel.
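The split-and-reorder idea can be illustrated with a minimal sketch (all helper names here are hypothetical, and the real circuit-level schedule of the embodiment is more involved): each half's horizontal filtering has no cross-half dependency and may run concurrently, while vertical filtering of the rows touching the half boundary is deferred until the neighboring half's horizontally filtered boundary row is available.

```python
from concurrent.futures import ThreadPoolExecutor

def hf(rows):
    # Placeholder horizontal filtering: independent per half.
    return [[p + 1 for p in row] for row in rows]

def vf(rows, boundary_row=None):
    # Placeholder vertical filtering; the top row of the lower half
    # additionally needs the filtered bottom row of the upper half.
    out = [list(r) for r in rows]
    if boundary_row is not None:
        out[0] = [p + boundary_row[0] for p in out[0]]
    return out

def deblock_parallel(picture):
    half = len(picture) // 2
    upper, lower = picture[:half], picture[half:]
    # The two horizontal-filtering passes have no mutual dependency,
    # so they run on separate workers.
    with ThreadPoolExecutor(max_workers=2) as ex:
        f_up = ex.submit(hf, upper)
        f_lo = ex.submit(hf, lower)
        hf_up, hf_lo = f_up.result(), f_lo.result()
    # Vertical filtering: only the lower half's boundary row consumes
    # the upper half's horizontally filtered result.
    return vf(hf_up) + vf(hf_lo, boundary_row=hf_up[-1])
```

For example, `deblock_parallel([[0, 0], [1, 1], [2, 2], [3, 3]])` runs the two `hf` calls concurrently before the single boundary-dependent `vf` step.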
Below, components of the encoding apparatus 2 will be explained.
[A/D Conversion Circuit 22]
The A/D conversion circuit 22 converts an input original image signal S10 composed of an analog luminance signal Y and color-difference signals Pb and Pr to digital picture data S22 and outputs the same to the screen relocating circuit 23 and an RGB conversion circuit 51.
[Screen Relocating Circuit 23]
The screen relocating circuit 23 relocates frame data in the picture data S22 input from the A/D conversion circuit 22 to be in an encoding order in accordance with the GOP (group of pictures) structure formed by picture types I, P and B of the frame data so as to obtain original image data S23, and outputs the same to the computing circuit 24, motion prediction compensation circuit 42 and intra-prediction circuit 41.
[Computing Circuit 24]
The computing circuit 24 generates image data S24 indicating a difference between the original image data S23 and prediction image data input from the selection circuit 44 and outputs the same to the orthogonal transformation circuit 25.
[Orthogonal Transformation Circuit 25]
The orthogonal transformation circuit 25 performs orthogonal transformation, such as discrete cosine transformation and Karhunen-Loeve transformation, on the image data S24 to generate image data (for example, a DCT coefficient) S25 and outputs the same to the quantization circuit 26.
[Quantization Circuit 26]
The quantization circuit 26 performs quantization on the image data S25 based on a quantization scale QS input from the rate control circuit 32 so as to generate image data S26 (a quantized DCT coefficient) and outputs the same to the reversible encoding circuit 27 and the inverse quantization circuit 29.
[Reversible Encoding Circuit 27]
The reversible encoding circuit 27 stores in the buffer memory 28 image data obtained by performing variable length encoding or arithmetic coding on the image data S26.
At this time, the reversible encoding circuit 27 stores, in header data, etc., a motion vector MV input from the motion prediction compensation circuit 42 or a differential motion vector thereof, identification data of reference image data, and an intra-prediction mode input from the intra-prediction circuit 41.
[Buffer Memory 28]
Image data stored in the buffer memory 28 is subjected to modulation, etc. and transmitted.
[Inverse Quantization Circuit 29]
The inverse quantization circuit 29 generates data obtained by performing inverse quantization on the image data S26 and outputs the same to the inverse orthogonal transformation circuit 30.
[Inverse Orthogonal Transformation Circuit 30]
The inverse orthogonal transformation circuit 30 performs inverse transformation of the orthogonal transformation performed in the orthogonal transformation circuit 25 on the data input from the inverse quantization circuit 29 to generate image data and outputs the same to the adding circuit 33.
[Adding Circuit 33]
The adding circuit 33 adds the (decoded) image data input from the inverse orthogonal transformation circuit 30 and prediction image data PI input from the selection circuit 44 to generate restructured picture data RPD and outputs the same to the deblocking filter 34.
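In essence, the reconstruction performed here is a per-pixel addition of the decoded residual and the prediction image, clipped to the valid pixel range (a simplified sketch with hypothetical names; the actual circuit operates on macro-block-sized data):

```python
def restructure(residual, prediction, max_val=255):
    # Restructured picture = decoded residual + prediction image,
    # clipped to the valid pixel range [0, max_val].
    return [[max(0, min(max_val, r + p))
             for r, p in zip(res_row, pre_row)]
            for res_row, pre_row in zip(residual, prediction)]
```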
[Deblocking Filter 34]
The deblocking filter 34 writes to the frame memory 31 image data obtained by removing block distortions from the restructured image data input from the adding circuit 33 as reference picture data R_PIC.
Note that, in the frame memory 31, for example, restructured image data of pictures to be subjected to motion prediction compensation processing by the motion prediction compensation circuit 42 and intra-prediction processing by the intra-prediction circuit 41 is written successively in units of macro blocks MB after processing.
[Rate Control Circuit 32]
The rate control circuit 32 generates a quantization scale QS, for example, based on image data read from the buffer memory 28 and outputs the same to the quantization circuit 26.
[Intra-Prediction Circuit 41]
The intra-prediction circuit 41 generates prediction image data PIi of a macro block MB to be processed for each of a plurality of prediction modes, such as an intra 4×4 mode and an intra 16×16 mode, and based thereon and the macro block MB to be processed in the original image data S23, generates index data COSTi to be an index of a coding amount of encoded data.
Then, the intra-prediction circuit 41 selects an intra-prediction mode, by which the index data COSTi becomes minimum.
The intra-prediction circuit 41 outputs the prediction image data PIi and index data COSTi generated in accordance with the finally selected intra-prediction mode to the selection circuit 44.
Also, when receiving a selection signal S44 indicating that the intra-prediction mode is selected, the intra-prediction circuit 41 outputs a prediction mode IPM indicating the finally selected intra-prediction mode to the reversible encoding circuit 27.
Note that even in the case of a macro block MB belonging to a P-slice or B-slice, the intra-prediction encoding by the intra-prediction circuit 41 is sometimes performed.
The intra-prediction circuit 41 generates index data COSTi, for example, based on the formula (1) below.
In the above formula (1), “i” indicates, for example, an identification number given to each block data having a size corresponding to the intra-prediction mode composing the macro block MB to be processed, and “x” becomes “1” in the case of the intra 16×16 mode and becomes “16” in the case of the intra 4×4 mode.
The intra-prediction circuit 41 calculates “SATD+header_cost(mode)” for all block data composing the macro block MB to be processed and calculates index data COSTi by adding them.
The “header_cost(mode)” is index data as an index of a coding amount of header data including a motion vector after encoding, identification data of reference image data, selected mode, quantization parameter (quantization scale), etc. A value of the “header_cost(mode)” varies in accordance with the prediction mode.
Also, “SATD” is index data as an index of a coding amount of differential image data between block data in the macro block MB to be processed and block data (prediction block data) determined in advance around the former block data.
In the present embodiment, the prediction image data PIi is regulated by a single or plurality of block data.
The “SATD” is, for example as shown in the formula (2), data obtained by summing the absolute values of the Hadamard transformation (Tran) of the difference between pixel data of the block data to be processed “Org” and the prediction block data “Pre”.
The pixels in the block data are specified by “s” and “t” in the formula (2).
Note that “SAD” shown in the formula (3) below may be used instead of the “SATD”.
Also, instead of the “SATD”, other index indicating a distortion and residual error, such as SSD regulated by the MPEG4 and AVC, may be used.
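The formulas (1) to (3) referenced above are not reproduced in this text (they appear to have been figures in the original). A plausible reconstruction from the surrounding description, reusing the text's own symbols (including its reuse of “i” as both the mode label and the block index, with x blocks per macro block), is:

```latex
% Hedged reconstruction; symbols follow the surrounding description.
\mathrm{COST}i = \sum_{i=1}^{x}\bigl(\mathrm{SATD} + \mathrm{header\_cost(mode)}\bigr) \tag{1}
\mathrm{SATD} = \sum_{s,t}\Bigl|\,\mathrm{Tran}\bigl(\mathrm{Org}(s,t) - \mathrm{Pre}(s,t)\bigr)\Bigr| \tag{2}
\mathrm{SAD} = \sum_{s,t}\bigl|\mathrm{Org}(s,t) - \mathrm{Pre}(s,t)\bigr| \tag{3}
```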
[Motion Prediction Compensation Circuit 42]
The motion prediction compensation circuit 42 generates index data COSTm along with inter-encoding based on luminance components of the macro block MB to be processed in the original image data S23 input from the screen relocating circuit 23.
The motion prediction compensation circuit 42 searches for a motion vector MV of the block data to be processed and, for each of a predetermined plurality of motion prediction compensation modes, generates prediction block data in units of block data regulated by the motion compensation mode, based on reference picture data R_PIC encoded in the past and stored in the frame memory 31.
A size of the block data and the reference picture data R_PIC are regulated, for example, by a motion prediction compensation mode.
A size of the block data is, for example, 16×16, 16×8, 8×16 or 8×8 pixels.
The motion prediction compensation circuit 42 determines a motion vector and reference picture data for each block data.
Note that an 8×8 sized partition may be further divided into 8×8, 8×4, 4×8 or 4×4 sub-partitions.
In the motion prediction compensation circuit 42, the motion prediction compensation modes are, for example, an inter 16×16 mode, inter 8×16 mode, inter 16×8 mode, inter 8×8 mode, inter 8×4 mode, inter 4×8 mode and inter 4×4 mode, and the respective sizes of the block data are 16×16, 8×16, 16×8, 8×8, 8×4, 4×8 and 4×4.
Also, for the respective sizes of the motion prediction compensation modes, a front prediction mode, a rear prediction mode and bidirectional prediction mode can be selected.
Here, the front prediction mode is a mode using previous image data in a display order as reference image data, the rear prediction mode is a mode using subsequent image data in the display order as reference image data, and the bidirectional prediction mode is a mode using previous and subsequent image data as reference image data.
In the present embodiment, a plurality of reference image data can be used in the motion prediction compensation processing in the motion prediction compensation circuit 42.
Also, the motion prediction compensation circuit 42 generates, for each of the motion prediction compensation modes, index data COSTm to be an index of a total coding amount of block data having a block size corresponding to that of the motion prediction compensation mode composing the macro block MB to be processed in the original image data S23.
Then, the motion prediction compensation circuit 42 selects the motion prediction compensation mode for which the index data COSTm becomes minimum.
Also, the motion prediction compensation circuit 42 generates prediction image data PIm obtained by selecting the motion prediction compensation mode.
The motion prediction compensation circuit 42 outputs to the selection circuit 44 the prediction image data PIm and index data COSTm generated in accordance with the finally selected motion prediction compensation mode.
Also, the motion prediction compensation circuit 42 outputs a motion vector generated in accordance with the selected motion prediction compensation mode or a differential motion vector of the motion vector and a prediction motion vector to the reversible encoding circuit 27.
Furthermore, the motion prediction compensation circuit 42 outputs a motion prediction compensation mode MEM indicating the selected motion prediction compensation mode to the reversible encoding circuit 27.
Also, the motion prediction compensation circuit 42 outputs identification data of reference image data (a reference frame) selected in the motion prediction compensation processing to the reversible encoding circuit 27.
The motion prediction compensation circuit 42 generates the index data COSTm, for example, based on the formula (4) below.
Also, in the formula (4), “i” indicates, for example, an identification number given to each block data having a size corresponding to the motion prediction compensation mode composing the macro block MB to be processed.
Namely, the motion prediction compensation circuit 42 calculates “SATD+header_cost(mode)” for all block data composing the macro block MB to be processed and calculates index data COSTm by adding them.
The “header_cost(mode)” is index data as an index of a coding amount of header data including motion vector after encoding, identification data of reference image data, selected mode, quantization parameter (quantization scale), etc. A value of the “header_cost(mode)” varies in accordance with the motion prediction compensation mode.
Also, “SATD” is index data to be an index of a code amount of differential image data between block data in the macro block MB to be processed and block data (prediction block data) in reference image data specified by a motion vector MV.
In the present embodiment, the prediction image data PIm is regulated by a single or plurality of reference block data.
The “SATD” is, for example as shown in the formula (5) below, data obtained by summing the absolute values of the Hadamard transformation (Tran) of the difference between pixel data of the block data “Org” to be processed and the prediction block data “Pre”.
The pixels in the block data are specified by “s” and “t” in the formula (5).
Note that “SAD” shown in the formula (6) below may be used instead of the “SATD”.
Also, instead of the “SATD”, other index indicating a distortion and residual error, such as SSD regulated by the MPEG4 and AVC, may be used.
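The SATD and SAD costs described above can be sketched in code. This is an illustrative sketch only: the 4×4 Hadamard transform and the function names (`hadamard4`, `satd4`, `sad4`) are assumptions following common AVC practice, not taken from the document.

```python
def hadamard4(block):
    """Apply a 4x4 Hadamard transform (rows then columns) to a 4x4 block."""
    H = [[1,  1,  1,  1],
         [1,  1, -1, -1],
         [1, -1, -1,  1],
         [1, -1,  1, -1]]
    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]
    return mul(mul(H, block), H)

def satd4(org, pre):
    """SATD: sum of absolute values of the Hadamard-transformed difference."""
    diff = [[org[s][t] - pre[s][t] for t in range(4)] for s in range(4)]
    return sum(abs(v) for row in hadamard4(diff) for v in row)

def sad4(org, pre):
    """SAD: plain sum of absolute differences (the alternative index)."""
    return sum(abs(org[s][t] - pre[s][t]) for s in range(4) for t in range(4))
```

Summing `satd4(...) + header_cost` over all blocks of a macro block then yields an index data value in the sense of the text.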
[Selection Circuit 44]
The selection circuit 44 specifies the smaller of the index data COSTm input from the motion prediction compensation circuit 42 and the index data COSTi input from the intra-prediction circuit 41 and outputs the prediction image data PIm or PIi corresponding to the specified index data to the computing circuit 24 and the adding circuit 33.
Also, when the index data COSTm is smaller, the selection circuit 44 outputs to the motion prediction compensation circuit 42 a selection signal S44 indicating that inter encoding (motion prediction compensation mode) is selected.
On the other hand, when the index data COSTi is smaller, the selection circuit 44 outputs to the intra-prediction circuit 41 a selection signal S44 indicating that intra encoding (intra-prediction mode) is selected.
Note that, in the present embodiment, all index data COSTi and COSTm generated by the intra-prediction circuit 41 and the motion prediction compensation circuit 42 may be output to the selection circuit 44, and the minimum index data may be specified in the selection circuit 44.
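The selection logic above amounts to a minimum-cost choice. A minimal sketch, with illustrative names not taken from the document:

```python
def select_prediction(cost_intra, pi_intra, cost_inter, pi_inter):
    """Pick the prediction whose index data (cost) is smaller.

    Returns (selected prediction image data, "intra" or "inter"),
    mirroring the selection signal S44 described in the text.
    """
    if cost_inter < cost_intra:
        return pi_inter, "inter"  # S44 indicates inter encoding selected
    return pi_intra, "intra"      # S44 indicates intra encoding selected
```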
Below, the deblocking filter 34 will be explained in detail.
[Deblocking Filter 34]
As shown in
The buffer memory 70 stores restructured picture data RPIC input from the adding circuit 33.
The filter circuit 72 mainly performs filtering processing in the horizontal direction and in the vertical direction using first block data BPIC1 as one of the restructured picture data RPIC divided into two as shown in
The filter circuit 74 mainly performs filtering processing in the horizontal direction and in the vertical direction using second block data BPIC2 as the other of the restructured picture data RPIC divided into two as shown in
The buffer memory 76 stores results of the filtering processing by the filter circuit 72 and the filter circuit 74.
The coupling circuit 78 combines the results of the filtering processing stored in the buffer memory 76 to generate reference picture data R_PIC after deblocking filtering processing and writes the same to the frame memory 31.
Below, deblocking filtering processing performed by the deblocking filter 34 will be explained.
The deblocking filter 34 performs deblock filtering processing on each of a luminance block LB and a color difference block CB composing a macro block MB in units of macro blocks MB.
In the present embodiment, deblocking filtering processing of a luminance block LB indicates deblocking filtering processing using pixel data in the luminance block LB.
Also, deblock filtering processing of a color difference block CB indicates deblock filtering processing using pixel data in the color difference block CB.
The deblocking filter 34 successively performs filtering processing in the horizontal direction and that in the vertical direction on luminance block LB and color difference block CB, respectively.
First, filtering processing of a luminance block LB will be explained.
As indicated by shading in
The deblocking filter 34 performs the vertical filtering processing VF on pixel data to be processed by predetermined calculation using a predetermined number of pixel data positioned in the vertical direction.
Below, the vertical processing VF of a luminance block LB performed by the deblocking filter 34 will be explained specifically.
The deblocking filter 34 defines filtering modes (BS0 to BS4) based on a relation between the macro block MB to be processed and the macro block MB above it.
In the respective horizontal filtering processing HF and vertical filtering processing VF, the deblocking filter 34 selects the filtering mode BS4 when satisfying a condition that the predetermined number of pixel data to be used in the filtering processing belongs to a macro block MB to be subjected to intra encoding and the block composed of the predetermined number of pixel data is positioned on a boundary of the macro block MB.
When the condition is not satisfied, the deblocking filter 34 selects one of the filtering modes BS0 to BS3 in accordance with a predetermined condition.
Note that the filtering mode BS0 is a mode for not performing filtering processing, and the filtering mode BS4 is a mode for performing stronger filtering processing comparing with those in the case of the filtering modes BS1 to BS3.
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data p2 in a macro block MB1 as shown in
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data p1 in a macro block MB1 as shown in
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data p0 in a macro block MB1 as shown in
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data q0 in a macro block MB2 as shown in
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data q1 in a macro block MB2 as shown in
When the filtering mode BS4 is selected, the deblocking filter 34 performs vertical filtering processing VF on pixel data q2 in a macro block MB2 as shown in
As explained above, in the vertical filtering processing VF when the filtering mode BS4 is selected, a result of the horizontal filtering processing HF of lines p0 and p1 of another macro block MB positioned above and adjacent to the macro block MB is used, and pixel data other than that of the lines p0 and p1 of that macro block MB is not used.
When a filter mode is selected from BS1 to BS3, the deblocking filter 34 performs vertical filtering processing VF on the pixel data p1 in the macro block MB1 as shown in
When a filter mode is selected from BS1 to BS3, the deblocking filter 34 performs vertical filtering processing VF on the pixel data p0 in the macro block MB1 as shown in
When a filter mode is selected from BS1 to BS3, the deblocking filter 34 performs vertical filtering processing VF on the pixel data q0 in the macro block MB2 as shown in
When a filter mode is selected from BS1 to BS3, the deblocking filter 34 performs vertical filtering processing VF on the pixel data q1 in the macro block MB2 as shown in
As explained above, in the vertical filtering processing VF when a filtering mode is selected from BS1 to BS3, a result of the horizontal filtering processing HF of lines p0 and p1 in the adjacent macro block MB positioned above is used, and pixel data other than that of the lines p0 and p1 of that macro block MB is not used.
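The text does not reproduce the filter arithmetic itself. As an illustrative reference, the standard H.264/AVC strong luma filtering applied across one edge when BS4 is selected (and the usual threshold conditions hold) can be sketched as follows; the tap equations are those of the H.264/AVC standard, not quoted from this document:

```python
def strong_filter_bs4(p3, p2, p1, p0, q0, q1, q2, q3):
    """H.264/AVC strong luma filtering (bS = 4) across one block edge.

    p3..p0 lie on one side of the edge, q0..q3 on the other.
    Returns the filtered samples (p2', p1', p0', q0', q1', q2');
    p3 and q3 are read but never modified.
    """
    np0 = (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3
    np1 = (p2 + p1 + p0 + q0 + 2) >> 2
    np2 = (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3
    nq0 = (q2 + 2 * q1 + 2 * q0 + 2 * p0 + p1 + 4) >> 3
    nq1 = (q2 + q1 + q0 + p0 + 2) >> 2
    nq2 = (2 * q3 + 3 * q2 + q1 + q0 + p0 + 4) >> 3
    return np2, np1, np0, nq0, nq1, nq2
```

Note how each output on the p side depends on q-side samples (and vice versa): this is the cross-boundary dependency that the parallel processing described later must respect.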
Below, filtering processing of a color difference block CB will be explained.
In the case of a luminance block LB explained above, filtering processing is performed on 16×16 pixel data, while in the case of a color difference block CB, filtering processing is performed on 8×8 pixel data.
As indicated by shading in
The deblocking filter 34 performs the horizontal filtering processing HF on the pixel data to be processed by predetermined calculation using a predetermined number of pixel data positioned in the horizontal direction.
As explained above, in the horizontal filtering processing HF of a color difference block CB, pixel data of a macro block MB positioned above or below the macro block is not used.
Next, as indicated by shading in
The deblocking filter 34 performs predetermined calculation using a predetermined number of pixel data positioned in the vertical direction to perform the vertical filtering processing VF on the pixel data to be processed.
In the vertical filtering processing VF of a color difference block CB, as shown in
In the vertical filtering processing VF of a color difference block CB, as shown in
As explained above, in the vertical filtering processing VF of a color difference block CB, a result of horizontal filtering processing HF of lines p0 and p1 of an adjacent other macro block MB above the macro block MB is used, and pixel data other than that of the lines p0 and p1 of the macro block MB is not used.
As explained above, the horizontal filtering processing HF and the vertical filtering processing VF of a macro block MB have the characteristics below.
As explained by using
Also, as explained by using
Also, as shown in
Also, as shown in
The following is drawn from the characteristics above.
When dividing the restructured picture data RPIC shown in
Here, from the above characteristics, the following dependency relations exist.
(1) For a luminance block LB and a color difference block CB, as indicated by a mark A in
(2) As to a luminance block LB, as indicated by a mark B in
First, restructured picture data RPIC from the adding circuit 33 is written to the buffer memory 70.
Then, the filter circuit 72 and the filter circuit 74 read, from the buffer memory 70, a luminance block LB and a color difference block CB of each macro block MB adjacent to the block boundary L among the macro blocks composing the first block data BPIC1 and the second block data BPIC2 of the restructured picture data RPIC, and perform the processing below.
As shown in
Also, the filter circuit 72 follows the horizontal filtering processing HF1, as shown in
Then, the filter circuit 72 performs the above processing on the luminance block LB and color difference block CB of all macro blocks MB adjacent to the block boundary L in the first block data BPIC1.
Consequently, the luminance block LB becomes as shown in
Also, the filter circuit 72 performs horizontal filtering processing HF and vertical filtering processing VF of macro blocks MB not adjacent to the block boundary line L in the first block data BPIC1.
Also, in the filter circuit 74, as shown in
Also, in the filter circuit 74, as shown in
Continuously, in the filter circuit 74, as shown in
At this time, as shown in
In the vertical filtering processing VF2, as shown in
The filter circuit 74 performs the above processing on luminance blocks LB and color difference blocks CB of all macro blocks MB adjacent to the block boundary L in the second block data BPIC2.
Also, the filter circuit 74 performs the horizontal filtering processing HF and vertical filtering processing VF on macro blocks not adjacent to the block boundary L of the second block data BPIC2.
The processing in the filter circuit 72 and that in the filter circuit 74 explained above are performed in parallel.
Next, in the filter circuit 72 or filter circuit 74, as shown in
Continuously, in the filter circuit 72 or filter circuit 74, as shown in
The filter circuit 72 or the filter circuit 74 performs the above processing on the macro blocks MB adjacent to the block boundary L1 of the first block data BPIC1 and the second block data BPIC2 by using a macro block of the first block data BPIC1 as a macro block MB1 and using a macro block of the second block data BPIC2 as a macro block MB2.
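The two-circuit parallelism described above can be sketched as follows. This is a structural sketch only: the picture is split into an upper and a lower half (BPIC1, BPIC2), each half is filtered by its own worker in parallel, and the filtering across the half boundary runs afterwards, once the results it depends on exist. The filter functions are placeholders standing in for the actual filter taps; all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def horizontal_filter(half):
    # placeholder for horizontal filtering (vertical edges) of one half:
    # here it simply adds 1 to every sample so the effect is visible
    return [[v + 1 for v in row] for row in half]

def vertical_filter(half):
    # placeholder for vertical filtering (horizontal edges) within one half:
    # here it is the identity, returning a copy of the rows
    return [row[:] for row in half]

def boundary_filter(upper, lower):
    # placeholder for vertical filtering across the half boundary, which may
    # only run after the horizontal results of both adjacent rows exist
    return upper, lower

def deblock_parallel(picture):
    """Split the picture into two halves and filter them in parallel."""
    mid = len(picture) // 2
    halves = [picture[:mid], picture[mid:]]
    with ThreadPoolExecutor(max_workers=2) as pool:
        # the two "filter circuits" run on their halves concurrently
        hf = list(pool.map(horizontal_filter, halves))
        vf = list(pool.map(vertical_filter, hf))
    # cross-boundary pass, performed once both halves are ready
    upper, lower = boundary_filter(vf[0], vf[1])
    return upper + lower
```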
Below, an overall operation of the encoding apparatus 2 shown in
An image signal to be an input is converted to a digital signal by the A/D conversion circuit 22.
Next, in accordance with a GOP configuration of image compression information to be an output, frame image data is relocated by the screen relocating circuit 23, and original image data S23 obtained thereby is output to the computing circuit 24, the motion prediction compensation circuit 42 and the intra-prediction circuit 41.
Next, the computing circuit 24 detects a difference between the original image data S23 from the screen relocating circuit 23 and the prediction image data PI from the selection circuit 44 and outputs image data S24 indicating the difference to the orthogonal transformation circuit 25.
Next, the orthogonal transformation circuit 25 performs discrete cosine transformation, Karhunen-Loeve transformation or other orthogonal transformation on the image data S24 to generate image data (a DCT coefficient) S25 and outputs the same to the quantization circuit 26.
Next, the quantization circuit 26 quantizes the image data S25 and outputs image data (quantized DCT coefficient) S26 to the reversible encoding circuit 27 and the inverse quantization circuit 29.
Then, the reversible encoding circuit 27 performs reversible encoding, such as variable-length encoding or arithmetic encoding, on the image data S26 to generate image data S28 and accumulates the same in the buffer memory 28.
Also, the rate control circuit 32 controls a quantization rate in the quantization circuit 26 based on the image data read from the buffer memory 28.
Also, the inverse quantization circuit 29 performs inverse quantization on the image data S26 input from the quantization circuit 26 and outputs the result to the inverse orthogonal transformation circuit 30.
Then, the inverse orthogonal transformation circuit 30 performs inverse transformation of the transformation processing by the orthogonal transformation circuit 25 to generate image data and outputs the same to the adding circuit 33.
In the adding circuit 33, the image data from the inverse orthogonal transformation circuit 30 is added to the prediction image data PI from the selection circuit 44 to generate restructured image data, and the result is output to the deblocking filter 34.
In the deblocking filter 34, through the processing explained above, image data obtained by removing block distortions of the restructured image data is generated and written to the frame memory 31 as reference image data.
In the intra-prediction circuit 41, the intra-prediction processing explained above is performed and the result prediction image data PIi and index data COSTi are output to the selection circuit 44.
Also, in the motion prediction compensation circuit 42, a motion vector is generated by using reference picture data R_PIC, and the resulting prediction image data PIm and index data COSTm are output to the selection circuit 44.
Then, in the selection circuit 44, the smaller of the index data COSTm input from the motion prediction compensation circuit 42 and the index data COSTi input from the intra-prediction circuit 41 is specified, and the prediction image data PIm or PIi corresponding to the specified index data is output to the computing circuit 24 and the adding circuit 33.
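The forward path and local-decode loop narrated above can be sketched end to end. A plain scalar quantizer stands in here for the orthogonal transformation and quantization circuits, so the numbers are illustrative only; all names are assumptions, not the document's.

```python
QS = 4  # quantization scale (the rate control circuit would adjust this)

def encode_block(original, prediction):
    """Difference -> quantize, mirroring circuits 24, 25 and 26."""
    diff = [o - p for o, p in zip(original, prediction)]   # computing circuit 24
    return [round(d / QS) for d in diff]                   # circuits 25/26 (simplified)

def local_decode(quantized, prediction):
    """Inverse quantize -> add prediction, mirroring circuits 29, 30 and 33."""
    diff = [q * QS for q in quantized]                     # circuits 29/30 (simplified)
    return [d + p for d, p in zip(diff, prediction)]       # adding circuit 33

# the locally decoded (restructured) data approximates the original,
# with an error bounded by the quantization scale
org = [120, 121, 119, 118]
pred = [118, 118, 118, 118]
restructured = local_decode(encode_block(org, pred), pred)
```

The restructured data is what the deblocking filter 34 would then process before it is written to the frame memory 31 as reference data.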
Accordingly, as explained by using
In an encoding apparatus of the related art, the time required by deblocking filtering processing constituted a large portion of the overall encoding time, so that the processing time of the entire encoding processing can be largely reduced according to the encoding apparatus 2.
The present invention is not limited to the above embodiment.
In the embodiment explained above, deblocking filtering processing was explained as an example of the predetermined processing of the present invention. However, the present invention may be applied to other processing wherein the processing performed on element data in the first block includes first processing not using element data subjected to the predetermined processing in the second block and second processing using first element data as a part of the element data subjected to the predetermined processing in the second block, and the processing performed on element data in the second block includes third processing not using element data subjected to the predetermined processing in the first block and fourth processing using second element data as a part of the element data subjected to the predetermined processing in the first block.
Also, when a deblocking filter is included in the decoding apparatus corresponding to the encoding apparatus 2, the present invention can be also applied thereto.
Also, the deblocking filter 34 shown in
The present invention can be applied to a system, wherein processing is performed in units of block data by using a processing result of other block data.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alternations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Assigned to Sony Corporation (assignment on the face of the patent), Jul 03 2013.