An improved image processing system decodes compressed image data, including frequency domain coefficients defining blocks of pixel values that represent an image at a first resolution, to provide an image at a reduced second resolution for display from a selected sub-set of the frequency domain coefficients. The apparatus includes an enhanced motion-compensation-unit (MCU) operating with blocks of pixel values representing an image at an intermediate third resolution, lower than the first resolution but higher than the reduced second resolution.
1. In apparatus for decoding compressed image data including frequency domain coefficients defining blocks of pixel values representing an image at a first resolution to provide an image at a reduced second resolution for display, said apparatus comprising:
first means responsive to a selected sub-set of said frequency domain coefficients for deriving said image of said reduced second resolution for display and including,
enhanced motion-compensation-unit (MCU) processing means; and
second means for operating said enhanced MCU processing means with blocks of pixel values representing said image at an intermediate third resolution lower than said first resolution and higher than said reduced second resolution.
18. In a system for decoding compressed image data in the form of pixel blocks representing an image of a first resolution to provide an image of a reduced second resolution, a method comprising the steps of:
generating data representative of an image pixel block at an intermediate third resolution lower than said first resolution but higher than said reduced second resolution;
generating motion compensated pixel block data at said third resolution from said pixel block data of said reduced second resolution supplemented by said intermediate third resolution data; and
deriving pixel data representing said image of said reduced second resolution from said motion compensated pixel block data at said third resolution.
11. In a system for decoding compressed image data in the form of pixel blocks representing an image at a first resolution to provide an image of a reduced second resolution, a method of decompressing a pixel block of said first resolution by:
selecting a sub-set of frequency domain coefficients in said pixel blocks of said compressed image data;
processing elements of said sub-set of frequency domain coefficients to provide pixel data representing pixels comprising a spatially distributed sub-set of pixels in a pixel block of said image at a first resolution and excluding other pixels of that pixel block, said processing including
using data at an intermediate third resolution, lower than said first resolution but higher than said reduced second resolution, to supplement data from said reduced second resolution in forming prediction for motion compensation; and
formatting said pixel data representing pixels comprising said spatially distributed sub-set of pixels to provide said image of said reduced second resolution.
5. In apparatus for decoding compressed image data including frequency domain coefficients defining blocks of pixel values representing an image at a first resolution to provide an image at a reduced second resolution for display, said apparatus comprising:
first means responsive to a selected sub-set of said frequency domain coefficients for deriving said image of said reduced second resolution for display and including,
enhanced motion-compensation-unit (MCU) processing means; and
second means for operating said enhanced MCU processing means with blocks of pixel values representing said image at an intermediate third resolution lower than said first resolution and higher than said reduced second resolution, wherein
said enhanced MCU processing means is responsive to base-layer pixel macroblock input values representing said image at said reduced second resolution and to pixel values representing said image at said intermediate third resolution for deriving motion-compensated base-layer prediction macroblock output pixel values as a first output and motion-compensated enhancement-layer prediction macroblock output pixel residual values as a second output.
2. The apparatus defined in
said reduced second resolution is substantially ¼ of said first resolution; and
said second means operates said enhanced MCU processing at an intermediate third resolution which is substantially ½ of said first resolution.
3. The apparatus defined in
said image at said reduced second resolution for display is a progressive-scanned image.
4. The apparatus defined in
said image at said reduced second resolution for display is an interlaced-scanned image.
6. The apparatus defined in
said second means comprises third means responsive to said selected sub-set of said frequency domain coefficients and to both said motion-compensated base-layer macroblock output pixel values and said enhancement-layer macroblock output pixel residual values for deriving both said base-layer macroblock input pixel values and said encoded enhancement-layer macroblock input pixel residual values.
7. The apparatus defined in
a base and enhancement-layer decimated-pixel memory;
unitary enhanced inverse discrete cosine transform (DCT), filtering and pixel-decimation processing means responsive to a selected sub-set of frequency domain coefficients for deriving base-layer blocks of output pixel values representing said image at said reduced second resolution as a first output and output enhancement-layer blocks of output pixel residual values representing said image at said intermediate third resolution as a second output;
fourth means, including a first adder for adding corresponding pixel values of said motion-compensated base-layer macroblock output pixel values from said enhanced MCU processing means and said base-layer blocks of output pixel values from said unitary IDCT, filtering and pixel-decimation processing means, for deriving values that are stored as base-layer data in said base and enhancement-layer decimated-pixel memory;
fifth means, including a second adder and an enhancement-layer encoder, for adding corresponding pixel residual values of said motion-compensated enhancement-layer macroblock output pixel residual values from said enhanced MCU processing means to said enhancement-layer blocks of output pixel residual values from said unitary IDCT, filtering and pixel-decimation processing means to obtain a sum output from said second adder for encoding by said enhancement-layer encoder, for deriving second input values that are stored as encoded enhancement-layer data in said base and enhancement-layer decimated-pixel memory; and
sixth means for providing from said base and enhancement-layer decimated-pixel memory said base-layer pixel macroblock input values to said enhanced MCU processing means and for deriving said encoded enhancement-layer pixel macroblock input residual values applied as a second input to said enhanced MCU processing means from said stored encoded enhancement-layer data.
8. The apparatus defined in
said frequency domain coefficients define image information that includes luma blocks of pixel values representing intracoded (I) and predictive-coded (P) progressive-scanned images at said first resolution.
9. The apparatus defined in
seventh means comprising a sample-rate converter for deriving an ongoing display video signal from base-layer blocks of output pixel values.
10. The apparatus defined in
said reduced second resolution is substantially ¼ of said first resolution; and
said intermediate third resolution is substantially ½ of said first resolution.
12. A method according to
selecting different spatially distributed sub-sets of pixels for interlace and progressive image output.
13. A method according to
upsampling said pixel data representing pixels comprising a spatially distributed sub-set of pixels to provide said image of said reduced second resolution.
14. A method according to
selecting said spatially distributed sub-set of pixels based on a desired PIP picture characteristic.
15. A method according to
said PIP picture characteristic comprises at least one of (a) PIP picture size, (b) whether said PIP picture is interlace or progressive, and (c) PIP picture vertical and horizontal pixel resolution.
16. A method according to
adaptively filtering pixel data representing pixels comprising a spatially distributed sub-set of pixels using a filter function selected based on at least one of, (a) motion vector type, (b) group of picture (GOP) structure, (c) a GOP boundary transition, (d) whether I, B or P frame, and (e) whether interlace or progressive frame reduced second resolution output required.
17. A method according to
adaptively filtering pixel data representing pixels comprising a selected spatially distributed sub-set of pixels using a filter function selected from at least one of, (a) a vertical pixel data filter, (b) a horizontal pixel data filter, (c) a chrominance data filter, and (d) luminance data filter.
19. A method according to
20. A method according to
21. A method according to
upsampling said pixel block data at said third resolution to provide image data of said first resolution.
22. A method according to
downsampling said upsampled pixel block data of said first resolution to provide image data of said second resolution.
23. A method according to
downsampling said upsampled pixel block data of said first resolution to provide said intermediate third resolution data.
24. A method according to
said pixel block data of said third resolution comprises residual data.
This is a non-provisional application of provisional application Ser. No. 60/133,429 by M. L. Comer et al., filed 11 May 1999.
The present invention relates to the decoding of a coded high-definition (HD) video signal to derive an enhanced decoded video signal suitable, for example, for recording or producing a picture-in-picture (PIP) or other reduced-resolution display.
Known in the art are television receivers that, while displaying a relatively large picture derived from a primary television channel, simultaneously display a small picture-in-picture (PIP) derived from a secondary television channel. In the case of a high-definition television (HDTV) receiver, the receiver must include a relatively complex and expensive decoder that conforms with the MPEG ISO 13818-2 standard to decode a received coded HD video signal in real time for high-definition display. However, because the PIP is small, there is no need to provide a high-definition PIP display; a viewer would not be able to resolve the higher-definition components of a high-definition PIP. Therefore, to provide the PIP, the HDTV receiver may be supplied with a second, simpler and less expensive lower-resolution decoder that still conforms with the ISO 13818-2 standard.
One approach, known in the art, to providing a lower-resolution second decoder which is somewhat simpler and less expensive than the decoder providing the high definition display, is disclosed in the three U.S. Pat. Nos. 5,614,952, 5,614,957 and 5,635,985, which were, respectively, issued to Boyce et al. on Mar. 25, 1997, Mar. 25, 1997 and Jun. 3, 1997.
Further, incorporated herein by reference is the teaching of copending U.S. patent application Ser. No. 09/349,865, filed Jul. 8, 1999 and assigned to the same assignee as the present application, which is directed to a lower-resolution second-decoder approach suitable for deriving a PIP display in real time from a received coded HD video signal that is significantly simpler and less expensive to implement than the second decoder disclosed by Boyce et al. but still conforms with the ISO 13818-2 standard.
A system involves decoding compressed image data including frequency domain coefficients defining blocks of pixel values representing an image at a first resolution to provide an image at a reduced second resolution. The system includes a motion-compensation-unit (MCU) processor responsive to a selected sub-set of the frequency domain coefficients for deriving the image of the reduced second resolution. The motion-compensation-unit (MCU) processor employs blocks of pixel values representing image data at an intermediate third resolution lower than the first resolution and higher than the reduced second resolution.
Referring to
Returning to
The simplified functional block diagram of the embodiment of enhanced-quality PIP decoding means 102 shown in
For illustrative purposes, the following description of elements 200, 202, 204, 205B, 205E, 206, 207, 208 and 210 assumes that each of these elements is being operated in accordance with the above-discussed preferred tutorial example.
In this example, RLD 200 outputs 10 DCT coefficients for each 8×8 coded block using the 2 scan patterns defined in the ISO 13818-2 standard. The positioning of the 10 DCT coefficients within each 8×8 block is illustrated in
TABLE 1
Interpretation of run = 10 and run = 21 by RLD 200 for progressive sequences

Run | Alternate Scan | Interpretation in PIP RLD
10 | 0 | All DCT coefficients = 0
10 | 1 | Same as ISO 13818-2 standard
21 | 0 | Not allowed
21 | 1 | All DCT coefficients = 0
TABLE 2
Interpretation of run = 13 and run = 22 by RLD 200 for interlaced sequences

Run | Alternate Scan | Interpretation in PIP RLD
13 | 0 | Same as ISO 13818-2 standard
13 | 1 | All DCT coefficients = 0
22 | 0 | All DCT coefficients = 0
22 | 1 | Not allowed
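To make the table entries concrete, the sketch below maps a (run, alternate_scan) pair to the interpretation listed in Tables 1 and 2. The function name and the returned strings are placeholders; only the mapping itself comes from the tables.

```python
# Illustrative lookup of the special run values in Tables 1 and 2.
# Function name and returned strings are placeholders; only the
# (run, alternate_scan) -> interpretation mapping comes from the tables.

PROGRESSIVE_SPECIAL = {   # Table 1 (progressive sequences)
    (10, 0): "all DCT coefficients = 0",
    (10, 1): "same as ISO 13818-2 standard",
    (21, 0): "not allowed",
    (21, 1): "all DCT coefficients = 0",
}

INTERLACED_SPECIAL = {    # Table 2 (interlaced sequences)
    (13, 0): "same as ISO 13818-2 standard",
    (13, 1): "all DCT coefficients = 0",
    (22, 0): "all DCT coefficients = 0",
    (22, 1): "not allowed",
}

def interpret_run(run: int, alternate_scan: int, progressive: bool) -> str:
    """Return the PIP RLD interpretation of a run value per Tables 1 and 2."""
    table = PROGRESSIVE_SPECIAL if progressive else INTERLACED_SPECIAL
    # Run values not listed in the tables are handled exactly as in ISO 13818-2.
    return table.get((run, alternate_scan), "same as ISO 13818-2 standard")

print(interpret_run(10, 0, progressive=True))   # all DCT coefficients = 0
```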
IQ 202 performs the inverse quantization arithmetic and saturation described in the ISO 13818-2 standard on the 10 DCT coefficients shown in
In accordance with the principles of the invention taught in the aforesaid patent application Ser. No. 09/349,865, the unitary IDCT, filtering and pixel-decimation processing means disclosed therein converts the respective values of the inversely-quantized DCT coefficients contained in an 8×8 block at the output of IQ 202 into a smaller block of decimated pixels in a single-step computational process. Thus, the amount of hardware required to implement this single-step computational process by means 204 is relatively small and, therefore, relatively inexpensive compared to the aforesaid conventional three-step computational process.
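The single-step idea can be illustrated generically: because the IDCT, the filtering and the decimation are all linear, they can be folded into one precomputed matrix that maps the retained coefficients directly to decimated pixels. The sketch below shows this for a one-dimensional 8-point case with a simple boxcar filter and 4:1 decimation; it is a minimal sketch of the concept only, and the actual filter taps and decimation factors of Ser. No. 09/349,865 are not reproduced here.

```python
import numpy as np

# Generic illustration of folding IDCT, filtering and decimation into one
# precomputed matrix (the specific filter of Ser. No. 09/349,865 is not used).
N = 8

# 8-point orthonormal inverse DCT: f(x) = sum_u C(u) F(u) cos((2x+1)u*pi/2N)
u = np.arange(N)
x = np.arange(N)
C = np.where(u == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
IDCT = C[None, :] * np.cos((2 * x[:, None] + 1) * u[None, :] * np.pi / (2 * N))

# 4:1 decimation by averaging groups of 4 pixels (a simple boxcar filter).
DECIMATE = np.zeros((2, N))
DECIMATE[0, 0:4] = 0.25
DECIMATE[1, 4:8] = 0.25

# Single-step matrix: decimated pixels = M @ retained coefficients.
M = DECIMATE @ IDCT

coeffs = np.zeros(N)                 # only low-frequency coefficients retained
coeffs[:3] = [100.0, 12.0, -5.0]
decimated = M @ coeffs               # one multiply replaces IDCT, filter, decimate
print(decimated)
```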
Specifically, in accordance with the teachings of the aforesaid patent application Ser. No. 09/349,865, the decimated pixel memory thereof (which, because of pixel decimation, requires a storage capacity size of only ¼ the capacity size of a corresponding undecimated pixel memory) comprises a plurality of separate buffers. Each of these buffers is capable of temporarily storing decimated luma and chroma pixels. In conformity with the ISO 13818-2 standard, the decimated pixel memory includes one or more buffers for storing decimated pixels that define reconstructed intracoded (I), predictive-coded (P) and/or bi-directionally predictive-coded (B) frame or field pictures. Further, motion-compensated prediction macroblock output of pixel values from the MCU processing means is added in an adder to each corresponding macroblock output derived in the unitary IDCT, filtering and pixel-decimation processing means. The summed pixel values of the output from the adder are stored into a first buffer of the decimated pixel memory. This first buffer may be a first-in first-out (FIFO) buffer in which the stored decimated pixels may be reordered between (1) being written into the first buffer and (2) being read out from the first buffer and written into another buffer of the decimated pixel memory. In the case of a current P or B frame or field, the decimated pixel memory includes a buffer for storing a macroblock for input to the MCU processing means to provide motion compensation.
In accordance with the principles of the present invention, two layers of decimated pixels are advantageously stored in base and enhancement-layer decimated pixel memory 206 to achieve high-quality motion compensation and improve PIP image quality. The first of these two layers is a base-layer of decimated pixels; the second is an enhancement-layer of vector-quantized values of luma macroblock decimated pixels that are employed in enhanced MCU processing means 208 during decoding of P pictures. The enhancement-layer is used to provide a reduced-resolution image of greater resolution than is obtainable using just the decimated pixels of the base-layer. Both the base-layer and this enhancement-layer are employed by enhanced MCU processing means 208, in a manner described in detail below.
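The sketch below is a minimal illustration of this two-layer storage idea: a buffer of base-layer decimated pixels written after the prediction/residual addition, plus encoded enhancement-layer words kept per macroblock. The class, field and method names, the 0-255 clipping range and the 8×8 decimated luma macroblock (the progressive-scan case) are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from dataclasses import dataclass, field

# Minimal sketch of a two-layer reference store: base-layer decimated pixels
# plus encoded enhancement-layer words per macroblock. Names, clipping range
# and block size are assumptions for illustration only.
@dataclass
class TwoLayerFrameStore:
    base: np.ndarray                                  # decimated base-layer pixels
    enhancement: dict = field(default_factory=dict)   # (row, col) -> encoded words

    def store_base_macroblock(self, pos, prediction, idct_output):
        """Add the motion-compensated prediction to the IDCT output and store it."""
        row, col = pos
        h, w = idct_output.shape
        self.base[row:row + h, col:col + w] = np.clip(prediction + idct_output, 0, 255)

    def store_enhancement_macroblock(self, pos, encoded_words):
        """Keep the enhancement-layer residuals in their encoded form."""
        self.enhancement[pos] = encoded_words

# Example: a 1/4-resolution progressive base layer for an assumed 1920x1080 source.
store = TwoLayerFrameStore(base=np.zeros((540, 960), dtype=np.float32))
store.store_base_macroblock((0, 0),
                            prediction=np.zeros((8, 8)),
                            idct_output=np.full((8, 8), 128.0))
```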
Preferred embodiments of IDCT, filtering and pixel-decimation processing means 204, enhancement-layer encoder 207 and enhanced MCU processing means 208 for implementing the present invention will now be described in detail.
The unitary enhanced IDCT, filtering and pixel-decimation processing means 204 provides the following sets of 16 decimated pixel values for each of the base and enhancement-layers used for each of progressive scan and interlaced scan (each set being a function of 10 DCT coefficient values).
Progressive-Scan, Base-Layer Set of Decimated Pixel Values
Progressive-Scan, Enhancement-Layer Set of Decimated Pixel Values
Interlaced-Scan, Base-Layer Set of Decimated Pixel Values
Interlaced-Scan, Enhancement-Layer Set of Decimated Pixel Values
Each of the above “Progressive-Scan Set of Decimated Pixel Values” and above “Interlaced-Scan Set of Decimated Pixel Values” was derived in the following manner:
In the case of an interlaced scan (i.e., the progressive_sequence flag is 0), the base-layer value g1′(x,y) is computed in accordance with the following equation 6 and the enhancement-layer value g0′(x,y) is computed in accordance with the following equation 7:
In the interlaced-scan case of an 8×8 block, g1′(x,y) in equation 6 defines the average value of the values of a set of 4 contiguous pixels (or prediction errors) arranged in a 4×1 block portion of the 8×8 block. The value g0′(x,y) in equation 7 defines the difference between the average value of the values of a first set of 2 contiguous horizontal pixels (or prediction errors) of a vertical line and the average value of the values of a second set of the next 2 contiguous horizontal pixels (or prediction errors) of the same vertical line arranged in a 4×1 block portion of an 8×8 block. The 16 equations g1(0,0) to g1(1,7) of the above “Interlaced-Scan, Base-layer Set of Decimated Pixel Values” were derived by substituting equation 3 into equation 6, substituting numeric values for x and y in g1′(x,y), substituting N=8, and approximating the weighting factors for the DCT coefficients with rational values. The 16 equations g0(0,0) to g0(2,7) of the above “Interlaced-Scan, Enhancement-layer Set of Decimated Pixel Values” were derived in a similar manner by substituting equation 3 into equation 7, substituting numeric values for x and y in g0′(x,y), substituting N=8, and approximating the weighting factors for the DCT coefficients with rational values. Although the effective pixel decimation of the enhancement-layer is only 2 (rather than the effective pixel decimation of 4 of the base-layer), the equalities g0(x+1,y)=−g0(x,y) hold for x=0 and x=2 so that enhancement-layer values with odd horizontal indexes need not be computed. Thus, only 16 independent g0(x,y) enhancement-layer values need be computed for each 8×8 luma block in an interlaced-scan I or P picture. Further, because these 16 g0(x,y) enhancement-layer values are residual values, they tend to have a small dynamic range.
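In spatial-domain terms the interlaced-scan relationship can be checked with a few lines of arithmetic, interpreting the 4×1 block as four horizontally contiguous pixels of one line (consistent with the ¼-horizontal base-layer geometry described further below) and assuming unit normalization. This is only a worked check of the description above; the patent's own equations operate in the DCT domain with approximated rational weighting factors that are not reproduced here.

```python
# Worked spatial-domain check of the interlaced-scan base/enhancement split.
# Normalization is assumed for illustration.
p = [52.0, 60.0, 61.0, 55.0]            # four horizontally contiguous pixels

left_pair = (p[0] + p[1]) / 2            # first 1/2-resolution sample
right_pair = (p[2] + p[3]) / 2           # second 1/2-resolution sample

base = (p[0] + p[1] + p[2] + p[3]) / 4   # 1/4-resolution base-layer value
enh = left_pair - right_pair             # enhancement-layer residual

# The two 1/2-resolution samples are recovered exactly from (base, enh):
assert left_pair == base + enh / 2
assert right_pair == base - enh / 2
print(base, enh, base + enh / 2, base - enh / 2)
```

This shows why the enhancement residual, together with the base-layer average, is enough to reconstruct a ½-resolution prediction from ¼-resolution stored data.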
Returning to
Unit 204 conveys an output comprising the I and P luma g0(x,y) enhancement-layer decimated pixel values as a first input to enhancement-layer adder 205E, in the previously mentioned predetermined decimated-pixel macroblock order. (For non-coded blocks all such values are zero.) Further, for the case of P luma pixels, unit 208 applies a corresponding macroblock of 64 p0(x,y) enhancement-layer decimated pixel values as a second input to adder 205E in this same predetermined order. (For intra-coded macroblocks all such values are zero.) The macroblock of 64 s0(x,y) enhancement-layer decimated pixel values derived as the sum output from adder 205E is applied as an input to enhancement-layer encoder 207, and the encoded output bit-words from encoder 207 are then stored in memory 206 during decoding of I and P pictures.
A macroblock at the higher resolution of the enhancement-layer would normally comprise 128 decimated luma pixel values. However, because of the above-described symmetry equalities for both progressive-scan sequences and interlaced-scan sequences, the number of independent decimated enhancement-layer pixel values in the block s0(x,y) is reduced from 128 to 64. Therefore, the predetermined order is such that only half of the enhancement-layer decimated pixel values need be considered by enhancement-layer encoder 207. These enhancement-layer values are encoded in pairs using a simple vector quantizer, with each pair of values being represented by an 8-bit codeword. Since there are 64 enhancement-layer values to be encoded in a macroblock, the number of bits of storage for the enhancement layer is 32×8=256 bits per macroblock. In the preferred embodiment the 32 codewords are combined into two 128-bit output words from encoder 207 for storage in memory 206.
For progressive sequences each pair of horizontally adjacent values in the block s0(x,y) is encoded as a two-dimensional vector, whereas for interlaced sequences each pair of vertically adjacent (within the same field) values in s0(x,y) is encoded as a two-dimensional vector. Let v0 and v1 be a pair of values to be encoded together. The computational procedure employed by encoder 207 to encode the pair v0,v1 is described in detail in Appendix A. After this procedure has been completed for each pair of values in s0(x,y), the codewords are packed into two 128-bit words, which together form the output from encoder 207 that is stored in memory 206. Returning again to
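The pair quantizer itself is specified in Appendix A and is not reproduced here. The sketch below only illustrates the bookkeeping described above, namely 32 pairs per macroblock, one 8-bit codeword per pair, packed into two 128-bit words, using an obviously simplified placeholder in place of the Appendix A quantizer; the input is assumed to be already ordered so that consecutive entries form the pairs described above.

```python
# Bookkeeping sketch: 64 enhancement values -> 32 pairs -> 32 eight-bit
# codewords -> two 128-bit words per macroblock. The pair quantizer below is
# only a stand-in; the real procedure is the vector quantizer of Appendix A.

def quantize_pair_placeholder(v0: float, v1: float) -> int:
    """Stand-in for the Appendix A vector quantizer: returns an 8-bit codeword."""
    q0 = max(0, min(15, int(round(v0 / 4)) + 8))   # 4-bit index per value
    q1 = max(0, min(15, int(round(v1 / 4)) + 8))
    return (q0 << 4) | q1

def pack_macroblock(values):
    """Pack 64 enhancement-layer residuals into two 128-bit words."""
    assert len(values) == 64
    # Values are assumed pre-ordered so that consecutive entries form the pairs
    # described in the text (horizontally adjacent for progressive sequences,
    # vertically adjacent within a field for interlaced sequences).
    codewords = [quantize_pair_placeholder(values[i], values[i + 1])
                 for i in range(0, 64, 2)]           # 32 codewords
    words = []
    for half in (codewords[:16], codewords[16:]):    # 16 codewords = 128 bits
        w = 0
        for cw in half:
            w = (w << 8) | cw
        words.append(w)
    return words                                     # two 128-bit integers

words = pack_macroblock([0.0] * 64)
print([hex(w) for w in words])
```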
In order for enhanced MCU processing means 208 to form a block of predictions, a block of pixel values is fetched from memory 206. The base-layer pixel values read from the stored reference picture are denoted d1(x,y). The enhancement-layer residual values, which are needed only if the block of predictions being formed is for the luma component in a P picture, are denoted d0(x,y). Since the enhancement-layer samples are stored in memory 206 in encoded form, the enhancement-layer data read from memory 206 and input to unit 208 is decoded by enhancement-layer decoder 300 (
Before a block of samples can be read from memory 206, the location and size of the block are determined. The location of a block of pixel values in the reference picture is specified by the horizontal and vertical coordinates of the start (i.e., the upper-left corner) of the block in the reference picture. For the base-layer, these coordinates are indexes into a picture which is ¼ horizontal, full vertical resolution for interlaced sequences and ½ horizontal, ½ vertical resolution for progressive sequences. For the enhancement-layer, the coordinates are indexes into a picture which is ½ horizontal, full vertical resolution for both interlaced and progressive sequences.
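As a quick worked example of these geometries, the sketch below assumes a 1920×1080 full-resolution picture; the source size is an assumption for illustration only.

```python
# Worked example of the stored-picture geometries described above, assuming a
# 1920x1080 full-resolution source (the source size is an assumption).
full_w, full_h = 1920, 1080

base_interlaced = (full_w // 4, full_h)        # 1/4 horizontal, full vertical
base_progressive = (full_w // 2, full_h // 2)  # 1/2 horizontal, 1/2 vertical
enhancement = (full_w // 2, full_h)            # 1/2 horizontal, full vertical

print(base_interlaced, base_progressive, enhancement)
# (480, 1080) (960, 540) (960, 1080)
```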
To locate the blocks d1(x,y) and d0(x,y) in the reference picture, the motion vector for the macroblock being decoded is needed. The decoding of motion vector data in the bitstream, the updating of motion vector predictors, and the selection of motion vectors in non-intra macroblocks which contain no coded motion vectors (e.g., skipped macroblocks) are all performed by unit 208 as described in the ISO 13818-2 standard. Let xb and yb be the full-resolution horizontal and vertical positions of the macroblock being decoded and let mv=(dx,dy) be the decoded motion vector, so that if the sequence were being decoded at full resolution, a block of pixel values at location (xb+(dx/2), yb+(dy/2)) in the full-resolution reference luma picture would be read from memory and used to form luma predictions. Similarly, a block of chroma values at location (xb/2+(dx//2), yb/2+(dy//2)) in the reference chroma picture would be needed to form predictions for each of the 2 chroma components in a full-resolution mode.
The location in the reference picture of a block needed for motion compensation in unit 208 is determined using xb, yb, dx and dy. Table 3 shows the locations of blocks for the various prediction modes. The sizes of the blocks needed for motion compensation in unit 208 are specified in Table 4; a brief sketch evaluating both tables follows Table 4. Base-layer entries in Table 4 give the size of the block d1(x,y), and enhancement-layer entries in Table 4 give the size of the block d0(x,y).
TABLE 3
Locations of Blocks Needed for Motion Compensation in Enhanced MCU Processing Means 208

Prediction Mode | Horizontal Coordinate | Vertical Coordinate
Progressive sequence, luma, base-layer | ((xb + (dx/2))/8)*4 | (yb + (dy/2))/2
Progressive sequence, luma, enhancement-layer | ((xb + (dx/2))/8)*4 | ((yb + (dy/2))/2)*2
Progressive sequence, chroma | xb/4 + ((dx//2)/4) | yb/4 + ((dy//2)/4)
Interlaced sequence, luma, base-layer | ((xb + (dx/2))/8)*2 | yb + (dy/2)
Interlaced sequence, luma, enhancement-layer | ((xb + (dx/2))/8)*4 | yb + (dy/2)
Interlaced sequence, chroma | xb/8 + ((dx//2)/8) | yb/2 + ((dy//2)/2)
TABLE 4
Sizes of Blocks Needed for Motion Compensation in Enhanced MCU Processing Means 208

Prediction Mode | Horizontal Size | Vertical Size
Progressive sequence, luma, base-layer | 12 | 9
Progressive sequence, luma, enhancement-layer | 12 | 18
Progressive sequence, chroma | 5 | 5
Interlaced sequence, luma, 16 × 16 prediction, base-layer | 6 | 17
Interlaced sequence, luma, 16 × 8 prediction, base-layer | 6 | 9
Interlaced sequence, luma, 16 × 16 prediction, enhancement-layer | 12 | 17
Interlaced sequence, luma, 16 × 8 prediction, enhancement-layer | 12 | 9
Interlaced sequence, chroma, 8 × 8 prediction | 3 | 9
Interlaced sequence, chroma, 8 × 4 prediction | 3 | 5
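The sketch below evaluates a few of the Table 3 locations and looks up the corresponding Table 4 sizes. It assumes the ISO 13818-2 arithmetic conventions for the two division operators ("/" truncating toward zero, "//" rounding to nearest with halves away from zero); that reading, together with the function and mode names, is an assumption rather than something stated in the text above.

```python
# Sketch evaluating Table 3 block locations and Table 4 block sizes for a
# macroblock at full-resolution position (xb, yb) with decoded motion vector
# (dx, dy) in half-pel units. The readings of "/" (truncation toward zero) and
# "//" (rounding to nearest, halves away from zero) follow the ISO 13818-2
# arithmetic conventions and are assumed here; mode names are illustrative.

def trunc_div(a: int, b: int) -> int:
    """'/' operator: integer division truncating toward zero."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b > 0) else -q

def round_div(a: int, b: int) -> int:
    """'//' operator: integer division rounding to nearest, halves away from zero."""
    sign = 1 if (a >= 0) == (b > 0) else -1
    return sign * ((2 * abs(a) + abs(b)) // (2 * abs(b)))

# A subset of Table 4: (horizontal size, vertical size) per prediction mode.
BLOCK_SIZES = {
    "progressive_luma_base": (12, 9),
    "progressive_luma_enhancement": (12, 18),
    "progressive_chroma": (5, 5),
    "interlaced_luma_16x16_base": (6, 17),
    "interlaced_luma_16x16_enhancement": (12, 17),
    "interlaced_chroma_8x8": (3, 9),
}

def progressive_luma_base_location(xb, yb, dx, dy):
    """Table 3 row: progressive sequence, luma, base-layer."""
    horizontal = trunc_div(xb + trunc_div(dx, 2), 8) * 4
    vertical = trunc_div(yb + trunc_div(dy, 2), 2)
    return horizontal, vertical

def progressive_chroma_location(xb, yb, dx, dy):
    """Table 3 row: progressive sequence, chroma."""
    horizontal = trunc_div(xb, 4) + trunc_div(round_div(dx, 2), 4)
    vertical = trunc_div(yb, 4) + trunc_div(round_div(dy, 2), 4)
    return horizontal, vertical

xb, yb, dx, dy = 320, 128, 7, -3
print(progressive_luma_base_location(xb, yb, dx, dy),
      BLOCK_SIZES["progressive_luma_base"])
print(progressive_chroma_location(xb, yb, dx, dy),
      BLOCK_SIZES["progressive_chroma"])
```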
The above-described structure of
Returning again to
In a realized embodiment of the present invention, the extra capacity required in memory for storage of the ½-resolution enhancement-layer in encoded form adds only 1.98 Mbits to the 17.8 Mbits required for storage of the ¼-resolution base-layer. Thus, the inclusion of an encoded ½-resolution enhancement-layer increases the needed storage capacity of the base and enhancement-layer decimated-pixel memory by a relatively small amount (i.e., only a little more than 11%) to 19.78 Mbits.
Horlander, Thomas Edward, Comer, Mary Lafuze
Patent | Priority | Assignee | Title |
5227878, | Nov 15 1991 | MULTIMEDIA PATENT TRUST C O | Adaptive coding and decoding of frames and fields of video |
5253056, | Jul 02 1992 | SHINGO LIMITED LIABILITY COMPANY | Spatial/frequency hybrid video coding facilitating the derivatives of variable-resolution images |
5341318, | Mar 14 1990 | LSI Logic Corporation | System for compression and decompression of video data using discrete cosine transform and coding techniques |
5371549, | Oct 07 1992 | QUARTERHILL INC ; WI-LAN INC | Decoding method and system for providing digital television receivers with multipicture display by way of zero masking transform coefficients |
5386241, | Oct 07 1992 | QUARTERHILL INC ; WI-LAN INC | Signal transformation system and method for providing picture-in-picture in high definition television receivers |
5598222, | Apr 18 1995 | Hatachi American, Ltd.; Hitachi America, Ltd | Method and apparatus for decoding multiple video bitstreams using a common memory |
5614952, | Oct 11 1994 | Hitachi America, Ltd | Digital video decoder for decoding digital high definition and/or digital standard definition television signals |
5614957, | Nov 14 1994 | Hitachi America, Ltd. | Digital picture-in-picture decoder |
5635985, | Oct 11 1994 | Hitachi America, Ltd.; Hitachi America, Ltd | Low cost joint HD/SD television decoder methods and apparatus |
5737019, | Jan 29 1996 | Panasonic Corporation of North America | Method and apparatus for changing resolution by direct DCT mapping |
5767797, | Jun 18 1996 | Kabushiki Kaisha Toshiba | High definition video decoding using multiple partition decoders |
5847771, | Aug 14 1996 | Verizon Patent and Licensing Inc | Digital entertainment terminal providing multiple digital pictures |
5867601, | Oct 20 1995 | Panasonic Corporation of North America | Inverse discrete cosine transform processor using parallel processing |
EP707426, | |||
WO9841011, | |||