A video decoder for decoding video pictures encoded according to the MPEG-2 standard, having reduced memory requirements, including a memory for storing a plurality of anchor frames, the decoder employing such anchor frames to generate B-frames, and including block-to-raster buffer means for holding B-frame data for display, the decoder being operable in first and second modes of operation,
wherein in a first mode of operation a picture is encoded as a single frame and the video decoder decodes the entire frame twice wherein in a first decoding a set of lines of a first field are provided to the buffer for display, whereas in a second decoding lines from a second field are provided to the buffer for display; and
wherein in a second mode of operation in which two consecutive field pictures of a frame are decoded, a first field picture is decoded and provided to the buffer means for display, and then a second field picture is decoded and provided to the buffer means for display. In order to reduce memory requirements still further, data can be stored in the buffer in any available memory location, the addresses of the locations being held in a pointer table.
19. A video decoder system comprising:
a memory for storing a plurality of anchor frames; a decoder coupled to said memory and which uses the anchor frames to decode a plurality of intermediate frames; and a buffer independent of said memory and coupled to said decoder for holding data of the intermediate frames for display, said buffer including a pointer table which distributes incoming data to any available location in said buffer and stores an address for each said location.
14. A video decoder comprising:
means for storing a plurality of anchor frames suitable for decoding a plurality of intermediate frames; and means for holding intermediate frame data for display, characterised in that the means for holding (i) is independent of the means for storing and (ii) includes a pointer table with (iii) means for distributing incoming data to any available location in the means for holding, an address of each said location being stored in the pointer table.
1. A video decoder comprising:
a memory for storing a plurality of anchor frames suitable for decoding a plurality of intermediate frames; and a buffer independent of the memory for holding intermediate frame data for display, characterised in that the video decoder is operable in first and second modes of operation, wherein in a first mode of operation a picture is encoded as a single frame and the video decoder decodes the single frame twice wherein (i) in a first decoding a set of lines of a first field of the single frame are provided to the buffer for display and (ii) in a second decoding a set of lines from a second field of the single frame are provided to the buffer for display; and wherein in a second mode of operation in which two consecutive field pictures of a frame are decoded, (i) a first field picture of the two consecutive field pictures is decoded and provided to the buffer for display and then (ii) a second field picture of the two consecutive field pictures is decoded and provided to the buffer for display.
15. A video decoder system for decoding encoded video pictures, comprising:
a memory for storing a plurality of anchor frames; a decoder coupled to said memory and which uses the anchor frames to decode a plurality of intermediate frames; and a buffer system independent of said memory coupled to said decoder and which holds data of the intermediate frames for display, wherein said decoder is operable in a first and a second operation modes: in the first operation mode a picture is encoded as a single frame and said decoder decodes the single frame twice such that (i) in a first decoding a set of lines of a first field of said single frame are provided to said buffer system for display and (ii) in a second decoding a set of lines from a second field of said single frame are provided to said buffer system for display; and in the second operation mode said picture is encoded as a first field and a second field consecutively and said decoder (i) decodes a first field picture of the first field and provides the first field picture to said buffer system for display and then (ii) decodes a second field picture of the second field and provides the second field picture to said buffer system for display.
9. A method of decoding encoded video pictures, comprising the steps of:
(A) storing a plurality of anchor frames in a memory; (B) employing the anchor frames for decoding a plurality of intermediate frames; (C) holding intermediate frame data for display in a buffer independent of the memory; and characterised by first and second alternative modes, wherein a first mode comprises the steps of: (D1) providing a picture as a frame; (E1) decoding the frame a first time; (F1) providing a set of lines of a first field of the frame to the buffer for display in response to decoding the frame the first time; (G1) decoding the frame a second time; and (H1) providing a set of lines of a second field of the frame to the buffer for display in response to decoding the frame the second time, wherein a second mode comprises the steps of: (D2) providing two consecutive field pictures; (E2) decoding a first field picture of the two consecutive field pictures; (F2) providing the first field picture to the buffer for display; (G2) decoding a second field picture of the two consecutive field pictures in response to decoding the first field picture; and (H2) providing the second field picture to the buffer for display in response to providing the first field picture to the buffer for display.
2. The video decoder according to
a display buffer for displaying data assembled in the buffer, wherein (i) the video decoder is arranged for decoding pictures encoded according to an MPEG-2 standard, (ii) the intermediate frames comprise B-frames, and (iii) the buffer is configured as a reconstruction buffer which receives decoded macroblocks.
3. The video decoder according to
4. The video decoder according to
5. The video decoder according to
6. The video decoder according to
7. The video decoder according to
a second buffer for holding chrominance data, wherein said buffer is provided for holding luminance data.
8. The video decoder according to
10. The method according to
11. The method according to
12. The method according to
13. The method according to
providing a first set of eight decoded lines for display; and providing a second set of eight lines for display in response to providing the first set, wherein the first set and the second set comprise an upper and a lower half of a 16 line section.
16. The video decoder system according to
17. The video decoder system according to
18. The video decoder system according to
The present invention relates to the decoding of video bit-streams, particularly although not exclusively encoded according to International Standard ISO/IEC 13818-2 (commonly referred to as MPEG-2 video).
In accordance with customary terminology in the video art, a "frame" as used herein consists of two fields, which are interlaced together to provide an image, as with conventional analog television. The term "picture" is intended to mean a set of data in a bit-stream for representing an image. A video encoder may choose to code a frame as a single frame picture, in which case a single picture consisting of two interlaced fields is transmitted, or as two separate field pictures for subsequent interlacing, in which case two consecutive pictures are transmitted by the encoder. In a frame picture the two fields are interleaved with one another on a line-by-line basis.
Pels ("Picture Elements") usually consist of an 8 bit (sometimes 10 bit) number representing the intensity of a given component of the image at the specific point in the image where that pel occurs. In a picture (field-picture or frame-picture), the pels are grouped into blocks, each block having 64 pels organised as 8 rows by 8 columns. Six such blocks are grouped together to form a "macroblock". Four of these represent a 16 by 16 area of the luminance signal. The remaining two represent the same physical area of the image but are the two colour difference signals (sampled at half the linear resolution as the luminance). Within a picture the macroblocks are processed in the same order as words are read on the page i.e. starting at the top-left and progressing left-to-right before going to the next row (of macroblocks) down, which is again processed in left-to-right order. This continues until the bottom-right macroblock in the picture is reached.
MPEG video is composed of a number of different types of pictures, or, more properly, frames, denoted as
(a) I-frames (Intra Frames) which are compressed using intraframe coding and do not reference any other frames in the coded stream;
(b) P-frames (Predicted Frames) which are coded using motion-compensated prediction from past I-frames or P-frames; and
(c) B-frames (Bidirectionally Predicted Frames) which provide a high degree of compression and are coded using motion-compensated prediction from past and/or future I-frames or P-frames.
The present invention is particularly concerned with the decoding of B-frames, and for the purposes of this specification the I-frames and P-frames may be viewed as equivalent to one another and will be referred to herein collectively as "anchor frames". According to the MPEG-2 standard, it is necessary to maintain two decoded anchor frames, which are used to form predictions when decoding B-frames.
Referring to
A prior improvement to this scheme reduces the requirement for the third frame store to an amount of storage a little larger than that required to hold a single field of video (half a frame store). This is often referred to as 2.5 frame store operation.
EP-A-0732857 discloses an arrangement for reducing the amount of memory required as compared with the arrangement of
It is an object of the invention to provide a video decoder for decoding MPEG-2 pictures which is sufficiently versatile to cope with all possible forms of encoded frame, and which provides this decoding capability in an efficient, memory-conserving manner.
In one aspect, the present invention provides a video decoder for decoding encoded video pictures, including memory means for storing a plurality of anchor frames, the decoder employing such anchor frames for decoding intermediate frames, and including buffer means for holding intermediate frame data for display, characterised in that the decoder is operable in first and second modes of operation,
wherein in a first mode of operation a picture is encoded as a single frame and the video decoder decodes the frame twice wherein in a first decoding a set of lines of a first field are provided to the buffer means for display, whereas in a second decoding a set of lines from a second field are provided to the buffer means for display; and
wherein in a second mode of operation in which two consecutive field pictures of a frame are decoded, a first field picture is decoded and provided to the buffer means for display, and then a second field picture is decoded and provided to the buffer means for display.
In a further aspect, the present invention provides a method of decoding encoded video pictures, comprising:
storing a plurality of anchor frames in memory means, employing such anchor frames for decoding intermediate frames, and holding intermediate frame data for display in buffer means;
characterised by first and second alternative modes, wherein a first mode comprises:
providing a picture as a single frame and decoding the frame a first time and providing a set of lines of a first field to the buffer means for display, and decoding the frame a second time and providing a set of lines of a second field to the buffer means for display; and
wherein a second mode comprises providing two consecutive field pictures, and decoding a first field picture and providing the picture to the buffer means for display, and decoding a second field picture and providing the picture to the buffer means for display.
The configuration of the buffer means will usually vary according to the mode of operation. Thus, in the second mode of operation, the buffer simply has to reconstruct the data from the incoming macroblocks (where the data is encoded according to the MPEG-2 standard) and display the reconstructed data. The buffer means may therefore be configured as two separate 16 line buffers, the first being a reconstruction buffer which receives decoded macroblocks, and the second being a display buffer from which data transferred from the reconstruction buffer is displayed. Whilst this arrangement has the advantage of simplicity, a disadvantage is the large size of buffer required. An alternative and preferred technique is therefore to configure the buffer so that only 8 lines are required. In this arrangement, in said second mode of operation, a row of macroblocks is decoded for a single field picture. Whilst all 16 lines of each macroblock belong to the current field, half of the lines are nevertheless discarded, for example those in the lower half of the block. Once the row of macroblocks has been constructed in the buffer to provide 8 lines for display, the decoder returns to the start of the macroblock row and decodes it again, and this time the upper 8 lines are discarded and the lower 8 lines are transferred to the block-to-raster buffer. Thus, the buffer provides data to the display 8 lines at a time. A principal advantage of this 8 line method is that the amount of storage required for the block-to-raster buffer means is reduced to one half of that required by the 16 line method.
In the first mode of operation, for a frame picture, the picture is received as a single frame of interlaced data, and it is necessary that the video decoder decodes the entire frame twice in order to display both fields of the picture. During the first decoding of the frame the lines of one field are displayed and the lines of the other field are discarded, and in the second decoding the lines of the second field are displayed, the remaining lines being discarded. As will become clear below, the buffer means is configured to provide eight line reconstruction and display buffers.
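As a rough illustration of the two decode-twice schemes just described, the following C sketch shows which 8 of the 16 luminance lines produced for a macroblock row are kept on each pass; it is a simplified model of the line selection only, all names are hypothetical, and a 720 pel line width is assumed.

```c
#include <stdint.h>
#include <string.h>

/* Copy the wanted 8 lines of a freshly decoded 16 line macroblock row into
 * an 8 line block-to-raster buffer; the other 8 lines are discarded.
 *
 * Field picture (second mode): all 16 lines belong to the current field,
 * so pass 0 keeps the upper 8 lines and pass 1 keeps the lower 8 lines.
 *
 * Frame picture (first mode): the two fields are interleaved line-by-line,
 * so the first full decode of the frame keeps the lines of one field (even
 * lines) and the second decode keeps the lines of the other field (odd
 * lines). */
void select_lines(const uint8_t decoded[16][720], uint8_t out[8][720],
                  int frame_picture, int pass /* 0 = first, 1 = second */)
{
    for (int i = 0; i < 8; i++) {
        int src = frame_picture ? (2 * i + pass)   /* interleaved fields  */
                                : (pass * 8 + i);  /* upper or lower half */
        memcpy(out[i], decoded[src], 720);
    }
}
```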
In order to reduce the size of the block-to-raster buffer still further, a pointer table method is used. This recognizes that the buffers described above are on average half empty during use. In this arrangement, when a macroblock is decoded the data is placed in any available location in the buffer, and a table of pointers to the various memory locations is kept.
Methods for reducing memory buffer size are known; see for example U.S. Pat. No. 5,151,976, wherein sawtooth data is stored in memory as M stripes of N pixels. In order to avoid the use of first and second memories into which data is alternately written and from which it is read, with consequent large memory requirements, data is read from and written to the same memory section, the memory being organised according to an addressing scheme in which a memory location A(i,j) is determined by A(i+1,j) = (A(i,j) + x(j)) modulo (MN-1), where x(j+1) = N·x(j) modulo (MN-1). However, this method is not appropriate where the size of the memory or buffer does not match the length of the stripes.
In contrast the present invention provides in a further aspect a video decoder for decoding encoded video pictures, including memory means for storing a plurality of anchor frames, the decoder employing such anchor frames for decoding intermediate frames, and including buffer means for holding intermediate frame data for display, characterised in that the buffer means includes a pointer table with means for distributing incoming data to any available memory location in the buffer, the address of the memory location being stored in the pointer table.
Preferred embodiments of the invention will now be described with reference to the accompanying drawings wherein:
Referring now to
Since the decoder 4 processes macroblocks it can be seen that it produces output for 16 consecutive (luminance) lines (and their associated 8 chrominance lines in each of two colour difference components) in the picture effectively simultaneously. However, the display must progress in raster-scan order (left-to-right, top-to-bottom) line-by-line rather than macroblock row by macroblock row.
Referring to
1. First, the macroblocks for an entire row of macroblocks are decoded, one after another, and placed in successive "slots" in the block-to-raster buffer.
2. Once an entire row of macroblocks have been decoded and saved in the buffer as described, display commences. The data is read out line-by-line and sent to the display.
Since the display process needs to be continuous it is necessary to have two buffers 22, 24 as shown. While buffer 22 is being used as a "reconstruction buffer" and is filled by the output of the video decoder, buffer 24 is being used as a "display buffer" and is emptied as data is sent to the display. Once the display buffer is empty and the reconstruction buffer is full, the two buffers are swapped over. While the recently decoded row of macroblocks is displayed, the next row of macroblocks is decoded into the buffer which, until recently, was the display buffer.
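A minimal C sketch of this double-buffered block-to-raster arrangement might be as follows; the structure is illustrative only (a hardware implementation would of course differ), and a 720 pel line width is assumed.

```c
#include <stdint.h>

#define MB_ROW_LINES 16
#define PIC_WIDTH    720

/* Two 16 line buffers: while one (the reconstruction buffer) is filled with
 * the next row of macroblocks, the other (the display buffer) is read out
 * line-by-line to the display.  When the display buffer is empty and the
 * reconstruction buffer is full, their roles are exchanged. */
typedef struct {
    uint8_t lines[2][MB_ROW_LINES][PIC_WIDTH];
    int recon;    /* index (0 or 1) of the current reconstruction buffer */
    int display;  /* index (0 or 1) of the current display buffer        */
} BlockToRaster;

static void swap_buffers(BlockToRaster *b2r)
{
    int t = b2r->recon;
    b2r->recon = b2r->display;
    b2r->display = t;
}
```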
For a single frame picture comprising two interlaced fields, the fields must be displayed sequentially. If the picture is coded as a single frame each row of macroblocks will contain 8 lines from the field currently being displayed and 8 lines from the other field of the frame.
As shown in
Whilst either an 8 line or a 16 line block-to-raster buffer may be used, if the block-to-raster buffer has 16 lines in it then its operation must be modified when decoding single frame pictures comprising two interlaced fields. Previously, the block-to-raster buffer was divided into two halves: the reconstruction buffer and the display buffer. When each macroblock only yields 8 lines for storage in the block-to-raster buffer, the buffer must be reorganised into four quarters, each 8 lines high, as indicated in FIG. 5. At any instant in time there will be one currently active reconstruction buffer 50 and one currently active display buffer 52. The remaining two quarters may each be thought of as a reconstruction buffer 54 (which is available to be filled by the output of the video decoder) or as a display buffer 56 (which contains decoded data which has not yet been displayed). Since an implementation must be able to deal with both field and frame pictures, it follows that the block-to-raster buffer must be reconfigurable to operate in the "two halves" mode of FIG. 3 and the "four quarters" mode of FIG. 5. This can most naturally be achieved by considering that the block-to-raster buffer always has four quarters. When decoding field pictures, two of these quarters are allocated together to be used as the reconstruction buffer and two as the display buffer. It will be understood that the buffer will normally be implemented in RAM (for example DRAM or SRAM).
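The reconfigurable four-quarter view of the buffer could be sketched in C along the following lines; again this is only an illustration of the idea, with hypothetical names, and not the described implementation.

```c
#include <stdint.h>

/* The block-to-raster buffer is always regarded as four quarters, each
 * 8 lines high.  In the "two halves" mode of FIG. 3 (16 line operation)
 * quarters are allocated in pairs, one pair acting as the reconstruction
 * buffer and one pair as the display buffer; in the "four quarters" mode
 * of FIG. 5 (8 line operation) each quarter acts on its own. */
typedef struct {
    uint8_t quarter[4][8][720];
    int two_halves_mode;   /* 1: quarters used in pairs, 0: used singly */
    int active_recon;      /* quarter (or first of a pair) being filled */
    int active_display;    /* quarter (or first of a pair) being shown  */
} QuarteredBuffer;
```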
There are two methods for decoding field pictures where a frame is provided as two consecutive field pictures. Each has advantages and disadvantages. If a frame is coded as two field pictures there is no need to decode each picture twice. The first field picture is decoded and displayed. When this is completed, the second picture is decoded and displayed.
In order to make this scheme work, it is necessary to have a block-to-raster buffer capable of handling 16 lines of data, as described with reference to FIG. 3. The advantage of this method (the "16 line" method) is that with a 16 line block-to-raster buffer, when B-field pictures are decoded they are only decoded once (that is each macroblock within the picture is decoded only once). When this is compared to the B-frame picture case (where each macroblock is decoded twice) it is clear that the amount of memory bandwidth required to decode the B-field pictures is of the order of one half of that required for B-frame pictures.
However, the 16 line block-to-raster buffer which is required for decoding B-field pictures is twice as large as that required to decode B-frame pictures. It therefore provides useful extra on-chip buffering which can be exploited to reduce the (external) memory bandwidth requirements when decoding B-frame pictures.
If the decoding of the B-frame commences as soon as possible (and some while before it is required to start displaying the frame) then it follows that the decoder will fill all sixteen lines of the block-to-raster buffer before display commences. Thus, the decoder will have decoded not one, but two, complete rows of macroblocks before display commences. As the display progresses, the decoder will continue to refill the block-to-raster buffer and will, in ideal circumstances, remain two macroblock rows ahead of the display.
However, if the memory bandwidth is restricted, the decoder will fall behind.
The video display will not be disturbed (i.e. the decode/display machine as a whole will not fail) until the display catches up with the video decoder. Since the block-to-raster buffer starts with a whole "extra" row of macroblocks, the decoder may fall progressively further behind its nominal decoding rate by up to one row of macroblocks.
This fact may be exploited in a number of ways:
1. It may allow decoding in a system which would otherwise not have sufficient memory bandwidth to operate correctly.
2. The system may choose to allocate the bandwidth made available by exploiting the memory bandwidth reduction to some other task. For instance, servicing the requirements of a microprocessor which is sharing the same memory as the video decoder.
As can be seen, the decoder progresses more slowly during the VLC burst than in FIG. 6. Because of this, the memory accesses required by the decoding during this period are spread over a longer period of time and consequently the actual bandwidth required (for all video-decoding-related memory accesses) is reduced proportionately.
In the case of "Main Profile at Main Level" (MP@ML, a particular defined set of constraints in MPEG-2, applicable to the coding of conventional definition television signals), the VLC burst can last for a maximum of 8⅓ rows of macroblocks. Since those 8⅓ rows can take (allowing for the buffering) 9⅓ macroblock row times to decode, it follows that the peak memory bandwidth requirement is reduced to 8⅓/9⅓≡90% of the value without buffering.
To explain the VLC bursts illustrated in FIG. 7 and
It should be understood that the chances of a worst-case VLC burst occurring in a real video sequence are extremely small indeed. It is much more likely that there is a more even distribution of bits throughout the picture. It is also very unlikely that a B picture would use the maximum "B" bits. This is because B pictures typically use less than the average number of bits (e.g. the average number of bits in a frame is about 27% of the maximum "B" bits at 15 Mbit/s) in order that bits are "saved up" to be applied to the anchor frames, which generally require more bits. In an anchor picture a VLC burst is of much less consequence because the bandwidth used for prediction, write-back and display is less than that used for the prediction (twice) in the B-frame decoding. Also, since there is twice as long to decode each macroblock in anchor frame decoding, the worst case burst is half the height of that in the B-frame decoding.
Disadvantages of the sixteen line method--The principal disadvantage of the sixteen line method is the large on-chip block-to-raster buffer which is required.
In order to reduce the size of the buffer, the 8 line method shown in
The decoder may still require a means to return to the start of the picture and decode it again in order to implement a freeze picture. However, in general each picture is only decoded once (though each row within the picture is decoded twice).
The advantage of this method of decoding field pictures is that it requires a smaller block-to-raster buffer (8 lines high) than would otherwise be required (16 lines high).
Advantages of the eight line method--The principal advantage of the eight line method is that the amount of storage required for block-to-raster buffer is reduced to one half of that required by the sixteen line method.
Disadvantage of the eight line method--The disadvantages of the eight line method relate to the external memory bandwidth required by the video decoder.
Since there is no buffering in the block-to-raster buffer, the video decoder operates in lock-step with the video display circuitry. This requires approximately 12% more memory bandwidth (for video decoding) than would be required with the 16 line method. Furthermore, the VLC burst represents a larger proportion of a field time. For example, with the 16 line method the VLC burst represents about 4.7 ms or 28% of a field time, whereas with the 8 line method this increases to 9.3 ms or 56% of the field time (8⅓ macroblock rows in 15). This will be a disadvantage for other devices (such as a CPU) which share the memory system, since they must suffer the reduced memory bandwidth which is available to them for a longer period of time.
In order to reduce the size of the buffer still further, a configuration is shown in
The area in the reconstruction buffer 22 is unused because newly decoded macroblocks have not yet been placed in the buffer. The area in the display buffer 24 is unused because the data that was stored there has already been displayed. The amount of storage is reduced by breaking the storage into small sections, each of 16 bytes. These sections are considered to represent an area of the image which is 16 pels wide by 1 scan-line high. The reconstruction and display buffers become indirection tables, each pointing at these "sections" in the main block-to-raster buffer as shown in FIG. 11.
Once all of the sections of a macroblock "slot" have pointers to available storage locations, the video decoder may store the decoded data into that slot. As it does this, it uses the address in the reconstruction indirection table to locate the address in the actual storage where the data is to be stored.
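The section-based indirection just described could be sketched in C as follows; the sizes correspond to the 16 line luminance example given below, and the free-list bookkeeping and all names are illustrative assumptions rather than the actual implementation.

```c
#include <stdint.h>

#define SECTION_BYTES 16    /* each section is 16 pels x 1 scan-line */
#define NUM_SECTIONS  720   /* 45 sections per line x 16 lines       */
#define NUM_ENTRIES   720   /* entries in each indirection table     */

typedef struct {
    uint8_t  storage[NUM_SECTIONS][SECTION_BYTES]; /* the "actual storage" */
    uint16_t recon_table[NUM_ENTRIES];    /* 10 bit section indices        */
    uint16_t display_table[NUM_ENTRIES];  /* 10 bit section indices        */
    uint16_t free_list[NUM_SECTIONS];     /* sections released by display  */
    int      free_count;
} IndirectBuffer;

/* When the display has read a section out, its storage returns to the pool. */
static void release_section(IndirectBuffer *b, uint16_t section)
{
    b->free_list[b->free_count++] = section;
}

/* Before decoded data can be written for a section of a macroblock "slot",
 * that entry of the reconstruction indirection table is pointed at any
 * available section of the actual storage. */
static int claim_section(IndirectBuffer *b, int entry)
{
    if (b->free_count == 0)
        return -1;   /* no free section: decoder must wait for the display */
    b->recon_table[entry] = b->free_list[--b->free_count];
    return 0;
}
```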
The size of the "actual storage" is reduced to one half of that required by the simple block-to-raster buffer. However, the indirection tables themselves are of a significant size so that the saving is not as much as one half. For example, the total size of the actual storage is 16 lines each of 720 (eight bit) bytes; 92160 bits. There are 45×16=720 "sections" so 10 bits are required to uniquely address each section. Each indirection table requires 720 entries of 10 bits. So the total storage for the indirection tables is 720×10×214400 bits.
The saving is thus 92160-14400=77760 bits, or about 42% of the 184320 bits required by the simple (two 16 line buffer) arrangement.
In a modification more buffer storage "sections" are provided than are actually required by the scheme of FIG. 11. The effect will be to decouple the video decoder somewhat from the video display. This is because if extra storage is provided the video decoder may decode into these "spare" sections even though the video display is not freeing up sections by displaying them.
This can be done particularly efficiently if it happens that the number of sections does not require that the number of bits to address the sections is increased. So for example, the number of sections in the example just given may be increased from 720 to as much as 1024 before the number of bits in each entry of the indirection table must be increased from 10 to 11.
The addition of such buffering will be of particular benefit if the 8-line scheme for field B-pictures is used. Since the principal disadvantage of the 8-line scheme is in its effect on the memory bandwidth as a consequence of the tight coupling between video decode and video display, this modification allows a compromise to be developed between the small block-to-raster buffer of the 8-line scheme and the reduced memory bandwidth of the 16-line scheme.
For example, an 8-line scheme block-to-raster buffer using the indirection technique would require:
720×8×8 bits=46080 bits of actual storage, organised as 360 sections (requiring 2×45×8×9 bits=6480 bits of indirection table).
This could be increased to have, say, 512 sections instead of 360. So 512×16×8=65536 bits of actual storage with no increase in the indirection table size: an increase of 37% in the total on-chip storage (actual storage plus indirection tables).
In return the video decoder has 42% of a macroblock row of additional buffering. So the 8.33 row VLC burst may be decoded in 8.33+0.42=8.75 row times; a 5% reduction in external memory bandwidth requirements.
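The figures quoted in the last few paragraphs can be checked with a few lines of C; this is merely an arithmetic check and not part of any decoder.

```c
#include <stdio.h>

int main(void)
{
    /* 8 line scheme with indirection, 720 pel lines, 8 bit pels. */
    int actual_bits   = 720 * 8 * 8;            /* 46080 bits of storage    */
    int sections      = actual_bits / (16 * 8); /* 360 sections of 16 bytes */
    int table_bits    = 2 * 45 * 8 * 9;         /* 6480 bits, 9 bit entries */

    /* Enlarged to 512 sections: indices still fit in 9 bits. */
    int enlarged_bits = 512 * 16 * 8;           /* 65536 bits                */
    double extra_rows = (512 - 360) / 360.0;    /* about 0.42 macroblock row */

    printf("actual storage: %d bits (%d sections), tables: %d bits\n",
           actual_bits, sections, table_bits);
    printf("enlarged storage: %d bits, extra buffering: %.2f rows\n",
           enlarged_bits, extra_rows);
    printf("8.33 row VLC burst decoded in %.2f row times\n",
           8.33 + extra_rows);
    return 0;
}
```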
As regards the handling of chrominance data, there are a number of options for accommodating this:
Provide an additional block-to-raster buffer for the chrominance data.
Provide additional indirection tables for the chrominance but allow them to point at the same main storage area.
Merge the indirection tables for the chrominance into the same indirection tables as are used for luminance.
There is one issue which is peculiar to the chrominance data. This concerns the 4:2:0 sampling format employed by MPEG. Not only does each line (of each of the two colour difference signals) have half the number of samples (pels) of the corresponding luminance line, but there are also half as many lines.
In order to display the final image the chrominance data must be doubled vertically so that there are as many lines of chrominance data as luminance.
In practice this means that each line of chrominance data must be displayed twice.
This may be conveniently accomplished by arranging that the first time a chrominance line is displayed, its storage is not made available to the reconstruction buffer. Only when the line is displayed for the second time is the storage released and the reconstruction indirection table updated. This of course requires that the "actual storage" is increased to accommodate one additional line beyond the minimum (5 lines instead of 4).
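A hypothetical sketch of this delayed release of chrominance sections, reusing the free-list idea from the earlier indirection example, might look like this; the function name and parameters are illustrative only.

```c
#include <stdint.h>

/* With 4:2:0 data each chrominance line is shown on two successive display
 * lines.  A section holding chrominance data is therefore returned to the
 * pool of free sections only after its second showing; until then the
 * reconstruction indirection table may not reuse it. */
static void display_chroma_section(uint16_t section, int second_showing,
                                   uint16_t *free_list, int *free_count)
{
    /* ... send the 16 chrominance pels of this section to the display ... */
    if (second_showing)
        free_list[(*free_count)++] = section; /* now, and only now, reusable */
}
```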
Additionally, the chrominance data may be filtered in the vertical direction. This might be achieved (in the case of a simple 2-tap filter) by reading two lines out in each display line.