A method is disclosed for reducing the frame buffer, stream buffer, reconstruction buffer, or latency associated with frame buffer compression in an encoder or decoder that processes multiple slices of an image frame. The image frame is divided into multiple slices vertically, horizontally, or both. One core compressor or decompressor can be used to process two or more slices. The encoding and decoding of two or more slices may be performed in parallel. Instead of encoding an entire slice, the encoder compresses only partial data of one slice before encoding another slice. According to one embodiment, each slice is divided into two or more partitions, and the encoder is switched to another slice after encoding one partition of one slice. In another embodiment, the encoder is switched to another slice based on information related to the coding status. The decoding order may be the same as the encoding order.
9. A method of compressing an image frame in an image or video system, the method comprising:
receiving the image frame;
dividing the image frame into multiple slices, wherein said multiple slices comprise slice one and slice two;
encoding, by a processing circuit, a processing unit in a current slice to generate a bitstream associated with the processing unit in the current slice, wherein the processing unit is smaller than a slice size, and wherein said encoding is switched to another slice based on information related to coding status associated with said multiple slices;
storing the bitstream associated with the processing unit for the current slice in one or more stream buffers, wherein the bitstreams for each slice are packed into segments, wherein each segment has a fixed size; and
providing interleaved segments.
1. A method of compressing an image frame in an image or video system, the method comprising:
receiving the image frame;
dividing the image frame into multiple slices, wherein the multiple slices comprise slice one and slice two;
dividing each slice into multiple partitions;
encoding, by a processing circuit, each partition for each slice to generate a bitstream of said each partition for each slice, wherein said encoding for the slice one starts before said encoding for the slice two, and at least one partition of the slice two is encoded before encoding a last partition in the slice one;
storing the bitstreams associated with the multiple partitions from the multiple slices in one or more stream buffers, wherein the bitstreams associated with the multiple partitions from each slice are packed into segments, wherein each segment has a fixed size; and
providing interleaved segments.
16. A method of de-compressing an image frame in an image or video system, wherein the image frame is divided into multiple slices, said multiple slices comprise slice one and slice two, and each slice is partitioned into multiple partitions for decoding, the method comprising:
receiving a bitstream associated with the image frame, wherein the bitstream comprises interleaved multiple segments for said multiple slices and each segment has a fixed size;
de-interleaving the interleaved multiple segments by dispatching each of the interleaved multiple segments into one or more corresponding stream buffers for each slice;
decoding, by a processing circuit, the bitstream in one or more corresponding stream buffers for each slice to generate decoded partitions for each slice, wherein said decoding for the slice one starts before said decoding for the slice two, and at least one decoded partition of the slice two is generated before a last decoded partition in the slice one; and
providing the decoded partitions for the image frame.
19. A method of de-compressing an image frame in an image or video system, wherein the image frame is divided into multiple slices, said multiple slices comprise slice one and slice two, the method comprising:
receiving a bitstream associated with the image frame, wherein the bitstream comprises interleaved multiple segments for said multiple slices and each segment has a fixed size;
de-interleaving the interleaved multiple segments by dispatching each of the interleaved multiple segments into one or more corresponding stream buffers for each slice;
decoding, by a processing circuit, the bitstream in one or more corresponding stream buffers for a current slice to generate a decoded unit in the current slice, wherein the decoded unit is smaller than a slice size, and wherein said decoding is switched to another slice based on information related to decoding status associated with said multiple slices; and
providing decoded units for the image frame, wherein at least one decoded unit of the slice two is provided after a first decoded unit for the slice one and before a last decoded unit for the slice one.
Dependent claims 2, 3, 5-8, 10-12, 14, 15, 17, 18, 20, 22 and 23 (each of the form "The method of ...") are not reproduced in full in this excerpt.
The present invention claims priority to U.S. Provisional Patent Application No. 61/920,841, filed on Dec. 26, 2013. This U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
The present invention relates to image data processing. In particular, the present invention relates to methods of encoding and decoding an image frame with multiple slices for frame buffer compression.
With the development of image processing techniques, image displays have progressed from lower definition to higher definition. The amount of data to be transmitted increases significantly with the improvement in definition, for example from 1280×720 to 1920×1088 or 2560×1600. When the display controller (DC) reads pixels out of the frame buffer at a fixed rate, the transmission bandwidth requirement as well as the power consumption increases significantly for displaying high definition images. On the other hand, the required frame buffer size increases with the growing image size. Thus, frame buffer compression (FBC) is a trend for image coding and transmission. With frame buffer compression, the transmission bandwidth between the transmit buffer (TX) and the receive buffer (RX) can be reduced. Moreover, the frame buffer size inside an RX device can also be reduced by frame buffer compression.
The algorithm used for frame buffer compression is related to how an image frame is partitioned. In order to enhance the throughput for large images, the image frame is usually split into multiple slices for encoding. The image frame can be divided into vertical slices, horizontal slices or interleaved slices.
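As an informal illustration only (not part of the original disclosure), the sketch below shows one way a frame stored as a two-dimensional array could be split into horizontal or vertical slices; NumPy and the helper names are assumptions made for the example.

```python
# Illustrative sketch: divide a frame (2-D array) into horizontal or vertical
# slices. NumPy and the helper names are assumptions, not the disclosed design.
import numpy as np

def split_horizontal(frame: np.ndarray, num_slices: int):
    """Return a list of horizontal slices (bands of rows)."""
    return np.array_split(frame, num_slices, axis=0)

def split_vertical(frame: np.ndarray, num_slices: int):
    """Return a list of vertical slices (bands of columns)."""
    return np.array_split(frame, num_slices, axis=1)

if __name__ == "__main__":
    frame = np.zeros((1088, 1920), dtype=np.uint8)  # e.g. a 1920x1088 frame
    slice0, slice1 = split_horizontal(frame, 2)     # two horizontal slices
    print(slice0.shape, slice1.shape)               # (544, 1920) (544, 1920)
```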
In conventional methods, the encoder compresses each slice of the image frame to generate a bitstream, regardless of whether the partition is vertical or horizontal. The bitstream from each slice may be packed in sequence from the first slice to the last slice, or the bitstreams from the multiple slices may be packed into interleaved segments.
In the case of non-interleaved streams, bitstreams are received in sequence from the first slice to the last slice, and no stream of the current slice is received before the stream of the previous slice finishes. Regardless of whether the image frame is divided into multiple slices by vertical or horizontal partition, the slices in the image frame must be decoded one by one. Thus, the decoding of the next slice starts only after the current slice is finished. Therefore, the throughput is limited because the multiple slices cannot be decoded in parallel by a multi-core decoder without increasing the frame buffer size. With non-interleaved streams, a larger frame buffer is required to decode the multiple slices in parallel for high throughput.
Due to the cost associated with the frame buffer, it is important to avoid the need for a large frame buffer when decoding multiple slices in parallel. Therefore, it is preferred to pack the bitstreams from the multiple slices into interleaved segments.
Among the interleaved segments, at least one segment of one slice stream is inserted into another slice stream.
Within each slice, the image data is usually processed line by line or block by block in raster scan fashion. When the slice 0 and slice 1 streams are packed into interleaved segments, the stream data are received with the segments of the slice 0 stream interleaved with the segments of the slice 1 stream. When the image frame is divided into horizontal slices (i.e., horizontal partition), the multiple horizontal slices can be decoded in parallel by multiple de-compressors. In this case, for each scan line or each row of blocks of the image frame, multiple de-compressors can be used for decoding. However, for decoding each scan line based on vertical partition, only one de-compressor can be used since slice 1 processing cannot start until slice 0 is finished. Therefore, horizontal partition is preferred for providing higher decoding throughput on each scan line. Moreover, a larger reconstruction buffer may be required for vertical partition since the reconstructed data of slice 1 is not displayed immediately.
In conventional video slice encoding based on horizontal partition, the encoder completely encodes one slice and then starts to process the next slice. The image frame is encoded slice by slice, and within each slice the coding blocks are compressed row by row.
However, the tile-based coding order of horizontal slices differs from the natural order of the display interface. On the decoder side, the reconstructed image frame is displayed row by row across the whole frame.
Therefore, it is desirable to develop frame buffer compression so that the stream buffer size and/or the latency can be reduced without noticeable impact.
Methods and apparatus are disclosed for encoding and decoding multiple slices of an image frame for frame buffer compression. According to the present invention, the encoding may be performed in an interleaving order on the multiple slices. The number of the encoder modules may be different from the number of the multiple slices in the image frame. One encoder module can be used to encode two or more slices in the image frame.
According to the first embodiment of the present invention, the image frame in an image or video system is received and divided into multiple slices for compression. The multiple slices comprise slice one and slice two. Each slice of the image frame is split into multiple partitions for encoding. Each partition of each slice is encoded to generate a bitstream. Among the multiple slices of the image frame, the encoding for slice one starts before the encoding for slice two. Moreover, at least one partition of slice two is encoded before the last partition of slice one. One or more stream buffers are used to store the bitstreams associated with the multiple partitions from the multiple slices. The bitstreams associated with the multiple partitions from each slice are packed into segments, each having a fixed size. The segments for the multiple slices are interleaved for output. At least one segment for slice two may be provided after the first segment for slice one and before the last segment of slice one.
In the first embodiment, the image frame may be partitioned into the multiple slices vertically, horizontally or both vertically and horizontally. Each slice may correspond to a first rectangular pixel group. Each partition may contain one or more compression blocks. The compression blocks in each partition may be equal to, less than or more than one row of compression blocks. Each compression block can be a second rectangular pixel group which comprises one or more pixels.
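To make the partition-interleaved encoding order of this first embodiment concrete, a minimal sketch follows. It assumes one compressor core serving two or more slices, with a hypothetical `encode_partition` routine standing in for the actual core compressor; it is an illustration under those assumptions, not the disclosed implementation.

```python
# Sketch of the first embodiment's order: a single core compressor alternates
# between slices, one partition at a time, so that at least one partition of
# the second slice is encoded before the last partition of the first slice.

def encode_partition(slice_id: int, part_id: int, partition) -> bytes:
    # Hypothetical placeholder for the real partition compression.
    return f"S{slice_id}P{part_id}".encode("ascii")

def encode_interleaved(slices):
    """slices[s][p] is partition p of slice s; returns one stream per slice."""
    streams = [[] for _ in slices]              # one stream buffer per slice
    max_parts = max(len(parts) for parts in slices)
    for p in range(max_parts):                  # switch slice after each partition
        for s, parts in enumerate(slices):
            if p < len(parts):
                streams[s].append(encode_partition(s, p, parts[p]))
    return streams
```

For two slices of three partitions each, the resulting order is S0P0, S1P0, S0P1, S1P1, S0P2, S1P2, so at least one partition of the second slice is encoded before the last partition of the first slice.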
According to the second embodiment of compressing the image frame in the image or video system, the image frame is also received and divided into multiple slices which comprise slice one and slice two. In this embodiment, the encoding is switched from one slice to another based on information related to the coding status associated with the multiple slices. The information may comprise one or a combination of a pre-defined bitstream size, the amount of data stored in said one or more stream buffers, the fullness status of said one or more stream buffers, the current encoding position in the image frame, and the amount of data in the processing unit.
The encoder encodes a processing unit in a current slice to generate a bitstream associated with the processing unit in the current slice. The processing unit is smaller than the current slice. The bitstream associated with the processing unit for the current slice is stored in one or more stream buffers. The bitstreams for each slice are packed into segments, each having a fixed size. The segments for the multiple slices are interleaved for delivery. Among the interleaved segments, at least one segment for slice two is provided after the first segment for slice one and before the last segment of slice one.
The image frame may be partitioned into the multiple slices vertically, horizontally or both vertically and horizontally. Each slice may correspond to a first rectangular pixel group which may contain two or more compression blocks. The compression blocks in each processing unit may be equal to, less than or more than one row of compression blocks. Each compression block can be a second rectangular pixel group which comprises one or more pixels.
In the present invention, a method is also disclosed for de-compressing an image frame in an image or video system. The image frame is divided into multiple slices which comprise slice one and slice two. Each slice may correspond to a first rectangular pixel group. The first rectangular pixel group may correspond to two or more decoded units. Each decoded unit may contain a second rectangular pixel group which comprises one or more pixels. The decoding of the multiple slices may also be performed in an interleaving order. The number of decoder modules may differ from the number of slices in the image frame. One decoder module can be used to decode two or more slices in the image frame.
According to one embodiment of the present invention, each slice is partitioned into multiple partitions for decoding. A bitstream associated with the image frame is received for de-compressing. The bitstream comprises interleaved multiple segments for the multiple slices and each segment has a fixed size. The interleaved multiple segments are de-interleaved by dispatching each of the interleaved multiple segments into one or more corresponding stream buffers for each slice. For each slice, the bitstream in one or more corresponding stream buffers is decoded to generate decoded partitions. Each decoded partition may be equal to, less than or more than one row of de-compressed blocks. The decoding for the slice one starts before the decoding for the slice two and at least one decoded partition of the slice two is generated before the last decoded partition in the slice one. The decoded partitions are provided for displaying the image frame.
According to another embodiment of the present invention, a bitstream associated with the image frame is received for de-compressing. The bitstream comprises interleaved multiple segments for the multiple slices, and each segment has a fixed size. The interleaved multiple segments are de-interleaved by dispatching each of the interleaved multiple segments into one or more corresponding stream buffers for each slice. The decoder decodes the bitstream in one or more corresponding stream buffers for a current slice to generate a decoded unit in the current slice. The decoded unit is smaller than the current slice. Each decoded unit may be equal to, less than or more than one row of de-compressed blocks. The decoded units for the image frame are provided for displaying. At least one decoded unit of slice two may be provided after the first decoded unit of slice one and before the last decoded unit of slice one.
The decoding is switched to another slice based on information related to the decoding status associated with the multiple slices. The information may comprise one or a combination of a pre-defined size of decoded bitstream associated with the current slice, the amount of data stored in the corresponding stream buffers, the fullness status in the corresponding stream buffer, the current decoding position in the image frame, and the amount of data in the decoded unit.
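A hedged sketch of the decoder-side de-interleaving described above follows. The per-segment slice tag used here to dispatch segments is an assumption made for illustration; this excerpt does not specify how each fixed-size segment is associated with its slice.

```python
# Sketch: dispatch interleaved fixed-size segments into per-slice stream
# buffers before decoding. The leading slice-id byte per segment is an
# illustrative assumption, not the disclosed format.
from collections import defaultdict

SEGMENT_SIZE = 8  # illustrative fixed segment size in bytes

def deinterleave(bitstream: bytes, segment_size: int = SEGMENT_SIZE):
    stream_buffers = defaultdict(bytearray)      # one stream buffer per slice
    for offset in range(0, len(bitstream), segment_size):
        segment = bitstream[offset:offset + segment_size]
        slice_id = segment[0]                    # assumed per-segment slice tag
        stream_buffers[slice_id] += segment[1:]  # payload to that slice's buffer
    return stream_buffers
```

The decoder would then read each slice's stream buffer one decoded unit at a time, switching between slices according to the decoding-status information listed above.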
In the present invention, methods of encoding or decoding an image frame are developed for frame buffer compression in an image or video system. The image frame to be processed can be divided into multiple slices vertically, horizontally or both vertically and horizontally. The multiple slices comprise at least two slices. For description purposes, two of these slices are named slice 0 and slice 1 in the present invention, and the encoding of slice 0 starts before the encoding of slice 1. Slice 0 and slice 1 can be any two slices in the image frame. The decoding of slice 0 may start before or after the decoding of slice 1. Each of the multiple slices is a rectangular pixel group which comprises two or more pixels. According to the present invention, the frame buffer, reconstruction buffer, stream buffer, and/or the latency can be reduced compared with traditional coding methods based on horizontal partition or interleaved partition.
In order to reduce the stream buffer size in the encoder or the reconstruction buffer in the decoder, the encoder compresses only one part of slice 0 before switching to slice 1 according to the present invention. The decoder can also de-compress only partial data of one slice before switching to another slice. Thus, the corresponding stream buffer or frame buffer of slice 0 is designed for storing only partial data of slice 0. When the decoding of slice 0 starts before the decoding of slice 1, the reconstruction buffer may store partial reconstructed data of slice 1 for decoding these two slices in parallel. Therefore, the frame buffer compression method according to the present invention can reduce the stream buffer size on the encoder side and the reconstruction buffer on the decoder side.
In order to reduce the compressor cost, one module may be used to encode two or more slices. Similarly, one module may be used to decode two or more slices in the image frame. Therefore, the number of encoder or decoder modules may be less than the number of slices in the image frame. When the clock rate of an application processor is high enough with an advanced process (e.g., 28 nm), a single-core compressor may be used to encode two or more slices of the image frame. Thus, the compressor cost may be reduced by using a single-core compressor. On the decoder side, it is also possible to use a single-core decoder to decode two or more slices. In another embodiment, however, the number of encoder or decoder modules may be larger than the number of slices in the image frame.
According to the present invention, the coding processes for two or more slices can be performed in parallel by a multi-core compressor so that the encoding or decoding throughput can be improved. For example, slice 0 and slice 1 are compressed by a multi-core compressor. After encoding the first part of slice 0, the processing steps performed on slice 0 can be performed in parallel with the processes performed on slice 1. The processing steps may correspond to compressing each part, buffering and packing the encoded bitstream into segments, and delivering the segments. On the decoding side, the decoding of the two slices can also be performed in parallel. After decoding the first part of the image frame, the subsequent processing steps performed on slice 0 and slice 1 can be performed in parallel.
In the present invention, the encoded bitstreams for the multiple slices are interleaved and packed into fixed-size packets to generate interleaved segments. Among the interleaved segments, at least one segment of one slice is inserted into another slice stream. A segment of any slice in the image frame can be the first segment of the interleaved segments. For an image frame divided into slice 0 and slice 1, the segments of slice 0 and slice 1 can be arranged in any interleaved order, such as the order shown in the accompanying drawings.
The bitstream for each processing unit is stored in one or more packets for the corresponding slice. When the bitstream of the current processing unit exceeds one or more packets, the exceeded part is packed into the following packet for the current slice. The bitstream of the following processing unit in the current slice is packed behind the exceeded part of the current processing unit.
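The packing rule just described can be sketched as follows for one slice's packet queue; the packet size is invented for the example, and the class is only an illustration of the overflow behavior, not the disclosed stream pack unit.

```python
# Sketch of fixed-size packing for one slice: bitstreams of successive
# processing units are written back-to-back; when a unit overflows the current
# packet, the excess spills into the following packet of the same slice.
PACKET_SIZE = 16  # bytes, illustrative

class SlicePacker:
    def __init__(self, packet_size: int = PACKET_SIZE):
        self.packet_size = packet_size
        self.packets = [bytearray()]             # this slice's packets, in order

    def pack(self, unit_bitstream: bytes):
        data = memoryview(unit_bitstream)
        while len(data) > 0:
            current = self.packets[-1]
            room = self.packet_size - len(current)
            if room == 0:                        # current packet full: open a new one
                self.packets.append(bytearray())
                continue
            current += data[:room]               # fill the current packet
            data = data[room:]                   # remainder spills to the next packet
```

With this behavior, the bitstream of the next processing unit is automatically packed directly behind the excess of the previous unit, as stated above.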
According to the first embodiment of the present invention, each slice is divided into multiple partitions. The encoding of the image frame is based on the partitions. Each partition may be less than, equal to or more than one row of compression blocks in each slice. Two or more slices can be compressed by one encoder module. The de-compression of multiple slices can also be performed by one decoder module.
In step 530, the encoder encodes each partition of each slice to generate a bitstream. The encoding of slice 0 starts before the encoding of slice 1, and at least one partition of slice 1 is encoded before the last partition of slice 0. Temporary storage is used to preserve the internal encoding information for the multiple slices. The bitstreams for the multiple partitions from the multiple slices are buffered and packed into fixed-size segments in step 540. One or more stream buffers may be used to store the bitstreams. In order to reduce the stream buffer size, less than all the bitstreams of one slice may be stored in the corresponding stream buffer or buffers. For example, one bitstream of slice 0 and one bitstream of slice 1 are stored in one stream buffer. The bitstreams are packed into interleaved segments and provided in step 550.
The compression of each row of slice 0 and slice 1 generates bitstreams, named the R0,0 stream to the R1,n-1 stream. The bitstreams of slice 0 and slice 1 are delivered to stream pack unit 630 to form interleaved segments 640. One bitstream of slice 0 and one bitstream of slice 1 in the same row of the image frame may be delivered and packed together. For example, the R0,0 stream and the R1,0 stream are delivered together to stream pack unit 630. Then, stream pack unit 630 provides the interleaved segments of the R0,0 stream and the R1,0 stream to the decoder. Each of the fixed-size packets is used to contain one segment of slice 0 or one segment of slice 1. The segments of slice 0 are interleaved with the segments of slice 1.
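To illustrate the interleaving performed by a stream pack unit, the sketch below merges two per-slice packet queues by strict alternation; the alternating order is only one of the permissible interleaved arrangements, and the function name is invented for the example.

```python
# Sketch: form interleaved segments by taking fixed-size packets alternately
# from the slice 0 and slice 1 packet queues.
from itertools import zip_longest

def interleave_packets(slice0_packets, slice1_packets):
    interleaved = []
    for p0, p1 in zip_longest(slice0_packets, slice1_packets):
        if p0 is not None:
            interleaved.append(p0)               # next segment of slice 0
        if p1 is not None:
            interleaved.append(p1)               # next segment of slice 1
    return interleaved
```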
The de-compressing order may be the same as the encoding order.
According to another example of the first embodiment, each slice of the image frame is divided into multiple strips. Each strip contains one or more compression blocks, amounting to less than one row. The number of compression blocks in each strip may be determined based on the frame buffer size or the available transmission bandwidth. The encoder compresses one or more strips of slice 0 to generate a bitstream for slice 0. Then the encoder is switched to encode slice 1. Thus, the frame buffer may store only partial data of one row in slice 0.
Compressor 720 encodes strip Si to generate the Si stream. The internal encoding information of slice 0 and slice 1 is kept in temporary storage. The streams of the strips are packed into fixed-size packets by stream pack unit 730. Each fixed-size packet contains one fixed-size segment of slice 0 or slice 1.
The decoder may de-compress interleaved segments 740 in the same order as encoding. When one de-compressor is used for decoding both slice 0 and slice 1, the decoder de-compresses one or more strips for one slice and is then switched to de-compress one or more strips for another slice. The decoding flow can be similar to the decoding flow of the partition-based example described above.
According to the second embodiment of the present invention, the coding system encodes one part of one slice and switches to encode another slice based on information related to the coding status associated with the multiple slices. The image frame is also divided into multiple slices which comprise slice 0 and slice 1. When one encoder module is used to encode slice 0 and slice 1, the encoding of slice 0 starts before the encoding of slice 1. The information may be the pre-defined bitstream size, the amount of data stored in the corresponding stream buffer for the current slice, the fullness status of the corresponding stream buffer, the current encoding position in the image frame, the amount of data in the processing unit, or a combination of the foregoing.
If the amount of data in the processing unit is used to determine the switching operation, the compressor may compress a pre-defined amount of data and then switch to another slice. For example, the encoder is switched to another slice after compressing 2.5 rows in the current slice. The switching operation may also be determined according to the current position in the image frame.
When the current encoding position in the image frame is used as the determination information, the encoding operation is switched at a pre-defined position in the current slice. For example, the pre-defined position for slice 0 is the end of the third row and the pre-defined position for slice 1 is the middle of the second compression block row. The compressor is switched to process slice 1 when the current position is detected to be the end of the third compression block row in slice 0. After compressing the data to the middle of the second row in slice 1, the compressor is switched back to slice 0.
In the second embodiment, the stream buffers are local storage in the coding system. The size of each stream buffer is variable and designed for storing one or more processing units of the corresponding slice. No stream buffer is used for buffering the full image frame or one entire slice according to the present invention. The fullness status of each stream buffer can be used to determine the switching operation of encoding or decoding. For example, the compressor is switched to another slice when the stream buffer is full or the buffered stream is over a threshold for the current slice.
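As a non-authoritative sketch of the fullness-based switching rule, the snippet below models a single encoder core that moves to another slice whenever the current slice's stream buffer occupancy reaches a threshold. The capacity, threshold, and helper names are assumptions; the buffers only grow here for simplicity, whereas a real system would drain them as packets are emitted by the stream pack unit.

```python
# Sketch: switch the single encoder core to another slice when the current
# slice's stream buffer is full or its occupancy exceeds a threshold.
BUFFER_CAPACITY = 256   # illustrative per-slice stream buffer capacity (bytes)
SWITCH_THRESHOLD = 192  # illustrative switching threshold (bytes)

def should_switch(stream_buffer: bytearray) -> bool:
    occupancy = len(stream_buffer)
    return occupancy >= BUFFER_CAPACITY or occupancy >= SWITCH_THRESHOLD

def encode_with_switching(slices, encode_block):
    """slices[s] is a list of compression blocks for slice s; encode_block is
    an assumed helper returning the compressed bytes of one block."""
    stream_buffers = [bytearray() for _ in slices]
    cursors = [iter(blocks) for blocks in slices]
    done = [False] * len(slices)
    active = 0
    while not all(done):
        try:
            block = next(cursors[active])
        except StopIteration:
            done[active] = True
        else:
            stream_buffers[active] += encode_block(block)
        if done[active] or should_switch(stream_buffers[active]):
            active = (active + 1) % len(slices)  # switch to another slice
    return stream_buffers
```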
In the second embodiment, it is not required to split each slice into multiple partitions. The encoder compresses one processing unit of the current slice and is then switched to compress another slice. Each processing unit may consist of one or more coding blocks (or compression blocks). The encoding of slice 1 starts after the first processing unit of slice 0 is encoded and before the last processing unit of slice 0 is encoded. Each processing unit in slice 0 is smaller than the whole slice. During encoding, the encoder may compress less than or more than one row in one slice before being switched to compress another slice.
Similarly, the decoding is switched to another slice based on the information related to the decoding status associated with the multiple slices. The information for determining the switching operation of decoding may be one or a combination of the pre-defined size of the decoded bitstream associated with the current slice; the amount of data stored in the corresponding stream buffers; the fullness status in the corresponding stream buffer; the current decoding position in the image frame; and the amount of data in the decoded unit.
The encoder encodes a processing unit in a current slice to generate a bitstream associated with the processing unit in step 830. The encoder may use one module to encode two or more slices of the image frame. Therefore, the number of encoder modules may be less than the number of slices in the image frame. Each processing unit may comprise one or more compression blocks and may be less than, equal to or more than one row of compression blocks. Each compression block is a rectangular pixel group consisting of one or more pixels. The encoder is switched to another slice according to the information related to the coding status associated with the multiple slices. It will be appreciated that the number of encoder modules may also be larger than the number of slices in the image frame.
The bitstreams associated with the multiple processing units are buffered in one or more corresponding stream buffers. The bitstreams for each slice are packed into fixed-size packets in step 840. In order to reduce the frame buffer size, less than all the compressed data of slice 0 may be stored in the corresponding stream buffer or buffers. The bitstreams for the multiple slices are packed in an interleaving manner to form interleaved segments. Among the interleaved segments, at least one segment of one slice is inserted into another slice stream. The interleaved segments are provided for decoding in step 850.
By decoding the segments stored in one or more corresponding stream buffers for a current slice, a decoded unit in the current slice is generated. The decoded unit is smaller than a slice size. The decoded units for the image frame are provided or displayed in step 890. At least one decoded unit of the slice 1 is provided after the first decoded unit for the slice 0 and before the last decoded unit for the slice 0.
The bitstream of each processing unit in slice 0 is stored in stream buffer 931 for slice 0 and the bitstream of each processing unit in slice 1 is preserved in stream buffer 932 for slice 1. The size of each stream buffer is variable. The bitstreams for these two slices are packed into fixed-size packets to form interleaved segments 950 by stream pack unit 940.
According to one example of the second embodiment, the encoder can be switched to another slice based on the pre-defined bitstream size. The pre-defined threshold for each slice can be the size of one or more packets. The same or different pre-defined thresholds may be used for the multiple slices. The encoder may be switched to another slice when the bitstream generated from the current processing unit in the current slice is equal to or larger than the pre-defined threshold for the current slice. For example, the pre-defined threshold is one or more 64-bit units for each slice. The compressor is switched to encode another slice when the accumulated stream for the current processing unit of the current slice is equal to or larger than one or more 64-bit units. When the bitstream size of the current processing unit is larger than one or more packet sizes, the exceeded stream is packed into the following packets for the current slice. The bitstream for the next processing unit of the current slice is packed behind the bitstream of the current processing unit.
In this example, the bitstream of processing unit P0,0 is larger than two packet sizes and less than three packet sizes. Thus, the bitstream of processing unit P0,0 is stored in packets 1051, 1053 and 1055. In packet 1055, part 1055a contains the exceeded bitstream of processing unit P0,0. Thus, the bitstream of the following processing unit in slice 0 is packed starting from the remaining part of packet 1055, which is part 1055b. The bitstream of processing unit P1,0 is larger than one packet size. Similarly, the bitstream of processing unit P1,0 is stored in packet 1052 and the exceeded part is stored in part 1054a of packet 1054.
According to the second embodiment, the switching operation of encoding may be determined based on the pre-defined bitstream size alone or together with other information. When the switching operation is determined based only on the pre-defined bitstream size, the compressor is switched back to slice 1 after the bitstream of processing unit P1,0 reaches the pre-defined threshold. The switching operation may also be determined based on the amount of the bitstream buffered for slice 0 as well as the pre-defined bitstream size for slice 0. In this situation, the compressor may continue encoding slice 1 even after the bitstream of processing unit P1,0 fills up one packet. The compressor may not be switched back to slice 0 until packet 1054 is filled up.
The methods disclosed in the present invention can be used by frame data encoders/decoders on display interfaces, such as the Mobile Industry Processor Interface MIPI-DSI (Display Serial Interface) or MIPI-CSI (Camera Serial Interface), or eDP (embedded DisplayPort), etc. The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.