Codecs may be modified to consider weighting and/or illumination compensation parameters when determining the deblocking filter strength to be applied. These parameters may be useful for representing illumination changes, such as fades, cross-fades, flashes, or light source changes, allowing these illumination changes to be displayed during playback using the same reference frame data with different weighting and/or illumination compensation parameters applied. In different instances, the parameters may be considered when setting a deblocking filter strength to ensure that these effects are properly displayed during playback while minimizing the appearance of blocking artifacts.
1. A method for configuring a deblocking filter to reduce banding artifacts comprising:
comparing a weighted prediction parameter of a video codec inter-prediction process from a reference index in a plurality of blocks using a processing device;
when the compared weighted prediction parameter in the blocks is different, setting a deblocking filter strength of the blocks to a first value;
when the weighted prediction parameter in the blocks is similar:
when the blocks have different reference pictures or a different number of reference pictures, setting the deblocking filter strength to a second value;
calculating a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction;
when the difference in at least one of the directions is greater than or equal to a threshold, setting the deblocking filter strength of the blocks to the second value; and
when the difference in both directions is less than the threshold, setting the deblocking filter strength of the blocks to a third value.
12. A method for configuring a deblocking filter to reduce banding artifacts comprising:
comparing an illumination compensation parameter of a video codec inter-prediction process in a plurality of blocks of image data using a processing device;
when the illumination compensation parameter is similar in the plurality of blocks:
when the blocks have different reference pictures or a different number of reference pictures, setting the deblocking filter strength to a second value;
calculating a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction;
when the difference in at least one of the directions is greater than or equal to a threshold, setting the deblocking filter strength of the blocks to the second value; and
when the difference in both directions is less than the threshold, setting the deblocking filter strength of the blocks to a first value; and
when the illumination compensation parameter is different in the plurality of blocks, setting the deblocking filter strength to the second value.
23. An image processor comprising:
a buffer;
a processing device;
a prediction unit for estimating, using the processing device, image motion between a source image being coded and a reference frame stored in the buffer and generating a weighted motion prediction parameter stored in a reference index for each of a plurality of blocks of image data; and
a filter system for:
comparing the weighted prediction parameter of different blocks of the image data;
when the compared weighted prediction parameter in the blocks is different, setting a deblocking filter strength of the blocks to a first value;
when the weighted prediction parameter in the blocks is similar:
when the blocks have different reference pictures or a different number of reference pictures, setting the deblocking filter strength to a second value;
calculating a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction;
when the difference in at least one of the directions is greater than or equal to a threshold, setting the deblocking filter strength of the blocks to the second value; and
when the difference in both directions is less than the threshold, setting the deblocking filter strength of the blocks to a third value.
28. An image processor comprising:
a buffer;
a processing device;
a prediction unit for estimating, using the processing device, image motion between a source image being coded and a reference frame stored in the buffer and generating an illumination compensation parameter associated with respective blocks of image data; and
a filter system for:
comparing the illumination compensation parameter of a video codec inter-prediction process in the respective blocks of image data;
when the illumination compensation parameter is similar in the plurality of blocks:
when the blocks have different reference pictures or a different number of reference pictures, setting the deblocking filter strength to a second value;
calculating a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction;
when the difference in at least one of the directions is greater than or equal to a threshold, setting the deblocking filter strength of the blocks to the second value; and
when the difference in both directions is less than the threshold, setting the deblocking filter strength of the blocks to a first value; and
when the illumination compensation parameter is different in the plurality of blocks, setting the deblocking filter strength to the second value.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
13. The method of
14. The method of
15. The method of
16. The method of
comparing a plurality of illumination compensation parameters in the plurality of blocks including (i) a scaling parameter applied to a motion compensated signal for a motion vector, and (ii) an offset applied to a scaled motion compensation signal for a motion vector;
setting the deblocking filter strength to the first value when the scaling parameter and the offset are similar in the blocks; and
setting the deblocking filter strength to the second value when at least one of the scaling parameter and the offset is different in the blocks.
17. The method of
calculating a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction;
setting the deblocking filter strength to a third value higher than the second value when both (i) the difference in at least one of the directions is greater than or equal to a threshold and (ii) at least one of the scaling parameter and the offset is different in the blocks;
setting the deblocking filter strength to the second value when only one of the following conditions applies: (i) the difference in at least one of the directions is greater than or equal to a threshold, or (ii) at least one of the scaling parameter and the offset is different in the blocks; and
setting the deblocking filter strength to the first value when (i) the difference in both directions is less than the threshold and (ii) the scaling parameter and the offset are similar in the blocks.
18. The method of
19. The method of
20. The method of
21. The method of
22. The method of
24. The image processor of
a strength derivation unit for comparing the weighted prediction parameter of different blocks and setting the deblocking filter strength for each of the compared blocks; and
a deblocking filter for applying deblocking filtering to image data at a strength provided by the strength derivation unit.
25. The image processor of
a motion estimator for estimating the image motion between the source image being coded and the reference frame; and
a mode decision unit for assigning a prediction mode to code the blocks of image data and select a coded block from the buffer to serve as a prediction reference for the image data to be coded.
26. The image processor of
27. The image processor of
29. The image processor of
a strength derivation unit for comparing the illumination compensation parameter of different blocks and setting the deblocking filter strength for each of the compared blocks; and
a deblocking filter for applying deblocking filtering to image data at a strength provided by the strength derivation unit.
30. The image processor of
a motion estimator for estimating the image motion between the source image being coded and the reference frame; and
a mode decision unit for assigning a prediction mode to code the blocks of image data and select a coded block from the buffer to serve as a prediction reference for the image data to be coded.
The present application claims the benefit of U.S. Provisional application Ser. No. 61/699,218 filed Sep. 10, 2012, entitled “VIDEO DEBLOCKING.” The aforementioned application is incorporated herein by reference in its entirety.
Existing video coding standards and technologies, such as MPEG-4 AVC/H.264, VC1, VP8, and the HEVC/H.265 video-coding standard, have employed block-based methods for coding information. These methods have included intra and inter prediction, as well as transform, quantization, and entropy coding processes. Intra and inter prediction exploit spatio-temporal correlation to compress video data. The transform and quantization processes, on the other hand, have been used to correct errors that may have been incurred due to inaccuracies in prediction, given a constraint in bit rate or target quality. The bit rate or target quality has been controlled primarily by adjusting the quantization level for each block. Entropy coding has further compressed the resulting data given its characteristics.
Although the above processes have resulted in substantial compression of an image or of video data, the inherent block characteristics of the prediction and coding process have resulted in coding artifacts that can be visually unpleasant and may deteriorate the performance of the coding process. Existing techniques introduced in some codecs and standards have attempted to reduce such coding artifacts. Some of these existing techniques applied a “deblocking” filter after reconstructing an image.
Deblocking filters have analyzed a variety of information about a region or block that has been coded and applied filtering strategies to reduce any detected coding artifacts. In codecs such as MPEG-4 AVC, VC1, VP8, and HEVC, the information may include the type of coding mode used for prediction, such as intra or inter, the motion vectors and their differences between adjacent blocks, the presence or absence of residual data, and the characteristics and differences between the samples that are to be filtered. The process was further controlled by adjusting the filtering given the quantization parameters that were used for the samples being filtered. These characteristics were selected in an effort to maximize the detection of possible coding artifacts, also referred to as blocking artifacts.
Some codecs included an illumination compensation process, such as weighted prediction, as part of the inter-prediction process to further improve prediction performance. Motion compensated samples were adjusted through a weighting and offsetting process, which commonly takes the form of equation (1) below, instead of being copied directly from another area as the prediction signal:
y = w·x(mv) + o  (1)
In this equation, y is the final motion compensated signal, x is the motion compensated signal given a motion vector mv, w is the weighting (scaling) parameter, and o is the offset. Illumination compensation has reduced blocking artifacts in different instances and not just during illumination changes, such as fades, cross-fades, flashes, light source changes, and so on. The codecs also enabled the prediction of similar samples within the same image using bi-prediction, or different samples within the same image using multiple instances of the same reference with different illumination compensation/weighted prediction parameters.
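As a minimal sketch, equation (1) can be applied per sample; the block contents and the values of w and o below are purely illustrative:

```python
def weighted_prediction(x_mv, w, o):
    """Equation (1): y = w * x(mv) + o, applied per sample, where
    x_mv is the motion-compensated block fetched at motion vector mv."""
    return [[w * sample + o for sample in row] for row in x_mv]

# A fade can be represented by scaling the reference down (w < 1)
# instead of coding a large residual; here w=0.5, o=2 (illustrative).
block = [[100.0, 104.0], [96.0, 100.0]]
faded = weighted_prediction(block, w=0.5, o=2.0)
print(faded[0])  # [52.0, 54.0]
```

Two blocks predicted from the same reference samples but with different (w, o) pairs can thus have visibly different reconstructed values, which is why the parameters matter for deblocking.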
Unfortunately, these existing codecs have not considered differences in illumination compensation parameters during the de-blocking process. For example, in some instances where two adjacent blocks use the same reference but have different illumination compensation parameters, no de-blocking was performed. This caused blocking artifacts to appear across two neighboring blocks from the same reference even though the illumination compensation parameters were different. The blocking artifacts appeared because existing codecs, such as AVC and HEVC, only examine whether the actual references used for prediction are the same, and do not consider whether any additional transformation beyond motion compensation has been applied to the reference samples.
If the boundary is not a macroblock boundary, then the block filter strength may be set to a non-zero value so that deblocking will be performed. For example, the lesser value 3 in box 105 or the lesser value 2 in box 107 may be used, though other values may be used in other embodiments. If none of the samples is intra-coded, then in box 106, a determination may be made as to whether there are any non-zero transform coefficients such as discrete cosine transform (DCT) or discrete sine transform (DST) coefficients in either block p or block q. If there are any non-zero DCT coefficients in either block p or block q, then the block filter strength may be set to a lesser value, such as value 2 in box 107.
If there are not any non-zero DCT coefficients in either block p or block q, then in box 108 a determination may be made as to whether blocks p and q have different reference pictures or different numbers of reference pictures. If blocks p and q have different reference pictures or different numbers of reference pictures, then the block filter strength may be set to a lesser value, such as value 1 in box 109.
If blocks p and q do not have different reference pictures or different numbers of reference pictures, then in box 110, a determination may be made as to whether a difference between the motion vectors of blocks p and q in either the horizontal direction or the vertical direction is greater than or equal to a threshold.
If the difference between the motion vectors of blocks p and q in either the horizontal direction or the vertical direction is greater than or equal to the threshold, then the block filter strength may be set to a lesser value, such as value 1 in box 109, which may be the same lesser value that is set when the blocks p and q have different reference pictures or different numbers of reference pictures.
If the difference between the motion vectors of blocks p and q in either direction is less than the threshold, then filtering may be skipped and the block filter strength may be set to a zero or least value, such as value 0 in box 111.
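The conventional derivation walked through above may be sketched as follows. The dictionary field names and the motion-vector threshold are illustrative assumptions for this sketch, not structures defined by any codec:

```python
MV_THRESHOLD = 4  # assumed value, e.g. one sample in quarter-pel units

def boundary_strength(p, q, is_mb_boundary):
    """Sketch of the conventional strength derivation (boxes 104-111);
    p and q stand in for the two adjacent blocks."""
    if p["intra"] or q["intra"]:
        # Intra samples: strongest filtering at a macroblock boundary,
        # a lesser non-zero value otherwise (box 105).
        return 4 if is_mb_boundary else 3
    if p["has_coeffs"] or q["has_coeffs"]:
        return 2  # non-zero transform coefficients present (box 107)
    if p["refs"] != q["refs"]:
        return 1  # different reference pictures or counts (box 109)
    dx = abs(p["mv"][0] - q["mv"][0])
    dy = abs(p["mv"][1] - q["mv"][1])
    if dx >= MV_THRESHOLD or dy >= MV_THRESHOLD:
        return 1  # large motion difference (boxes 110 -> 109)
    return 0  # filtering skipped (box 111)
```

Note that nothing in this conventional flow inspects weighting or illumination compensation parameters, which is the gap the embodiments below address.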
There is a need to eliminate blocking artifacts in those instances where additional transformations have been applied to one or more reference samples to generate distinct image blocks from one or more similar reference samples.
In various embodiments of the invention, one or more codecs may be modified to consider weighting or illumination compensation parameters when determining a deblocking filter strength that is to be applied. For example, instead of just determining whether two blocks p and q use different reference pictures or a different number of reference pictures, in an embodiment a determination may be made as to whether the two blocks p and q have different parameters that were not previously considered, such as weighted prediction parameters.
Codecs may be modified to consider weighting parameters when determining the deblocking filter strength to be applied. Weighting parameters may improve the compression efficiency of codecs by better compensating for different effects, such as fades, cross-fades, flashes, or light source changes. Codecs such as MPEG-2 that do not support weighting parameters may still be able to encode these effects; however, the encoding may require substantially more bits to achieve similar quality. If fewer bits are used, more coding artifacts may result, leading to poorer perceived quality. In different instances, the weighting parameters may be considered when setting a deblocking filter strength to ensure that these effects are efficiently compressed while minimizing the appearance of blocking artifacts.
Since different weighted prediction parameter values may result in different values in a reference index associated with different image data blocks, in some embodiments, the reference indices of different blocks may be compared when setting the filter strength to determine whether blocks have different weighted prediction parameters. Checking whether the reference indices associated with each block are different instead of checking whether the same reference pictures are used may also simplify the deblocking process as there would be no need to provide an additional mapping from the reference index to the actual reference pointer when checking whether the same reference pictures are used.
In some of these embodiments, a weighted prediction parameter of a video codec inter-prediction process from a reference index in a plurality of blocks may be compared using a processing device. When the compared weighted prediction parameter in the blocks is different, a deblocking filter strength of the blocks may be set to a first value. When the weighted prediction parameter in the blocks is similar, a difference between motion vectors of the respective blocks in a horizontal direction and a vertical direction may be calculated.
When the calculated difference in at least one of the directions is greater than or equal to a threshold, the deblocking filter strength of the blocks may be set to a second value. Otherwise, when the difference in both directions is less than the threshold, the deblocking filter strength of the blocks may be set to a third value.
If the boundary is not a macroblock boundary, then the block filter strength may be set to a lesser value, such as the lesser value 3 in box 205 or the lesser value 2 in box 207 in this example, though other values may be used in other embodiments. If none of the samples is intra-coded, then in box 206, a determination may be made as to whether there are any non-zero discrete cosine transform (DCT) coefficients in either block p or block q. If there are any non-zero DCT coefficients in either block p or block q, then the block filter strength may be set to an even lesser value, such as value 2 in box 207.
If there are not any non-zero DCT coefficients in either block p or block q, then in box 208 a determination may be made as to whether blocks p and q have different reference indices or different numbers of reference pictures. If blocks p and q have different reference indices or different numbers of reference pictures, then the block filter strength may be set to a lesser value, such as value 1 in box 209.
If blocks p and q do not have different reference indices or different numbers of reference pictures, then in box 210, a determination may be made as to whether a difference between the horizontal or vertical motion vectors of blocks p and q is greater than or equal to a threshold.
If the difference between the motion vectors of blocks p and q in either direction is greater than or equal to the threshold, then the block filter strength may be set to a lesser value, such as value 1 in box 209, which may be the same lesser value that is set when the blocks p and q have different reference indices or different numbers of reference pictures.
If the difference between the motion vectors of blocks p and q in either direction is less than the threshold, then filtering may be skipped and the block filter strength may be set to a zero or least value, such as value 0 in box 211.
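This modified flow may be sketched as follows; the only change from the conventional derivation is that reference *indices* are compared in box 208, so two uses of the same reference picture with different weighted prediction parameters (and hence distinct indices) no longer skip filtering. Field names and the threshold are illustrative assumptions:

```python
def boundary_strength_wp(p, q, is_mb_boundary, mv_threshold=4):
    """Sketch of the modified derivation (boxes 204-211), comparing
    reference indices instead of the pictures they point to."""
    if p["intra"] or q["intra"]:
        return 4 if is_mb_boundary else 3          # boxes 204/205
    if p["has_coeffs"] or q["has_coeffs"]:
        return 2                                   # boxes 206/207
    # Different weighted-prediction parameters map to different
    # reference indices, so this check catches them (boxes 208/209),
    # and no index-to-picture mapping is needed.
    if p["ref_idx"] != q["ref_idx"]:
        return 1
    dx = abs(p["mv"][0] - q["mv"][0])
    dy = abs(p["mv"][1] - q["mv"][1])
    return 1 if (dx >= mv_threshold or dy >= mv_threshold) else 0
```

Two blocks pointing at the same picture through indices 0 and 1 (same picture, different weights) would receive strength 1 here rather than 0.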
In other embodiments, a determination may be made as to whether the two adjacent blocks p and q use different illumination parameters. These illumination parameters may include the weighting factor w and the offset o in equation (1) above. If either the weighting or the offset parameters is different between the two blocks p and q, then the block filter strength may be set to a higher value than if the weighting and offset parameters are similar. This may ensure that filtering is not skipped when either the weighting or the offset parameters are different between the blocks even though the same reference pictures may be used by both blocks p and q.
In some of these embodiments, an illumination compensation parameter of a video codec inter-prediction process in a plurality of blocks of image data may be compared using a processing device. When the illumination compensation parameter is similar in the plurality of blocks, a deblocking filter strength may be set to a first value. When the illumination compensation parameter is different in the plurality of blocks, the deblocking filter strength may be set to a second value.
If the boundary is not a macroblock boundary, then the block filter strength may be set to a lesser value, such as the lesser value 3 in box 305 or the lesser value 2 in box 307 in this example, though other values may be used in other embodiments. If none of the samples is intra-coded, then in box 306, a determination may be made as to whether there are any non-zero discrete cosine transform (DCT) coefficients in either block p or block q. If there are any non-zero DCT coefficients in either block p or block q, then the block filter strength may be set to an even lesser value, such as value 2 in box 307.
If there are not any non-zero DCT coefficients in either block p or block q, then in box 308 a determination may be made as to whether blocks p and q have different reference pictures or different numbers of reference pictures. If blocks p and q have different reference pictures or different numbers of reference pictures, then the block filter strength may be set to a lesser value, such as value 1 in box 309.
If blocks p and q do not have different reference pictures or different numbers of reference pictures, then in box 310, a determination may be made as to whether (i) a difference between the horizontal or vertical motion vectors of blocks p and q is greater than or equal to a threshold or (ii) either the weighting or the offset parameter is different between the two blocks p and q.
If the difference between the motion vectors of blocks p and q in either direction is greater than or equal to the threshold, then the block filter strength may be set to a lesser value, such as value 1 in box 309, which may be the same lesser value that is set when the blocks p and q have different reference pictures or different numbers of reference pictures. The block filter strength may also be set to the lesser value, such as value 1 in box 309, if either the weighting or the offset parameter is different between the two blocks p and q.
If the difference between the motion vectors of blocks p and q in either direction is less than the threshold, and both the weighting and the offset parameters are similar between the two blocks p and q, then filtering may be skipped and/or the block filter strength may be set to a zero or least value, such as value 0 in box 311.
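The box 310 decision may be sketched as follows, with hypothetical field names; w and o are the weighting and offset parameters of equation (1):

```python
def illum_strength(p, q, mv_threshold=4):
    """Sketch of box 310: filtering is skipped (strength 0) only when
    BOTH the motion vectors and the illumination parameters (w, o)
    agree; any disagreement yields strength 1 (box 309)."""
    mv_differs = (abs(p["mv"][0] - q["mv"][0]) >= mv_threshold
                  or abs(p["mv"][1] - q["mv"][1]) >= mv_threshold)
    illum_differs = p["w"] != q["w"] or p["o"] != q["o"]
    return 1 if (mv_differs or illum_differs) else 0
```

Under the conventional rules, two blocks with identical motion and the same reference but different offsets would skip filtering; here the differing offset alone forces a non-zero strength.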
In another embodiment, the deblocking filter strength may be set to a first value when both of the following conditions occur: (i) at least one of the weighting parameter and the offset parameter is different between the two blocks p and q, and (ii) a difference between at least one of the horizontal or vertical motion vectors of blocks p and q is greater than or equal to a threshold. If only one of the conditions occurs, then the deblocking filter strength may be set to a second value lower than the first value. If neither condition occurs, then the deblocking filter strength may be set to a third value, which may be a lowest value that skips filtering altogether.
If the boundary is not a macroblock boundary, then the block filter strength may be set to a lesser value, such as the lesser value 3 in box 405, though this and the other values specified herein may be different in other embodiments. If none of the samples is intra-coded, then in box 406, a determination may be made as to whether there are any non-zero transform coefficients in either block p or block q. If there are any non-zero coefficients in either block p or block q, then the block filter strength may be set to an even lesser value, such as value 2 in box 407.
If there are not any non-zero coefficients in either block p or block q, then in box 408 a determination may be made as to whether blocks p and q have different reference pictures or different numbers of reference pictures. If blocks p and q have different reference pictures or different numbers of reference pictures, then the block filter strength may be set to one of the existing lesser values, such as value 2 in box 407, or another lesser value.
If blocks p and q do not have different reference pictures or different numbers of reference pictures, then in box 409, a determination may be made as to whether both of the following conditions are satisfied: (i) a difference between at least one of the horizontal or vertical motion vectors of blocks p and q is greater than or equal to a threshold, and (ii) at least one of the weighting parameter and the offset parameter is different between the two blocks p and q. If both of these conditions apply, then the block filter strength may be set to one of the existing lesser values, such as value 2 in box 407, or another lesser value.
If both of the above conditions are not satisfied, then in box 410, a determination may be made as to whether only one of the conditions is satisfied. If either (i) a difference between at least one of the horizontal or vertical motion vectors of blocks p and q is greater than or equal to a threshold, or (ii) at least one of the weighting parameter and the offset parameter is different between the two blocks p and q, then the block filter strength may be set to a lesser value than in the prior case when both of the conditions were satisfied, such as the value 1 in box 411, or another lesser value.
If none of the conditions in box 410 are satisfied, then filtering may be skipped and/or the block filter strength may be set to a zero or least value, such as value 0 in box 412.
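The boxes 409-412 decision may be sketched as follows, again with hypothetical field names and an assumed threshold:

```python
def two_condition_strength(p, q, mv_threshold=4):
    """Sketch of boxes 409-412: strength 2 when BOTH the motion and
    illumination conditions hold, 1 when exactly one holds, and 0
    (filtering skipped) when neither does."""
    mv_differs = (abs(p["mv"][0] - q["mv"][0]) >= mv_threshold
                  or abs(p["mv"][1] - q["mv"][1]) >= mv_threshold)
    illum_differs = p["w"] != q["w"] or p["o"] != q["o"]
    # With the example values 2/1/0 used above, the strength happens
    # to equal the number of satisfied conditions.
    return int(mv_differs) + int(illum_differs)
```

This graded mapping filters hardest where both motion and illumination disagree, on the reasoning that such boundaries are the most likely to show visible discontinuities.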
In other embodiments, additional tiers, similar to boxes 409 and/or 410, could also be used. For example, if the motion vector difference in a dimension is greater than or equal to Xm and the weighting and offset parameters are different, then filtering strength Sm may be set. However, if the motion vector difference in a dimension is less than Xm but greater than or equal to Xn, then filtering strength Sn&lt;Sm may be set. Similarly, if the motion vector difference in a dimension is less than Xn but greater than or equal to Xr, then filtering strength Sr&lt;Sn may be set, and so on. In some embodiments, multiple tiers may also be implemented with different motion vector absolute difference thresholds and filtering strength values, even for non-weighted and/or non-offset samples.
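The multi-tier idea may be sketched as a descending list of (threshold X, strength S) pairs; the particular (X, S) values below are illustrative assumptions, and the sketch shows the variant in which tiering is applied when the weighting/offset parameters differ:

```python
def multi_tier_strength(mv_diff, illum_differs, tiers):
    """Sketch of the tiered mapping: `tiers` holds (X, S) pairs in
    descending threshold order, e.g. [(Xm, Sm), (Xn, Sn), (Xr, Sr)]
    with Sm > Sn > Sr; the first tier whose threshold the motion
    vector difference meets sets the strength."""
    if not illum_differs:
        return 0  # this sketch only tiers differing-parameter pairs
    for threshold, strength in tiers:
        if mv_diff >= threshold:
            return strength
    return 0

tiers = [(8, 3), (4, 2), (2, 1)]  # (Xm, Sm), (Xn, Sn), (Xr, Sr) - illustrative
print(multi_tier_strength(9, True, tiers))  # 3
print(multi_tier_strength(5, True, tiers))  # 2
print(multi_tier_strength(3, True, tiers))  # 1
```

As the text notes, the same tiering structure could also be applied to non-weighted samples by dropping the `illum_differs` gate.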
The deblocking strength may also be increased given particular quantization parameters of the blocks that are to be filtered. In some instances, the quantization parameters may be used during the setting or applying of the filtering threshold values, but not during the determination of the filtering strength. In some instances, an average, weighted average, or a maximum of the quantization parameter values of the blocks involved may be determined and then used in conjunction with a table lookup process to derive or otherwise identify a particular threshold value that is to be used.
In some instances, higher quantization parameters may be associated with a higher probability of blocking artifacts appearing in an output, especially in those instances with zero or few residual coefficients. For example, if a quantization parameter exceeds a value X and there are no coefficients in the blocks to be filtered, then, if the motion difference across the two blocks is significant, i.e., above a certain threshold, filtering may be performed at a predetermined filtering strength, such as filter strength value 2. More significant filtering could also be performed if there are discrete cosine transform coefficients in the blocks with higher quantization parameters. For example, instead of using filtering strength value 2 for such blocks, as is currently done in codecs like AVC or HEVC, a higher filter strength, such as filter strength value 3, may be used.
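The QP handling described above may be sketched as follows: combine the two blocks' quantization parameters by an average or a maximum, then look the result up in a threshold table. The table contents here are illustrative assumptions, not values taken from any standard:

```python
QP_MAX = 51  # AVC/HEVC-style QP range upper bound

def qp_based_threshold(qp_p, qp_q, table, mode="avg"):
    """Combine the QPs of blocks p and q (rounded average or maximum),
    clip to the valid range, and look up a filtering threshold."""
    if mode == "max":
        qp = max(qp_p, qp_q)
    else:
        qp = (qp_p + qp_q + 1) // 2  # rounded average
    return table[min(max(qp, 0), QP_MAX)]

# Hypothetical monotone table: higher QP -> larger threshold, i.e.
# more aggressive filtering at coarser quantization.
table = [max(0, qp - 20) for qp in range(QP_MAX + 1)]
print(qp_based_threshold(30, 34, table))         # avg QP 32 -> 12
print(qp_based_threshold(30, 34, table, "max"))  # max QP 34 -> 14
```

The choice between average and maximum trades smoothness against caution: the maximum reacts to the coarser of the two blocks, which is where artifacts are more likely.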
The system 500 also may include an inverse quantization unit 522, an inverse transform unit 524, an adder 526, a filter system 530, a buffer 540, and a prediction unit 550. The inverse quantization unit 522 may invert the quantization of coded video data according to the QP used by the quantizer 516. The inverse transform unit 524 may transform the re-quantized coefficients to the pixel domain. The adder 526 may add the pixel residuals output from the inverse transform unit 524 to predicted motion data from the prediction unit 550. The summed output from the adder 526 may be provided to the filtering system 530.
The filtering system 530 may include a deblocking filter 532 and a strength derivation unit 534. The deblocking filter 532 may apply deblocking filtering to recovered video data output from the adder 526 at a strength provided by the strength derivation unit 534. The strength derivation unit 534 may derive a strength value using any of the techniques described above. The filtering system 530 also may include other filters, such as filters that apply SAO filtering, but these are not illustrated.
The buffer 540 may store recovered frame data as outputted by the filtering system 530. The recovered frame data may be stored for use as reference frames during coding of later-received blocks.
The prediction unit 550 may include a mode decision unit 552 and a motion estimator 534. The motion estimator 534 may estimate image motion between a source image being coded and reference frame(s) stored in the buffer 540. The mode decision unit 552 may assign a prediction mode to code the input block and select a block from the buffer 540 to serve as a prediction reference for the input block. For example, it may select a prediction mode to be used (for example, uni-predictive P-coding or bi-predictive B-coding) and generate motion vectors for use in such predictive coding. In this regard, the motion compensated predictor 548 may retrieve buffered block data of selected reference frames from the buffer 540.
Existing and upcoming video coding standards currently seem to be restricted in terms of the inter-prediction modes that are performed. That is, for single list prediction, motion compensation given a reference is performed using a motion vector, a defined interpolation process, and a set of illumination parameters. For bi-prediction, two references may be utilized with different motion vectors and illumination compensation parameters for each. However, future codecs may utilize additional transformation processes such as affine or parabolic motion compensation, de-noising or de-ringing filters, among others. Such mechanisms could be different for each reference, whereas for one reference, similar to the case of weighted prediction, multiple such parameters may also be used for each instance of that reference. In that case, we propose that de-blocking should also account for such differences when deriving the de-blocking strength, further avoiding or reducing discontinuities across block boundaries.
The foregoing discussion has described operation of the embodiments of the present invention in the context of codecs. Commonly, codecs are provided as electronic devices. They can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors. Alternatively, they can be embodied in computer programs that execute on personal computers, notebook computers or computer servers. Similarly, decoders can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors, or they can be embodied in computer programs that execute on personal computers, notebook computers or computer servers. Decoders commonly are packaged in consumer electronics devices, such as gaming systems, DVD players, portable media players and the like and they also can be packaged in consumer software applications such as video games, browser-based media players and the like. And, of course, these components may be provided as hybrid systems that distribute functionality across dedicated hardware components and programmed general purpose processors as desired.
Tourapis, Alexandros, Leontaris, Athanasios