In an image processing apparatus for processing input image data, an edge image, in which pixel values change between pixels, is extracted from a generated drawing image of each color. Processing for expanding an edge amount is then performed on the extracted edge image, and the brightness of a pixel of the drawing image corresponding to a pixel of the expanded edge image is adjusted based on the extracted edge amount.
8. A method for performing image processing by an image processing apparatus for processing inputted image data, the method comprising:
extracting edge intensity of a target pixel included in the inputted image data on a basis of the target pixel and surrounding pixels around the target pixel; and
performing conversion processing of converting a pixel value of the target pixel such that brightness of the target pixel becomes greater than brightness of the target pixel before conversion at least in accordance with the extracted edge intensity of the target pixel and the pixel value of the target pixel,
wherein, in the conversion processing, in a case where the pixel value of the target pixel is a first pixel value, the brightness of the target pixel after conversion differs depending on a value of the edge intensity of the target pixel, a ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion decreases as the edge intensity of the target pixel increases, and the ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion increases as the edge intensity of the target pixel decreases, and
wherein, after completion of the conversion processing, an image after the conversion processing is outputted.
15. A non-transitory computer readable storage medium storing a program which causes a computer to perform a method for performing image processing by an image processing apparatus for processing inputted image data, the method comprising the steps of:
extracting edge intensity of a target pixel included in the inputted image data on a basis of the target pixel and surrounding pixels around the target pixel; and
performing conversion processing of converting a pixel value of the target pixel such that brightness of the target pixel becomes greater than brightness of the target pixel before conversion at least in accordance with the extracted edge intensity of the target pixel and the pixel value of the target pixel,
wherein, in the conversion processing, in a case where the pixel value of the target pixel is a first pixel value, the brightness of the target pixel after conversion differs depending on a value of the edge intensity of the target pixel, a ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion decreases as the edge intensity of the target pixel increases, and the ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion increases as the edge intensity of the target pixel decreases, and
wherein, after completion of the conversion processing, an image after the conversion processing is outputted.
1. An image processing apparatus for processing inputted image data, the image processing apparatus comprising:
one or more memories storing instructions; and
one or more processors that, when executing the instructions, cause the image processing apparatus to:
extract edge intensity of a target pixel included in the inputted image data on a basis of the target pixel and surrounding pixels around the target pixel; and
perform conversion processing of converting a pixel value of the target pixel such that brightness of the target pixel becomes greater than brightness of the target pixel before conversion at least in accordance with the extracted edge intensity of the target pixel and the pixel value of the target pixel,
wherein, in the conversion processing, in a case where the pixel value of the target pixel is a first pixel value, the brightness of the target pixel after conversion differs depending on a value of the edge intensity of the target pixel, a ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion decreases as the edge intensity of the target pixel increases, and the ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion increases as the edge intensity of the target pixel decreases, and
wherein, after completion of the conversion processing, an image after the conversion processing is outputted.
2. The image processing apparatus according to
wherein, in the conversion processing, the ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion differs depending on a preset recording material reduction target value.
3. The image processing apparatus according to
wherein the edge intensity of the target pixel is extracted by calculating an average value based on the pixel value of the target pixel and pixel values of the surrounding pixels and then by calculating a difference between two pixel values closest to the calculated average value.
4. The image processing apparatus according to
5. The image processing apparatus according to
6. The image processing apparatus according to
7. The image processing apparatus according to
9. The method according to
wherein, in the conversion processing, the ratio of increase in the brightness of the target pixel after conversion from the brightness of the target pixel before conversion differs depending on a preset recording material reduction target value.
10. The method according to
wherein the edge intensity of the target pixel is extracted by calculating an average value based on the pixel value of the target pixel and pixel values of the surrounding pixels and then by calculating a difference between two pixel values closest to the calculated average value.
11. The method according to
wherein a corrected brightness value that is after correction for making the brightness value of the pixel in the image data brighter is acquired, and wherein a second brightness value is generated by synthesizing the brightness value of the pixel and the corrected brightness value with each other using a synthesis ratio corresponding to the edge intensity of the pixel.
12. The method according to
14. The method according to
Field of the Invention
The present disclosure generally relates to image processing and, more particularly, to an image processing apparatus, an image processing method, and a storage medium.
Description of the Related Art
Processing (density decrease processing) has been known for determining each pixel of input image data to be either an edge pixel or a non-edge pixel and applying coloring material amount reduction processing to the non-edge pixels (Japanese Patent Application Laid-Open No. 2002-086805).
The technique described in Japanese Patent Application Laid-Open No. 2002-086805 has an issue in that the coloring material amount reduction processing differs greatly between pixels determined to be edge pixels and pixels determined to be non-edge pixels, so that a step caused by the switching of this processing is noticeable in the print result at the boundary between the two kinds of pixels.
The present disclosure is directed to providing a mechanism capable of reducing a step between pixels at edges extracted by image processing, thereby solving the above-described issue.
An image processing apparatus according to the present disclosure for achieving the above-mentioned purpose includes the following configuration.
An image processing apparatus for processing input image data includes a generation unit configured to generate a drawing image of each color from the image data, an extraction unit configured to extract, from the generated drawing image of each color, an edge image in which pixel values change between pixels, an edge amount expansion unit configured to perform processing for expanding an edge amount on the extracted edge image, and a processing unit configured to adjust the brightness of a pixel of the drawing image corresponding to a pixel of the expanded edge image based on the edge amount.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Various exemplary embodiments of the present disclosure will be described in detail below with reference to the attached drawings.
<Description of System Configuration>
In
The image processing apparatus 100 includes therein a central processing unit (CPU), which may include one or more processors, a read-only memory (ROM), and a random access memory (RAM), which are not illustrated. The CPU loads a program of the image processing apparatus 100 from the ROM and executes it using the RAM as a primary storage area. Through these operations, the processing of each unit (101 to 108) is executed.
<PDL Data Obtainment Unit 101 and Drawing Unit 102>
When receiving PDL data from an external computer 110, the PDL data obtainment unit 101 outputs the PDL data to the drawing unit 102. The PDL data is data composed of drawing commands for a plurality of objects. Next, the drawing unit 102 generates a drawing image (here assumed to be a red, green, and blue (RGB) image) based on the PDL data received from the PDL data obtainment unit 101 and outputs the drawing image to the edge amount extraction processing unit 103.
In step S401, the edge amount extraction processing unit 103 selects pixels of the RGB image one by one from the upper left as a target pixel and initializes edge, a variable for storing the edge amount to be calculated, to "0". Next, in step S402, the edge amount extraction processing unit 103 selects a target color plate for the current target pixel. Since an RGB image is processed here, the target color plate is one of R, G, and B. Next, in step S403, the edge amount extraction processing unit 103 detects the pixel having the maximum pixel value and the pixel having the minimum pixel value among the pixels in a window (for example, a range of three by three pixels centered on the target pixel) and takes the average of the two pixel values as mid.
Next, in step S404, the edge amount extraction processing unit 103 calculates the minimum pixel value (high_min) among the pixels in the window whose pixel values are larger than mid.
Similarly, in step S405, the edge amount extraction processing unit 103 calculates the maximum pixel value (low_max) among the pixels in the window whose pixel values are less than mid. The difference between high_min and low_max is the edge amount for the target color plate of the target pixel. In step S406, the edge amount extraction processing unit 103 determines whether this difference is larger than the value currently stored in edge. If the determination in step S406 is true (YES in step S406), then in step S407 the edge amount extraction processing unit 103 substitutes the difference between high_min and low_max for edge; if it is false (NO in step S406), no substitution is performed.
Next, in step S408, the edge amount extraction processing unit 103 determines whether all of the color plates have been selected as the target color plate. If not all of them have been selected (NO in step S408), then in step S402 the edge amount extraction processing unit 103 selects an unprocessed color plate. On the other hand, if all of the color plates have been selected (YES in step S408), then in step S409 the edge amount extraction processing unit 103 writes the value of edge to the pixel of the edge image corresponding to the target pixel.
Next, in step S410, the edge amount extraction processing unit 103 determines whether all pixels have been selected as the target pixel. If not all pixels have been selected (NO in step S410), then in step S401 the edge amount extraction processing unit 103 selects an unprocessed pixel. By executing the above-described processing in steps S401 to S410 for all pixels, the edge amount extraction processing unit 103 extracts the edge amount of each plate of the input RGB image and writes the maximum of the edge amounts extracted over the plates to the edge image.
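The flow of steps S401 to S410 can be sketched as follows. This is only an illustrative reading of the description, not the actual implementation; the function name, the use of NumPy, and the border handling (edge replication) are assumptions of this sketch.

```python
import numpy as np

def extract_edge_amount(rgb, win=1):
    """Per-pixel edge amount over a (2*win+1)^2 window, per color plate:
    mid = (max + min) / 2, then edge = high_min - low_max, where high_min
    is the smallest value above mid and low_max the largest value below
    mid.  The maximum over the color plates is written to the edge image."""
    h, w, planes = rgb.shape
    edge_img = np.zeros((h, w), dtype=np.int32)
    # Border handling by edge replication is an assumption.
    padded = np.pad(rgb, ((win, win), (win, win), (0, 0)), mode="edge")
    for y in range(h):
        for x in range(w):
            edge = 0  # variable "edge" initialized in step S401
            for c in range(planes):  # steps S402/S408: loop over color plates
                window = padded[y:y + 2 * win + 1,
                                x:x + 2 * win + 1, c].astype(np.int32)
                mid = (int(window.max()) + int(window.min())) / 2.0  # step S403
                above = window[window > mid]  # candidates for high_min (S404)
                below = window[window < mid]  # candidates for low_max (S405)
                if above.size and below.size:
                    diff = int(above.min()) - int(below.max())
                    edge = max(edge, diff)  # steps S406/S407
            edge_img[y, x] = edge  # step S409
    return edge_img
```

For a flat window no pixels lie above or below mid, so the edge amount stays 0; across a step between two flat regions the edge amount equals the full step height.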
The processing of the edge amount extraction processing unit 103 is not limited to the above-described example, and other known edge amount extraction processing such as the edge amount extraction by the Canny method and a space filter may be used.
<Details of Edge Amount Expansion Processing Unit>
In step S601, the edge amount expansion processing unit 104 selects pixels one by one from the upper left as a target pixel and initializes edge, a variable for storing the expanded edge amount to be calculated, to zero.
Next, in step S602, the edge amount expansion processing unit 104 selects the pixels in a window (for example, a range of nine by nine pixels centered on the target pixel) one by one from the upper left as a reference pixel. Next, in step S603, the edge amount expansion processing unit 104 determines whether edge is smaller than the edge amount of the reference pixel. If the determination in step S603 is true (YES in step S603), then in step S604 the edge amount expansion processing unit 104 substitutes the edge amount of the reference pixel for edge. If it is false (NO in step S603), no substitution is performed.
Next, in step S605, the edge amount expansion processing unit 104 determines whether all of the reference pixels in the nine-by-nine-pixel window have been referred to for the current target pixel. If not all of them have been selected (NO in step S605), then in step S602 the edge amount expansion processing unit 104 selects an unprocessed reference pixel.
On the other hand, if all of the reference pixels have been selected (YES in step S605), then in step S606 the edge amount expansion processing unit 104 writes the value of edge to the pixel of the expanded edge image corresponding to the target pixel. In step S607, the edge amount expansion processing unit 104 determines whether all pixels have been selected as the target pixel. If not (NO in step S607), then in step S601 the edge amount expansion processing unit 104 selects an unprocessed pixel. By executing the above-described processing in steps S601 to S607 for all pixels, the edge amount expansion processing unit 104 performs the edge amount expansion processing on the input edge image.
The processing of the edge amount expansion processing unit 104 is not limited to the above-described example; processing that calculates, for example, the average of the edge amounts in the window or the second largest edge amount in the window may be performed instead.
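The expansion in steps S601 to S607 is, in effect, a maximum filter over the nine-by-nine window. A sketch under the same assumptions as before (NumPy, zero padding at the image border, names chosen for illustration):

```python
import numpy as np

def expand_edge_amount(edge_img, win=4):
    """Write to each pixel the maximum edge amount found among the
    reference pixels in the (2*win+1) x (2*win+1) window around it."""
    h, w = edge_img.shape
    # Zero padding at the image border is an assumption of this sketch.
    padded = np.pad(edge_img, win, mode="constant", constant_values=0)
    expanded = np.empty_like(edge_img)
    for y in range(h):
        for x in range(w):
            # steps S602-S605: scan the window, keeping the largest value
            expanded[y, x] = padded[y:y + 2 * win + 1,
                                    x:x + 2 * win + 1].max()
    return expanded  # step S606 writes the kept maximum per pixel
```

A single strong edge pixel thus spreads its edge amount to every pixel within four pixels of it, thickening the protected region around edges.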
The density decrease processing unit 105 applies the processing for increasing brightness to each pixel of the RGB image more strongly as the expanded edge amount of the corresponding pixel in the edge image is smaller, and more weakly as that expanded edge amount is larger.
More specifically, weighted average processing is performed on the brightness-increased RGB image and the input RGB image at a ratio according to the expanded edge amount. Accordingly, density decrease processing can be realized in which the coloring material amount reduction ratio is continuously adjusted: the reduction ratio is large in pixels whose edge amount is small and small in pixels whose edge amount is large.
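The weighted average described above might be sketched as follows; the linear brightening toward white and the linear mapping from edge amount to blend weight are illustrative assumptions, not the disclosed formulas.

```python
import numpy as np

def density_decrease(rgb, expanded_edge, alpha=0.5):
    """Blend a brightened copy of the RGB image with the input image,
    weighting toward the brightened (coloring-material-reduced) copy
    where the expanded edge amount is small and toward the original
    where it is large."""
    rgb = rgb.astype(np.float64)
    # Brightness-increased image (assumed linear blend toward white).
    bright = 255.0 * alpha + rgb * (1.0 - alpha)
    # Assumed weight: 0 for flat areas, 1 for the strongest edges.
    w = (expanded_edge.astype(np.float64) / 255.0)[..., None]
    out = w * rgb + (1.0 - w) * bright  # weighted average per pixel
    return np.clip(out + 0.5, 0.0, 255.0).astype(np.uint8)
```

Because the blend weight varies continuously with the edge amount, there is no hard switch between "edge" and "non-edge" treatment, which is the point of the continuous adjustment described above.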
According to the present exemplary embodiment, the edge amount expansion processing unit 104 may be skipped in a case where a sufficiently thick edge can be extracted by the processing of the edge amount extraction processing unit 103 or where thin edges are to be preserved. In such a case, the expanded edge amount is not necessary in the data structure of the edge image illustrated in
According to the first exemplary embodiment, the processing for increasing brightness is applied continuously: it is applied strongly to pixels whose edge amount is small and weakly to pixels whose edge amount is large. When the brightness is increased, the density decreases, and the coloring material use amount is reduced.
According to the first exemplary embodiment, the density decrease processing is performed by the density decrease processing unit 105 prior to the output color processing unit 106, so that the coloring material use amount is reduced by changing the brightness.
In contrast, according to a second exemplary embodiment, the density decrease processing unit 105 is disposed between the output color processing unit 106 and the halftone processing unit 107, so that the density decrease processing unit 105 decreases pixel values of a CMYK image and the coloring material use amount can be reduced.
In this case, the weighted average processing is performed on a CMYK image whose coloring material is restricted and the input CMYK image at a ratio determined according to the expanded edge amount. The edge amount extraction processing unit 103 and the edge amount expansion processing unit 104 may also be disposed after the output color processing unit 106. According to the present exemplary embodiment, the coloring material amount is continuously reduced such that the reduction ratio is high in pixels whose edge amount is small and low in pixels whose edge amount is large. In addition, since the reduction ratio is changed continuously according to the edge amount, density decrease processing can be realized in which a step due to switching of the processing is unnoticeable.
According to a third exemplary embodiment, the density decrease processing unit 105 of the first exemplary embodiment is extended. The strength of the reduction processing for each edge amount, which is fixed in the first exemplary embodiment, is determined dynamically, and the processing for changing the brightness of each pixel is replaced with processing for bringing the coloring material use amount to a target amount. Accordingly, the coloring material use amount is restricted to a target coloring material use amount when the RGB image is output from the printer engine 108 via the output color processing unit 106 and the halftone processing unit 107.
<Details of Density Decrease Processing Unit 105>
These values are gathered for each edge amount, and a coloring material use amount table for each edge amount is generated, which is indicated in the first and second columns in
In
The remaining ratio determination processing unit 105B receives a target coloring material use amount of the RGB image together with the RGB image and the expanded edge image, generates a remaining ratio table, and outputs the table to the brightness enhancement processing unit 105A. The target coloring material use amount of the RGB image is a target value for the coloring material use amount of the RGB image; it is either a fixed value or specified from a user interface, not illustrated, included in the image processing apparatus 100 or the external computer 110.
The remaining ratio table indicates, for each edge amount of a pixel, what proportion of the coloring material remains. For example, when the expanded edge image has a bit depth of eight bits, the table is an array of 256 elements. When the remaining ratio is 0%, no coloring material remains; when it is 100%, the coloring material fully remains; and when it is, for example, 50%, as a middle point between them, half of the coloring material remains.
Next, the brightness enhancement processing unit 105A receives the remaining ratio table together with the RGB image and the expanded edge image, controls the coloring material use amount by increasing brightness for each pixel, and outputs the RGB image as a result thereof. Details of these two types of processing are described below.
First, in step S1101, the density decrease processing unit 105 initializes the coloring material amount table for each edge amount to all "0". This table is generated by accumulating the total coloring material use amount of the input RGB image for each edge amount. For example, when the expanded edge image has a bit depth of eight bits, the table is an array of 256 elements. Next, in step S1102, the density decrease processing unit 105 selects pixels one by one from the upper left as a target pixel and calculates the coloring material use amount of the target pixel. In step S1103, the density decrease processing unit 105 adds the coloring material use amount of the target pixel to the entry of the table corresponding to the edge amount of the target pixel. Next, in step S1104, the density decrease processing unit 105 determines whether all pixels have been selected as the target pixel. If not (NO in step S1104), then in step S1102 the density decrease processing unit 105 selects an unprocessed target pixel.
On the other hand, if the density decrease processing unit 105 determines that all pixels have been selected (YES in step S1104), then in step S1105 it calculates the remaining ratio table from the coloring material amount table for each edge amount and the target coloring material use amount of the RGB image.
The following method can be cited as an example of a method for calculating the remaining ratio table from the coloring material use amount table before reduction for each edge amount shown in the first and second columns in
Further, the coloring material use amount before processing of each edge amount indicated in the second column in
When TH1 and TH2 are predetermined values, u(e) and Uall are values determined from the RGB image, and Utarget is a value separately specified for the density decrease processing unit 105, the remaining ratio ε0 when the edge amount is "0" is uniquely determined.
When the value of the remaining ratio ε0 is "0" or greater, the remaining ratio table can be determined as expressed in Equation 1 by regarding ε(e) as the remaining ratio for each edge amount. In this case, the coloring material use amount after reduction coincides with that indicated in the bottom line of the fourth column in
A case in which the remaining ratio ε0 is less than "0" is one in which the target coloring material use amount of the RGB image cannot be reached by this method of generating the remaining ratio table. In this case, although the visibility of the edges deteriorates, the target coloring material use amount of the RGB image can be reached by lightening the edge portions. For example, as illustrated in
When the remaining ratio εc is less than "0" and the visibility of the edges is prioritized over making the coloring material use amount reach the target value, Equation 1 may be used with ε0=0, although the target coloring material use amount of the RGB image is then not reached. Alternatively, processing such as changing the target coloring material use amount of the RGB image may be performed.
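Since Equation 1 itself is not reproduced in this excerpt, the following sketch only illustrates the kind of computation involved: it assumes a remaining ratio that equals ε0 for edge amounts up to TH1, ramps linearly to 100% at TH2, and solves for ε0 so that the total coloring material after reduction meets the target. The ramp shape and the threshold values are assumptions, not the disclosed formula.

```python
import numpy as np

def remaining_ratio_table(u, u_target, th1=32, th2=224):
    """u: array of 256 coloring material use amounts, one per edge amount
    (the "before reduction" column).  Returns a 256-element remaining
    ratio table, with ratios clamped to [0, 1] (the epsilon0 = 0 fallback
    described above when the target cannot otherwise be reached)."""
    e = np.arange(256)
    # Weight of epsilon0 in epsilon(e): 1 at/below TH1, 0 at/above TH2.
    w = np.clip((th2 - e) / float(th2 - th1), 0.0, 1.0)
    # Total after reduction: sum(epsilon(e) * u(e)) with
    # epsilon(e) = w(e) * epsilon0 + (1 - w(e)); solve for epsilon0.
    a = float(np.sum(w * u))          # coefficient of epsilon0
    b = float(np.sum((1.0 - w) * u))  # contribution of fully kept edges
    eps0 = (u_target - b) / a if a > 0.0 else 1.0
    return np.clip(w * eps0 + (1.0 - w), 0.0, 1.0)
```

Because edge amounts at or above TH2 keep a ratio of 1, strong edges are always preserved, and the single unknown ε0 absorbs the entire reduction, consistent with ε0 being uniquely determined from u(e) and Utarget.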
First, in step S1301, the density decrease processing unit 105 selects pixels one by one from the upper left as a target pixel and calculates the coloring material use amount of the target pixel. In step S1302, the density decrease processing unit 105 determines the target coloring material amount of the target pixel from the remaining ratio corresponding to the edge amount of the target pixel in the remaining ratio table.
Next, in step S1303, the density decrease processing unit 105 determines whether the coloring material use amount of the target pixel is less than or equal to the target coloring material use amount of the target pixel. If the determination in step S1303 is false (NO in step S1303), then in step S1304 the density decrease processing unit 105 slightly increases the brightness of the target pixel, calculates the coloring material use amount again, and returns the processing to step S1303.
As an example of a method for increasing the brightness value, the coloring material use amount is calculated again using 255*α+R*(1−α), 255*α+G*(1−α), and 255*α+B*(1−α) (0<α<1), gradually increasing the value of α with respect to each signal value of the RGB image, and then the processing proceeds to step S1303.
By repeating the processing in steps S1303 and S1304, the coloring material use amount of the target pixel can be brought to the target coloring material use amount by changing only the brightness of the target pixel. This is because increasing the brightness of the target pixel decreases the coloring material use amount.
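The loop of steps S1303 and S1304 might be sketched as follows. The coloring material model used here, which simply sums (255 − channel) over the three channels, is a placeholder assumption; in the apparatus the use amount would depend on the output color processing.

```python
def coloring_material_use(r, g, b):
    """Placeholder model: per-channel coverage summed over R, G, B."""
    return (255 - r) + (255 - g) + (255 - b)

def raise_to_target(r, g, b, target, step=0.01):
    """Gradually blend the pixel toward white until its estimated
    coloring material use amount is at most `target` (steps S1303/S1304)."""
    alpha = 0.0

    def converted():
        # Brightness increase of step S1304: 255*a + X*(1 - a) per channel.
        return (round(255 * alpha + r * (1 - alpha)),
                round(255 * alpha + g * (1 - alpha)),
                round(255 * alpha + b * (1 - alpha)))

    while coloring_material_use(*converted()) > target and alpha < 1.0:
        alpha = min(alpha + step, 1.0)  # slightly increase the brightness
    return converted()
```

Because the pixel values are discrete, the loop stops at the first α whose use amount is at or below the target rather than at an exact match, which is why a tolerance on the condition in step S1303 can be useful.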
As a method for increasing a brightness value, the pixel value of the RGB image may be converted to another color space such as a YUV color space to increase the brightness and then returned to the pixel value of the RGB image. The iterative operation may be replaced with a binary tree method, a cache of a processing result, and the like.
In this regard, when the target pixel has a discrete value, the coloring material use amount of the target pixel may not exactly coincide with the target coloring material amount of the target pixel. Thus, the condition in step S1303 may be whether the coloring material use amount of the target pixel is less than or equal to the target coloring material use amount of the target pixel, or whether the difference between the two is less than a predetermined threshold value.
Finally, in step S1305, the density decrease processing unit 105 determines whether all pixels are selected. If it is determined that all pixels are not selected (NO in step S1305), the density decrease processing unit 105 returns the processing to step S1301 and selects an unprocessed target pixel.
On the other hand, if the density decrease processing unit 105 determines that all pixels are selected (YES in step S1305), the processing in the present flowchart is terminated.
According to the present exemplary embodiment, the coloring material use amount is reduced while being continuously controlled, such that the reduction ratio is high in pixels whose edge amount is small and low in pixels whose edge amount is large, so as to meet the target coloring material use amount. In addition, since the reduction ratio is changed continuously according to the edge amount, density decrease processing can be realized in which a step due to switching of the processing is unnoticeable while the coloring material use amount is controlled.
According to the third exemplary embodiment, the density decrease processing is performed by the density decrease processing unit 105 prior to the output color processing unit 106 as with the case of the first exemplary embodiment, and thus the coloring material use amount is reduced by changing the brightness.
In contrast, according to a fourth exemplary embodiment, the density decrease processing unit 105 is disposed between the output color processing unit 106 and the halftone processing unit 107, as in the second exemplary embodiment, so that the density decrease processing unit 105 can control and reduce the coloring material use amount by lowering pixel values of a CMYK image.
In this case, the density decrease processing unit 105 controls the coloring material amount of the target pixel of the CMYK image in step S1304 in
According to the present exemplary embodiment, the coloring material use amount is reduced while being continuously controlled, such that the reduction ratio is high in pixels whose edge amount is small and low in pixels whose edge amount is large, so as to meet the target coloring material use amount. In addition, since the reduction ratio is changed continuously according to the edge amount, density decrease processing can be realized in which a step due to switching of the processing is unnoticeable while the coloring material use amount is controlled.
According to the present disclosure, a step between pixels due to an edge extracted by image processing can be reduced.
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of priority from Japanese Patent Application No. 2015-129616, filed Jun. 29, 2015, and Japanese Patent Application No. 2015-246905, filed Dec. 18, 2015, each of which is hereby incorporated by reference herein in its entirety.
Patent | Priority | Assignee | Title
5091967 | Apr 08 1988 | Dainippon Screen Mfg. Co., Ltd. | Method of extracting contour of a subject image from an original
5583659 | Nov 10 1994 | Kodak Alaris Inc. | Multi-windowing technique for thresholding an image using local image properties
20020080377 | | |
20040125411 | | |
20060274162 | | |
20080111998 | | |
20090033962 | | |
20100231748 | | |
20100322536 | | |
20130216137 | | |
JP2002086805 | | |
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Jun 13 2016 | KAKUTA, HITOSHI | Canon Kabushiki Kaisha | Assignment of assignors interest (see document for details) | 039943/0658
Jun 27 2016 | | Canon Kabushiki Kaisha | (assignment on the face of the patent) |