A demodulation image sensor, such as used in time-of-flight (TOF) cameras, extracts all storage- and post-processing-related steps from the pixels into another array of storage and processing elements (proxels) on the chip. The pixel array handles photo-detection, first-stage processing and intermediate storage, while the array of storage and processing elements provides further processing and enhanced storage capabilities for each pixel individually. The architecture can be used to address problems caused by down-scaling of the pixel size, where typically either the photo-sensitivity or the signal storage capacitance suffers significantly; both lower sensitivity and smaller storage capacitances degrade image quality. The disclosed architecture keeps the storage capacitance unaffected by the pixel down-scaling. In addition, it provides a high degree of flexibility for integrating more intelligence into the image sensor design already at the level of the pixel array. In particular, when applied to demodulation pixels, the flexibility of the architecture allows concepts such as multi-tap sampling, mismatch compensation and background suppression to be integrated at the sensor level, without any need to adjust the particular demodulation pixel architecture.
6. A demodulation method, comprising:
detecting modulated light in a pixel array, which is implemented on a chip, the pixel array comprising pixels each of which produces at least two samples of the modulated light with the modulation of a light emitter;
transferring the samples from each of the pixels to a proxel array on the chip, the proxel array being physically separated from the pixel array and comprising storage elements and processing elements; and
receiving the at least two samples in storage elements of the proxel array from a corresponding one of the pixels, wherein individual ones of the storage elements accumulate signals for a plurality of phase-delay-matching sub-images corresponding to a single image.
1. A demodulation sensor implemented on a chip, comprising:
a pixel array comprising pixels each of which produces at least two samples of a scene, which is illuminated by modulated light from a light emitter;
a proxel array on the chip, the proxel array being physically separated from the pixel array and comprising storage elements and processing elements, each of the storage elements operable to receive the at least two samples from a corresponding one of the pixels; and
a transfer system to transfer the samples from the pixel array to the corresponding storage elements of the proxel array;
wherein the proxel array is arranged to alternate storage of the at least two samples between two different storage elements to perform in-pixel mismatch cancellation, and wherein each storage element is arranged to accumulate a plurality of phase-delay-matching sub-images for a single image.
2. A demodulation sensor as claimed in
photosensitive regions in which incoming light generates charge carriers, and demodulators/correlators that transfer the charge carriers among multiple storage sites.
3. A demodulation sensor as claimed in
4. A demodulation sensor as claimed in
5. A demodulation sensor as claimed in
7. The demodulation sensor of
8. The demodulation sensor of
9. A demodulation sensor as claimed in
10. A demodulation sensor as claimed in
11. A demodulation sensor as claimed in
12. A demodulation sensor as claimed in
13. A demodulation sensor as claimed in
14. A demodulation sensor as claimed in
15. A demodulation sensor as claimed in
16. A demodulation sensor as claimed in
17. The demodulation method of
converting the at least two samples of each of the pixels from analog signals to digital signals; and
transferring the digital signals for each of the pixels to the proxel array.
This application claims the benefit under 35 USC 119(e) of U.S. Provisional Application No. 61/292,588, filed on Jan. 6, 2010, which is incorporated herein by reference in its entirety.
Electronic imaging sensors usually have an array of x×y photo-sensitive pixels, with x>=1 rows and y>=1 columns. Each pixel of the array can be addressed individually by dedicated readout circuitry for column-wise and row-wise selection. Optionally, a block for signal post-processing is integrated on the sensor.
The pixels typically have four basic functions: photo detection, signal processing, information storage, and analog or digital conversion. Each of these functions consumes a certain area on the chip.
A special group of smart pixels, called demodulation pixels, is well-known for the purpose of three dimensional (3D) imaging. Other applications of such demodulation pixels include fluorescence lifetime imaging (FLIM). The pixels of these demodulation imaging sensors typically demodulate the incoming light signal by means of synchronous sampling or correlation. Hence, the signal processing function is substituted more specifically by a sampler or a correlator. The output of the sampling or correlation process is a number n of different charge packets or samples (A0, A1, A2, . . . ) for each pixel. Thus, n storage sites are used for the information storage. The typical pixel output in the analog domain is accomplished by standard source follower amplification. However, analog-to-digital converters could also be integrated at the pixel level.
The image quality of demodulation sensors is defined by the per-pixel measurement uncertainty. As with standard 2D imaging sensors, a larger number of signal carriers improves the signal-to-noise ratio and thus the image quality. For 3D imaging sensors, more signal carriers mean lower distance uncertainty. In general, the distance measurement standard deviation σ is inversely proportional either to the signal A or to the square root of the signal, depending on whether photon shot noise is dominant or not:
σ ∝ 1/sqrt(A), if photon shot noise is dominant
σ ∝ 1/A, if other noise sources are dominant
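For illustration, a minimal numerical sketch of these scaling laws (Python; the proportionality constant k is a hypothetical placeholder, since the text only states the proportionality, not absolute values):

```python
import math

def distance_std(signal_a, shot_noise_dominant=True, k=1.0):
    # k is a hypothetical proportionality constant; the text only states
    # the scaling law, not absolute values.
    if shot_noise_dominant:
        return k / math.sqrt(signal_a)   # sigma proportional to 1/sqrt(A)
    return k / signal_a                  # sigma proportional to 1/A

# Quadrupling the signal halves the uncertainty in the shot-noise-limited
# case, but reduces it four-fold when other noise sources dominate.
print(distance_std(1e5) / distance_std(4e5))                  # -> 2.0
print(distance_std(1e5, shot_noise_dominant=False)
      / distance_std(4e5, shot_noise_dominant=False))         # -> 4.0
```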
A common problem for all demodulation pixels used in demodulation sensors, such as for TOF imaging or FLIM, or otherwise, arises when trying to shrink the pixel size to realize arrays of higher pixel counts. Since the storage nodes require a certain area in the pixel in order to maintain adequate storage capacity and thus image quality, the pixel's fill factor suffers from the shrinking process associated with moving to these larger arrays. Thus, there is a trade-off between the storage area needed for obtaining a certain image quality and the pixel's photo-sensitivity expressed by the fill-factor parameter. In the case of a minimum achievable image quality, the minimum size of the pixel is given by the minimum size of the total storage area.
In 3D imaging, typically a few hundreds of thousands up to several million charge carriers, i.e., electrons, need to be stored in order to achieve centimeter down to millimeter resolution. This performance requirement, in turn, means that the storage nodes typically cover areas of some hundreds of square micrometers in the pixel. Consequently, pixel pitches of 10 micrometers or less become almost impossible without compromises in terms of distance resolution and accuracy.
The aforementioned trade-off problem becomes even more critical if additional post-processing logic is to be integrated on a per-pixel basis. Such post-processing could include for example analog-to-digital conversion, logic for a common signal subtraction, integrators, and differentiators, to list a few examples.
Another challenge for demodulation pixels is the number of samples required to unambiguously derive the characteristics of the impinging electromagnetic wave. For a sine-modulated carrier signal, the characteristics of the wave are its amplitude A, the offset B and the phase P. Hence, in this case, at least three samples need to be acquired per period. However, for design and stability reasons, most common systems use four samples. Implementing a pixel capable of capturing and storing n=4 samples generally requires four-fold duplication of per-pixel electronics such as storage and readout circuitry. The result is a further increase in the electronics per pixel and a further reduction in fill factor.
In order to avoid this loss in sensitivity, most common approaches use so-called 2-tap pixels, which are demodulation pixels able to sample and store two samples within the same period. Such pixel architectures are ideal in terms of sensitivity, since all the photo-electrons are converted into a signal and no light is wasted; on the other hand, they require at least two consecutive measurements to obtain the four samples. Due to sampling mismatches and other non-idealities, even four images might be required to cancel or at least reduce pixel mismatches. Such an approach has been presented by Lustenberger, Oggier, Becker, and Lamesch in U.S. Pat. No. 7,462,808, entitled Method and device for redundant distance measurement and mismatch cancellation in phase measurement systems, which is incorporated herein by this reference in its entirety. Because several images are acquired and combined to deduce one depth image, motion in the scene or a moving camera renders artifacts in the measured depth map. The further those different samples are separated in time, the worse the motion artifacts become.
The present invention solves the problem of shrinking the pixel size without significantly reducing the pixel's fill factor and without having to compromise the image quality by making the storage nodes even smaller. The solution also provides the possibility of almost arbitrary integration of additional post-processing circuitry for each pixel's signals individually. Furthermore, it can reduce the motion artifacts of time-of-flight cameras to a minimum.
In general, according to one aspect, the invention features a demodulation sensor comprising a pixel array comprising pixels that each produce at least two samples and a storage or proxel array comprising processing and/or storage elements, each of the storage elements receiving the at least two samples from a corresponding one of the pixels.
In embodiments, the pixels comprise photosensitive regions in which incoming light generates charge carriers and demodulators/correlators that transfer the charge carriers among multiple storage sites.
A transfer system is preferably provided that transfers the samples generated by the pixels to the corresponding storage elements. In examples, the transfer system converts the samples from analog to digital before they are received by the storage elements.
In some cases, the storage elements monitor storage nodes that receive the samples for saturation. Different sized storage nodes can also be provided that receive the samples. Mismatch cancellation can also be performed along with post processing to determine depth information.
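As a rough illustration of such saturation monitoring with differently sized storage nodes, the following Python sketch models a proxel whose primary node overflows into a larger secondary node; the class name, the full-well values and the overflow policy are illustrative assumptions, not taken from the disclosure:

```python
class ProxelStorage:
    """Sketch of a proxel storage element that monitors its primary storage
    node for saturation and lets excess charge overflow into a larger
    secondary node. Names, capacities and policy are illustrative only."""

    def __init__(self, primary_full_well=50_000, secondary_full_well=500_000):
        self.primary_full_well = primary_full_well
        self.secondary_full_well = secondary_full_well
        self.primary = 0
        self.secondary = 0

    def accumulate(self, charge):
        self.primary += charge
        if self.primary > self.primary_full_well:          # saturation detected
            overflow = self.primary - self.primary_full_well
            self.primary = self.primary_full_well
            self.secondary = min(self.secondary + overflow,
                                 self.secondary_full_well)

    def total(self):
        return self.primary + self.secondary


store = ProxelStorage()
for sample in (30_000, 30_000, 15_000):   # illustrative sample charges
    store.accumulate(sample)
print(store.primary, store.secondary, store.total())   # 50000 25000 75000
```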
In general, according to another aspect, the invention features a time-of-flight camera system comprising a light source that generates modulated light and a demodulation sensor. The sensor includes a pixel array comprising pixels that each produce at least two samples of the modulated light and a storage array comprising storage elements. Each of the storage elements receives the at least two samples from a corresponding one of the pixels.
In general, according to another aspect, the invention features a demodulation method comprising: detecting modulated light in a pixel array comprising pixels that each produce at least two samples of the modulated light, transferring the at least two samples from each of the pixels to a storage array, and receiving the at least two samples in storage elements of the storage array from a corresponding one of the pixels.
The above and other features of the invention including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
In the accompanying drawings, reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale; emphasis has instead been placed upon illustrating the principles of the invention. Of the drawings:
The illustrated architecture extracts elements that are typically integrated inside the pixel but are not strictly required for photo detection, and moves them out of the pixel into physically separated elements that are basically storage and sometimes processing elements, termed storage elements or proxels. As a consequence, the sensor includes a pixel array 110 of x×y pixels and a storage or proxel array 150 of x×y storage elements or proxels 310 that are used for further processing, storage of the information and readout. Usually x and y are greater than 100, and preferably greater than 200. In some examples x and y are greater than 1000. The two arrays are physically separated from each other, preferably in discrete arrays that do not physically overlap with each other on the chip.
Multiple functions are preferably handled in this proxel array 150. Thus, the sensor 100 includes a pixel array 110 and a proxel array 150, where each proxel 310 is linked to and associated with preferably one particular pixel 210.
It is worth mentioning that the proxel array 150 does not have to be one contiguous array. In examples the proxel array 150 is split into two, three, or four matrices that surround the pixel array 110.
The data transfer from the pixel 210 to the proxel 310 is controlled by the pixel readout decoder 182 and takes place through the transfer or connection system 180. The pixel readout decoder 182 selects the pixel 210 and establishes the connection 180 to the corresponding proxel 310. Preferably, the readout of the pixel field 110 is done row-wise. Hence, the readout decoder selects at least one row of the pixel field 110, which is then connected to the corresponding row of proxels 310 in the proxel field 150. In that case, the connection lines of the transfer or connection system 180 are shared by all pixels in a column. In order to further speed up the pixel readout, multiple rows could be selected and transferred as well.
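A behavioral sketch of this row-wise transfer (Python/NumPy; the array shapes, function names and random test data are chosen for illustration, while the actual transfer is an analog or digital hardware operation):

```python
import numpy as np

def rowwise_transfer(pixel_samples, proxel_store):
    """Behavioral model of row-wise pixel-to-proxel transfer.

    pixel_samples: (rows, cols, n_taps) intermediate samples held in the pixels
    proxel_store:  (rows, cols, n_taps) values accumulated in the proxels
    The readout decoder selects one row at a time; the shared column lines
    carry that row to the corresponding row of proxels, which accumulate it.
    """
    for r in range(pixel_samples.shape[0]):   # row selected by readout decoder 182
        proxel_store[r] += pixel_samples[r]   # transfer over connection system 180
    return proxel_store

# Example: a 4 x 6 pixel field with 2 taps per pixel.
pixels = np.random.randint(0, 100, size=(4, 6, 2))
proxels = np.zeros_like(pixels)
rowwise_transfer(pixels, proxels)
```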
Additionally included in the sensor 100 is the proxel readout decoder 186 for controlling the readout of the proxels. An optional signal post processing block 184 is provided for analog to digital conversion and/or calculating phase/depth information based on the n acquired samples, for example.
The transfer or connection system 180 between the pixel array 110 and the proxel array 150 includes analog-to-digital converters in some embodiments, and the information arriving at and processed by the proxel array is therefore digital.
In more detail, a light source or emitter 510 with a possible reflector or projection optics 512 produces modulated light 514 that is directed at the 3-D scene 516 at range R from the camera. The returning light 518 from the scene 516 is collected by the objective lens system 520 and possibly bandpass filtered so that only light at the wavelength emitted by the light emitter 510 is transmitted. An image is formed on the pixel array 110 of the TOF sensor 100. A control unit 522 coordinates the modulation of the light emitter 510 with the sampling of the TOF detector chip 100. This results in synchronous demodulation. That is, the samples that are generated in each of the pixels 210 of the pixel array 110 are stored in the storage buckets or sites in the pixels and/or proxels 310 in the storage or proxel array 150 synchronously with the modulation of the light emitter 510. The modulation signal is not restricted to a sine wave, but for simplicity, sine wave modulation only is used for illustration.
The information or samples are transferred to the storage or proxel array 150 and then read out by the control unit 522, which reconstructs the 3-D image representation using the samples generated by the chip 100 such that a range R to the scene is produced for each of the pixels of the chip 100.
In the case of sine wave modulation, using the n=4 samples A0, A1, A2, A3 generated by each pixel/proxel, the three decisive modulation parameters amplitude A, offset B and phase shift P of the modulation signal are extracted by the equations:
A=sqrt[(A3−A1)^2+(A0−A2)^2]/2
B=[A0+A1+A2+A3]/4
P=arctan [(A3−A1)/(A0−A2)]
With each pixel 210 of the sensor 100 being capable of demodulating the optical signal at the same time, the controller unit 522 is able to deliver 3D images in real-time, i.e., frame rates of up to 30 Hertz (Hz), or even more, are possible. Continuous sine modulation delivers the phase delay (P) between the emitted signal and the received signal, which corresponds directly to the distance R:
R=(P*c)/(4*pi*f_mod),
where f_mod is the modulation frequency of the optical signal 514. Typical state-of-the-art modulation frequencies range from a few MHz up to a few hundred MHz or even GHz.
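A worked example of these equations (Python; atan2 is used in place of the plain arctan so that the phase is resolved over the full 0 to 2π range, which the arctan form alone leaves ambiguous):

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def demodulate(a0, a1, a2, a3, f_mod):
    """Recover amplitude A, offset B, phase P and range R from the four
    equidistant samples of a sine-modulated signal (equations above)."""
    A = math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2) / 2.0
    B = (a0 + a1 + a2 + a3) / 4.0
    P = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)  # phase in [0, 2*pi)
    R = P * C / (4.0 * math.pi * f_mod)                 # range in the ambiguity interval
    return A, B, P, R

# Example: illustrative samples for a 20 MHz modulated signal.
A, B, P, R = demodulate(100.0, 120.0, 80.0, 60.0, 20e6)
print(round(A, 1), round(B, 1), round(P, 3), round(R, 2))
```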
Before reading out the storage sites 220 with the n samples, many demodulation pixels include in-pixel processing 222, e.g., for common-mode suppression. In its simplest form, the demodulation pixel 210 only includes a sensitive area 212, a correlator/demodulator 218, storage sites 220 and readout 224.
The sensing 212 and demodulation 218 can be done using dynamic lateral drift fields as described in U.S. Pat. No. 7,498,621 B2, which is incorporated herein in its entirety, or static lateral drift fields as described in U.S. Pat. Appl. No. 2008/0239466 A1, which is incorporated herein in its entirety. Various approaches have been published based on the static lateral drift field principle: B. Büttgen, F. Lustenberger and P. Seitz, Demodulation Pixel Based on Static Drift Fields, IEEE Transactions on Electron Devices, 53(11):2741-2747, November 2006; Cédric Tubert et al., High Speed Dual Port Pinned-photodiode for Time-Of-Flight Imaging, International Image Sensor Workshop, Bergen, 2009; and D. Durini, A. Spickermann, R. Mandi, W. Brockherde, H. Vogt, A. Grabmaier, B. Hosticka, "Lateral drift-field photodiode for low noise, high-speed, large photoactive-area CMOS imaging applications", Nuclear Instruments and Methods in Physics Research A, 2010. Other methods do not have the photosensitive area 212 and the demodulation 218 physically separated, such as photo-detection assisted by switching majority currents, see M. Kuijk, D. van Niewenhove, "Detector for electromagnetic radiation assisted by majority current", September 2003, EP 1 513 202 A1, or the methods based on toggling large transfer gates, see U.S. Pat. No. 5,856,667, U.S. Pat. No. 6,825,455, and US 2002/0084430 A1. All of those sensing/demodulation methods can be implemented here.
Demodulation sensors using the present technology can provide a number of advantages. For example, the pixel size can be reduced without giving up fill factor and data quality for every individual pixel. The technology also provides high flexibility for the integration of more processing steps applied to the pixels' outputs. These include dynamic range enhancement, pixel-wise integration time control, several storage capacitances providing charge overflow capability, background suppression by capacitance switching, increasing the number of sampling points when demodulation pixels are used, and appropriate capacitance switching in the proxel from integration period to integration period to remove mismatch problems inside the pixel.
Likewise, the signal post processing is split into a first signal post processing unit 184A for the first proxel array 150A and a second signal post processing unit 184B for the second proxel array 150B. Two proxel readout decoders 186A, 186B are similarly provided.
In the following, some further proxel designs are disclosed. Integrating those functionalities for every pixel becomes possible only indirectly, by moving those particular processing steps out of the pixel array. The examples show two connections between a pixel 210 and a proxel 310 in order to point out the functionality integrated in the proxel array.
Additionally, it is easily possible to combine the different examples.
The proxel 310 shows the DC suppression circuitry applied to two pixel outputs. Several such circuits could be integrated in the proxel if even more pixel outputs need to be subtracted.
By appropriate timing of the switching, the DC component between the channels integrated on consecutive sub-images can be subtracted and integrated on capacitance 314.
A differential output 332 is used for the buffering during readout.
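A purely numerical analogue of this DC-suppression proxel might look as follows (Python sketch; the min-based estimate of the common component and the sample values are illustrative assumptions, while the real circuit performs the subtraction in the charge/voltage domain on capacitance 314):

```python
def dc_suppress(channel_a, channel_b):
    """Numerical stand-in for the DC-suppression proxel: the component common
    to both channels is removed so that only the differential,
    signal-carrying part is accumulated."""
    common = min(channel_a, channel_b)      # shared DC/background component
    return channel_a - common, channel_b - common

# Accumulate the DC-free contributions of consecutive sub-images.
acc_a = acc_b = 0.0
for a, b in [(1200.0, 900.0), (1150.0, 870.0)]:   # illustrative sub-image values
    da, db = dc_suppress(a, b)
    acc_a += da
    acc_b += db
print(acc_a, acc_b)   # 580.0 0.0
```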
The sample outputs of demodulation pixels are generally referred to as taps. Hence, a 2-tap demodulation pixel provides n=2 sample outputs. If this pixel is used, for example, for sampling a sinusoidally intensity-modulated light wave four times at equidistant steps of 0°, 90°, 180° and 270°, then two consecutive measurements need to be performed. A first measurement outputs the samples at, for example, 0° and 180°, and a second integration cycle gives the samples at 90° and 270° phase.
However, if a 4-tap pixel structure is available, all n=4 samples are obtained within one acquisition cycle. The proxel approach enables the use of a 2-tap pixel structure for obtaining all four samples within a single acquisition cycle. The proxel 310 is used to increase the sample number from n=2 to n=4.
Generally the concept can be extended to pixel structures of arbitrary tap numbers and to proxel structures that increase the number arbitrarily.
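One way to picture how a proxel could build four phase bins from a 2-tap pixel over two integration cycles is the following Python sketch (the dict-based storage, the cycle ordering and the tap-to-phase assignment are illustrative assumptions, not a definitive implementation of the disclosed circuitry):

```python
def accumulate_four_taps(proxel, cycle_index, tap_a, tap_b):
    """A 2-tap pixel delivers two samples (tap_a, tap_b) per integration
    cycle. Assuming even cycles sample at 0/180 degrees and odd cycles at
    90/270 degrees, the proxel accumulates four phase bins from a 2-tap
    pixel."""
    if cycle_index % 2 == 0:
        proxel[0] += tap_a
        proxel[180] += tap_b
    else:
        proxel[90] += tap_a
        proxel[270] += tap_b
    return proxel

proxel = {0: 0, 90: 0, 180: 0, 270: 0}
accumulate_four_taps(proxel, 0, 105, 75)   # first cycle: 0 and 180 degree samples
accumulate_four_taps(proxel, 1, 95, 85)    # second cycle: 90 and 270 degree samples
print(proxel)   # {0: 105, 90: 95, 180: 75, 270: 85}
```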
The four outputs of the pixel are denoted by Out_0, Out_90, Out_180 and Out_270, according to the particular phase value that the sample is representing.
In summary, a new concept for the design of image sensors has been demonstrated that allows for down-scaling the pixel size without compromising the pixels' signal storage performance. The idea is based on keeping only the absolutely necessary storage nodes inside the pixel, which still ensure intermediate signal storage, and on extracting the final storage nodes out of the pixel field to an on-chip array of storage elements. Furthermore, the creation of an external array of elements, where each element is linked to a particular pixel, enables new functionalities. Analog and digital processing circuitry can now be integrated at the sensor level in a very flexible fashion without affecting the photo-sensitivity of the pixel at all. The flexibility of integrating further processing steps for each pixel is a benefit for so-called demodulation pixels. Without adjusting the pixel architecture, different concepts such as multi-sampling or in-pixel mismatch compensation can easily be achieved.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
Oggier, Thierry, Buettgen, Bernhard
Patent | Priority | Assignee | Title
5054911 | Dec 30 1988 | Kabushiki Kaisha Topcon | Light wave distance measuring instrument of the pulse type
5856667 | Nov 14 1994 | AMS SENSORS SINGAPORE PTE LTD | Apparatus and method for detection and demodulation of an intensity-modulated radiation field
6078037 | Apr 16 1998 | Intel Corporation | Active pixel CMOS sensor with multiple storage capacitors
6503195 | May 24 1999 | UNIVERSITY OF NORTH CAROLINA, THE | Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
6809666 | May 09 2000 | PIXIM INCORPORATED | Circuit and method for gray code to binary conversion
6825455 | Sep 05 1996 | | Method and apparatus for photomixing
7038820 | Apr 03 2002 | OmniVision Technologies, Inc | Automatic exposure control for an image sensor
7462808 | Aug 08 2005 | AMS SENSORS SINGAPORE PTE LTD | Method and device for redundant distance measurement and mismatch cancellation in phase-measurement systems
7498621 | Jun 20 2002 | AMS SENSORS SINGAPORE PTE LTD | Image sensing device and method of
8159587 | Dec 13 2007 | STMICROELECTRONICS FRANCE | Pixel read circuitry
8203699 | Jun 30 2008 | Microsoft Technology Licensing, LLC | System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed
20020084430
20070057209
20080117661
20080239466
20090021617
20090153715
20090161979
20090303553
20100026838
20100157447
20100231891
20120307232
DE19704496
EP1513202
EP1777747
JP2006084430
JP2009047661
JP2009047662
JP2181689