By arranging dies in a stack such that failed cores are aligned with adjacent good cores, fast connections between good cores and the caches of failed cores can be implemented. Cache can be allocated according to a priority assigned to each good core, according to latency between a requesting core and available cache, and/or according to load on a core.
1. A semiconductor device stack comprising:
at least two dies, each die including:
at least two computing devices, each computing device including:
at least one core having associated therewith a respective identifier of good core or failed core; and
a respective local cache connected to each core;
at least one shared cache connected to every core of the computing device; and
a configuration register connected to every core of the computing device and storing each respective identifier in a respective quality indicator,
wherein the at least two dies are oriented with any failed core of any die aligned with a respective good core of at least one other adjacent die,
the stack being configured to enter a cache extension mode in which at least one good core is configured to use at least one of a local cache of another core or a shared cache of another computing device.
2. The stack of
3. The stack of
4. The stack of
5. A method comprising:
testing each of at least two dies, each including at least one computing device with at least one respective core, each core having a respective local cache, and each die having at least one shared cache connected to at least one respective core;
storing a quality indicator for each core in at least one configuration register, the quality indicator identifying the respective core as a failed core or as a good core responsive to the respective core failing or passing the testing, respectively;
stacking the dies with any failed core in alignment with a respective good core of at least one adjacent die;
determining a first latency between a good core and the respective shared cache;
determining a second latency between the good core and the local cache of a failed core; and
connecting the local cache of the good core to the respective one of the local cache or the shared cache associated with the lesser of the first latency and the second latency.
6. The method of
7. The method of
8. A cache management method for a semiconductor device stack of at least two dies, each die including at least two computing devices, each computing device including at least one core, each core including a local cache, and each die including at least one shared cache connected to at least one core of the respective die, the method comprising:
testing each core;
responsive to a core failing the testing, identifying the core as a failed core;
responsive to a core passing the testing, identifying the core as a good core;
stacking the at least two dies such that a good core of a first die is aligned with a failed core of at least one adjacent die; and
connecting a failed core local cache of each failed core to at least a first aligned good core for primary use by the at least a first aligned good core.
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
19. The method of
The invention relates generally to semiconductor structures and fabrication of semiconductor chips and, in particular, to connections and logic enabling management of cache pooling between dies in a stack of processor chips.
Multi-core processor chips can provide high performance while increasing chip area less than ganging multiple single-core processor chips. However, the fabrication of multi-core processors presents several challenges, some resulting from the complexity of such processors and some resulting from spatial and layout requirements. A typical core can include central processing unit (CPU) logic and at least one cache memory. For example, cores of some multi-core processors can include level one (L1), level two (L2), and level three (L3) cache memories working at different speeds, which can provide faster access to data and enhanced processor performance. If one or more cores of a multi-core processor are found defective at test, otherwise functional memory dedicated to the failed core(s) can sit unused. As a result, cache memory sharing has been pursued in single-layer or 2D packaging.
As chips grow and/or are ganged together, wire length can reach a point where latency reduces performance and/or increases power consumption. A technique developed to address wire latency is chip or die stacking, also called 3D chip or die stacking. In such stacking, one or more dies or chips are arranged to overlie one another and include features that enable components of the dies or chips to be connected, allowing communication between devices or components on different dies or chips. For example, connections can be established using through-silicon vias (TSVs), solder bumps, wire bonding, and/or other arrangements and/or techniques, which can shorten effective wire length between components and/or devices. Once connected, additional materials and/or non-communication connections can be added so that the stack effectively becomes a substantially permanently connected semiconductor package. As a result, chip stacking can offer lower power consumption, reduced form factor, and reduced interface latency between components of multiple chips as compared to laying out the same chips in a common plane.
An embodiment of the invention disclosed herein can take the form of a method of cache management for a semiconductor device stack of at least two dies, each die including at least two computing devices, each computing device including at least one core, each core including a local cache, and each die including at least one shared cache connected to at least one core of the respective die. The method can include testing each core, identifying a core as a failed core responsive to the core failing the testing, and identifying the core as a good core responsive to a core passing the testing. The at least two dies can be stacked such that a good core of a first die is aligned with a failed core of at least one adjacent die, and a failed core local cache of each failed core can be connected to at least a first aligned good core for primary use by the at least a first aligned good core.
Another embodiment of the invention disclosed herein can take the form of a semiconductor device stack having at least two dies. Each die can include at least two computing devices, and each computing device can include at least one core having associated therewith a respective identifier of good core or failed core. Each computing device can also have a respective local cache connected to each core, at least one shared cache connected to every core of the computing device, and a configuration register connected to every core of the computing device that can store each respective identifier in a respective quality indicator. The at least two dies can be oriented with any failed core of any die aligned with a respective good core of at least one other adjacent die, and the stack can be configured to enter a cache extension mode in which at least one good core is configured to use at least one of a local cache of another core or a shared cache of another computing device.
A further embodiment of the invention disclosed herein can take the form of a method in which each of at least two dies can be tested, each die including at least one computing device with at least one respective core, each core having a respective local cache, and each die having at least one shared cache connected to at least one respective core. A quality indicator for each core can be stored in at least one configuration register, the quality indicator identifying the respective core as a failed core or as a good core responsive to the respective core failing or passing the testing, respectively. The dies can be stacked with any failed core in alignment with a respective good core of at least one adjacent die. So stacked, a first latency between a good core and the respective shared cache can be determined, as well as a second latency between the good core and the local cache of a failed core. The local cache of the good core can be connected to the respective one of the local cache or the shared cache associated with the lesser of the first latency and the second latency.
Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and the features, refer to the description and to the drawings.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings.
It is noted that the drawings of the invention are not necessarily to scale. The drawings are intended to depict only typical aspects of the invention, and therefore should not be considered as limiting the scope of the invention. It is understood that elements similarly numbered between the FIGURES may be substantially similar as described with reference to one another.
Embodiments of the invention can use firmware (FW), vital product data (VPD), and/or a hypervisor (HV), which are known in the art as tools to implement control of computing devices and so are not described in detail. Thus, it is understood that one of ordinary skill in the art seeing the terms firmware, VPD, and/or hypervisor appearing herein will know to what they refer and how to employ them as taught according to aspects of the invention disclosed herein. It should also be understood that one of ordinary skill in the art will know and understand the terms core, computing device, die, stack, cache, including local cache, remote cache, extended cache, shared cache, level n (Ln) cache where n is an integer, and memory, and will further know how to employ devices to which the terms refer according to embodiments of the invention disclosed herein.
Embodiments of the invention disclosed herein provide a method of cache management for a semiconductor device stack including at least one core on each die in the stack. By identifying failed cores on dies and arranging the dies according to embodiments, dies that might otherwise have been destroyed can be used, and performance of good cores can be enhanced. For example, a good core can overlie or otherwise be aligned with a failed core in the stack so that a connection distance therebetween can be minimized. A cache memory of the failed core can then be connected to at least one good core so that the at least one good core can use the failed core cache memory.
Each configuration register 124 can be responsive to control system 150, which can include a hypervisor 152 in communication with a store of so-called vital product data (VPD) 154 and with firmware 156; VPD 154 and firmware 156 can also be in communication with each other. Hypervisor 152 can read and set each configuration register 124 of stack 110 according to a cache management method of embodiments as will be described. In embodiments, each computing device 114 can include a shared cache that multiple cores 116 can share, and/or each die 112, 112′, 112″, 112′″ can have a shared cache that multiple cores 116 and/or computing devices 114 can share, such as, for example, an L2 cache 119.
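As a rough illustration only, the per-core quality indicators could be modeled as one bit per core in a configuration register that a hypervisor reads and sets; the class, method names, and bit layout below are assumptions made for this sketch, not the register encoding of the embodiments.

```python
# Illustrative model of a per-computing-device configuration register that
# stores one quality bit per core (1 = good, 0 = failed). The bit layout
# and interface here are hypothetical.

class ConfigurationRegister:
    """Holds a quality indicator bit for each core of a computing device."""

    def __init__(self, num_cores: int):
        self.num_cores = num_cores
        self.quality_bits = 0  # one bit per core; all cores failed until set

    def set_quality(self, core_index: int, is_good: bool) -> None:
        if is_good:
            self.quality_bits |= (1 << core_index)
        else:
            self.quality_bits &= ~(1 << core_index)

    def is_good(self, core_index: int) -> bool:
        return bool(self.quality_bits & (1 << core_index))


# A hypervisor could read and set such registers across the stack:
reg = ConfigurationRegister(num_cores=8)
reg.set_quality(0, True)   # core 0 passed testing
reg.set_quality(2, False)  # core 2 failed testing
assert reg.is_good(0) and not reg.is_good(2)
```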
The particular arrangement of stack 110 in embodiments can be reached using a method 200 illustrated in the accompanying drawings.
Thus, dies 112, 112′, 112″, 112′″ can be arranged such that any failed core 116 can be aligned with a good core 116 of an adjacent die 112, 112′, 112″, 112′″. “Adjacent” in this context can include next lower or next upper/higher, so that an adjacent die can be a next die in stack 110, such as a next lower or next upper die.
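One way to picture this arrangement step is as a search for a stacking order in which every failed core position overlies or underlies a good core on an adjacent die. The brute-force pairing below is a minimal sketch of that idea, assuming small stacks and simple per-position good/failed flags; it is not the patented procedure itself.

```python
from itertools import permutations

# Each die is described by a tuple of per-position flags: True = good core,
# False = failed core. A stacking order is acceptable if every failed core
# has a good core at the same position on at least one adjacent die.

def failed_cores_covered(stack):
    for i, die in enumerate(stack):
        for pos, good in enumerate(die):
            if good:
                continue
            neighbors = []
            if i > 0:
                neighbors.append(stack[i - 1])
            if i < len(stack) - 1:
                neighbors.append(stack[i + 1])
            if not any(n[pos] for n in neighbors):
                return False
    return True

def find_stacking_order(dies):
    """Brute-force search over orderings; fine for small stacks."""
    for order in permutations(dies):
        if failed_cores_covered(order):
            return order
    return None

# Example: two 4-core dies with complementary failures stack cleanly.
die_a = (True, False, True, True)
die_b = (True, True, True, False)
print(find_stacking_order([die_a, die_b]))
```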
More particularly, in embodiments in which each core includes local L1 and local L2 cache and in which L2 cache is made accessible to multiple cores, an address scheme can have every even L2 block mapped to a current L2 cache and every odd L2 block mapped to an extended or remote L2 cache. An additional real address bit can be used to map the addresses to the L2 cache accordingly. Thus, if an L2 cache includes 512 kilobytes (KB) (0x80000), the 21st bit can be used to hash and/or find which L2 cache a block belongs to. This can divide L2 cache accesses between two L2 caches, which can improve performance. This division can be made dynamic, such as being enabled when L2 cache misses exceed a threshold number. In some implementations, L2 cache can require a bus bandwidth on the order of 16 bytes per clock cycle for a DStore action, while a DLoad, DTranslate, and/or IFetch action can require 64 bytes per clock cycle, which could be achieved by using 640 micro C4s to connect a core of one die to an L2 cache of an adjacent die.
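To make the even/odd interleave concrete, the sketch below steers each L2 block to the local or the extended L2 by one real address bit. The bit position is an assumption: with 512 KB (2^19-byte) blocks, bit 19 counting from 0 gives block parity, while the text's "21st bit" may follow a different counting convention or block size.

```python
# Hedged sketch of the even/odd L2 interleave described above. SELECT_BIT
# is an assumption for illustration.

L2_SIZE = 0x80000   # 512 KB per L2 cache, per the example in the text
SELECT_BIT = 19     # parity bit of a 512 KB block index (2**19 = 0x80000)

def route_l2(real_address: int, select_bit: int = SELECT_BIT) -> str:
    """Return which L2 cache services this address under the interleave."""
    return "extended L2" if (real_address >> select_bit) & 1 else "local L2"

# Consecutive 512 KB blocks alternate between the two caches, dividing
# L2 accesses roughly evenly when addresses are well distributed.
for block in range(4):
    addr = block * L2_SIZE
    print(hex(addr), "->", route_l2(addr))
```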
In embodiments in which core and cache addresses and identifiers are stored, such as in VPD 154, such information can be organized as illustrated in TABLE I, below. In TABLE I, the sorts in the first column refer to a number of good cores on a die, each die including eight cores. Thus, for a die having six good cores, the two failed cores can be listed as available cache.
TABLE I. Active cores and Cache Availability mark area
Sort | Layer 1 | Layer 2 | Layer 3 | Layer 4 | Layer 5 | Layer 6 | Layer 7 | Layer 8 |
8C sort | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
6C sort | C0 | C1 | Cache | C2 | C3 | C4 | Cache | C5 |
4C sort | Cache | C0 | Cache | C1 | Cache | C2 | Cache | C3 |
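If such a table were held in VPD, it might be mirrored in memory as a simple mapping from sort to per-layer entries; the structure and names below are illustrative, not the VPD format.

```python
# Illustrative in-memory mirror of TABLE I: for each sort (number of good
# cores on an eight-core die), the per-layer entry is either an active core
# name or "Cache", meaning the failed core's cache is available for use.

TABLE_I = {
    "8C": ["C0", "C1", "C2", "C3", "C4", "C5", "C6", "C7"],
    "6C": ["C0", "C1", "Cache", "C2", "C3", "C4", "Cache", "C5"],
    "4C": ["Cache", "C0", "Cache", "C1", "Cache", "C2", "Cache", "C3"],
}

def available_cache_layers(sort: str):
    """Layers (1-based) whose failed-core cache is available for this sort."""
    return [i + 1 for i, entry in enumerate(TABLE_I[sort]) if entry == "Cache"]

print(available_cache_layers("6C"))  # -> [3, 7]
print(available_cache_layers("4C"))  # -> [1, 3, 5, 7]
```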
Similarly, allocation of cache access can be organized as illustrated in TABLE II, below. As in TABLE I, the sorts in the first column refer to the number of good cores on an eight-core die, and the remaining columns indicate a preferred priority of access to available cache, if any. For example, for a six-core sort, a first available cache can be accessed by Core 2 with highest priority and by Core 1 with next priority. Thus, Core 2 can have primary allocation (P), whereas Core 1 can have secondary allocation (S).
TABLE II. Primary usage allocation (P) and Secondary usage allocation (S) entry mark Area
Sort | Layer 1 | Layer 2 | Layer 3 | Layer 4 | Layer 5 | Layer 6 | Layer 7 | Layer 8 |
8C sort | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
6C sort | C0 | C1 (S, L3) | Cache | C2 (P, L3) | C3 | C4 (S, L7) | Cache | C5 (P, L7) |
4C sort | Cache | C0 (P, L1/S, L3) | Cache | C1 (P, L3/S, L5) | Cache | C2 (P, L5/S, L7) | Cache | C3 (P, L7) |
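A sketch of how the P and S marks of TABLE II might be consulted when a freed cache is granted, trying the primary claimant before the secondary; the busy-core gating and all names here are assumptions for illustration.

```python
# Illustrative lookup of TABLE II's allocation marks for the six-core sort:
# each available cache layer lists its primary (P) and secondary (S) cores.

ALLOCATION_6C = {
    3: {"P": "C2", "S": "C1"},  # cache at Layer 3
    7: {"P": "C5", "S": "C4"},  # cache at Layer 7
}

def grant_cache(layer: int, busy_cores: set) -> str | None:
    """Grant the cache at `layer` to its primary core if free, else secondary."""
    marks = ALLOCATION_6C[layer]
    for priority in ("P", "S"):
        core = marks[priority]
        if core not in busy_cores:
            return core
    return None  # both claimants busy; cache stays unallocated for now

print(grant_cache(3, busy_cores=set()))   # -> C2 (primary)
print(grant_cache(3, busy_cores={"C2"}))  # -> C1 (secondary)
```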
As an example of operation of embodiments, an operating good core can send a request, such as to firmware 156, for additional cache memory. In this example, any cache in the stack can be used, not only failed core local cache. The request can be passed on to hypervisor 152, which can free up cache and identify which and how much is available, such as by updating firmware 156. The additional cache memory can be released to the requesting core, such as by firmware 156, which can also update VPD 154 to reflect that the cache is shared. Once the requesting core is done with the additional cache, the additional cache can be released, such as by firmware 156 taking control of the additional cache. The additional cache can then be reallocated to the core with which it might ordinarily be associated, such as by firmware 156 and/or hypervisor 152, and VPD 154 can be updated to reflect the reallocation.
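The extend/release handshake just described could be sketched as a small protocol among core, firmware, and hypervisor. The classes below are illustrative stand-ins for firmware 156, hypervisor 152, and VPD 154; their methods and fields are assumptions, not an actual interface.

```python
# Minimal sketch of the extend/release flow described above. All names and
# structures are invented stand-ins for firmware 156, hypervisor 152, and
# VPD 154.

class VPD:
    def __init__(self):
        self.shared = {}  # cache id -> borrowing core id, while shared

class Hypervisor:
    def __init__(self, free_caches):
        self.free_caches = list(free_caches)

    def allocate(self):
        return self.free_caches.pop() if self.free_caches else None

    def reclaim(self, cache_id):
        self.free_caches.append(cache_id)

class Firmware:
    def __init__(self, hypervisor, vpd):
        self.hv, self.vpd = hypervisor, vpd

    def request_cache(self, core_id):
        cache_id = self.hv.allocate()            # hypervisor frees/identifies cache
        if cache_id is not None:
            self.vpd.shared[cache_id] = core_id  # record that cache is shared
        return cache_id

    def release_cache(self, cache_id):
        self.vpd.shared.pop(cache_id, None)  # update VPD: no longer shared
        self.hv.reclaim(cache_id)            # cache can now be reallocated

fw = Firmware(Hypervisor(["L2-die3", "L1-core5"]), VPD())
grant = fw.request_cache("core0")  # core 0 asks for additional cache
fw.release_cache(grant)            # core 0 is done; cache is reclaimed
```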
In a further example of operation of embodiments, an operating good core that is not connected to extended cache and/or does not have sufficient extended cache available can send a request, such as to firmware 156 or to hypervisor 152, for additional resources in response to a memory-intensive or otherwise complicated and/or resource-intensive workload. For example, such a core can request that the memory-intensive workload be reassigned to a core that has extended cache available. Alternatively, hypervisor 152 can monitor the cores to determine when a core requires additional resources and/or when a workload should be reassigned to a core that has extended cache available. The request can be passed on to hypervisor 152, which can reassign the workload and identify to which core the workload has been assigned, such as by updating firmware 156. The originally assigned core can be released and/or assigned a new workload, such as by firmware 156 and/or hypervisor 152, which can also update stored information and/or be updated to reflect that the core is available and/or has been assigned a new workload.
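A hypervisor-side sketch of that reassignment path, assuming a simple per-core record of available extended cache; the sufficiency criterion and records are invented placeholders, not the monitoring logic of the embodiments.

```python
# Sketch of the reassignment path: if a core lacks sufficient extended
# cache for a memory-intensive workload, move the workload to a core that
# has extended cache available. Thresholds and records are illustrative.

def reassign_if_needed(workload_kb, current_core, cores):
    """cores: dict of core id -> available extended cache in KB."""
    if cores.get(current_core, 0) >= workload_kb:
        return current_core  # current core already has enough extended cache
    # pick the core with the most available extended cache, if any suffices
    best = max(cores, key=cores.get)
    return best if cores[best] >= workload_kb else current_core

cores = {"C0": 0, "C1": 512, "C2": 128}
print(reassign_if_needed(256, "C0", cores))  # -> C1
```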
If no failure is detected at block 304, a check can be made to see whether cache is needed (block 314), such as a core requesting cache and/or a core previously having been allocated extended cache. For example, a cache extension request and/or core augmentation request can result from a core being assigned a memory-intensive workload, job, and/or process. If cache is not needed, a check can be made to see whether a core previously has been allocated extended cache (block 316). If cache was not previously extended or allocated, data can be updated if necessary (block 310), access to the data can optionally be granted (block 312), and monitoring can resume (block 302); if cache was previously granted, unneeded cache(s) can be released (block 318) before the data is updated.
If at block 314 it is determined that cache is needed, such as a cache request having been detected and/or received and/or the hypervisor determining that more cache is required for a workload, then an address of one or more or every available cache can be retrieved (block 320). A check can be made to determine whether sufficient cache is available (block 322), and if not, a workload can be reassigned to a core that has access to sufficient cache (block 324). If sufficient cache is available and/or if a workload has been assigned to a core to which sufficient cache is available, then in embodiments, the requesting core can simply be connected to an available cache, but in other embodiments, latency between the requesting core and the available cache(s) can be determined (block 326) and the requesting core can be connected to the cache(s) with the lowest latency (block 328). Data can be updated if necessary (block 310), and monitoring can resume (block 302), access to the data being optionally granted (block 312). It should be noted that data can be updated after any operation if so desired.
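The decision at blocks 320 through 328 — gather available caches, check sufficiency, reassign if insufficient, and otherwise connect to the lowest-latency option — might look like the following sketch; the candidate records and latency figures are hypothetical.

```python
# Sketch of blocks 320-328: from the available caches, pick one with enough
# capacity and the lowest latency to the requesting core. The candidate
# records and latency values are invented for illustration.

def select_cache(candidates, needed_kb):
    """candidates: list of (cache_id, size_kb, latency_cycles) tuples."""
    sufficient = [c for c in candidates if c[1] >= needed_kb]
    if not sufficient:
        return None  # block 324: reassign workload to a better-placed core
    # blocks 326/328: connect to the lowest-latency sufficient cache
    return min(sufficient, key=lambda c: c[2])

available = [
    ("L2-die2", 512, 14),   # shared L2 one die away
    ("L1-core3", 64, 6),    # failed-core local cache directly above
    ("L2-die4", 512, 22),   # shared L2 two dies away
]
print(select_cache(available, needed_kb=256))  # -> ("L2-die2", 512, 14)
```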
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, such as can be considered non-transitory. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible or non-transitory medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks and/or configure the computer or other programmable data processing apparatus to perform a method and/or functions in accordance with embodiments.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
A machine readable computer program may be created by one of skill in the art and stored in computer system 600 and/or in any one or more of machine readable medium 675, such as in the form of a computer program product 680, to simplify the practicing of this invention. In operation, information for the computer program created to run the present invention can be loaded on the appropriate removable data and/or program storage device 655, fed through data port 645, and/or entered using keyboard 665. A user can control the program by manipulating functions performed by the computer program and providing other data inputs via any of the above mentioned data input means. Display device 670 can provide a means for the user to accurately control the computer program and perform the desired tasks described herein.
Computer program product 680 according to embodiments of the invention disclosed herein can be stored in memory and/or computer readable storage media 675, in embodiments. While shown as outside of RAM 610 and ROM 615, it should be readily apparent that computer program product 680 and/or portions thereof can reside in these and/or any other storage medium accessible by computer system 600. It should be noted that CPU(s) 605 can in embodiments be called a computing device(s), but that computer system 600 as a whole, or portions thereof, could also be called a computing device.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Haridass, Anand, Cordero, Edgar R., Sethuraman, Saravanan, Panda, Subrat K., Vidyapoornachary, Diyanesh Babu Chinnakkonda
Patent | Priority | Assignee | Title |
5732209, | Nov 29 1995 | SAMSUNG ELECTRONICS CO , LTD | Self-testing multi-processor die with internal compare points |
5778429, | Jul 04 1994 | Hitachi, Ltd. | Parallel processor system including a cache memory subsystem that has independently addressable local and remote data areas |
6807596, | Jul 26 2001 | Hewlett Packard Enterprise Development LP | System for removing and replacing core I/O hardware in an operational computer system |
7272759, | Aug 05 2004 | GOOGLE LLC | Method and apparatus for system monitoring with reduced function cores |
7389403, | Aug 10 2005 | Oracle America, Inc | Adaptive computing ensemble microprocessor architecture |
7584327, | Dec 30 2005 | Intel Corporation | Method and system for proximity caching in a multiple-core system |
7610537, | Apr 04 2006 | International Business Machines Corporation | Method and apparatus for testing multi-core microprocessors |
7774551, | Oct 06 2006 | Hewlett-Packard Development Company, L.P. | Hierarchical cache coherence directory structure |
7827515, | Mar 15 2007 | Oracle America, Inc | Package designs for fully functional and partially functional chips |
8093102, | Jun 28 2007 | SHENZHEN XINGUODU TECHNOLOGY CO , LTD | Process of forming an electronic device including a plurality of singulated die |
20080091974, | |||
20110138167, | |||
20110252260, | |||
20130052760, | |||
20140298124, | |||
20150067310, | |||
JP2012032888, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Sep 26 2013 | CORDERO, EDGAR R | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 031384 | /0073 | |
Oct 01 2013 | PANDA, SUBRAT K | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 031384 | /0073 | |
Oct 01 2013 | HARIDASS, ANAND | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 031384 | /0073 | |
Oct 07 2013 | VIDYAPOORNACHARY, DIYANESH BABU CHINNAKKONDA | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 031384 | /0073 | |
Oct 07 2013 | SETHURAMAN, SARAVANAN | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 031384 | /0073 | |
Oct 10 2013 | GLOBALFOUNDRIES Inc. | (assignment on the face of the patent) | / | |||
Jun 29 2015 | International Business Machines Corporation | GLOBALFOUNDRIES U S 2 LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 036550 | /0001 | |
Sep 10 2015 | GLOBALFOUNDRIES U S 2 LLC | GLOBALFOUNDRIES Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 036779 | /0001 | |
Sep 10 2015 | GLOBALFOUNDRIES U S INC | GLOBALFOUNDRIES Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 036779 | /0001 | |
Nov 27 2018 | GLOBALFOUNDRIES Inc | WILMINGTON TRUST, NATIONAL ASSOCIATION | SECURITY AGREEMENT | 049490 | /0001 | |
Nov 17 2020 | WILMINGTON TRUST, NATIONAL ASSOCIATION | GLOBALFOUNDRIES Inc | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 054636 | /0001 | |
Nov 17 2020 | WILMINGTON TRUST, NATIONAL ASSOCIATION | GLOBALFOUNDRIES U S INC | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 056987 | /0001 |
Date | Maintenance Fee Events |
Dec 09 2015 | ASPN: Payor Number Assigned. |
Dec 09 2015 | RMPN: Payer Number De-assigned. |
Sep 23 2019 | REM: Maintenance Fee Reminder Mailed. |
Mar 09 2020 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule |
Feb 02 2019 | 4 years fee payment window open |
Aug 02 2019 | 6 months grace period start (w surcharge) |
Feb 02 2020 | patent expiry (for year 4) |
Feb 02 2022 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 02 2023 | 8 years fee payment window open |
Aug 02 2023 | 6 months grace period start (w surcharge) |
Feb 02 2024 | patent expiry (for year 8) |
Feb 02 2026 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 02 2027 | 12 years fee payment window open |
Aug 02 2027 | 6 months grace period start (w surcharge) |
Feb 02 2028 | patent expiry (for year 12) |
Feb 02 2030 | 2 years to revive unintentionally abandoned end. (for year 12) |