An integrated memory system with a spiral cache responds to requests for values received at a first external interface coupled to a particular storage location in the cache, with a response time determined by the proximity of the requested values to that storage location. The cache supports multiple outstanding in-flight requests directed to the same address using an issue table that tracks the outstanding requests and control logic that applies the multiple requests to the same address in the order received by the cache memory. The cache also includes a backing store request table that tracks push-back write operations issued from the cache memory when the cache memory is full and a new value is provided from the external interface, and control logic that prevents multiple copies of the same value from being loaded into the cache, or a copy from being loaded before a pending push-back has completed.
1. A memory circuit, comprising:
multiple storage elements for storing values; and
access circuitry coupled to the multiple storage elements forming at least one information pathway for moving values among the multiple storage elements and supporting a move-to-front network and an ordered push-back network, wherein requested values are provided in response to requests containing addresses corresponding to the requested values, wherein the requested values may be stored in and retrieved from any of the multiple storage elements and are moved to a front-most one of the multiple storage elements along the move-to-front network in response to the requests prior to accessing the requested values from an interface coupled to the front-most storage element, wherein the values stored in remaining ones of the multiple storage elements are swapped backward between adjacent ones of the multiple storage elements in order along the at least one information pathway according to the push-back network at each access to locations other than the front-most location so that the values are stored in the multiple storage elements along the at least one information pathway according to most-recent access, wherein a least-recently-accessed value is either stored in a front-most empty one of the storage elements or is pushed out of the memory circuit to a backing store interface, and wherein the requested values are moved according to the move-to-front network in a different order than a reverse order of the push-back network, whereby individual ones of the requested values are not moved through at least some of their adjacent multiple storage elements in the reverse order of the push-back network during their movement to the front-most one of the multiple storage elements.
2. The spiral memory circuit of
3. The spiral memory circuit of
4. A method of caching a plurality of values within a storage device, comprising:
storing the plurality of values in multiple storage elements;
in response to a request for any one of the plurality of values, moving the requested value to a front-most one of the storage elements along at least one information pathway according to a move-to-front network prior to providing the requested value in response to the request;
responsive to moving the requested value to the front-most storage element, providing the requested value in response to the request from an interface coupled to the front-most storage element;
swapping remaining ones of the plurality of values backwards between adjacent elements in order according to an ordered push-back network along the at least one information pathway to a corresponding next-backward neighbor so that the values are stored in the multiple storage elements along the information pathway according to most-recent access, wherein a least-recently-accessed one of the plurality of values is either stored in a front-most non-empty memory location or is pushed out of a last one of the multiple storage elements to a backing store interface, and wherein the moving moves the requested values according to the move-to-front network in a different order than a reverse order of the push-back network, whereby individual ones of the requested values are not moved through at least some of their adjacent multiple storage elements in the reverse order of the push-back network during their movement to the front-most one of the multiple storage elements.
5. The method of
6. The method of
7. The memory circuit of
8. The memory circuit of
9. The method of
10. The method of
The present Application is a Divisional of U.S. patent application Ser. No. 12/640,360, filed on Dec. 17, 2009, which is a Continuation-in-Part of U.S. patent application Ser. No. 12/270,095 entitled “A SPIRAL CACHE MEMORY AND METHOD OF OPERATING A SPIRAL CACHE,” and Ser. No. 12/270,249 entitled “SPIRAL CACHE POWER MANAGEMENT, ADAPTIVE SIZING AND INTERFACE OPERATIONS”, both of which were filed on Nov. 13, 2008, have at least one common inventor, and are assigned to the same Assignee. The disclosures of the above-referenced U.S. Patent Applications are incorporated herein by reference.
1. Field of the Invention
The present invention is related to hierarchical memory systems, and more particularly to a memory interface that couples a spiral cache memory to other members of a memory hierarchy.
2. Description of Related Art
A spiral cache memory as described in the above-referenced parent U.S. Patent Applications supports multiple in-flight requests referencing the same or different values by their address. In order to integrate a spiral cache memory in a hierarchical memory system, while permitting the next lower-order level of the memory hierarchy or a processor to access the same value repeatedly before a request for that value is completed, a mechanism is needed to ensure that writes to the value are satisfied before subsequent reads. It is desirable to do so without constraining the activity of the processor or the lower-order level of the memory hierarchy that is coupled to the front-most storage tile, as doing so would introduce performance penalties or require the processor architecture and/or program code to constrain the order of accesses. Also, because the backing store will generally have a much higher latency than the spiral cache itself, queues as described in the above-incorporated parent U.S. Patent Applications are needed between the memory hierarchy levels. In order not to constrain the activity of the spiral cache with respect to the backing store, at least at the internal level of the storage tiles, it is desirable to provide a mechanism to coordinate requests to the backing store so that push-back write values can be coordinated with read requests issued to the backing store. Further, read requests issued to the backing store return values from the backing store into the spiral cache. Without checking the address of each value and tracking all of the values present in the spiral cache, multiple copies of the same value could be read into the spiral cache. Therefore, a mechanism to prevent multiple copies of the same value being returned to the spiral cache is needed.
Therefore, it would be desirable to provide a spiral cache interface to a memory hierarchy and an integrated memory hierarchy including a spiral cache, in which multiple outstanding requests for the same value can be issued into the spiral cache without constraining the processor, program code, or lower-order level of the memory hierarchy. It would further be desirable to provide an interface from the spiral cache to a backing store without constraining the behavior of the network of tiles in the spiral cache or having multiple copies of the same value returned to the spiral cache.
The invention is embodied in a spiral cache memory, a hierarchical memory system including the spiral cache memory and methods of operation of the system. The spiral cache memory has multiple tiles with storage locations for storing values, each of which may be a smaller cache memory such as a direct-mapped cache or an associative cache.
Multiple requests accessing the same value can be issued into the spiral cache. Because such requests are not necessarily satisfied in the order in which they are issued into the spiral cache, but rather in the order in which they are returned, erroneous reads could otherwise result. To prevent this, an issue table is used to track the requests, and control logic within the spiral cache memory interface controls the order in which the returned responses are applied to the interface that couples the spiral cache to the lower-order level of the memory hierarchy or processor.
Prevention of multiple copies of the same value from being returned to the spiral cache from the backing store is performed by maintaining a backing store request table that prevents multiple read requests to the same value (address) from being issued to the backing store. The backing store request table also tracks push-back write operations issued from the spiral cache, giving priority to write operations coming from the push-back spiral over read requests issued from the spiral cache due to a miss.
The memory interface also provides a number of queues that buffer operations, values and requests in order to prevent overflow of the backing store input, to order operations on values, and to serialize requests, as described in further detail below.
The foregoing and other objectives, features, and advantages of the invention will be apparent from the following, more particular, description of the preferred embodiment of the invention, as illustrated in the accompanying drawings.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following detailed description of the invention when read in conjunction with the accompanying Figures, wherein like reference numerals indicate like components, and:
The present invention encompasses techniques for effectively integrating a spiral cache memory into a memory hierarchy. A memory interface having a number of tables and queues provides unconstrained operation by the adjacent levels of the system hierarchy by controlling the order of application of values returned from the spiral cache according to the order of the issued requests rather than the order of the returned values, which may not match. The memory interface also ensures that the backing store input does not overflow and that multiple copies of the same value are not loaded into the spiral cache due to multiple requests issued at the front of the spiral. The memory interface further ensures that backing store read requests do not bypass push-back values that are propagating backwards through the spiral, which would return invalid values that are not identified as such. An arrangement of a spiral cache that locates the lower-order and higher-order hierarchy member interfaces at edges of the spiral is also illustrated; while the cache type is still referred to as "spiral", since the front-most tile is not located near the center of the array, the push-back network follows a meandering path that zig-zags in segments of increasing length.
Black-Box Behavior of the Spiral Cache
Referring now to
The cache line being accessed by a load or store operation may be located within the spiral cache, or the cache line may be absent. If the cache line is present, the spiral cache reports a hit, which completes the associated operation successfully. Otherwise, if the accessed cache line is not present in the spiral cache, a miss occurs. A miss requires fetching the cache line from backing store 112 and moving the cache line to front-most tile 0. The move-to-front (M2F) operation involves not only the move-to-front network 114 inside the spiral cache, but also requires an additional connection to backing store 112. Referring now to
When spiral cache 104 reports a miss, a single-copy invariant condition imposed on spiral cache 104 guarantees that the requested cache line does not exist anywhere in spiral cache 104. Therefore, the cache line is fetched from backing store 112 and written into front-most tile 0. The associated push-back operation causes a cache line to be written into backing store 112 if all tile caches contain non-empty (valid) cache lines. The black-box communication behavior of spiral cache 104 is described below. Data are communicated between spiral cache 104 and backing store 112 only in case of a miss. A miss requires a cache line to be moved from backing store 112 into front-most tile 0. The associated push-back operation may cause a cache line to be written into backing store 112. It is noted that cache lines are initially loaded into spiral cache 104 only at front-most tile 0, and leave spiral cache 104 only from the tail end of spiral cache 104. A pushed-back cache line exits a spiral cache of N tiles at the tail end after a delay of at least N−1 duty cycles has elapsed since the writing of the cache line fetched from backing store 112 into front-most tile 0.

In order for the above-described black-box behavior of the spiral cache to operate, the ordering of requests and responses must be considered. Spiral cache 104 does not inherently preserve any ordering. Multiple requests to different cache lines may return in arbitrary order depending on the location of the values being requested. Requests to the same cache line may also return in a different order, depending on the location of the cache line, the operation of the geometric retry mechanism, and the collision resolution mechanism of new requests on the diagonal in the move-to-front network, as described in the above-incorporated parent U.S. Patent Application "A SPIRAL CACHE MEMORY AND METHOD OF OPERATING A SPIRAL CACHE." Therefore, any ordering guarantees of the responses with respect to their requests must be implemented outside of the spiral cache tile array. The present invention provides mechanisms to guarantee the completion order of load and store operations to the same cache line as issued by processor 100, without imposing any ordering restrictions on operations to different cache lines. The ordering behavior described above is consistent with that of contemporary processor architectures, which are capable of accepting multiple outstanding memory operations.
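While the tile array itself is described in the above-incorporated parent applications, the black-box behavior just described (move-to-front on a hit, fetch into front-most tile 0 on a miss, and push-back of the least-recently-accessed line out of the tail when every tile holds a valid line) can be summarized by a simple behavioral model. The following Python sketch is illustrative only; the class and method names are not part of the described system, and the later-described discarding of clean push-back lines is omitted here.

```python
from collections import OrderedDict

class SpiralCacheModel:
    """Behavioral model of the spiral cache black box: move-to-front on hit,
    fetch-and-insert at tile 0 on miss, and push-back of the least-recently-
    accessed line out of the tail when all N tiles hold valid lines."""

    def __init__(self, num_tiles, backing_store):
        self.num_tiles = num_tiles          # N tiles, one cache line each in this sketch
        self.lines = OrderedDict()          # front (most recent) ... tail (least recent)
        self.backing_store = backing_store  # dict: address -> cache line (assumed complete)

    def access(self, address):
        if address in self.lines:                        # hit: single-copy invariant means
            self.lines.move_to_end(address, last=False)  # at most one copy; move to front
            return self.lines[address]
        # miss: fetch from backing store 112 and write into front-most tile 0
        line = self.backing_store[address]
        self.lines[address] = line
        self.lines.move_to_end(address, last=False)
        if len(self.lines) > self.num_tiles:             # all tiles full: push-back write
            lru_addr, lru_line = self.lines.popitem(last=True)
            self.backing_store[lru_addr] = lru_line      # tail-end line leaves the spiral
        return line
```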
System Integration of a Spiral Cache
Referring now to
Queues and Tables
The various queues and tables included in the system of
In addition to the queues described above, memory interface 106 contains two tables: An issue table itab keeps track of all outstanding memory operations, and ensures that memory interface 106 performs load and store operations to the same cache line in the order issued by the processor into load-store queue ldstq. A backing store request table mtab keeps track of all outstanding backing-store read operations, and guarantees that multiple read requests directed to the same cache line result in a single read operation from backing store 112, which preserves the single-copy invariant condition. A primary function of the queueing system architecture depicted in
The dataflow of a memory operation through the memory system depicted in
Ordering of the Spiral Responses
The ordering problem of a sequence of load and store operations to the same cache line that hit in spiral cache 104 is as follows. Assume, for example, that processor 100 issues a store operation and subsequently a load operation to the same address. For correctness, it is expected that the load operation responds with the previously stored value. Problems can arise within the system, because requests may return out of order from spiral cache 104. For example, assume in a hypothetical system that a request issued into the spiral cache comprises all the information needed to service the request, including an op-code to distinguish loads from stores, the address, and the store value if it applies. It should be noted that this request differs from the requests used in the exemplary system of
A request issued by memory interface 106 into spiral cache 104 includes the address and a retry radius. When the corresponding response (reply) arrives on the M2F network at front-most tile 0, the address portion of the response is used to retrieve the corresponding entry from issue table itab. It is the entry in issue table itab that provides the operational context, and for store operations, the entry provides the store value. Support for multiple outstanding requests per cache line is provided by organizing issue table itab as a FIFO queue. The implicit ordering of the issue table itab FIFO maintains the order of memory operations. Therefore, when a store operation is issued before a load operation to the same address, the store operation entry precedes the load operation entry in issue table itab, and will be completed before the load operation is completed.
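As a purely illustrative aid, the FIFO behavior of issue table itab described above might be modeled as follows; the entry fields and method names are assumptions of this sketch rather than interfaces of the described hardware, which retrieves entries by the address carried in the returning response.

```python
class IssueTable:
    """FIFO of outstanding memory operations. A response returning from the
    spiral carries only an address; the oldest entry with that address supplies
    the operational context (op-code and, for stores, the store value), so a
    store issued before a load to the same line completes before that load."""

    def __init__(self):
        self.entries = []  # each entry: (opcode, address, store_value); index 0 is oldest

    def insert(self, opcode, address, store_value=None):
        self.entries.append((opcode, address, store_value))

    def retrieve(self, address):
        # Find and remove the oldest outstanding entry matching the response address.
        for i, (opcode, addr, value) in enumerate(self.entries):
            if addr == address:
                return self.entries.pop(i)
        return None  # no outstanding operation for this address
```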
Referring now to
Serialization of Backing Store Operations
Backing store 112 serves read operations issued by memory interface 106 into read queue rdq if a request misses in spiral cache 104. Backing store 112 also serves write operations emitted by the push-back network of spiral cache 104. Since these operations have two distinct sources, memory interface 106 for reads and the push-back network for writes, they must be serialized. Serialization of read and write requests to backing store 112 must respect the following ordering constraint: if a read operation issued by memory interface 106 to backing store 112 contains the same address as a write operation issued by the push-back network, then the write operation must precede the read operation. The reason for the ordering constraint is described below. A write operation to backing store 112 contains a modified (dirty) cache line, because push-back requests containing clean cache lines are discarded at the tail end tile of spiral cache 104. (There is no reason to return a clean cache line to backing store 112, as by definition, the clean cache line is already identically present in backing store 112.) The backing store write operation originates at the tail end of the push-back network of spiral cache 104, when tail-end tile 63 (tile N−1) pushes a dirty value out. The dirty value was produced earlier by a store operation that stored the modified value in front-most tile 0. Subsequent memory accesses cause the dirty value to be pushed back through the push-back network. An example of such memory accesses is N accesses that missed in spiral cache 104 and have the same direct mapping as the dirty line, causing the corresponding values to be read from backing store 112 and loaded into tile 0 in the same cache line that the dirty line occupied. Because their mapping is the same, as each value is pushed back to make room for the next, they push the previous occupants of that storage, including the dirty line, backward at each access. As another example, spiral cache 104 could have received N−1 requests, again mapped to the same cache lines, that hit in spiral cache 104, causing the corresponding values to be moved into front-most tile 0 and the dirty line to be pushed back by N−1 tiles into tail-end tile 63. One subsequent request that maps to the same cache line, but misses in spiral cache 104, causes the corresponding value to be loaded from backing store 112 and stored in front-most tile 0, causing the dirty line to be pushed out of tail-end tile 63. If processor 100 issues a load operation for the dirty cache line while the dirty cache line is being pushed back on the push-back network toward backing store 112, a race condition occurs if spiral cache 104 reports a miss and memory interface 106 initiates a read operation to backing store 112 before the dirty line has been written back into backing store 112.
The move-to-front request of the load operation traverses spiral cache 104 while the requested cache line, modified by the preceding store operation, is pushed back on the push-back network within spiral cache 104 or has been pushed out of the spiral cache at tail-end tile 63. If the cache line is in spiral cache 104, the single-copy invariant condition guarantees that the move-to-front request will move the cache line to front-most tile 0. Otherwise, the cache line must have been pushed out of spiral cache 104 via the push-back network. In the extreme timing case for a spiral cache hit, the move-to-front request meets the requested cache line during the same duty cycle that the push-back value arrives at tail-end tile 63. For a miss to occur, the requested cache line must have been pushed out at least one duty cycle before the move-to-front request reaches tail-end tile 63. Since the M2F request must travel to memory interface 106 before a miss can be reported and a read request issued to backing store 112, the travel time of the M2F request from tail-end tile 63 to front-most tile 0 enables ordering of backing store operations such that the write operation will reach the backing store before the read operation. To prevent a race condition between backing store write and read requests, push-back read queue pbrdq forms the master queue of read queue rdq. As such, direct insertions into push-back read queue pbrdq have priority over entries in read queue rdq. Thus, write operations emitted by the push-back network have priority over read operations originating from the M2F network, and are enqueued immediately into push-back read queue pbrdq. Read operations are enqueued into push-back read queue pbrdq when possible, that is, during clock cycles when no push-back request is being enqueued. Collisions are resolved by enqueuing read operations in read queue rdq. The organization of the push-back read queue pbrdq and read queue rdq guarantees that a read request to backing store 112 trails a potential write request. Thus, backing store 112 serves the above-described exemplary read operation correctly with the cache line written during the preceding push-back write operation.
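The priority rule just described, in which push-back writes enter push-back read queue pbrdq immediately and reads enter pbrdq only in cycles with no push-back (otherwise waiting in read queue rdq), might be sketched per duty cycle as follows. The drain policy for deferred reads shown here is an assumption made for illustration; the described system only requires that a read never overtake a push-back write to the same address.

```python
from collections import deque

class BackingStoreArbiter:
    """Per-cycle serialization of push-back writes and miss reads into the
    master queue pbrdq that the backing store drains; rdq holds reads that
    collided with a push-back write in the cycle they arrived."""

    def __init__(self):
        self.pbrdq = deque()  # push-back read queue (master queue of rdq)
        self.rdq = deque()    # read queue for reads deferred by a collision

    def cycle(self, pushback_write=None, new_read=None):
        if pushback_write is not None:
            # Writes emitted by the push-back network are enqueued immediately.
            self.pbrdq.append(("write", pushback_write))
            if new_read is not None:
                self.rdq.append(("read", new_read))  # collision: the read waits in rdq
        else:
            # No push-back this cycle: reads may enter the master queue. A newly
            # arriving read goes behind any previously deferred reads to keep order.
            if new_read is not None:
                self.rdq.append(("read", new_read))
            if self.rdq:
                self.pbrdq.append(self.rdq.popleft())
```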
Multiple Spiral Misses
When spiral cache 104 accepts multiple outstanding requests, one or more of them may miss. Backing store request table mtab and bypass queue bypq are included to prevent duplication of lines in the spiral cache when multiple misses to the same cache line require retrieving the cache line from the backing store. The potential for duplication of cache lines due to multiple outstanding backing-store read requests exists due to multiple operations to the same address. For example, assume that processor 100 issues a store followed by a load operation to the same cache line, as discussed above, and that both spiral responses result in a miss, but are returned in order. Without logic for handling such conditions, memory interface 106 would enqueue two read requests to backing store 112, the first associated with the store operation and the second with the load operation. Assuming that backing store 112 preserves the order of the requests, it first returns the requested cache line associated with the store operation. Memory interface 106 would then patch the cache line with the store value and write the cache line into front-most tile 0. When backing store 112 returns the same cache line again, now associated with the load operation, memory interface 106 would return the requested load value to processor 100 and write the cache line into front-most tile 0, overwriting the previously written store value. Not only is the load value returned to processor 100 different from the expected value, but all subsequent load operations will return the wrong value as well. If the first copy of the cache line returned by backing store 112 is pushed back out of tile 0 before memory interface 106 writes the second copy into tile 0, the problem is further exacerbated: spiral cache 104 then contains two copies of the same cache line, violating the single-copy invariant condition. Therefore, memory interface 106 prevents duplication of cache lines due to multiple outstanding read requests to the backing store. In the illustrated embodiment of
Backing store request table mtab is an associative memory that maintains one entry per cache-line address for each outstanding backing store read request. An address entry is inserted into backing store request table mtab when the spiral cache 104 responds with a miss. Memory interface 106 also enqueues a read request with the associated address into read queue rdq. The entry is deleted from the backing store request table mtab when memory interface 106 dequeues the backing store response from backing store queue bsq, and stores the cache line in front-most tile 0 of the spiral cache. Bypass queue bypq is a FIFO queue with additional functionality resembling that of an associative memory. Each queue entry contains an address plus a ready bit. Insertion of an entry into bypass queue bypq corresponds to a conventional enqueue operation. When inserting an address, its associated ready bit is initialized to not-ready. However, dequeuing an entry from bypass queue bypq is not performed according to a conventional dequeue operation. Instead, to dequeue an entry associated with an address, a priority decoder is included, which identifies the first ready entry having the requested address from the head of the queue, as has been described above for the operation of issue table itab. Bypass queue bypq also includes circuitry that implements a “ready” operation that sets the ready bits of all entries associated with an address from not-ready to ready.
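A minimal sketch of the two structures just described, with mtab modeled as a set of outstanding read addresses and bypq as a FIFO with per-entry ready bits, a mark-ready operation, and a priority dequeue of the first ready entry from the head, is given below. The Python representation stands in for the associative hardware and is illustrative only.

```python
class BypassQueue:
    """FIFO of deferred same-line requests. Entries are inserted not-ready;
    mark_ready() sets the ready bit of every entry for an address, and
    dequeue_ready() removes and returns the first ready entry from the head
    (the role of the priority decoder in hardware)."""

    def __init__(self):
        self.entries = []  # list of [address, ready]; index 0 is the head

    def insert(self, address):
        self.entries.append([address, False])  # initialized to not-ready

    def mark_ready(self, address):
        for entry in self.entries:
            if entry[0] == address:
                entry[1] = True

    def dequeue_ready(self):
        for i, (address, ready) in enumerate(self.entries):
            if ready:
                self.entries.pop(i)
                return address
        return None


# mtab: at most one entry per cache-line address with an outstanding backing
# store read; modeled here as a simple set in place of an associative memory.
mtab = set()
```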
Referring now to
Memory interface 106 is responsible for dequeuing ready entries from the bypass queue bypq in FIFO order. There are two ready entries associated with address 100 illustrated in table T2B. The first entry corresponds to the second memory operation associated with address 100. After the first entry is dequeued, the state of bypass queue bypq is as shown in table T2C. Memory interface 106 issues a request for the address of the entry dequeued from bypass queue bypq into spiral cache 104. When spiral cache 104 responds, issue table itab provides the information needed to handle the response as for any other spiral cache responses. Backing store request table mtab and bypass queue bypq not only serve to enforce correctness by preventing duplication of cache lines, but also to improve performance. If multiple requests to a particular memory address occur in close succession, backing store request table mtab and bypass queue bypq reduce the overall latency from multiple, presumably high-latency accesses to the backing store to just one. This capability also improves the throughput of the overall memory system.
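The way mtab and bypq cooperate on misses and backing store responses can be summarized by the following illustrative flow; the function names, and the spiral and queue objects they operate on, are assumptions of this sketch rather than interfaces of the described system.

```python
def handle_spiral_miss(address, mtab, bypq, rdq):
    """On a miss response from the spiral: issue at most one backing store read
    per cache line; later misses to the same line are deferred in bypq."""
    if address in mtab:
        bypq.insert(address)   # a read for this line is already outstanding
    else:
        mtab.add(address)      # record the single outstanding read
        rdq.append(address)    # enqueue the read request toward backing store 112

def handle_backing_store_response(address, line, mtab, bypq, spiral):
    """On a response dequeued from bsq: write the line into front-most tile 0
    exactly once, retire the mtab entry, and wake deferred requests so they
    re-issue into the spiral, where they will now hit."""
    spiral.write_front_tile(address, line)  # the itab entry supplies op context
    mtab.discard(address)
    bypq.mark_ready(address)
    deferred = bypq.dequeue_ready()         # ready entries leave in FIFO order
    if deferred is not None:
        spiral.issue_request(deferred)
```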
Memory interface 106 also handles dequeuing and processing entries from its associated input queues. The selection of the queues determines the order in which actions are scheduled. An exemplary scheduling loop that may be implemented by memory interface 106 is illustrated in
Another important priority consideration not exposed in the scheduling loop of
As in many queueing systems, the system depicted in
The rate at which spiral cache 104 can generate misses and cause memory interface 106 to enqueue the associated read requests via read queue rdq to the backing store is much greater than the push-back rate, because spiral cache 104 operates at a much higher clock frequency than backing store 112. Therefore, to prevent overflow of push-back read queue pbrdq, which is the master queue of read queue rdq, the number of outstanding requests issued into spiral cache 104 must be controlled. A “one-quadrant” cache such as that illustrated in
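The required flow control amounts to bounding the number of requests that may be outstanding inside the spiral at once, so that even if every outstanding request misses, the resulting reads cannot overflow push-back read queue pbrdq. A credit-style limiter of the following form illustrates the idea; treating the bound as a fixed constant is an assumption of this sketch, whereas the described system derives it from the cache and queue geometry.

```python
class OutstandingRequestLimiter:
    """Admit a request from load-store queue ldstq into the spiral only while
    the number of requests in flight stays below a bound sized against the
    capacity of pbrdq and the backing store service rate."""

    def __init__(self, max_outstanding):
        self.max_outstanding = max_outstanding  # illustrative fixed bound
        self.in_flight = 0

    def try_issue(self, issue_fn, request):
        if self.in_flight >= self.max_outstanding:
            return False                # request remains queued in ldstq for now
        self.in_flight += 1
        issue_fn(request)
        return True

    def retire(self):
        # Called when a response has been applied and its itab entry removed.
        self.in_flight = max(0, self.in_flight - 1)
```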
While the invention has been particularly shown and described with reference to the preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form, and details may be made therein without departing from the spirit and scope of the invention.