A method, system, and computer program product for data consistency, comprising sending an auto propagating message from a management host to hold IO for devices of a consistency group to at least one storage array of a set of storage arrays, and causing each storage array of the set of storage arrays, upon receiving the message, to send the hold message to each storage array to which they have connectivity.
3. A method for data consistency, the method comprising:
sending an auto propagating message from a management host to hold IO for devices of a consistency group to at least one storage array of a set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the message, to send the hold message to each storage array to which they have connectivity;
polling, by the management host, a status of the set of storage arrays to determine if registration has been successfully completed;
wherein the connectivity is through fiber channel;
changing a status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has received the hold message for the consistency group;
polling, by the management host, the status of the storage arrays;
upon confirming that each storage array of the storage arrays that has devices in the consistency group received the hold, sending an auto propagating clone activate message from the management host to perform a clone of the devices of the consistency group to the at least one storage array of the set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the auto propagating clone activate message, to send the clone activate message to each storage array to which they have connectivity;
changing the status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has completed setting the clone for the consistency group; and
polling, by the management host, the status of the storage arrays to confirm the clone has been set.
5. A computer program product comprising:
a non-transitory computer readable medium encoded with computer executable program code, the code configured to enable the execution of:
sending an auto propagating message from a management host to hold IO for devices of a consistency group to at least one storage array of a set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the message, to send the hold message to each storage array to which they have connectivity;
polling, by the management host, a status of the set of storage arrays to determine if registration has been successfully completed;
wherein the connectivity is through fiber channel;
changing a status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has received the hold message for the consistency group;
polling, by the management host, the status of the storage arrays;
upon confirming that each storage array of the storage arrays that has devices in the consistency group received the hold, sending an auto propagating clone activate message from the management host to perform a clone of the devices of the consistency group to the at least one storage array of the set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the auto propagating clone activate message, to send the clone activate message to each storage array to which they have connectivity;
changing the status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has completed setting the clone for the consistency group; and
polling, by the management host, the status of the storage arrays to confirm the clone has been set.
1. A system for data consistency, the system comprising:
a management host; wherein the management host is connected to at least one storage array of a set of storage arrays; and
computer-executable logic operating in memory, wherein the computer-executable program logic is configured for execution of:
sending an auto propagating message from the management host to hold IO for devices of a consistency group to the at least one storage array of the set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the message, to send the hold message to each storage array to which they have connectivity;
polling, by the management host, a status of the set of storage arrays to determine if registration has been successfully completed;
wherein the connectivity is through fiber channel;
changing a status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has received the hold message for the consistency group;
polling, by the management host, the status of the storage arrays;
upon confirming that each storage array of the storage arrays that has devices in the consistency group received the hold, sending an auto propagating clone activate message from the management host to perform a clone of the devices of the consistency group to the at least one storage array of the set of storage arrays;
causing each storage array of the set of storage arrays, upon receiving the auto propagating clone activate message, to send the clone activate message to each storage array to which they have connectivity;
changing the status on each storage array of the set of storage arrays when the storage array receives the message; wherein the status denotes that a storage array has completed setting the clone for the consistency group; and
polling, by the management host, the status of the storage arrays to confirm the clone has been set.
2. The system of
enabling each storage array of the set of storage arrays to determine the amount of time it has been holding IO;
determining if the IO has been held at a particular storage array of the set of storage arrays beyond a timeout time for the IO;
causing the particular storage array of the set of storage arrays to start accepting IO;
changing the status on the particular storage array of the set of storage arrays to denote setting of the clone copy failed; and
polling, by the management host, the status of the set of storage arrays to determine that the clone has not been successfully set.
4. The method of
enabling each storage array of the set of storage arrays to determine the amount of time it has been holding IO;
determining if the IO has been held at a particular storage array of the set of storage arrays beyond a timeout time for the IO;
causing the particular storage array of the set of storage arrays to start accepting IO;
changing the status on the particular storage array of the set of storage arrays to denote setting of the clone copy failed; and
polling, by the management host, the status of the set of storage arrays to determine that the clone has not been successfully set.
6. The computer program product of
enabling each storage array of the set of storage arrays to determine the amount of time it has been holding IO;
determining if the IO has been held at a particular storage array of the set of storage arrays beyond a timeout time for the IO;
causing the particular storage array of the set of storage arrays to start accepting IO;
changing the status on the particular storage array of the set of storage arrays to denote setting of the clone copy failed; and
polling, by the management host, the status of the set of storage arrays to determine that the clone has not been successfully set.
7. The method of
sending an auto propagating registration message from a management host to register devices of a consistency group to at least one storage array of a set of storage arrays; and
causing each storage array of the set of storage arrays, upon receiving the registration message, to send the registration message to each storage array to which they have connectivity.
A portion of the disclosure of this patent document may contain command formats and other computer language listings, all of which are subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This invention relates to data replication.
Computer data is vital to today's organizations, and a significant part of protection against disasters is focused on data protection. As solid-state memory has advanced to the point where cost of memory has become a relatively insignificant factor, organizations can afford to operate with systems that store and process terabytes of data.
Conventional data protection systems include tape backup drives, for storing organizational production site data on a periodic basis. Such systems suffer from several drawbacks. First, they require a system shutdown during backup, since the data being backed up cannot be used during the backup operation. Second, they limit the points in time to which the production site can recover. For example, if data is backed up on a daily basis, there may be several hours of lost data in the event of a disaster. Third, the data recovery process itself takes a long time.
Another conventional data protection system uses data replication, by creating a copy of the organization's production site data on a secondary backup storage system, and updating the backup with changes. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location.
A method, system, and computer program product for data consistency, comprising sending an auto propagating message from a management host to hold IO for devices of a consistency group to at least one storage array of a set of storage arrays, and causing each storage array of the set of storage arrays, upon receiving the message, to send the hold message to each storage array to which they have connectivity.
Objects, features, and advantages of embodiments disclosed herein may be better understood by referring to the following description in conjunction with the accompanying drawings. The drawings are not meant to limit the scope of the claims included herewith. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. Thus, features and advantages of the present disclosure will become more apparent from the following detailed description of exemplary embodiments thereof taken in conjunction with the accompanying drawings in which:
Generally, to make a useful copy of multiple applications' data, the copy of the applications' data should be in a consistent state. Conventionally, this may not be a problem when there is one application writing to storage using a direct link. Typically, the application may be paused and then a consistent copy of the storage may be taken. Usually, if an application and storage are separated by switches or networks and the application's data is spread across multiple devices on multiple storage arrays, it may be more problematic to take a consistent copy of the storage due to delays between the application and storage. Typically, if the application stores data across many different types of storage distributed across a network, it may be more problematic to create a consistent copy of the storage. Conventionally with distributed storage, all IO must be held until the point in time for the copy is confirmed to be set on each of the storage mediums across the network. Conventionally, as the storage devices that need to be paused increase in both number and geographic dispersion, it may become more problematic to create a consistent copy of the data due to IO timeout restrictions.
Generally, in a data storage environment, applications may be running on multiple hosts. Usually, each host may be connected to one or more switches. Conventionally, each switch may in turn be connected to one or more storage arrays. Usually, each storage array may have one or more storage devices. Typically, each application may have files stored on multiple devices located on multiple storage arrays. Usually, an application may use dependent writes, and the setting of any point in time for a consistency group of devices, which may be located across storage arrays, must ensure the ordering of the dependent writes. In certain embodiments, applications may run on other devices, such as storage arrays, and storage arrays may drive IO to other storage arrays. As used herein, host may generally refer to the device driving IO to devices on storage arrays, but hosts may include storage arrays and other devices capable of driving IO.
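For illustration only, the following Python sketch models the environment described above: arrays holding devices, some of which belong to a consistency group, connected over a backbone and reachable from a management host. The class and field names are hypothetical and are not taken from any embodiment.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class Device:
    device_id: str
    in_consistency_group: bool = False  # device participates in the consistency group

@dataclass
class StorageArray:
    array_id: str
    devices: Dict[str, Device] = field(default_factory=dict)
    connected_arrays: Set[str] = field(default_factory=set)  # arrays reachable over the backbone
    status: Set[str] = field(default_factory=set)            # flags readable by the management host

@dataclass
class ManagementHost:
    host_id: str
    entry_arrays: Set[str] = field(default_factory=set)      # arrays the host reaches directly

# Example: an application's data spread across devices on two connected arrays.
array1 = StorageArray("array1", {"dev1": Device("dev1", True)}, {"array2"})
array2 = StorageArray("array2", {"dev2": Device("dev2", True)}, {"array1"})
mgmt = ManagementHost("mgmt-host", {"array1"})
```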
In an embodiment, a requirement of an enterprise cloud may be to supply a consistency service to allow creating consistent copies within the cloud across different operating systems, applications and cloning technologies—all at enterprise-level scale, speed, reliability and security. In some embodiments, enterprise storage clouds may be a federation of multiple enterprise level storage arrays logically located together, with connectivity and compute elements, inside an enterprise cloud. As used herein, setting of any point in time for a clone copy may indicate marking the device at a given point in time to allow a clone copy of that point in time to be made.
Conventionally, multi-array consistency technology may be implemented using a management server connected to a heterogeneous set of arrays. Today, a management server may synchronize the arrays through the process of cloning devices. Typically, this could be done over TCP/IP or fiber channel, but TCP/IP is less scalable and less secure. Using current techniques, it may not be possible to take a clone when the number of devices becomes too great or the devices are too geographically dispersed due to IO timeout constraints. Using conventional techniques, it may not be possible to take a consistent clone copy when there are millions of devices across thousands of storage arrays because a management server may not be able to serially contact storage arrays containing each device in a consistency group, to open the consistency group, create a clone copy, and close the group before an IO timeout occurs. Generally, an IO timeout may cause an application to crash. Usually, the speed at which an IO may be sent between storage arrays or hosts may be limited by the speed of light.
In certain embodiments, the current disclosure enables consistent data copying across heterogeneous multi-arrays at enterprise level scale, speed, reliability and security. In some embodiments, the current disclosure enables Fiber Channel (FC) connectivity between the storage arrays in the enterprise cloud, allowing each array to discover and communicate with other arrays over Fiber Channel (FC). In many embodiments, each array may be connected to each other array, directly or indirectly, in the storage area network (SAN) through FC. In some embodiments, this may implement a storage backbone network. In many embodiments, the storage backbone network may use fiber channel connections.
In some embodiments, a Fiber Channel storage area network may protect against external, IP-based attacks. In certain embodiments, storage arrays may communicate using VU SCSI commands to achieve specific operations, enhancing the security level by allowing only certain operations. In some embodiments, use of a consistency group trigger may be over fiber channel. In certain embodiments, enterprise cloud consistency may require many LUNs in a cloud to stop processing writes while the setting of any point in time for a clone copy of each member of the consistency group is made. In many embodiments, host-based consistency triggering may not be practical due to the variety and number of hosts and storage devices involved. In most embodiments, a storage array may have many devices that may belong to a consistency group.
In some embodiments, marking a consistent copy of data may require a consistency window to be opened on as many as a million LUNs in a short time. In most embodiments, the period of time to open the window may be more than the SCSI command timeout time for Fiber Channel. In certain embodiments, a management host, connected to at least one array, may discover the devices in a consistency group. In many embodiments, discovery of the arrays containing devices in a consistency group may occur by a management server discovering or contacting an array, which in turn discovers or contacts each array to which it is connected, to increase the speed of notification across the arrays. In most embodiments, each discovered array may discover other connected arrays and send a message to the management server. Traditionally, a management server may have discovered each array with devices individually, and may not have been able to notify each array of a consistency group trigger before a SCSI IO timeout.
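As a sketch only, the discovery idea described above can be pictured as a flood over the array connectivity graph: each contacted array forwards the message to every array it can see, arrays ignore repeats, and each would report back to the management server. The function, graph, and names below are hypothetical.

```python
from collections import deque
from typing import Dict, Set

def propagate_discovery(entry_arrays: Set[str],
                        connectivity: Dict[str, Set[str]]) -> Set[str]:
    """Flood a discovery message across the array backbone.

    Each array that receives the message forwards it to every array it is
    connected to; arrays that have already been contacted ignore repeats,
    so the flood terminates once every reachable array has been seen.
    """
    contacted: Set[str] = set()
    queue = deque(entry_arrays)            # arrays contacted directly by the management host
    while queue:
        array = queue.popleft()
        if array in contacted:
            continue
        contacted.add(array)               # the array would also report back to the management server
        queue.extend(connectivity.get(array, set()) - contacted)
    return contacted

# Hypothetical backbone: the management host only reaches "A" directly.
backbone = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A"}, "D": {"B"}}
print(propagate_discovery({"A"}, backbone))   # all of A, B, C, D are reached
```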
In some embodiments, a management host may send VU SCSI commands over a fiber channel backbone to reach arrays in a cloud. In many embodiments, a management host may notify devices participating in a consistency group that they are part of the consistency group. In an embodiment, the management host may contact the arrays using a fiber channel backbone. In most embodiments, each array discovered or contacted by the management host may in turn discover or contact each array it sees with the VU SCSI command to speed propagation of the command.
In most embodiments, once a consistency group has been activated on devices in the arrays in the consistency group, the arrays in a fiber channel backbone may relay the message to all arrays to which they are connected. In many embodiments, the command to open a consistency group window may propagate to the arrays and devices in the arrays in the fiber channel backbone quickly, which may be quicker than serial propagation, according to how the network is connected. In certain embodiments, by propagating the notification across many arrays, the notifications may reach an enterprise-level device count quickly. In many embodiments, distributed dispersion of notifications may reduce the requirement for a single element to manage the whole storage set. In further embodiments, if notifications to create the clone copy do not happen within a given period of time, the clone copy may be cancelled to prevent an IO timeout.
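To illustrate why fan-out propagation may beat serial notification against an IO timeout budget, the sketch below (illustrative only, with a hypothetical backbone) compares the number of hop rounds a flooded message needs with the number of sequential contacts a single management server would make.

```python
from collections import deque
from typing import Dict, Set

def propagation_rounds(entry: str, connectivity: Dict[str, Set[str]]) -> int:
    """Number of hop rounds for a flooded message to reach every array (BFS depth)."""
    depth = {entry: 0}
    queue = deque([entry])
    while queue:
        node = queue.popleft()
        for neighbor in connectivity.get(node, set()):
            if neighbor not in depth:
                depth[neighbor] = depth[node] + 1
                queue.append(neighbor)
    return max(depth.values())

# Hypothetical backbone of 8 arrays where each array forwards to at most two others.
backbone = {
    "A": {"B", "C"}, "B": {"D", "E"}, "C": {"F", "G"},
    "D": {"H"}, "E": set(), "F": set(), "G": set(), "H": set(),
}
print("flooded rounds:", propagation_rounds("A", backbone))   # 3 rounds of forwarding
print("serial messages:", len(backbone))                      # 8 sequential contacts
```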
Refer now to the example embodiment of
In the example embodiment of
Refer now to the example embodiment of
Refer now to the example embodiment of
In this embodiment, the connectivity provided by the switches is given by a connectivity table stored in the switch. Connectivity table 370 is an example of a portion of a connectivity table of a switch. Connectivity table 370 shows that host 1 300 may reach storage array 1 345 through initiator port P0 to target port P5. As well, connectivity table 370 shows that storage array 1 345 may reach storage array 2 350 through initiator port P5 in switch 1 to target port P6 in switch 1.
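For illustration, the fragment of connectivity table 370 described above could be represented as a simple mapping from a source/destination pair to the initiator port, target port, and switch used; this is only a sketch, not the switch's actual table format.

```python
# Hypothetical rendering of the connectivity table fragment described above.
# Each entry: (source, destination) -> (initiator port, target port, switch)
connectivity_table = {
    ("host 1",          "storage array 1"): ("P0", "P5", "switch 1"),
    ("storage array 1", "storage array 2"): ("P5", "P6", "switch 1"),
}

def can_reach(src: str, dst: str) -> bool:
    """Return True if the table records a path from src to dst."""
    return (src, dst) in connectivity_table

print(can_reach("host 1", "storage array 1"))                       # True
print(connectivity_table[("storage array 1", "storage array 2")])   # ('P5', 'P6', 'switch 1')
```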
Refer now to the example embodiment of
Refer now to the example embodiment of
In many embodiments, there may be a registration process. In certain embodiments, a registration process may propagate a discovery message across a set of arrays containing one or more devices in one or more consistency groups to ensure that each array is connected either directly or indirectly, i.e. through other arrays, to all other arrays. In most embodiments, there may be a distributed execution of creation of a consistency group and cloning the devices of the arrays in the consistency group. In at least some embodiments, given a proper registration, a management host may contact each connected array with a command to trigger a consistency session on a consistency group, and each discovered array will in turn notify each connected array with devices in the consistency group to trigger the group. In most embodiments, once a consistency group has been triggered, the array will set a status about the devices in the triggered group that may be read by a management host. In many embodiments, by having each contacted array contact each other array it sees, the message to open a consistency group may reach the arrays in an exponential instead of linear manner. In certain embodiments, an array without devices in the consistency group may ignore the message other than sending it to each array to which it is connected.
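A minimal sketch of the status mechanism described above, assuming each array keeps a small set of flags that the management host can later poll; the flag names and helper function are hypothetical.

```python
from typing import Dict, Set

# Hypothetical per-array status flags, set when a message is received or an
# operation completes, and readable (polled) by the management host.
array_status: Dict[str, Set[str]] = {"array1": set(), "array2": set()}

def on_consistency_trigger(array_id: str, has_cg_devices: bool) -> None:
    """Array-side handling of a consistency-group trigger message."""
    if has_cg_devices:
        # Devices in the group: record receipt of the trigger so the host can read it.
        array_status[array_id].add("hold_received")
    # An array with no devices in the group only forwards the message (not shown)
    # and otherwise ignores it.

on_consistency_trigger("array1", has_cg_devices=True)
on_consistency_trigger("array2", has_cg_devices=False)
print(array_status)   # {'array1': {'hold_received'}, 'array2': set()}
```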
Refer now to the example embodiments of
Each contacted storage array executes steps 710 to 725. Note, each storage array may have multiple devices that are part of the consistency group. After a given period of time, management host 620 determines whether the setup is ok by sending an auto propagating command to each storage array to determine if it has marked itself as seen (step 735). In this embodiment, if each storage array has been contacted, then there is interconnection between each storage array and the group has been set up correctly. In most embodiments, the storage array will set a contacted status that may be polled by a management server. In most embodiments, if a storage array has not been discovered or contacted, then the contact status would not be set, and the management host determines that the storage arrays are not all connected to one another and that the setup has failed.
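The verification step described above can be sketched as a simple polling loop, assuming each array exposes a "contacted" flag; the deadline, interval, and names are hypothetical.

```python
import time
from typing import Dict, Set

def verify_setup(array_status: Dict[str, Set[str]],
                 deadline_s: float = 5.0,
                 poll_interval_s: float = 0.5) -> bool:
    """Poll each array's status until all are marked as contacted or the deadline passes."""
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        if all("contacted" in flags for flags in array_status.values()):
            return True                      # every array was reached: setup is ok
        time.sleep(poll_interval_s)
    return False                             # some array was never contacted: setup failed

# Hypothetical status snapshot after the auto-propagating registration message.
status = {"array1": {"contacted"}, "array2": {"contacted"}, "array3": set()}
print(verify_setup(status, deadline_s=1.0))  # False: array3 was never reached
```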
Refer now to the example embodiments of
Management host 805 sends a command to trigger the consistency group to all storage arrays to which it is connected (step 900). The command to trigger the consistency group propagates as each storage array sends the command to each storage array it sees. When a storage array receives the command (step 905) from either management host 805 or another storage array, the storage array sends a message that the command has been received (step 910) and that the storage array is holding write IOs to devices that are in the consistency group. Management host 805 receives an ok from each of the storage arrays (step 915). Management host 805 sends an auto propagated command to perform a clone to each storage array it sees, which in turn sends the command to each storage array it sees (step 925). After each storage array has completed the clone of the devices which are members of the consistency group, the storage array notifies the management host of successful completion by changing its status, and the management host periodically contacts each array for its status. After the management host receives the completion status, it sends a command to close the consistency group. Note, in this embodiment storage array 1035 has devices 1034 and 1044 which are not part of the consistency group, so they are not cloned. In the background, as the clone is being triggered, each storage array checks the amount of time it has been holding IOs (step 945). If the time IOs are being held comes close to the IO timeout time, then the storage array will fail the clone trigger by setting a fail status that may be read by management host 1000 and start processing IOs (step 955).
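The end-to-end flow in this paragraph (hold IO, set the clone point, poll status, and fail the clone if the hold approaches the IO timeout) can be sketched as the single-process simulation below. The timings, class, and method names are hypothetical, and the message propagation itself is omitted.

```python
import time
from dataclasses import dataclass, field
from typing import Set

IO_TIMEOUT_S = 2.0      # hypothetical SCSI IO timeout budget

@dataclass
class ArraySim:
    array_id: str
    cg_devices: Set[str] = field(default_factory=set)   # devices in the consistency group
    status: Set[str] = field(default_factory=set)        # flags polled by the management host
    hold_started: float = 0.0

    def receive_hold(self) -> None:
        """Hold write IO to consistency-group devices and report receipt."""
        self.hold_started = time.monotonic()
        self.status.add("holding")

    def receive_clone_activate(self) -> None:
        """Set the clone point for member devices, unless the hold has run too long."""
        if time.monotonic() - self.hold_started > IO_TIMEOUT_S * 0.8:
            self.status.add("clone_failed")     # fail the trigger and resume IO
            self.status.discard("holding")
            return
        self.status.add("clone_set")            # clone point marked for cg_devices only
        self.status.discard("holding")

arrays = [ArraySim("array1", {"dev1"}), ArraySim("array2", {"dev2"})]
for a in arrays:                # auto propagating hold message (propagation not shown)
    a.receive_hold()
for a in arrays:                # auto propagating clone-activate message
    a.receive_clone_activate()
clone_ok = all("clone_set" in a.status for a in arrays)   # management host polls statuses
print("clone set on all arrays:", clone_ok)
```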
Refer now to the example embodiments of
Refer now to the example embodiment of
The methods and apparatus of this invention may take the form, at least partially, of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, random-access or read-only memory, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as the computer of
The logic for carrying out the method may be embodied as part of the system described below, which is useful for carrying out a method described with reference to embodiments shown in, for example,
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present implementations are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.