Abstract
A system implements task contention reduction via policy-based selection. Tasks waiting to be performed are indexed in a task data structure that groups the tasks based on the resources to which the tasks pertain. Workers request batches of tasks to perform. A scan cycle includes building multiple batches of tasks by scanning the task data structure for a requesting worker. A policy (e.g., random or some other form of optimization) determines where the scan cycle starts in the data structure. Each batch of tasks is delivered to a worker along with a token that keeps the state of the scan cycle (e.g., where the scan cycle started, and where the next scan to build the next batch within the scan cycle begins). The worker returns the token with the next request for the next batch, and the next batch is built based on the token's state information.
Claims
1. A system, comprising:
a task service implemented via one or more hardware processors and memory, wherein the task service indexes tasks for a distributed system in a task data structure, wherein tasks are identified within the task data structure according to a task key value of a task key space;
a plurality of resources of the distributed system, wherein tasks are grouped within the task data structure based at least in part on one or more of the resources to which the tasks pertain;
a plurality of workers implemented via one or more hardware processors and memory, wherein a respective worker of the plurality of workers:
sends a request to the task service that begins a scan cycle through the task key space;
receives, from the task service, a batch of one or more tasks and a corresponding token, wherein the token includes:
a start location within the task key space that indicates where the scan cycle started, and
a next location in the task key space;
performs the one or more tasks; and
includes the token in a subsequent request to the task service for one or more additional tasks determined from the next location in the task key space;
wherein the task service, responsive to the request to begin the scan through the task key space, applies a policy to determine the start location within the task key space, wherein the task service returns to the worker the batch of one or more tasks and the corresponding token that includes the start location and the next location in the task key space.
2. The system of claim 1, wherein:
the plurality of resources is organized into a plurality of cells, each cell comprising one or more of the plurality of resources,
the task service groups tasks into shards of the task key space,
each shard corresponds to a different cell of the plurality of cells and includes tasks pertaining to resources within that cell; and
the start location is for a shard of the task key space corresponding to a cell of the plurality of cells selected by the task service according to the policy.
3. The system of
4. The system of
determines, based at least on the start location from the token, that a scan cycle for the worker has cycled through the task key space; and
applies the policy to determine a new start location within the task key space, the new start location for starting a new scan cycle to return batches of tasks to the worker.
5. A method, comprising:
performing, by a task service implemented via one or more computers:
indexing tasks in a task data structure, wherein tasks are indexed within the task data structure according to a task key value of a task key space, and wherein indexing tasks comprises locating the tasks within the task data structure based at least in part on resources to which the tasks pertain;
receiving a request from a worker to begin a scan cycle for tasks to perform;
applying a policy to determine a start location within the task key space for scanning for tasks;
scanning the task data structure starting from the start location for one or more tasks;
encoding the start location and a next location in the task key space into a token; and
returning the one or more tasks and token to the worker.
6. The method as recited in claim 5, wherein:
said locating tasks comprises grouping the tasks within the task data structure according to shards of the task key space, each shard corresponding to a different cell of a plurality of cells of resources to which the tasks pertain; and
the start location is for a shard of the task key space corresponding to a cell of the plurality of cells selected by the task service according to the policy.
7. The method as recited in
applying a sharding technique to the task key values to determine shards within the key space; and
storing the tasks within the task data structure according to the determined shards.
8. The method as recited in
receiving a subsequent request from the worker for tasks to perform, the subsequent request comprising the token that includes the start location and the next location in the task key space;
continuing the scanning of the task data structure starting from the next location for another one or more tasks;
determining a new next location for the token; and
returning the other one or more tasks and the token encoded with the new next location to the worker.
9. The method as recited in
receiving a subsequent request from the worker for tasks to perform, the subsequent request comprising the token that includes the start location and the next location in the task key space;
scanning the task data structure starting from the next location for another one or more tasks;
determining that the scan cycle is complete; and
returning the other one or more tasks and the token encoded with an indication of the completed scan cycle to the worker.
10. The method as recited in
receiving another subsequent request from the worker for tasks to perform, the subsequent request comprising a token that includes the indication of the completed scan cycle;
applying the policy to determine a new start location within the task key space for scanning for tasks;
scanning the task data structure starting from the new start location for one or more tasks;
encoding the token with the new start location and a next location in the task key space; and
returning the one or more tasks and the token to the worker.
11. The method as recited in
receiving a new task to be indexed into the key space;
determining a key value for the new task;
applying a sharding technique to the key value to determine a shard within the key space; and
storing an indication of the task within the task data structure according to the determined shard.
12. The method as recited in
13. The method as recited in
14. The method as recited in
15. The method as recited in
16. A non-transitory computer readable storage medium storing program instructions that, when executed by a computer, cause the computer to implement a worker to:
receive a batch of tasks and a token comprising a start location within a task key space and a next location in the task key space;
perform each task of the batch of tasks; and
send a subsequent request, including the token, to a task service to perform a scan for one or more additional tasks to perform, wherein the token sent in the subsequent request comprises:
the start location within the task key space, and
the next location in the task key space; and
wherein the subsequent request to the task service is a request for the task service to perform the scan to determine the one or more additional tasks from the next location in the task key space included in the token.
17. The non-transitory computer readable storage medium as in
18. The non-transitory computer readable storage medium as in
the tasks comprise create snapshot, delete snapshot, copy snapshot, view snapshot, or share snapshot; and
to perform each task of the batch of tasks, the program instructions cause the worker, for a respective data volume, to: create a snapshot, delete a snapshot, copy a snapshot, view a snapshot, or share a snapshot.
19. The non-transitory computer readable storage medium as in
20. The non-transitory computer readable storage medium as in
Description
Compute activities, such as time-consuming compute activities, are sometimes performed via asynchronous processing schemes. In one example, which is not necessarily limiting, processing (e.g., asynchronous processing) of compute activities in distributed systems can be performed using two approaches: pessimistic locking or optimistic concurrency control.
The pessimistic locking technique includes an entity placing an exclusive lock on a record to prevent other entities from manipulating the record while the entity updates it. Generally, such a technique locks out the other users for a relatively long period of time, slowing the overall system response and causing frustration on the consumer end.
On the other hand, the optimistic concurrency control technique generally allows multiple concurrent entities to access the record while the system keeps a copy of the initial record. For an update, the copy of the initial record is compared to the current record to verify whether changes have been made to the record since the copy was taken. A discrepancy between the copy and the actual record violates concurrency, and the update is disregarded; the update process is performed again, starting with a new copy of the record. This technique generally improves performance because the amount of locking time is reduced, thereby reducing the load; however, update failures sometimes reach an unacceptable level (e.g., under relatively higher volumes of update requests), again slowing the overall system response and causing frustration on the consumer end.
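As a rough, non-normative illustration of the optimistic pattern just described, the following single-threaded Python sketch retries an update when a record's version has changed since the copy was taken. The record store, the version field, and the `occ_update` helper are hypothetical names, and a real implementation would need an atomic compare-and-set rather than this check-then-write:

```python
# Illustrative single-threaded sketch of optimistic concurrency control:
# each record carries a version; an update is kept only if the version is
# unchanged since the copy was taken, otherwise the caller retries.

records = {"r1": {"value": 10, "version": 0}}

def occ_update(record_id, update_fn, max_retries=5):
    for _ in range(max_retries):
        snapshot = dict(records[record_id])            # copy of the initial record
        new_value = update_fn(snapshot["value"])
        current = records[record_id]
        if current["version"] == snapshot["version"]:  # no concurrent change detected
            records[record_id] = {"value": new_value,
                                  "version": current["version"] + 1}
            return True
        # discrepancy: discard the update and retry with a new copy
    return False

occ_update("r1", lambda v: v + 1)
```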
Various embodiments of task contention reduction via policy-based selection of tasks from a task data structure organized according to a task key space are disclosed. Generally, a task service receives and indexes tasks for a computer system in a task data structure. The tasks may be identified within the task data structure according to a task key value of a task key space. In at least some embodiments, tasks are grouped within the task data structure based on a task key space that organizes the resources to which the tasks pertain.
The tasks are then selected, grouped into batches, and sent to workers that perform the tasks. Some task selection techniques cause contention between workers because the same task may be selected more than once for distinct batches. Embodiments improve upon prior task selection techniques by reducing contention for task processing. Contention may be reduced by selecting tasks for a worker to process based upon techniques specified in a policy, instead of selecting tasks in a fixed manner, for example. A non-exhaustive list of example techniques for selecting a starting location in the task data structure that may be specified in the policy includes: selecting a starting point randomly, based on a heat map, or based on avoiding a particular range in the task data structure.
In embodiments, workers that are available to perform tasks request batches of tasks to perform. The batches may be generated by selecting tasks from a task data structure that indexes the tasks to be processed. In some systems (e.g., a system that implements an optimistic concurrency control technique), if entries for the batches for multiple workers are selected close together in time, there is potential for multiple workers to be processing batches that include the same task. The workers receive respective batches (some batches including a task that is also in another worker's batch) and begin processing the tasks. With such a technique, different workers may end up working on the same task or set of tasks at the same time, which is an inefficient use of resources.
In some embodiments, in an effort to reduce the likelihood of multiple workers processing the same task or tasks, a system may start the selection of tasks from the task data structure from other than a fixed starting point. The starting point for selecting the tasks may be determined using any number of different optimization techniques. Random or pseudorandom selection, a heat-map-based technique that utilizes a heat map corresponding to the workload at the plurality of cells, an availability-based technique that utilizes availability information for the plurality of cells, or a scheme that avoids a range of the task data structure (e.g., temporarily unavailable tasks, temporary fault tolerance) are just a few example techniques. The particular starting point may be determined based upon an optimization technique specified in a policy that is implemented by the task service, in embodiments.
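A minimal sketch of such policy-driven start selection follows; the policy names, the integer key space, and the `choose_start` helper are illustrative assumptions rather than anything prescribed by this disclosure:

```python
import random

def choose_start(keyspace_size, policy, heat_map=None, blocked_ranges=()):
    """Return a scan-cycle start index in [0, keyspace_size) per the policy."""
    if policy == "random":
        return random.randrange(keyspace_size)
    if policy == "avoid_ranges":
        # Re-draw until the candidate falls outside every blocked range
        # (e.g., temporarily unavailable shards or faulted key ranges).
        while True:
            candidate = random.randrange(keyspace_size)
            if not any(lo <= candidate < hi for lo, hi in blocked_ranges):
                return candidate
    if policy == "heat_map":
        # Prefer cooler (less loaded) regions: weight each location by the
        # inverse of its recorded heat.
        weights = [1.0 / (1.0 + heat_map.get(i, 0.0))
                   for i in range(keyspace_size)]
        return random.choices(range(keyspace_size), weights=weights, k=1)[0]
    raise ValueError(f"unknown policy: {policy}")

start = choose_start(64, "avoid_ranges", blocked_ranges=[(40, 48)])
```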
In embodiments, a set of requests from a worker may be associated with a scan cycle that includes several rounds of (a) a request for tasks from the worker and (b) a response from the task service that includes tasks for the worker to perform. The policy specifies a technique for determining where the scan cycle starts in the data structure, in embodiments. In embodiments, the specified technique is not applied between the rounds of a worker's scan cycle; it may be applied only at the initial start of the scan cycle. However, it is contemplated that, in some embodiments, a technique specified in the policy may be applied to determine where to start selecting tasks for one or more individual rounds of a worker's scan cycle.
Various components of the illustrated system are described in turn below.
Generally, the task service 110 receives new incoming tasks 105 and indexes the received tasks in a key space defined by a task data structure 102. For example, task loader 126 may obtain the incoming tasks 105, apply a technique to determine where to index the task in the task data structure 102, and store an indication of the task in the task data structure. A particular incoming task storage technique is described below (blocks 202-208).
In embodiments, tasks may be identified within the task data structure 102 according to a task key value of a task key space and the tasks are grouped within the task data structure based on the resources to which the tasks pertain. For example, the task key space may be organized into shards and the shards may correspond to resources (e.g., groups of resources or cells) to which the tasks pertain. The groups of resources in a cell may be related, in embodiments.
Task selection manager 124 responds to a request from a worker 132 for tasks by building batches of one or more tasks from the task data structure 102. Generally, task selection manager 124 determines a starting point in the task data structure 102 for selecting tasks based on a policy, scans from the starting point until the batch is complete, and then sends the batch to the requesting worker. A token that maintains state associated with the selection from the task data structure 102 is included along with the batch sent to the worker, in embodiments. An example selection process is described below (blocks 302-310).
Policy manager 112 may include an interface for configuring the policy. For instance, policy manager 112 may include a graphical user interface or application programming interface for making configuration changes to the policy. For example, an administrator may use the policy manager 112 to change the starting-selection technique specified in the policy from a random-based policy to an availability-based policy. Other policy management features may be implemented by the policy manager, such as policy creation and the addition, modification, or deletion of starting-point selection techniques. The policy manager 112 may maintain a heat map of the key space, availability information, and/or faults in the key space, in embodiments.
Workers process the batches of tasks. In some embodiments, due to the organization of the task data structure 102 into shards based upon the resources 140 that pertain to the tasks, the resources involved with a worker processing the tasks of the batch are limited to a set of resources (e.g., limited to a set of resources R1-R4 in cell I). Such organization may increase system efficiency, in embodiments.
Diamond 202 illustrates that the process waits for a new task to be received. If no new tasks are received [no], the illustrated process waits until a new task is received. If a new task is received [yes], a key value for the task is determined (block 204). For example, a key value may have been provided with the task, or a key value may be determined based on characteristics of the request or based upon metadata associated with the new task. Although other techniques are contemplated for some systems, in at least the illustrated embodiment, a sharding technique is applied to the key value to determine a shard within the key space (block 206). For example, the key value may be hashed to determine the shard. An identifier of the task is stored within the determined shard of the task data structure that implements the key space (block 208). For example, the task loader 126 stores a key that identifies a task in a shard of task key space.
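The storage flow of blocks 204-208 might be sketched as follows, assuming (for illustration only) an integer shard count, a SHA-256 hash as the sharding technique, and a simple in-memory mapping as the task data structure:

```python
import hashlib

NUM_SHARDS = 16
task_data_structure = {shard: [] for shard in range(NUM_SHARDS)}  # shard -> task ids

def shard_for_key(key_value: str) -> int:
    """Apply a sharding technique: hash the key value to pick a shard (block 206)."""
    digest = hashlib.sha256(key_value.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def index_task(task_id: str, key_value: str) -> None:
    """Store an identifier of the task within the determined shard (block 208)."""
    task_data_structure[shard_for_key(key_value)].append(task_id)

index_task("task-123", "volume-abc/snapshot-create")
```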
Requests are received from a worker (block 302), e.g., from one of the workers 132 described above.
In embodiments, the technique specified by the policy (e.g., random, heat-based, availability-based) does not take the relationship between the shards and cells into account when determining a start location for a task scan cycle. A task scan cycle may include a number of distinct selections of batches of tasks, in embodiments.
In embodiments, a heat-map-based technique may select a start location based on areas of the key space that are associated with high-workload shards. In another example, an availability-based technique may avoid ranges of the key space that are unavailable (e.g., due to faults in the key space) or temporarily unavailable (e.g., when a resource range is sharded and handled by small cells of resources). Other techniques are contemplated.
The task data structure is scanned from the determined start location for one or more tasks (block 308) to build a batch of one or more tasks, and the start location as well as a next location is encoded (block 310) in a token that corresponds to the batch. The next location identified in the token is the location in the task data structure at which the scan will continue when the next request in the scan cycle is received, in embodiments. In embodiments, the next location changes with every request from the worker until the scan cycle completes or until all tasks are selected. The one or more tasks and the token are returned to the worker.
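A simplified sketch of this batch-building step (blocks 308-310) is shown below; the dictionary-based task data structure and the `build_batch` helper are illustrative assumptions:

```python
def build_batch(pending, keyspace_size, start, batch_size):
    """pending: dict mapping key-space location -> task id (missing = no task)."""
    batch, location, scanned = [], start, 0
    while scanned < keyspace_size and len(batch) < batch_size:
        task = pending.get(location)
        if task is not None:
            batch.append(task)
        location = (location + 1) % keyspace_size  # wrap around the key space
        scanned += 1
    token = {"start": start, "next": location}     # scan-cycle state for the worker
    return batch, token

pending = {3: "t3", 5: "t5", 9: "t9"}
batch, token = build_batch(pending, keyspace_size=16, start=4, batch_size=2)
# batch == ["t5", "t9"]; token == {"start": 4, "next": 10}
```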
As noted above, in some embodiments the scanning may have two phases. For example, in a first phase, scanning proceeds in a direction moving away from the scanning start location.
If the scan direction is away from the start location of the scan cycle (404, away), the task data structure is scanned from that next location for one or more tasks (block 408). Block 410 illustrates that the system may identify a page wrap. If a page wrap is encountered (410, yes), the system continues scanning through the page wrap (e.g., the Worker B scan cycle described below).
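Under one plausible reading of this two-phase behavior, a full scan cycle visits locations moving away from the start to the end of the key space, wraps to the beginning, and stops just before the start location. A sketch under that assumption:

```python
def scan_cycle_order(keyspace_size, start):
    """Yield key-space locations in full scan-cycle order for one worker."""
    # Phase 1: scan away from the start location, up to the end of the page.
    for location in range(start, keyspace_size):
        yield location
    # Page wrap: phase 2 continues from the top of the key space and stops
    # just before the start location, completing the cycle.
    for location in range(0, start):
        yield location

print(list(scan_cycle_order(8, 5)))  # [5, 6, 7, 0, 1, 2, 3, 4]
```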
Each task is marked complete in the task data structure as the tasks are completed (block 506). The performance of the tasks continues (all tasks complete?—no) until all tasks are completed (all tasks complete?—yes) and a request that includes the token is sent to the task service for a new batch of one or more tasks (block 510).
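The worker-side loop of blocks 506-510 might be sketched as follows; the `StubTaskService` class and its method names are invented for illustration and stand in for the task service interface:

```python
class StubTaskService:
    """Hands out one batch, then an empty batch, so the loop terminates."""
    def __init__(self):
        self._batches = [(["t5", "t9"], {"start": 4, "next": 10}), ([], None)]

    def request_batch(self, worker_id, token):
        return self._batches.pop(0)            # blocks 302-310 on the service side

    def mark_complete(self, task):
        print(f"{task} marked complete")       # block 506

def run_worker(task_service, worker_id="worker-a"):
    batch, token = task_service.request_batch(worker_id, token=None)  # starts a cycle
    while batch:
        for task in batch:
            task_service.mark_complete(task)   # perform the task, then mark it done
        # Block 510: return the token so the scan resumes at token["next"].
        batch, token = task_service.request_batch(worker_id, token=token)

run_worker(StubTaskService())
```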
In embodiments, a token may include a starting point (e.g., the first resource id selected for a scan cycle based on the policy), a next resource (e.g., identifies the end of the prior scan, or the starting point for the next batch), and/or a direction of selection (e.g., from left to right or from right to left).
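One possible (purely illustrative) token encoding consistent with these fields packs the start location, next location, and scan direction into an opaque string that the worker returns unmodified:

```python
import base64
import json

def encode_token(start, next_location, direction="right"):
    """Pack scan-cycle state into an opaque, URL-safe token string."""
    payload = {"start": start, "next": next_location, "dir": direction}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_token(token):
    """Recover the scan-cycle state on the service side."""
    return json.loads(base64.urlsafe_b64decode(token.encode()).decode())

token = encode_token(start=4, next_location=10)
assert decode_token(token) == {"start": 4, "next": 10, "dir": "right"}
```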
The last column (shaded) of task data structure 102 illustrates that shards of the task data structure may be designated as high-heat shards, and the third-to-last column of the illustrated task data structure 102 illustrates that shards may be designated as temporarily unavailable.
Worker B scan cycle start and Worker B scan cycle continuation illustrate that scan cycles may start in different parts of the task data structure and continue around the data structure (e.g., wrapping around a page). The scan cycle may continue in the same row until the starting location is reached, completing the scan cycle for that worker. A task prior to the starting location will be the last resource to be scanned in the scan cycle, in embodiments. The starting location may act as a marker for the end of the scan cycle, in some embodiments.
A new starting location may then be selected (based on the policy) when a subsequent request for tasks is received from that worker. It is notable that task key spaces may be of any size, from small to very large, without restriction, in some embodiments. Example key spaces may be large, including millions of tasks, in embodiments.
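Putting the completion behavior together (cf. the method steps of claims 9 and 10), the service-side handling of a returned token might be sketched as follows, with a random policy standing in for whatever technique the policy specifies; the field names are illustrative:

```python
import random

def resolve_start(token, keyspace_size):
    """Decide where scanning should (re)start for an incoming worker request."""
    if token is None or token.get("cycle_complete"):
        # New cycle (or prior cycle flagged complete): apply the policy again.
        new_start = random.randrange(keyspace_size)
        return {"start": new_start, "next": new_start, "cycle_complete": False}
    if token["next"] == token["start"]:
        # The scan has cycled all the way through the key space back to its
        # start location: flag the cycle complete for the worker's next request.
        return dict(token, cycle_complete=True)
    return token  # mid-cycle: resume scanning from token["next"]

print(resolve_start({"start": 4, "next": 4}, 16))  # cycle complete -> flagged
```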
Illustrative System
Any of various computer systems may be configured to implement processes associated with a system that implements task contention reduction via policy-based selection. For example, various of the illustrated embodiments may include one or more computer systems 800 such as the one described below, or one or more components thereof.
In the illustrated embodiment, computer system 800 includes one or more processors 810 coupled to a system memory 820 via an input/output (I/O) interface 830. Computer system 800 further includes a network interface 840 coupled to I/O interface 830. In some embodiments, computer system 800 may be illustrative of servers implementing service provider logic, enterprise logic or downloadable applications, while in other embodiments servers may include more, fewer, or different elements than computer system 800.
In various embodiments, computer system 800 may be a uniprocessor system including one processor 810, or a multiprocessor system including several processors 810 (e.g., two, four, eight, or another suitable number). Processors 810 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 810 may be embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 810 may commonly, but not necessarily, implement the same ISA.
System memory 820 may be configured to store instructions and data accessible by processor 810. In various embodiments, system memory 820 may be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), non-volatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those methods and techniques described above for the downloadable software or provider network are shown stored within system memory 820 as program instructions 824. In some embodiments, system memory 820 may include data 825 which may be configured as described herein.
In one embodiment, I/O interface 830 may be configured to coordinate I/O traffic between processor 810, system memory 820 and any peripheral devices in the system, including through network interface 840 or other peripheral interfaces. In some embodiments, I/O interface 830 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 820) into a format suitable for use by another component (e.g., processor 810). In some embodiments, I/O interface 830 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 830 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 830, such as an interface to system memory 820, may be incorporated directly into processor 810.
Network interface 840 may be configured to allow data to be exchanged between computer system 800 and other devices attached to a network, such as between a client device (e.g., 780) and other computer systems, or among hosts (e.g., hosts of a service provider network 710), for example. In particular, network interface 840 may be configured to allow communication between computer system 800 and/or various other devices 860 (e.g., I/O devices). Other devices 860 may include scanning devices, display devices, input devices and/or other communication devices, as described herein. Network interface 840 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 840 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 840 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
In some embodiments, system memory 820 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system 800 via I/O interface 830. A computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system 800 as system memory 820 or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 840.
In some embodiments, I/O devices may be relatively simple or “thin” client devices. For example, I/O devices may be configured as dumb terminals with display, data entry and communications capabilities, but otherwise little computational functionality. However, in some embodiments, I/O devices may be computer systems configured similarly to computer system 800, including one or more processors 810 and various other devices (though in some embodiments, a computer system 800 implementing an I/O device 850 may have somewhat different devices, or different classes of devices).
In various embodiments, I/O devices (e.g., scanners or display devices and other communication devices) may include, but are not limited to, one or more of: handheld devices, devices worn by or attached to a person, and devices integrated into or mounted on any mobile or fixed equipment, according to various embodiments. I/O devices may further include, but are not limited to, one or more of: personal computer systems, desktop computers, rack-mounted computers, laptop or notebook computers, workstations, network computers, “dumb” terminals (i.e., computer terminals with little or no integrated processing ability), Personal Digital Assistants (PDAs), mobile phones, or other handheld devices, proprietary devices, printers, or any other devices suitable to communicate with the computer system 800. In general, an I/O device (e.g., cursor control device, keyboard, or display(s)) may be any device that can communicate with elements of computing system 800.
The various methods as illustrated in the figures and described herein represent illustrative embodiments of methods. The methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. For example, in one embodiment, the methods may be implemented by a computer system that includes a processor executing program instructions stored on a computer-readable storage medium coupled to the processor. The program instructions may be configured to implement the functionality described herein (e.g., the functionality of the task service 110, worker(s) 132a-n, resources or components of the service provider network 710, other various services, data stores, devices and/or other communication devices, etc.).
Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated.
Various components may be described as “configured to” perform a task or tasks. In such contexts, “configured to” is a broad recitation generally meaning “having structure that” performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently performing that task (e.g., a computer system may be configured to perform operations even when the operations are not currently being performed). In some contexts, “configured to” may be a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently on. In general, the circuitry that forms the structure corresponding to “configured to” may include hardware circuits.
Various components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph six, interpretation for that component.
“Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
Inventors: Sandeep Kumar; Anirudha Singh Bhadoriya