A technique for expediting the unloading of an operating system kernel module that executes read-copy update (rcu) callback processing code in a computing system having one or more processors. According to embodiments of the disclosed technique, an rcu callback is enqueued so that it can be processed by the kernel module's callback processing code following completion of a grace period in which each of the one or more processors has passed through a quiescent state. An expediting operation is performed to expedite processing of the rcu callback. The rcu callback is then processed and the kernel module is unloaded.
1. A system, comprising:
one or more processors;
a memory coupled to said one or more processors, said memory including a computer useable medium tangibly embodying at least one program of instructions executable by said one or more processors to perform operations for expediting unloading of an operating system kernel module that executes read-copy update (rcu) callback processing code, said operations comprising:
performing regular periodic grace period detection processing to detect the end of grace periods in which each of said one or more processors has passed through a quiescent state;
wherein said expediting unloading of the operating system kernel module further comprises:
enqueuing an rcu callback to be processed by said kernel module's callback processing code following completion of a grace period;
performing an expediting operation that forces early completion of said grace period after it commences or expedites processing of said rcu callback when said grace period completes;
processing said rcu callback;
unloading said kernel module;
wherein said system is one of a uniprocessor system that runs a non-preemptible operating system kernel, a multiprocessor system that runs a non-preemptible operating system kernel, or a multiprocessor system that runs a preemptible operating system kernel;
wherein if said system is a uniprocessor system that runs a non-preemptible operating system kernel, said callback processing code runs in a deferred non-process context of said operating system kernel, and said expediting operation comprises invoking said deferred non-process context to force said callback processing code to execute;
wherein if said system is a multiprocessor system that runs a non-preemptible operating system kernel, said expediting operation comprises forcing each processor to note a new grace period and forcing a quiescent state on each processor, said forcing a quiescent state including implementing a rescheduling operation on each processor, and wherein said expediting operation is repeated until said rcu callback is processed; and
wherein if said system is a multiprocessor system that runs a preemptible operating system kernel, said expediting operation comprises forcing each processor to note a new grace period and forcing a quiescent state on each processor, said forcing a quiescent state including implementing a priority boost for blocked reader tasks that are preventing completion of said grace period.
3. A computer program product, comprising:
one or more non-transitory machine-useable storage media;
program instructions provided by said one or more media for programming a data processing platform to perform operations for expediting unloading of an operating system kernel module that executes read-copy update (rcu) callback processing code, said operations comprising:
performing regular periodic grace period detection processing to detect the end of grace periods in which each of said one or more processors has passed through a quiescent state;
wherein said expediting unloading of the operating system kernel module further comprises:
enqueuing an rcu callback to be processed by said kernel module's callback processing code following completion of a grace period;
performing an expediting operation that forces early completion of said grace period after it commences or expedites processing of said rcu callback when said grace period completes;
processing said rcu callback;
unloading said kernel module;
wherein said data processing platform is one of a uniprocessor system that runs a non-preemptible operating system kernel, a multiprocessor system that runs a non-preemptible operating system kernel, or a multiprocessor system that runs a preemptible operating system kernel;
wherein if said data processing platform is a uniprocessor system that runs a non-preemptible operating system kernel, said callback processing code runs in a deferred non-process context of said operating system kernel, and said expediting operation comprises invoking said deferred non-process context to force said callback processing code to execute;
wherein if said data processing platform is a multiprocessor system that runs a non-preemptible operating system kernel, said expediting operation comprises forcing each processor to note a new grace period and forcing a quiescent state on each processor, said forcing a quiescent state including implementing a rescheduling operation on each processor, and wherein said expediting operation is repeated until said rcu callback is processed; and
wherein if said data processing platform is a multiprocessor system that runs a preemptible operating system kernel, said expediting operation comprises forcing each processor to note a new grace period and forcing a quiescent state on each processor, said forcing a quiescent state including implementing a priority boost for blocked reader tasks that are preventing completion of said grace period.
2. A system in accordance with
4. A computer program product in accordance with
1. Field
The present disclosure relates to computer systems and methods in which data resources are shared among data consumers while preserving data integrity and consistency relative to each consumer. More particularly, the disclosure concerns an implementation of a mutual exclusion mechanism known as “read-copy update” in a computing environment wherein loadable modules contain code that is used to process read-copy update callbacks.
2. Description of the Prior Art
By way of background, read-copy update (also known as “RCU”) is a mutual exclusion technique that permits shared data to be accessed for reading without the use of locks, writes to shared memory, memory barriers, atomic instructions, or other computationally expensive synchronization mechanisms, while still permitting the data to be updated (modify, delete, insert, etc.) concurrently. The technique is well suited to both uniprocessor and multiprocessor computing environments wherein the number of read operations (readers) accessing a shared data set is large in comparison to the number of update operations (updaters), and wherein the overhead cost of employing other mutual exclusion techniques (such as locks) for each read operation would be high. By way of example, a network routing table that is updated at most once every few minutes but searched many thousands of times per second is a case where read-side lock acquisition would be quite burdensome.
The read-copy update technique implements data updates in two phases. In the first (initial update) phase, the actual data update is carried out in a manner that temporarily preserves two views of the data being updated. One view is the old (pre-update) data state that is maintained for the benefit of read operations that may have been referencing the data concurrently with the update. The other view is the new (post-update) data state that is seen by operations that access the data following the update. In the second (deferred update) phase, the old data state is removed following a “grace period” that is long enough to ensure that the first group of read operations will no longer maintain references to the pre-update data. The second-phase update operation typically comprises freeing a stale data element to reclaim its memory. In certain RCU implementations, the second-phase update operation may comprise something else, such as changing an operational state according to the first-phase update.
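The two-phase pattern can be illustrated with a minimal user-space sketch. The list layout and variable names here are illustrative assumptions, not the kernel's implementation, and the grace period is only commented, not enforced:

#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal singly linked list node; "data" stands in for a guarded element. */
struct node {
	struct node *next;
	int data;
};

int main(void)
{
	/* Build A -> B -> C. */
	struct node c = { NULL, 3 };
	struct node b = { &c, 2 };
	struct node a = { &b, 1 };

	/* Phase 1 (initial update): create an updated copy of B and swing
	 * A's pointer to it.  A reader already holding &b still sees the
	 * old view (b -> c); new readers starting from &a see the new view. */
	struct node *b_new = malloc(sizeof(*b_new));
	memcpy(b_new, &b, sizeof(*b_new));
	b_new->data = 20;
	a.next = b_new;		/* in the kernel: rcu_assign_pointer() */

	/* Both views coexist until the grace period expires. */
	assert(b.next == &c);		/* pre-update view still intact */
	assert(a.next->data == 20);	/* post-update view for new readers */

	/* Phase 2 (deferred update): after a grace period, no reader can
	 * still reference the old B, so it may be reclaimed (a no-op here
	 * because b lives on the stack). */
	free(b_new);	/* demo cleanup only */
	return 0;
}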
It is assumed that the data element list of
At some subsequent time following the update, r1 will have continued its traversal of the linked list and moved its reference off of B. In addition, there will be a time at which no other reader process is entitled to access B. It is at this point, representing an expiration of the grace period referred to above, that u1 can free B, as shown in
In the context of the read-copy update mechanism, a grace period represents the point at which all running tasks (e.g., processes, threads or other work) having access to a data element guarded by read-copy update have passed through a “quiescent state” in which they can no longer maintain references to the data element, assert locks thereon, or make any assumptions about data element state. By convention, for operating system kernel code paths, a context switch, an idle loop, and user mode execution all represent quiescent states for any given CPU running non-preemptible code (as can other operations that will not be listed here). The reason for this is that a non-preemptible kernel will always complete a particular operation (e.g., servicing a system call while running in process context) prior to a context switch. In preemptible operating system kernels, additional steps are needed to account for readers that were preempted within their RCU read-side critical sections. In current RCU implementations designed for the Linux® kernel, a blocked reader task list is maintained to track such readers. A grace period will only end when the blocked task list indicates that it is safe to do so because all blocked readers associated with the grace period have exited their RCU read-side critical sections. Other techniques for tracking blocked readers may also be used, but tend to require more read-side overhead than the current blocked task list method.
In
Grace periods may be synchronous or asynchronous. According to the synchronous technique, an updater performs the first phase update operation, invokes an RCU primitive such as synchronize_rcu( ) to advise when all current RCU readers have completed their RCU critical sections and the grace period has ended, blocks (waits) until the grace period has completed, and then implements the second phase update operation, such as by removing stale data. According to the asynchronous technique, an updater performs the first phase update operation, specifies the second phase update operation as a callback using an RCU primitive such as call_rcu( ), then resumes other processing with the knowledge that the callback will eventually be processed at the end of a grace period. Advantageously, callbacks requested by one or more updaters can be batched (e.g., on callback lists) and processed as a group at the end of an asynchronous grace period. This allows the grace period overhead to be amortized over plural deferred update operations.
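The two usage patterns can be contrasted with a user-space sketch. synchronize_rcu( ) and call_rcu( ) are real kernel primitives, but the single-threaded stub behavior below is an assumption made purely so the sketch is self-contained (with no concurrent readers, a grace period elapses immediately):

#include <assert.h>
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

/* Single-threaded stand-ins for the kernel primitives. */
static struct rcu_head *pending;	/* one-deep callback "list" */
static void synchronize_rcu(void) { /* would block until readers finish */ }
static void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *))
{
	head->func = func;
	pending = head;			/* queued for end of grace period */
}
static void run_callbacks(void)		/* models end-of-grace-period batching */
{
	if (pending) {
		pending->func(pending);
		pending = NULL;
	}
}

static int freed;
static void reclaim(struct rcu_head *head) { freed = 1; }

int main(void)
{
	struct rcu_head h = { NULL, NULL };

	/* Synchronous: block until the grace period ends, then reclaim inline. */
	synchronize_rcu();
	freed = 1;			/* second-phase update performed directly */
	assert(freed == 1);

	/* Asynchronous: post a callback and continue; reclamation is deferred
	 * until the callback batch runs after the grace period. */
	freed = 0;
	call_rcu(&h, reclaim);
	assert(freed == 0);		/* updater resumed; not yet reclaimed */
	run_callbacks();
	assert(freed == 1);		/* reclaimed after the grace period */
	return 0;
}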
Modern operating systems, including current versions of the Linux® kernel, use loadable modules to implement device drivers, file systems and other software. Loadable modules allow software functionality to be installed on an as-needed basis and then removed when the software is no longer required. This reduces the memory footprint of the base kernel. In operating systems that implement read-copy update with asynchronous grace period detection, some or all of the callback function code that processes a callback following the end of a grace period may be located within a loadable module. If the module containing the callback function code is unloaded before a pending callback that requires such code can be invoked, problems will arise when an attempt is made to implement the callback function because its code is no longer part of the running kernel.
A response to this scenario was the development of the “rcu_barrier( )” primitive, which can be called by a module's exit code during module unloading. The rcu_barrier( ) primitive waits for the end of the current grace period and for all RCU callbacks associated with the grace period to be invoked. When using the rcu_barrier( ) primitive, the sequence of operations performed by a kernel module's exit code is to (1) prevent any new RCU callbacks from being posted, (2) execute rcu_barrier( ), and (3) allow the module to be unloaded. The rcu_barrier( ) primitive is for use by process context code. For the non-preemptible uniprocessor version of RCU known as TINY_RCU, the rcu_barrier( ) primitive is set forth at lines 41-44 of the Linux® version 3.1 source code file named Linux/include/linux/rcutiny.h. This primitive is a wrapper function for a helper function called “rcu_barrier_sched( )”, which is set forth at lines 298-309 of the Linux® version 3.1 source code file named Linux/kernel/rcutiny.c. For the preemptible uniprocessor version of RCU known as TINY_PREEMPTIBLE_RCU, the rcu_barrier( ) primitive is set forth at lines 700-711 of the Linux® version 3.1 source code file named Linux/kernel/rcutiny_plugin.h. For the hierarchical multiprocessor versions of RCU known as TREE_RCU and TREE_PREEMPTIBLE_RCU, the rcu_barrier( ) primitive is set forth at lines 854-857 of the Linux® version 3.1 source code file named Linux/kernel/rcutree_plugin.h. This is a wrapper function that calls a helper function named _rcu_barrier( ), which may be found at lines 1778-1807 of the Linux® version 3.1 source code file named Linux/kernel/rcutree.c.
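The three-step exit sequence can be sketched in user space. The gate flag, list layout, and the rcu_barrier_stub( ) that simply drains a local list are all assumptions for illustration; the real rcu_barrier( ) must additionally wait out a grace period:

#include <assert.h>
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

static struct rcu_head *cb_list;	/* callbacks awaiting a grace period */
static int module_active = 1;		/* step (1): gate for new callbacks */

static int post_callback(struct rcu_head *h, void (*f)(struct rcu_head *))
{
	if (!module_active)
		return -1;		/* module exiting: refuse new callbacks */
	h->func = f;
	h->next = cb_list;
	cb_list = h;
	return 0;
}

/* Stand-in for rcu_barrier(): invoke every previously posted callback. */
static void rcu_barrier_stub(void)
{
	while (cb_list) {
		struct rcu_head *h = cb_list;
		cb_list = h->next;
		h->func(h);
	}
}

static int invoked;
static void module_cb(struct rcu_head *h) { invoked++; }

int main(void)
{
	struct rcu_head h1, h2, h3;
	post_callback(&h1, module_cb);
	post_callback(&h2, module_cb);

	module_active = 0;				/* (1) no new callbacks */
	assert(post_callback(&h3, module_cb) < 0);
	rcu_barrier_stub();				/* (2) wait for all callbacks */
	assert(invoked == 2 && cb_list == NULL);	/* (3) now safe to unload */
	return 0;
}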
In many instances, it is desirable to expedite module unloading so that the module's kernel memory can be reclaimed for other uses. Unfortunately, the rcu_barrier( ) primitive can delay module unloading due to the latency associated with waiting for the end of a current RCU grace period and for all prior RCU callbacks to be invoked. The present disclosure presents a technique for improving this situation by speeding up RCU grace period detection and callback processing operations during module unloading.
A method, system and computer program product are provided for expediting the unloading of an operating system kernel module that executes read-copy update (RCU) callback processing code in a computing system having one or more processors. According to embodiments of the disclosed technique, an RCU callback is enqueued so that it can be processed by the kernel module's callback processing code following completion of a grace period in which each of the one or more processors has passed through a quiescent state. An expediting operation is performed to expedite processing of the RCU callback. The RCU callback is then processed and the kernel module is unloaded.
In an example embodiment, the computing system is a uniprocessor system that runs a non-preemptible operating system kernel, and the callback processing code runs in a deferred non-process context of the operating system kernel. In that case, the expediting operation may comprise invoking the deferred non-process context to force the callback processing code to execute.
In another example embodiment, the computing system is a uniprocessor system that runs a preemptible operating system kernel. In that case, the expediting operation may comprise implementing a priority boost for blocked reader tasks that are preventing completion of the grace period.
In another example embodiment, the computing system is a multiprocessor system that runs a non-preemptible operating system kernel. In that case, the expediting operation may comprise forcing each processor to note a new grace period and forcing a quiescent state on each processor, such as by implementing a rescheduling operation on each processor. The expediting operation may be repeated as necessary until the RCU callback is processed.
In another example embodiment, the computing system is a multiprocessor system that runs a preemptible operating system kernel. In that case, the expediting operation may comprise forcing each processor to note a new grace period and forcing a quiescent state on each processor by implementing a priority boost for blocked reader tasks that are preventing completion of the grace period. The expediting operation may be repeated as necessary until the RCU callback is processed.
The foregoing and other features and advantages will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying Drawings, in which:
Turning now to the figures, wherein like reference numerals represent like elements in all of the several views,
In each of
An update operation (updater) 18 may periodically execute within a process, thread, or other execution context (hereinafter “task”) on any processor 4 of
During run time, an updater 18 will occasionally perform an update to one of the shared data elements 16. In accordance with the philosophy of RCU, a first-phase update is performed in a manner that temporarily preserves a pre-update view of the shared data element for the benefit of readers 21 that may be concurrently referencing the shared data element during the update operation. Following the first-phase update, the updater 18 may register a callback with the RCU subsystem 20 for the deferred destruction of the pre-update view following a grace period (second-phase update). As described in the “Background” section above, this is known as asynchronous grace period processing.
The RCU subsystem 20 may handle both asynchronous and synchronous grace periods. Each type of grace period processing entails starting new grace periods and detecting the end of old grace periods so that the RCU subsystem 20 knows when it is safe to free stale data (or take other actions). Asynchronous grace period processing further entails the management of callback lists that accumulate callbacks until they are ripe for batch processing at the end of a given grace period. As part of this batch processing, it is assumed for purposes of the present disclosure that at least some of the code that processes RCU callbacks is implemented by a loadable operating system kernel module. It will be appreciated that the kernel module should not be unloaded unless and until it has no further callback processing work remaining to be done.
Grace period processing operations may be performed by periodically running the RCU subsystem 20 on the lone processor 4 in
In current versions of the Linux® kernel, there are four main variants of RCU designed for different processor and operating system configurations. Two uniprocessor variants called TINY_RCU and TINY_PREEMPT_RCU may be used with the uniprocessor system 2 of FIG. 4. TINY_RCU is for non-preemptible kernels and TINY_PREEMPT_RCU is for preemptible kernels. Two multiprocessor variants called TREE_RCU and TREE_PREEMPT_RCU may be used with the multiprocessor system 2A of
TABLE 1

RCU VARIANT                      LINUX ® 3.1 SOURCE CODE FILES
TINY_RCU/TINY_PREEMPT_RCU        Linux/kernel/rcutiny.c
                                 Linux/include/linux/rcutiny.h
                                 Linux/kernel/rcutiny_plugin.h
TREE_RCU/TREE_PREEMPT_RCU        Linux/kernel/rcutree.c
                                 Linux/kernel/rcutree.h
                                 Linux/kernel/rcutree_plugin.h
With continuing reference to
Turning now to
struct rcu_head {
struct rcu_head *next;
void (*func)(struct rcu_head *head);
};
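Because the rcu_head is typically embedded in the data structure it protects, the callback must recover the enclosing structure from the rcu_head pointer. A user-space sketch of that idiom follows; the container_of macro mirrors the kernel's, while struct foo and foo_reclaim( ) are hypothetical names used only for illustration:

#include <assert.h>
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

/* Same idiom as the kernel's container_of(): map a pointer to an embedded
 * member back to the structure containing it. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct foo {			/* hypothetical RCU-protected element */
	int key;
	struct rcu_head rcu;	/* embedded callback header */
};

static int seen_key;
static void foo_reclaim(struct rcu_head *head)
{
	struct foo *fp = container_of(head, struct foo, rcu);
	seen_key = fp->key;	/* real code would typically kfree(fp) here */
}

int main(void)
{
	struct foo f = { .key = 42 };
	/* What callback processing would do at the end of a grace period: */
	foo_reclaim(&f.rcu);
	assert(seen_key == 42);
	return 0;
}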
Each processor's RCU callback list 32A is accessed using list pointers that are maintained as part of the grace period/callback processing information 34 (See
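One common way to maintain such a list (assumed here for illustration; it resembles the donetail/curtail scheme in the rcutiny implementation) is a single singly linked list partitioned by tail pointers, so that completing a grace period advances a pointer rather than splicing nodes:

#include <assert.h>
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

/* One list, partitioned by two tail pointers:
 *   list ... *donetail     : callbacks whose grace period already ended
 *   *donetail ... *curtail : callbacks waiting on the current grace period */
static struct rcu_head *list;
static struct rcu_head **donetail = &list;
static struct rcu_head **curtail = &list;

static void enqueue(struct rcu_head *h, void (*f)(struct rcu_head *))
{
	h->func = f;
	h->next = NULL;
	*curtail = h;
	curtail = &h->next;
}

/* End of a grace period: everything queued so far becomes invocable.
 * This is pure pointer motion -- no list node is visited. */
static void grace_period_end(void) { donetail = curtail; }

static int invoked;
static void cb(struct rcu_head *h) { invoked++; }

static void invoke_done(void)	/* batch-process the "done" segment */
{
	struct rcu_head *h;
	if (donetail == &list)
		return;			/* no callbacks are ripe yet */
	h = list;
	list = *donetail;		/* detach done segment from the rest */
	*donetail = NULL;
	if (curtail == donetail)
		curtail = &list;
	donetail = &list;
	while (h) {
		struct rcu_head *next = h->next;
		h->func(h);
		h = next;
	}
}

int main(void)
{
	struct rcu_head h1, h2, h3;
	enqueue(&h1, cb);
	enqueue(&h2, cb);
	grace_period_end();
	enqueue(&h3, cb);	/* arrived after the grace period ended */
	invoke_done();
	assert(invoked == 2);	/* h3 must wait for the next grace period */
	grace_period_end();
	invoke_done();
	assert(invoked == 3);
	return 0;
}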
Turning now to
The rcu_node structures 34D are arranged in a tree hierarchy that is embedded in a linear array in the rcu_state structure 34E. The rcu_node hierarchy comprises one or more leaf rcu_node structures, zero or more levels of internal rcu_node structures, and a top-level root rcu_node structure (if there is more than one leaf rcu_node structure). Each leaf rcu_node structure 34D is assigned to some number of rcu_data structures 34C. Each internal rcu_node structure 34D is then assigned to some number of lower level rcu_node structures, and so on until the root rcu_node structure is reached. The number of rcu_node structures 34D depends on the number of processors in the system. Very small multiprocessor systems may require only a single rcu_node structure 34D whereas very large systems may require numerous rcu_node structures arranged in a multi-level hierarchy.
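The scaling of such a hierarchy can be sketched numerically. The fanout value and the helper functions below are assumptions for illustration only, not the kernel's configured constants:

#include <assert.h>

/* Levels of rcu_node structures needed to cover ncpus when each node
 * fans out to at most `fanout` children. */
static int levels_needed(int ncpus, int fanout)
{
	int levels = 1;
	long capacity = fanout;
	while (capacity < ncpus) {
		capacity *= fanout;
		levels++;
	}
	return levels;
}

/* Total rcu_node structures across all levels of the hierarchy. */
static int nodes_needed(int ncpus, int fanout)
{
	int total = 0;
	int n = ncpus;
	do {
		n = (n + fanout - 1) / fanout;	/* nodes at this level */
		total += n;
	} while (n > 1);
	return total;
}

int main(void)
{
	/* A small system fits under a single rcu_node... */
	assert(levels_needed(16, 16) == 1);
	assert(nodes_needed(16, 16) == 1);
	/* ...while a large one needs a multi-level hierarchy:
	 * 64 leaves + 4 internal nodes + 1 root. */
	assert(levels_needed(1024, 16) == 3);
	assert(nodes_needed(1024, 16) == 64 + 4 + 1);
	return 0;
}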
Each rcu_node structure comprises a set of grace period/quiescent state counters and flags 34D-1, and a set of blocked task tracking information 34D-2. The grace period/quiescent state counters and flags 34D-1 track grace period and quiescent state information at the node level. The blocked task tracking information 34D-2 is used by the TREE_PREEMPT_RCU implementation to track reader tasks that were preempted inside their RCU read-side critical sections. The rcu_state structure 34E also contains a set of grace period/quiescent state counters and flags 34E-1 that track grace period and quiescent state information at a global level.
Turning now to
As further shown in
The expedited grace period component 44B of the RCU updater API 44 may be implemented using the synchronize_rcu_expedited( ) function found in each of the above-referenced RCU variants for implementing expedited synchronous grace periods. The expedited grace period component 44B is invoked by updaters 18 to request an expedited grace period following a first-phase update to a shared data element 16. The updater 18 blocks while the expedited grace period is in progress, then performs second-phase update processing to free stale data (or perform other actions). In the non-preemptible TINY_RCU and TREE_RCU implementations, the expedited grace period component 44B performs a scheduling action that forces the invoking processor to pass through a quiescent state. A scheduling action is also performed in TREE_PREEMPT_RCU to force all currently executing RCU readers 21 onto the blocked task list (see 34D-2 of
The RCU grace period detection and callback processing functions 46 include a set of standard RCU grace period detection/callback processing components 46A, as well as a new component 46B, referred to as “RCU barrier expedited,” that may be used for expediting the unloading of kernel modules that contain RCU callback processing code. The standard components 46A implement conventional RCU grace period detection and callback processing operations. The details of these operations will not be described herein insofar as they are well known to persons of ordinary skill in the art who are familiar with the RCU source code files identified in Table 1 above. The basic approach is to have the operating system task scheduler and timer tick (scheduling clock interrupt) functions drive the RCU subsystem state changes (by respectively initiating such processing via calls to the RCU functions “rcu_note_context_switch( )” and “rcu_check_callbacks( )”). Once invoked by the task scheduler or the scheduling clock interrupt handler, the grace period detection and callback processing operations performed by the standard components 46A differ between RCU implementations. In TINY_RCU, the standard components 46A implicitly note a quiescent state, check for pending callbacks and invoke deferred callback processing if any callbacks are found (using softirq context or a kernel thread). In TINY_PREEMPT_RCU, TREE_RCU and TREE_PREEMPT_RCU, the standard components 46A perform far more complex processing involving the data structures shown in
Turning now to the RCU barrier expedited component 46B, different versions thereof are provided for each of the above-referenced RCU variants. Each version may be implemented by modifying the existing RCU barrier primitive used for the corresponding RCU variant. The existing RCU barrier primitives are of two basic types, one type being used for the uniprocessor TINY_RCU and TINY_PREEMPT_RCU variants, and the other type being used for the multiprocessor TREE_RCU and TREE_PREEMPT_RCU variants. The source code function names and file locations of these existing RCU barrier primitives are set forth in the “Background” section above.
Turning now to
Block 54 explicitly forces callback processing to be performed by the processor 4. For example, if RCU callbacks are processed in a deferred manner by a Linux® kernel thread (kthread), block 54 may implement the TINY_RCU function called “invoke_rcu_kthread( )” in order to wake up the kthread. In block 56, the RCU barrier expedited component 46B sleeps until the special RCU callback 32A that was enqueued in block 52 is processed and signals the RCU barrier expedited component 46B to wake up. Insofar as the special RCU callback 32A represents the last callback on the processor's RCU callback list 32, the RCU barrier expedited component 46B will return and the kernel module that invoked this component may be safely unloaded without leaving behind any unprocessed callbacks.
Blocks 50, 52 and 56 of
Code Listing 1

void rcu_barrier_expedited(void)
{
	struct rcu_synchronize rcu;

	init_rcu_head_on_stack(&rcu.head);
	init_completion(&rcu.completion);
	/* Will wake me after RCU finished. */
	call_rcu_sched(&rcu.head, wakeme_after_rcu);
	rcu_sched_qs();
	/* Wait for it. */
	wait_for_completion(&rcu.completion);
	destroy_rcu_head_on_stack(&rcu.head);
}
Turning now to
Blocks 50, 52 and 56 of
Code Listing 2

void rcu_barrier_expedited(void)
{
	struct rcu_synchronize rcu;

	init_rcu_head_on_stack(&rcu.head);
	init_completion(&rcu.completion);
	/* Will wake me after RCU finished. */
	call_rcu(&rcu.head, wakeme_after_rcu);
	synchronize_rcu_expedited();
	/* Wait for it. */
	wait_for_completion(&rcu.completion);
	destroy_rcu_head_on_stack(&rcu.head);
}
Turning now to
Once the special RCU callbacks 32A are enqueued, blocks 66 and 68 are used to expedite callback invocation. Block 66 forces each processor 4 to take note of a new grace period. This can be accomplished by calling the “invoke_rcu_core” function found at lines 1501-1504 of the Linux® version 3.1 source code file Linux/kernel/rcutree.c. This function wakes up the RCU kthread associated with each processor 4, which in turn causes the processor to acknowledge the new grace period. Block 68 then forces a quiescent state on each processor 4. This may be accomplished by calling the TREE_RCU version of the force_quiescent_state( ) function found at lines 1424-1427 of the Linux® version 3.1 source code file Linux/kernel/rcutree.c. Invoking this function forces a rescheduling action on each processor 4.
Block 70 causes the operations of blocks 66 and 68 to be repeated until all of the special RCU callbacks 32A enqueued in block 64 have been invoked. This repetition is needed because each processor 4 in the multiprocessor system 2A might have an arbitrarily large number of pending RCU callbacks enqueued on the various portions of their callback lists 32 that are associated with different grace periods. Even though blocks 66 and 68 might force the end of an existing grace period, it might take more than one invocation cycle to force the end of the old grace period, then begin a new grace period, then force each processor through a quiescent state, then report the quiescent state up the rcu_node hierarchy, and then actually process the callbacks. Eventually, the repeated cycling of blocks 66 and 68 will cause the completion structure that was initialized in block 60 to be completed, which will be detected in block 72. At this point, the kernel module that invoked the RCU barrier expedited component 46B may be safely unloaded without leaving behind any unprocessed callbacks.
Blocks 60, 62, 64 and 72 of
Code Listing 3

static void _rcu_barrier_expedited(struct rcu_state *rsp,
			void (*call_rcu_func)(struct rcu_head *head,
			void (*func)(struct rcu_head *head)))
{
	BUG_ON(in_interrupt());
	/* Take mutex to serialize concurrent rcu_barrier() requests. */
	mutex_lock(&rcu_barrier_mutex);
	init_completion(&rcu_barrier_completion);
	/*
	 * Initialize rcu_barrier_cpu_count to 1, then invoke
	 * rcu_barrier_func() on each CPU, so that each CPU also has
	 * incremented rcu_barrier_cpu_count.  Only then is it safe to
	 * decrement rcu_barrier_cpu_count -- otherwise the first CPU
	 * might complete its grace period before all of the other CPUs
	 * did their increment, causing this function to return too
	 * early.  Note that on_each_cpu() disables irqs, which prevents
	 * any CPUs from coming online or going offline until each
	 * online CPU has queued its RCU-barrier callback.
	 */
	atomic_set(&rcu_barrier_cpu_count, 1);
	on_each_cpu(rcu_barrier_func, (void *)call_rcu_func, 1);
	while (callbacks remain) {
		/*
		 * This requires that invoke_rcu_core() be modified to take a
		 * single parameter that it ignores.  The invoke_rcu_core()
		 * function replaces the older use of raise_softirq().
		 */
		on_each_cpu(invoke_rcu_core, NULL, 1);
		on_each_cpu(fqs_wrapper, rsp, 1);
	}
	if (atomic_dec_and_test(&rcu_barrier_cpu_count))
		complete(&rcu_barrier_completion);
	wait_for_completion(&rcu_barrier_completion);
	mutex_unlock(&rcu_barrier_mutex);
}

void fqs_wrapper(void *rsp_in)
{
	struct rcu_state *rsp = (struct rcu_state *)rsp_in;
	force_quiescent_state(rsp, 0);
}
Turning now to
Accordingly, a technique has been disclosed for expeditiously unloading operating system kernel modules that implement RCU callback processing code. It will be appreciated that the foregoing concepts may be variously embodied in any of a data processing system, a machine implemented method, and a computer program product in which programming logic is provided by one or more machine-useable storage media for use in controlling a data processing system to perform the required functions. Example embodiments of a data processing system and machine implemented method were previously described in connection with
Example data storage media for storing such program instructions are shown by reference numerals 8 (memory) and 10 (cache) of the uniprocessor system 2 of
Although various example embodiments have been shown and described, it should be apparent that many variations and alternative embodiments could be implemented in accordance with the disclosure. It is understood, therefore, that the invention is not to be in any way limited except in accordance with the spirit of the appended claims and their equivalents.
Patent | Priority | Assignee | Title |
10140131, | Aug 11 2016 | International Business Machines Corporation | Shielding real-time workloads from OS jitter due to expedited grace periods |
10146577, | Dec 11 2016 | International Business Machines Corporation | Enabling real-time CPU-bound in-kernel workloads to run infinite loops while keeping RCU grace periods finite |
10146579, | Dec 11 2016 | International Business Machines Corporation | Enabling real-time CPU-bound in-kernel workloads to run infinite loops while keeping RCU grace periods finite |
10162644, | Aug 11 2016 | International Business Machines Corporation | Shielding real-time workloads from OS jitter due to expedited grace periods |
10268610, | Aug 16 2018 | International Business Machines Corporation | Determining whether a CPU stalling a current RCU grace period had interrupts enabled |
10282230, | Oct 03 2016 | International Business Machines Corporation | Fair high-throughput locking for expedited grace periods |
10353748, | Aug 30 2016 | International Business Machines Corporation | Short-circuiting normal grace-period computations in the presence of expedited grace periods |
10360080, | Aug 30 2016 | International Business Machines Corporation | Short-circuiting normal grace-period computations in the presence of expedited grace periods |
10372510, | Mar 15 2017 | International Business Machines Corporation | Using expedited grace periods to short-circuit normal grace-period computations |
10459761, | Dec 11 2016 | International Business Machines Corporation | Enabling real-time CPU-bound in-kernel workloads to run infinite loops while keeping RCU grace periods finite |
10459762, | Dec 11 2016 | International Business Machines Corporation | Enabling real-time CPU-bound in-kernel workloads to run infinite loops while keeping RCU grace periods finite |
10613913, | Oct 06 2018 | International Business Machines Corporation | Funnel locking for normal RCU grace period requests |
10831542, | Oct 01 2018 | International Business Machines Corporation | Prevent counter wrap during update-side grace-period-request processing in tree-SRCU implementations |
10915426, | Jun 06 2019 | International Business Machines Corporation | Intercepting and recording calls to a module in real-time |
10929126, | Jun 06 2019 | International Business Machines Corporation | Intercepting and replaying interactions with transactional and database environments |
10977042, | Jul 26 2019 | International Business Machines Corporation | Using expedited RCU grace periods to avoid out-of-memory conditions for offloaded RCU callbacks |
10983840, | Jun 21 2018 | International Business Machines Corporation | Consolidating read-copy update types having different definitions of a quiescent state |
11016762, | Jun 06 2019 | International Business Machines Corporation | Determining caller of a module in real-time |
11036619, | Jun 06 2019 | International Business Machines Corporation | Bypassing execution of a module in real-time |
11055271, | Nov 13 2017 | International Business Machines Corporation | Funnel locking for sleepable read-copy update |
11074069, | Jun 06 2019 | International Business Machines Corporation | Replaying interactions with transactional and database environments with re-arrangement |
11321147, | Aug 29 2019 | International Business Machines Corporation | Determining when it is safe to use scheduler lock-acquiring wakeups to defer quiescent states in real-time preemptible read-copy update |
11386079, | Jun 26 2019 | International Business Machines Corporation | Replacing preemptible RCU with an augmented SRCU implementation |
References Cited
Patent | Priority | Assignee | Title
5442758, | Jul 19 1993 | International Business Machines Corporation | Apparatus and method for achieving reduced overhead mutual exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring |
5608893, | Jul 19 1993 | International Business Machines Corporation | Method for maintaining data coherency using thread activity summaries in a multicomputer system |
5727209, | Jul 19 1993 | International Business Machines Corporation | Apparatus and method for achieving reduced overhead mutual-exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring |
6219690, | Jul 19 1993 | International Business Machines Corporation | Apparatus and method for achieving reduced overhead mutual exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring |
6662184, | Sep 23 1999 | MASCOTECH, INC | Lock-free wild card search data structure and method |
6886162, | Aug 29 1997 | International Business Machines Corporation | High speed methods for maintaining a summary of thread activity for multiprocessor computer systems |
6996812, | Jun 18 2001 | International Business Machines Corporation | Software implementation of synchronous memory barriers |
7191272, | Dec 19 2000 | International Business Machines Corporation | Adaptive reader-writer lock |
7287135, | Sep 29 2004 | International Business Machines Corporation | Adapting RCU for real-time operating system usage |
7349926, | Mar 30 2004 | International Business Machines Corporation | Atomic renaming and moving of data files while permitting lock-free look-ups |
7353346, | Mar 24 2006 | Meta Platforms, Inc | Read-copy-update (RCU) operations with reduced memory barrier usage |
7395263, | Oct 12 2005 | Meta Platforms, Inc | Realtime-safe read copy update with lock-free readers |
7395383, | Nov 01 2005 | International Business Machines Corporation | Realtime-safe read copy update with per-processor read/write locks |
7426511, | Mar 08 2004 | International Business Machines Corporation | Efficient support of consistent cyclic search with read-copy-update |
7454581, | Oct 27 2004 | International Business Machines Corporation | Read-copy update grace period detection without atomic instructions that gracefully handles large numbers of processors |
7472228, | Oct 27 2004 | International Business Machines Corporation | Read-copy update method |
7653791, | Nov 01 2005 | International Business Machines Corporation | Realtime-safe read copy update with per-processor read/write locks |
7668851, | Nov 29 2006 | International Business Machines Corporation | Lockless hash table lookups while performing key update on hash table element |
7689789, | Oct 27 2004 | International Business Machines Corporation | Read-copy update grace period detection without atomic instructions that gracefully handles large numbers of processors |
7734879, | Jul 27 2006 | TWITTER, INC | Efficiently boosting priority of read-copy update readers in a real-time data processing system |
7734881, | Sep 29 2004 | International Business Machines Corporation | Adapting RCU for real-time operating system usage |
7747805, | Dec 19 2000 | International Business Machines Corporation | Adaptive reader-writer lock |
7814082, | Mar 08 2004 | International Business Machines Corporation | Efficient support of consistent cyclic search with read-copy-update |
7818306, | Mar 24 2006 | Meta Platforms, Inc | Read-copy-update (RCU) operations with reduced memory barrier usage |
7873612, | Nov 23 2004 | International Business Machines Corporation | Atomically moving list elements between lists using read-copy update |
7904436, | Oct 12 2005 | Meta Platforms, Inc | Realtime-safe read copy update with lock-free readers |
7934062, | Jun 22 2007 | International Business Machines Corporation | Read/write lock with reduced reader lock sampling overhead in absence of writer lock acquisition |
7953708, | Jul 28 2008 | International Business Machines Corporation | Optimizing grace period detection for preemptible read-copy update on uniprocessor systems |
7953778, | May 20 2008 | International Business Machines Corporation | Efficient support of consistent cyclic search with read-copy update and parallel updates |
7987166, | Mar 30 2004 | International Business Machines Corporation | Atomic renaming and moving of data files while permitting lock-free look-ups |
8020160, | Jul 28 2008 | International Business Machines Corporation | User-level read-copy update that does not require disabling preemption or signal handling |
8055860, | Mar 24 2006 | Meta Platforms, Inc | Read-copy-update (RCU) operations with reduced memory barrier usage |
8055918, | Apr 03 2008 | Daedalus Blue LLC | Optimizing preemptible read-copy update for low-power usage by avoiding unnecessary wakeups |
8108696, | Jul 24 2008 | International Business Machines Corporation | Optimizing non-preemptible read-copy update for low-power usage by avoiding unnecessary wakeups |
20060112121, | |||
20060117072, | |||
20060130061, | |||
20060265373, | |||
20080082532, | |||
20080313238, | |||
20090006403, | |||
20090077080, | |||
20090320030, | |||
20100023559, | |||
20100023732, | |||
20100115235, | |||
20110055183, | |||
20110283082, |
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Dec 09 2011 | MCKENNEY, PAUL E | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 027366/0763
Dec 10 2011 | | International Business Machines Corporation | (assignment on the face of the patent) |
Date | Maintenance Fee Events
Sep 30 2019 | REM: Maintenance Fee Reminder Mailed. |
Mar 16 2020 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule
Feb 09 2019 | 4 years fee payment window open |
Aug 09 2019 | 6 months grace period start (w surcharge) |
Feb 09 2020 | patent expiry (for year 4) |
Feb 09 2022 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 09 2023 | 8 years fee payment window open |
Aug 09 2023 | 6 months grace period start (w surcharge) |
Feb 09 2024 | patent expiry (for year 8) |
Feb 09 2026 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 09 2027 | 12 years fee payment window open |
Aug 09 2027 | 6 months grace period start (w surcharge) |
Feb 09 2028 | patent expiry (for year 12) |
Feb 09 2030 | 2 years to revive unintentionally abandoned end. (for year 12) |