A data processing system is arranged to execute multiple program threads, with each program thread comprising program thread instructions. An interpreter is operable, during execution of each program thread, to employ a table pointer to reference a table to determine for a current program thread instruction a sequence of native instructions to be executed by the processor core to effect execution of that current program thread instruction. A consistency module is provided which is responsive to occurrence of a predetermined event to cause the table pointer to be manipulated, such that for a predetermined number of the program threads, the interpreter will be operable to associate a subsequent program thread instruction with a predetermined routine to be executed by the processor core, the predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
1. A data processing apparatus for executing multiple program threads, each program thread comprising program thread instructions, the apparatus comprising:
a processor core operable to execute native instructions;
an interpreter means operable, during execution of each program thread, for employing a table pointer to reference a table for determining, for a current program thread instruction, a sequence of native instructions to be executed by said processor core to effect execution of that current program thread instruction; and
a consistency module responsive to occurrence of a predetermined event, for causing the table pointer to said table to be manipulated, such that for a predetermined number of said program threads, the interpreter will be operable to associate a subsequent program thread instruction with a predetermined routine to be executed by the processor core, the predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
14. A method of operating a data processing apparatus to execute multiple program threads, each program thread comprising program thread instructions, the apparatus having a processor core operable to execute native instructions, and the method comprising:
(i) during execution of each program thread, employing a table pointer to reference a table to determine for a current program thread instruction a sequence of native instructions to be executed by said processor core to effect execution of that current program thread instruction; and
(ii) responsive to occurrence of a predetermined event, manipulating the table pointer to said table, such that for a predetermined number of said program threads, subsequent iterations of said step (i) will cause a subsequent program thread instruction to be associated with a predetermined routine to be executed by the processor core, the predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
2. A data processing apparatus as claimed in
3. A data processing apparatus as claimed in
4. A data processing apparatus as claimed in
5. A data processing apparatus as claimed in
6. A data processing apparatus as claimed in
7. A data processing apparatus as claimed in
8. A data processing apparatus as claimed in
9. A data processing apparatus as claimed in
10. A data processing apparatus as claimed in
11. A data processing apparatus as claimed in
12. A data processing apparatus as claimed in
13. A data processing apparatus as claimed in
15. A method as claimed in
16. A method as claimed in
17. A method as claimed in
18. A method as claimed in
19. A method as claimed in
20. A method as claimed in
22. A method as claimed in
23. A method as claimed in
25. A method as claimed in
26. A method as claimed in
27. A computer program product carrying a computer program for controlling a computer to perform the method of
1. Field of the Invention
The present invention relates to the field of data processing systems. More particularly, the present invention relates to techniques for reaching consistent state in a multi-threaded data processing system.
2. Description of the Prior Art
A data processing apparatus will typically include a processor core that is operable to execute native instructions from a native instruction set. Some of the applications that may be run by the data processing apparatus may include multiple program threads, and it is possible that those program threads may consist of instructions that are not native instructions, and hence cannot directly be executed on the processor core. In such situations, it is known to provide an interpreter which is operable to determine for a current program thread instruction a sequence of native instructions to be executed by the processor core in order to execute that current program thread instruction.
One example of such an approach is where the program thread instructions are from an instruction set that uses a stack-based approach for storing and manipulating data items upon which those instructions act, whilst the native instructions executable by the processor core are from an instruction set that uses a register-based approach for storing and manipulating the data items.
One example of a stack-based instruction set is the Java Virtual Machine instruction set as specified by Sun Microsystems Inc. The Java programming language seeks to provide an environment in which computer software written in Java can be executed upon many different processing hardware platforms without having to alter the Java software. Another example of a stack-based instruction set is the Java Card instruction set as specified by Sun Microsystems Inc., which is a version of Java which has been designed for use within smart cards and similar devices, i.e. devices which are relatively cheap and consume relatively low power.
Examples of register-based systems are the ARM processors produced by ARM Limited of Cambridge, England. ARM instructions execute operations (such as mathematical manipulations, loads, stores, etc) upon operands stored within registers of the processor, those registers being specified by register fields within the instructions.
It is becoming more desirable for data processing systems designed to execute register-based instructions to support execution of stack-based instructions. An example of such a data processing system is described in UK patent application no. 0024404.6. As described in that patent application, stack-based instructions are converted into a sequence of operations to be executed by the processor core upon registers within a register bank. The data items on the stack that are required by those operations are stored from the stack into registers of the register bank so that they are available to the processor core.
When such a data processing system is operating in a multi-threaded environment, some events may occur that require a number of the program threads (typically all of the program threads) to be stopped at a point where they will all then be in a consistent state. Such events include garbage collection to free up space in the data heap shared by the program threads, thread switching when performed by software associated with the interpreter (rather than being performed at the operating system level), certain debug events, etc.
Since, at the time the event occurs, each of the program threads may be part way through execution of a sequence of native instructions used to execute a current program thread instruction, it is not appropriate to stop each of the program threads immediately: if that were done, the state of one or more of the program threads might be left in some intermediate state which would not be consistent with the state that would actually arise upon completion of execution of that current program thread instruction. For any particular program thread, a consistent state is reached at points where a current program thread instruction has completed execution and a next program thread instruction has not yet begun execution. The consistent state may also persist for a few native instructions into the execution of a program thread instruction, provided those instructions do not modify the state.
One way in which this problem has been addressed in prior art techniques is to provide a predetermined routine, also referred to herein as a rendezvous routine, which when executed for a particular program thread will cause the state of that thread to be stored from the processor core's internal memory (for example the processor core's working registers in the example of a register-based processor core) to a block of memory storing an execution environment for that program thread. This execution environment will typically be provided within a memory accessible by other threads, for example a portion of Random Access Memory (RAM) used to store the execution environment of each thread, and accessible by all threads. To ensure that this rendezvous routine is only actioned at a point where the state will be a consistent state, a known prior art technique is to include within the native instruction sequences associated with particular program thread instructions a sequence of native instructions that will conditionally cause a branch to the rendezvous routine if an event requiring the threads to be stopped has occurred.
When such an event does occur, a particular memory location is written with a particular value (also referred to herein as the “rendezvous flag”), and the purpose of the extra sequence of native instructions added to the native instruction sequences for particular program thread instructions is to cause the data at that memory location to be loaded into a register, for the contents of that register to be compared with the predetermined value, and for the process to branch to the rendezvous routine if the comparison indicates equality between the values (i.e. indicates that the memory location has been written with that particular value, thereby indicating that the thread needs to be stopped, and that accordingly execution of the rendezvous routine is appropriate). These extra native instructions are written such that the rendezvous flag is polled at the end of the native instruction sequences for particular program thread instructions since at that point the state will be in a consistent state (i.e. the corresponding program thread instruction will have completed execution).
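By way of illustration only, the prior-art polling check described above can be sketched as follows; the names used (rendezvous_flag, RENDEZVOUS_VALUE, rendezvous_routine) are assumptions for the sketch, not drawn from any particular implementation:

```python
# Illustrative sketch of the prior-art rendezvous-flag polling technique.
# All names here are hypothetical stand-ins.
RENDEZVOUS_VALUE = 1

rendezvous_flag = 0          # the polled memory location
rendezvous_invoked = False

def rendezvous_routine():
    """Store the thread's state to its execution environment (elided)."""
    global rendezvous_invoked
    rendezvous_invoked = True

def check_rendezvous():
    """The load / compare / conditional-branch sequence appended to the
    native code for certain program thread instructions."""
    value = rendezvous_flag              # load from the memory location
    if value == RENDEZVOUS_VALUE:        # compare with the predetermined value
        rendezvous_routine()             # branch to the rendezvous routine

check_rendezvous()
print(rendezvous_invoked)                # no event signalled: routine not run
rendezvous_flag = RENDEZVOUS_VALUE       # an event requires the threads to stop
check_rendezvous()
print(rendezvous_invoked)                # routine now invoked at the poll point
```

Note that the check runs on every execution of the instrumented program thread instructions, whether or not an event has occurred; this is the source of the performance cost discussed below.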
As will be appreciated by those skilled in the art, this approach involves adding a number of instructions (in one known implementation three instructions) to the native instruction sequence for a number of the program thread instructions, and these extra instructions have to be executed each time that corresponding program thread instruction needs to be executed, irrespective of whether the rendezvous routine does in fact need to be performed. This significantly impacts the performance of execution of such program thread instructions.
Ideally, to enable the system to react most quickly to an event requiring the threads to be stopped, these extra native instructions would be added to the native instruction sequence for every program thread instruction. However, that would adversely impact the performance to an unacceptable degree, and accordingly a compromise approach is typically employed where these extra native instructions are only added to the native instruction sequences corresponding to certain program thread instructions, for example method invocation instructions, backwards branches, etc. The rationale behind this compromise approach is to choose some instructions so that the period between polling the rendezvous flag is not too great. For example, polling in backwards branches means that all loops include at least one check. As another example, some Virtual Machines (VMs) may require the rendezvous flag to be checked in some instructions to ensure correct operation.
Nevertheless, it will be appreciated that there is still overall a significant performance hit in execution of the multiple program threads, since these extra instructions will still be executed every time the relevant program thread instructions are executed and irrespective of whether an event has in fact arisen that requires the rendezvous routine to take place.
Accordingly, it would be desirable to provide an improved technique which, upon the occurrence of an event requiring the threads to be stopped, enables the rendezvous routine to be invoked by each thread when that thread's state is in a consistent state.
Viewed from a first aspect, the present invention provides a data processing apparatus for executing multiple program threads, each program thread comprising program thread instructions, the apparatus comprising: a processor core operable to execute native instructions; an interpreter operable, during execution of each program thread, to employ a table pointer to reference a table to determine for a current program thread instruction a sequence of native instructions to be executed by said processor core to effect execution of that current program thread instruction; and a consistency module responsive to occurrence of a predetermined event to cause the table pointer to said table to be manipulated, such that for a predetermined number of said program threads, the interpreter will be operable to associate a subsequent program thread instruction with a predetermined routine to be executed by the processor core, the predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
In accordance with the present invention, the interpreter employs a table pointer to reference a table in order to determine for a current program thread instruction a sequence of native instructions to be executed by the processor core. Upon occurrence of a predetermined event that requires a predetermined number of the program threads to be stopped, a consistency module is used to cause the table pointer to the table to be manipulated, such that when the interpreter is reviewing a subsequent program thread instruction from any of the program threads to be stopped, it will associate a predetermined routine with that subsequent program thread instruction, this predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
By this approach, once an event has occurred that requires a predetermined number of the program threads to be stopped (in preferred embodiments all of the program threads), the predetermined routine (also referred to herein as the rendezvous routine) will be automatically invoked when a subsequent program thread instruction is reviewed by the interpreter. It should be noted that at this point the corresponding thread will be in a consistent state, since the preceding program thread instruction will have completed execution, and this subsequent program thread instruction will not yet have begun execution. By this approach, the need to include a series of native instructions within the native instruction sequences for particular program thread instructions in order to speculatively poll a memory location in order to determine whether the rendezvous routine should be invoked is removed, and accordingly the present invention significantly alleviates the performance problems associated with that prior art technique.
It will be appreciated by those skilled in the art that the table used by the interpreter may take a variety of forms. In one embodiment, the table contains for each program thread instruction a code pointer pointing to a corresponding sequence of native instructions. In such embodiments, the consistency module is preferably operable in response to occurrence of said predetermined event to cause the table pointer to be replaced by a table pointer to a replacement table, the replacement table containing for each program thread instruction a code pointer pointing to said predetermined routine.
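By way of illustration only, the jump-table embodiment can be modelled as a table of code pointers indexed by bytecode, together with a replacement table whose every entry points at the predetermined routine; the handler names below are hypothetical:

```python
# Sketch of the jump-table embodiment: the table holds, for each program
# thread instruction, a code pointer to its native sequence. All names
# here are illustrative stand-ins.
executed = []

def native_seq_0(): executed.append("native 0")
def native_seq_1(): executed.append("native 1")
def rendezvous_routine(): executed.append("rendezvous")

normal_table = [native_seq_0, native_seq_1]

# Replacement table: every entry points at the rendezvous routine, so
# whichever bytecode is interpreted next, the rendezvous routine runs.
rendezvous_table = [rendezvous_routine] * len(normal_table)

table_pointer = normal_table
table_pointer[1]()                   # interpret bytecode 1 normally

table_pointer = rendezvous_table     # consistency module swaps the pointer
table_pointer[0]()                   # any subsequent bytecode hits the routine
print(executed)
```

The swap costs nothing on the normal execution path; only the pointer held by the interpreter changes when an event actually occurs.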
In preferred embodiments, the consistency module is arranged to store the table pointer to the replacement table within the execution environment for each of the predetermined number of program threads, and then to include within the sequence of native instructions for certain program thread instructions a single instruction to load that new table pointer into the relevant register of the processor core, so that the interpreter will then use that new table pointer when reviewing a subsequent program thread instruction for any of the predetermined number of program threads that are to be stopped.
In an alternative embodiment, the table has for each program thread instruction a block containing the corresponding sequence of native instructions. In such embodiments, each block is preferably arranged to further contain at a predetermined entry an instruction for branching to the predetermined routine, and the consistency module is preferably operable in response to occurrence of said predetermined event to cause the table pointer to be modified so that when the interpreter subsequently references the table using the modified table pointer, the instruction for branching to said predetermined routine will be associated with the subsequent program thread instruction.
More particularly, in one embodiment, the instruction for branching to said predetermined routine has a size of X bytes, the predetermined entry containing the instruction for branching to said predetermined routine is the final entry in each block, and the consistency module is operable to modify the table pointer by subtracting X bytes from the table pointer.
In an alternative embodiment, the instruction for branching to said predetermined routine has a size of X bytes, the predetermined entry containing the instruction for branching to said predetermined routine is the final entry in each block, each block has a size of Y bytes, and the consistency module is operable to modify the table pointer by adding Y-X bytes to the table pointer.
In one embodiment, each native instruction has a size of X bytes, whilst in other embodiments variable length native instructions may be used.
In preferred embodiments, the consistency module is preferably arranged to store the modified pointer to the relevant entry of the execution environment for each of the predetermined program threads, and a single load instruction is then added to the sequence of native instructions for certain of the program thread instructions to cause that modified pointer to be loaded into the appropriate register of the processor core, such that the interpreter will use that modified pointer when reviewing a subsequent program thread instruction from any of those predetermined number of program threads.
It will be appreciated that there are a number of different predetermined events that may require one or more of the program threads to be stopped, and hence will require the consistency module to manipulate the table pointer. In preferred embodiments, each of the program threads shares a data heap, and the predetermined event is the determination that a garbage collection process is required to be performed upon that data heap. Before any such garbage collection is performed, each of the program threads sharing that data heap must reach a consistent state, and that consistent state must be made available to the garbage collection routine so that any garbage collection performed on the shared data heap takes into account the current consistent state of each thread.
In one embodiment, switching between the various program threads is performed under the control of the operating system. However, in an alternative embodiment, switching between the program threads is performed by software associated with the interpreter. In such instances, it is necessary that each of the threads reaches a consistent state, and is stopped at that consistent state, prior to the switching being performed, and accordingly in such embodiments, the predetermined event is the requirement for a switch between program threads.
In such embodiments, there are a number of known ways in which it can be determined that a switch between program threads is appropriate. One simple approach is to use a timer, so that each thread is allocated a certain amount of time. In such embodiments, it will be appreciated that the predetermined event can be deemed to have occurred upon expiry of the timer.
In preferred embodiments, the data processing apparatus further comprises a debug interface for interfacing with a debugging application used to debug program threads being executed by the data processing apparatus. As will be appreciated by those skilled in the art, such debugging applications typically use techniques such as break points and the like to step through execution of particular applications. When a break point is asserted in a particular program thread, it is again necessary that the other threads are only stopped once they have reached a consistent state and hence the technique of the present invention may be used in such scenarios. Accordingly, in such embodiments where debugging is being performed, the predetermined event is a debug event that requires access to the state of one or more of the program threads.
It will be appreciated by those skilled in the art that the consistency module may be arranged to be responsive to any number of predetermined events, and hence, as an example, could be arranged to be responsive to any of the above described predetermined events to cause the table pointer to be manipulated. Further, it will be appreciated that the way in which the table pointer is manipulated can be made dependent on the type of predetermined event that has occurred. However, in an alternative embodiment, the consistency module will be arranged to perform the same manipulation of the table pointer irrespective of the type of predetermined event, such that the same predetermined routine is called. In this embodiment, the predetermined routine would then independently be informed of the type of predetermined event.
In preferred embodiments, each program thread has associated therewith an execution environment stored in memory external to the processor core and the predetermined routine is operable to cause the state of the corresponding program thread to be stored from registers of the processor core to the execution environment. Considering the example where the program thread instructions are Java Virtual Machine instructions, the state that may be stored to the execution environment may comprise the program counter, a stack pointer, a frame pointer, a constant pool pointer and a local variable pointer. It will be appreciated that other state could also be stored as required.
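By way of illustration only, the state saved to the execution environment in the Java example above might be modelled as follows; the field and register names are assumptions based on the pointers listed, not a definitive layout:

```python
from dataclasses import dataclass

# Illustrative model of a per-thread execution environment holding the
# Java VM state listed above. Field names are hypothetical.
@dataclass
class ExecutionEnvironment:
    program_counter: int = 0
    stack_pointer: int = 0
    frame_pointer: int = 0
    constant_pool_pointer: int = 0
    local_variable_pointer: int = 0

def rendezvous_store(env, registers):
    """Copy the subset of processor core registers holding VM state out
    to the thread's execution environment in shared memory."""
    env.program_counter = registers["pc"]
    env.stack_pointer = registers["sp"]
    env.frame_pointer = registers["fp"]
    env.constant_pool_pointer = registers["cp"]
    env.local_variable_pointer = registers["lv"]

env = ExecutionEnvironment()
rendezvous_store(env, {"pc": 0x100, "sp": 0x200, "fp": 0x1F0,
                       "cp": 0x300, "lv": 0x1E0})
print(env.program_counter, env.stack_pointer)
```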
It will be appreciated that the program thread instructions may take a variety of forms. However, in preferred embodiments, the program thread instructions are Java bytecodes. Furthermore, in such preferred embodiments, the interpreter is provided within a Java Virtual Machine arranged to be executed on the processor core.
In such embodiments, the processor core preferably has a set of registers in which to store data required by the processor core, a subset of these registers being allocated for storing data relating to the Java Virtual Machine, and the state to be made available for subsequent reference comprises the contents of said subset of registers.
Viewed from a second aspect, the present invention provides a method of operating a data processing apparatus to execute multiple program threads, each program thread comprising program thread instructions, the apparatus having a processor core operable to execute native instructions, and the method comprising: (i) during execution of each program thread, employing a table pointer to reference a table to determine for a current program thread instruction a sequence of native instructions to be executed by said processor core to effect execution of that current program thread instruction; and (ii) responsive to occurrence of a predetermined event, manipulating the table pointer to said table, such that for a predetermined number of said program threads, subsequent iterations of said step (i) will cause a subsequent program thread instruction to be associated with a predetermined routine to be executed by the processor core, the predetermined routine being operable to cause the state of the corresponding program thread to be made available for subsequent reference.
Viewed from a third aspect, the present invention provides a computer program product carrying a computer program for controlling a computer to perform the method in accordance with the second aspect of the present invention.
The present invention will be described further, by way of example only, with reference to a preferred embodiment thereof as illustrated in the accompanying drawings, in which:
For the purposes of describing a preferred embodiment of the present invention, a data processing system will be considered in which multiple program threads consisting of Java bytecodes are arranged to be executed on a register-based processor core, for example an ARM processor produced by ARM Limited of Cambridge, England.
As shown in
As will be appreciated by those skilled in the art, the Java VM 60 contains a number of modules. In
In addition to the interpreter 70, the Java VM 60 also includes a memory manager 80 which incorporates within it a memory allocator 82 and a garbage collector 84. As will be appreciated by those skilled in the art, certain Java bytecodes will require memory to be allocated within the data heap 40 for containing data required by the associated Java thread. This job is performed in a known manner by the memory allocator 82. However, when the data heap 40 becomes full, before any subsequent memory allocation can be performed by the memory allocator 82, the garbage collector routine 84 is first required to free up some space within the data heap 40, for example by deleting data values no longer required by the various Java threads 10, 20, 30. Hence, when the garbage collector is required, it is first necessary to stop each of the Java threads 10, 20, 30 at a point where each thread is at a consistent state, and for that consistent state to be then made available to the garbage collector 84 for use in determining which data values can be removed from the data heap 40 to free up space within the data heap.
As will be described in more detail later, this is achieved by causing the rendezvous module 90 to manipulate the table pointer for the table used by the interpreter, such that following completion of a current Java bytecode, and prior to execution of a subsequent Java bytecode, each Java thread executes a rendezvous routine to store its corresponding consistent state within the associated execution environment 12, 22, 32, whereafter that state information is then available to the garbage collector 84.
The Java VM 60 may also include a debug interface 65 for interfacing with an external debugger 67. The debugger 67 may be provided on the same computer as that being used to process the Java threads, or alternatively may be provided on a separate computer, for example a separate PC. When a break point is reached during execution of a particular Java thread, then it is necessary to stop execution of all of the other threads at a consistent point to allow analysis of the state of the various Java threads. Again, this is achieved by causing the rendezvous module 90 to invoke the same process as is invoked when garbage collection is required, to cause the consistent state of each thread to be stored into the corresponding execution environment 12, 22, 32, after which that state can be analysed by the debugger 67.
In one embodiment of the present invention, the operating system 100 is arranged to run the Java VM 60 multiple times, once for each Java thread 10, 20, 30, and the switching between threads is performed at the operating system level. In this embodiment, it is again necessary for all of the Java threads to be stopped when their state is in a consistent state, prior to switching between threads. Again, this is achieved by causing the rendezvous module to manipulate the table pointers in the same manner discussed earlier with reference to garbage collection and debugging, to ensure that the necessary consistent state is stored to the corresponding execution environments 12, 22, 32 prior to any switching between the threads taking place. This then ensures that the necessary state information is available for each Java thread when that Java thread is subsequently switched to by the Java VM 60.
In an alternative embodiment, the switching between threads is performed at the Java VM level, with a single occurrence of the Java VM being run on the operating system 100. In this embodiment, only one thread is running at a time. Hence, prior to garbage collection, there is no need to cause the rendezvous module 90 to manipulate the table pointer, since the single running thread is in a consistent state at the point that it requests garbage collection, and all stopped threads are also in a consistent state. However, where the rendezvous is triggered externally to the running thread, e.g. by a debugger event or by expiry of the thread switch timer, the running thread needs to be brought to a consistent state by the rendezvous module 90 manipulating the table pointer.
As described earlier, a prior art technique for invoking the rendezvous routine involved polling a memory location when executing certain Java bytecodes and conditionally branching to a rendezvous routine where the state of the corresponding thread could be written to its associated execution environment. This process is schematically illustrated in
However, returning to
Hence, at step 310, the rendezvous module awaits acknowledgements from each of the Java threads to confirm that that thread has reached the rendezvous point. If one of the Java threads is the instigator of the rendezvous request to the rendezvous module, then that particular thread will not need to send an acknowledgement, since, by initiating a rendezvous, it is implicitly acknowledging being at a consistent state.
Once each relevant thread has acknowledged that the rendezvous point has been reached (at step 310), the process proceeds to step 320, where a value other than X is written to the memory location Y. This will ensure that when the three instructions illustrated in
As discussed earlier, the main disadvantage with this prior art technique is that the three instructions illustrated in
Before discussing in detail the techniques used in preferred embodiments within the rendezvous module 90 to manage the rendezvous process, the example of garbage collection will be discussed in more detail with reference to
If memory allocation is not required, and the bytecode processing has been completed, the process branches to step 420, where the value of i is incremented by one, after which it is checked that i does not exceed iMAX. Assuming i does not exceed iMAX, then the process returns to step 405 where the next Java bytecode is interpreted. Once it is determined that i does exceed iMAX at step 425, the process ends at step 430.
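By way of illustration only, the interpreter loop described in the preceding steps can be sketched as follows; the step numbering follows the description above, while the bytecode names and memory-management hooks are hypothetical (the garbage collection path is elided):

```python
# Sketch of the interpreter loop of the garbage-collection example.
# Handler and bytecode names are illustrative stand-ins.
processed = []

def needs_allocation(bytecode):
    return bytecode == "new"             # e.g. a 'new' bytecode allocates

def allocate(bytecode):
    processed.append(("alloc", bytecode))  # memory allocator 82 (sketch)

def interpret(bytecodes):
    i = 0
    i_max = len(bytecodes) - 1
    while i <= i_max:                    # steps 405/425: fetch while i <= iMAX
        bc = bytecodes[i]
        if needs_allocation(bc):         # step 415: call the memory manager
            allocate(bc)                 # (garbage collection check elided)
        processed.append(("exec", bc))   # step 410: process the bytecode
        i += 1                           # step 420: advance to next bytecode

interpret(["iload", "new", "iadd"])
print(processed)
```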
Assuming memory allocation is determined to be necessary at step 415, then the process proceeds to step 435, where the memory manager 80 is called. The memory manager 80 then determines at step 440 whether garbage collection is required. Assuming it is not (for example because there is still enough space in the data heap 40), then the process proceeds to step 445, where the memory is allocated, after which it is determined at step 450 whether any further processing of the current Java bytecode is required. If it is, the process returns to step 410, whereas if it is not, then the process proceeds to step 420.
If at step 440 it is determined that garbage collection is required, then the rendezvous module is caused to request a rendezvous at step 455. The processing performed by the rendezvous module in preferred embodiments of the present invention will be described in more detail later with reference to
As will be appreciated by those skilled in the art, during debugging certain events may also occur that require the rendezvous module to be caused to request rendezvous. As an example, when a break point is reached in a particular thread, this will cause the debug interface 65 to inform the rendezvous module 90 that a rendezvous is required (this being an analogous step to step 455 illustrated in
In embodiments where the interpreter 70 uses a jump table,
In such preferred embodiments, the native code sequences for certain Java bytecodes (for example, method invocation instructions, backwards branch instructions, etc.) are modified to include as a last instruction the instruction illustrated in
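The effect of the jump-table manipulation can be modelled as follows: because each native code sequence finishes by dispatching indirectly through the table pointer, swapping that single pointer to an alternative table, every entry of which refers to the rendezvous routine, redirects each thread at its next bytecode boundary. The table contents and handler names below are illustrative assumptions.

```python
# Model of jump-table dispatch: each handler finishes by re-reading the
# table pointer (the "last instruction" added to the native code sequence),
# so a single pointer swap by the rendezvous module redirects every thread
# at its next dispatch point.

NUM_OPCODES = 4
executed = []

def make_handler(n):
    def handler():
        executed.append(f"native_code_{n}")
    return handler

def rendezvous_routine():
    executed.append("rendezvous")

normal_table = [make_handler(n) for n in range(NUM_OPCODES)]
# Alternative table: every entry refers to the rendezvous routine.
rendezvous_table = [rendezvous_routine] * NUM_OPCODES

table_pointer = {"table": normal_table}    # the single pointer that is swapped

def dispatch(opcode):
    table_pointer["table"][opcode]()       # indirect dispatch via the pointer

dispatch(2)                                # normal execution of bytecode 2
table_pointer["table"] = rendezvous_table  # rendezvous module swaps the pointer
dispatch(2)                                # next dispatch enters the rendezvous routine
```

Note that no per-bytecode check is needed: the cost in the normal case is only the indirect load through the table pointer, which is the point of the technique.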
The relevant thread will then execute the rendezvous routine, which is illustrated schematically in
As illustrated in
Hence, returning to
Once the rendezvous module 90 receives the indication that rendezvous is no longer required, it then proceeds to step 640, where it sends a message to each thread to notify those threads that they may continue execution under the control of the Java VM 60.
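This coordination, in which the rendezvous module awaits an acknowledgement from every thread and then releases them all, can be sketched with standard threading primitives. The use of a semaphore and an event here is an illustrative stand-in for the rendezvous module's actual mechanism, not a description of it.

```python
# Sketch of steps 610/810 (await acknowledgements) and 640 (release):
# each worker thread acknowledges reaching the rendezvous point and then
# blocks until the rendezvous module notifies it to continue.

import threading

NUM_THREADS = 3
acks = threading.Semaphore(0)      # acknowledgement channel
release = threading.Event()        # "you may continue" message
log = []
log_lock = threading.Lock()

def worker(tid):
    with log_lock:
        log.append(("ack", tid))
    acks.release()                 # acknowledge reaching the rendezvous point
    release.wait()                 # block until the rendezvous module releases us
    with log_lock:
        log.append(("resumed", tid))

threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()

for _ in range(NUM_THREADS):       # rendezvous module awaits each acknowledgement
    acks.acquire()
# ... garbage collection or debug inspection would happen here ...
release.set()                      # step 640: notify threads they may continue
for t in threads:
    t.join()
```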
As is apparent from
In embodiments where the interpreter 70 uses a case table rather than a jump table,
As will be appreciated by those skilled in the art, during normal operation the branch instructions will never be reached, and accordingly the rendezvous routine will not be invoked. The process used by the rendezvous module 90 to ensure that the branch instructions 705, 715, 725, 735 are actioned in the event that the rendezvous routine is required will now be described with reference to
As illustrated in
As mentioned earlier, when the interpreter 70 uses the case table to determine the native code sequence to be executed, it does this by adding n*SIZE to the table pointer, where n is an indication of the bytecode being analysed. Hence, as an example, for bytecode 2, prior to the case table pointer being modified, this would result in the interpreter pointing to a location at the beginning of native code 2 to cause native code 2 to be executed on the processor core. However, since the case table pointer has been modified by adding SIZE-4 bytes at step 800, this would now result in the interpreter pointing to the branch instruction 725 which will hence cause the processor core to branch to the rendezvous routine, this rendezvous routine being the rendezvous routine discussed earlier with reference to
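The case-table address arithmetic can be checked with a small worked example; the slot size and base address used here are illustrative values, not those of any particular implementation.

```python
# Worked example of the case-table address arithmetic. Each slot is SIZE
# bytes; the last 4 bytes of each slot hold the branch to the rendezvous
# routine, so shifting the table pointer by SIZE - 4 (step 800) makes every
# dispatch land on that branch instead of on the start of the native code.

SIZE = 32                  # illustrative slot size in bytes
BASE = 0x1000              # illustrative case table base address

def dispatch_address(table_pointer, n):
    """The interpreter adds n * SIZE to the table pointer for bytecode n."""
    return table_pointer + n * SIZE

# Normal operation: bytecode 2 dispatches to the start of native code 2.
normal = dispatch_address(BASE, 2)            # BASE + 2 * SIZE

# Step 800: the rendezvous module adds SIZE - 4 to the case table pointer.
shifted = dispatch_address(BASE + SIZE - 4, 2)

# The shifted address is the last word of slot 2: the branch instruction 725.
branch_of_slot_2 = BASE + 3 * SIZE - 4
```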
The process within the rendezvous module 90 then proceeds to step 810, where an acknowledgement that each thread has reached the rendezvous point is awaited. This is the same process step as described earlier with reference to step 610 of
Given the above description of preferred embodiments of the present invention, it will be appreciated that the described techniques, whereby the table pointers are manipulated by the rendezvous module 90 when a rendezvous is required, significantly alleviate the performance hit associated with the known prior art technique. In accordance with embodiments of the present invention, the overhead of the check performed within the relevant native code sequences, in scenarios where a rendezvous is not required, is at most one cycle (due to the presence of the load instruction described earlier with reference to
Although a particular embodiment has been described herein, it will be apparent that the invention is not limited thereto, and that many modifications and additions thereto may be made within the scope of the invention. For example, various combinations of the features of the following dependent claims can be made with the features of the independent claims without departing from the scope of the present invention.
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Jan 24 2003 | — | ARM Limited | (assignment on the face of the patent) | —
Mar 20 2003 | BAYLIS, CHARLES GAYTON | ARM Limited | Assignment of assignor's interest (see document for details) | 014070/0444