A method for analyzing a set of spawning pairs, where each spawning pair identifies at least one speculative thread. The analysis may be practiced via software in a compiler, binary optimizer, standalone modeler, or the like. The analysis may include determining a predicted execution time for a sequence of program instructions, given the set of spawning pairs, for a target processor having a known number of thread units, where the target processor supports speculative multithreading. The method is further to select a spawning pair, according to a greedy approach, if the spawning pair provides a performance enhancement, in terms of decreased execution time due to increased parallelism, when the speculative thread is spawned during execution of a code sequence. Other embodiments are also described and claimed.
15. A compiler comprising a plurality of machine accessible instructions stored on a machine-accessible storage medium, the plurality of instructions further comprising:
an analyzer/selector to select a first speculative thread from a set of potential candidates, wherein the set of potential candidates further comprises a set of candidate spawning pairs, where each spawning pair indicates a spawn point and a target point for one of the candidates;
an execution time modeler to model concurrent execution, on a target processor, of a main thread and the first speculative thread to calculate a parallel execution time, wherein the first speculative thread includes a subset of the instructions in a code sequence associated with the main thread;
an analyzer to determine whether the calculated parallel execution time is less than an estimated sequential execution time for the main thread;
the execution time modeler further to model a second concurrent execution, the second concurrent execution to include a second concurrent speculative thread to calculate a second parallel execution time;
the analyzer/selector further to select the second speculative thread, and de-select the first speculative thread, responsive to determining that the second parallel execution time is less than the estimated sequential execution time; and
a code modifier to modify a binary file for the main thread to include a spawn instruction for the selected speculative thread.
1. A method, comprising:
determining, for a target processor having a plurality of thread units, a predicted sequential execution time for a sequence of program instructions;
selecting a first speculative thread from a set of potential candidates, wherein the set of potential candidates further comprises a set of candidate spawning pairs, where each spawning pair indicates a spawn point and a target point for one of the candidates;
modeling a first predicted multithreaded execution time for the sequence of program instructions, wherein said modeling further includes modeling an effect of concurrent execution of the first speculative thread on the sequential execution time;
responsive to determining that the first predicted multithreaded execution time is less than the predicted sequential execution time, selecting the first speculative thread;
modeling a second predicted multithreaded execution time for the sequence of program instructions, the second predicted multithreaded execution time to include an effect of a second concurrent speculative thread on the sequential execution time;
selecting the second speculative thread, and de-selecting the first speculative thread, if the second predicted multithreaded execution time is less than the first predicted multithreaded execution time; and
generating a binary file corresponding to the sequence of program instructions, where the binary file includes a spawn instruction for the selected speculative thread.
7. A system, comprising:
a memory;
a processor communicably coupled to the memory, wherein the processor comprises a plurality of thread units; and
a compiler residing in said memory, said compiler to:
select a first speculative thread from a set of potential candidates, wherein the set of potential candidates further comprises a set of candidate spawning pairs, where each spawning pair indicates a spawn point and a target point for one of the candidates;
determine, for a target processor having a plurality of thread units, a predicted sequential execution time for a sequence of program instructions based on modeling a predicted multithreaded execution time for the sequence of program instructions, wherein said modeling further includes modeling an effect of concurrent execution of the first speculative thread on the sequential execution time;
select, responsive to determining that the predicted multithreaded execution time is less than the predicted sequential execution time, the first speculative thread;
model a second predicted multithreaded execution time for the sequence of program instructions, the second predicted multithreaded execution time to include an effect of a second concurrent speculative thread on the sequential execution time;
select the second speculative thread, and de-select the first speculative thread, if the second predicted multithreaded execution time is less than the predicted multithreaded execution time; and
generate a binary file corresponding to the sequence of program instructions, where the binary file includes a spawn instruction for the selected speculative thread.
4. An article, comprising:
a machine-accessible storage medium having a plurality of machine accessible instructions; wherein, when the instructions are executed by a processor, the instructions provide for:
determining, for a target processor having a plurality of thread units, a predicted sequential execution time for a sequence of program instructions;
selecting a first speculative thread from a set of potential candidates, wherein the set of potential candidates further comprises a set of candidate spawning pairs, where each spawning pair indicates a spawn point and a target point for one of the candidates;
modeling a first predicted multithreaded execution time for the sequence of program instructions, wherein said modeling further includes modeling an effect of concurrent execution of the first speculative thread on the sequential execution time;
responsive to determining that the first predicted multithreaded execution time is less than the predicted sequential execution time, selecting the first speculative thread;
modeling a second predicted multithreaded execution time for the sequence of program instructions, the second predicted multithreaded execution time to include an effect of a second concurrent speculative thread on the sequential execution time;
selecting the second speculative thread, and de-selecting the first speculative thread, if the second predicted multithreaded execution time is less than the first predicted multithreaded execution time; and
generating a binary file corresponding to the sequence of program instructions, where the binary file includes a spawn instruction for the selected speculative thread.
2. The method of
generating the binary file to include a precomputation slice for the speculative thread.
3. The method of
modeling the expected run-time behavior of the target processor for execution of the sequence of program instructions by a main thread and said concurrent speculative thread.
5. The article of
instructions that provide for generating the binary file to include a precomputation slice for the speculative thread.
6. The article of
wherein the instructions that provide for modeling further comprise:
instructions that provide for modeling the expected run-time behavior of the target processor for execution of the sequence of program instructions by a main thread and said concurrent speculative thread.
8. The system of
the compiler is further to model a plurality of predicted multithreaded execution times, each for a distinct one of a plurality of potential speculative threads.
9. The system of
the compiler is further to utilize a greedy algorithm to determine whether to select one or more of the plurality of potential speculative threads.
10. The system of
the compiler is further to modify a binary file to include a spawn instruction for each of a plurality of selected potential speculative threads.
11. The system of
the compiler is further to modify a binary file to include a spawn instruction for the speculative thread.
12. The system of
the compiler is further to modify the binary file to include a copy of a subset of instructions from the binary file, wherein said subset is to determine an input value for the speculative thread.
13. The system of
the compiler is further to generate a spawn instruction that indicates the first of the subset of instructions as its target address.
14. The system of
the compiler is further to add a branch instruction to the subset, wherein the branch instruction indicates a target address for the speculative thread.
16. The compiler of
said execution time modeler is further to model concurrent execution, on the target processor, of the main thread, the speculative thread, and at least one additional speculative thread to calculate a third parallel execution time.
17. The compiler of
the analyzer/selector is further to select the speculative thread if the third parallel execution time is less than the parallel execution time.
18. The compiler of
the code modifier is further to modify the binary file to include a precomputation slice for the selected speculative thread.
19. The compiler of
the selected speculative thread is associated with a spawn point and a target point; and
the code modifier is further to modify the binary file such that the spawn instruction indicates that the selected speculative thread should begin execution at the target point.
20. The compiler of
the selected speculative thread is associated with a spawn point and a target point; and
the code modifier is further to modify the binary file such that the spawn instruction indicates that the selected speculative thread should begin execution at the precomputation slice.
21. The compiler of
the code modifier is further to modify the binary file with an instruction to cause the selected speculative thread to begin execution at the target point after execution of the precomputation slice.
22. The compiler of
the selected speculative thread is associated with a spawn point and a target point; and
the code modifier is further to modify the binary file to place the spawn instruction at the spawn point.
24. The method of
a. designating the second predicted multithreaded execution time as the current predicted multithreaded execution time;
b. modeling an additional predicted multithreaded execution time for the sequence of program instructions, the additional predicted multithreaded execution time to include the effect of an additional concurrent speculative thread on the sequential execution time; and
c. selecting the additional speculative thread, de-selecting a previously-selected speculative thread, and designating the additional predicted multithreaded execution time as the current predicted multithreaded execution time if the additional predicted multithreaded execution time is less than the current predicted multithreaded execution time; and
repeating b. and c. until the additional predicted multithreaded execution time is not less than the current predicted multithreaded execution time.
25. The compiler of
the execution time modeler is further to calculate a plurality of parallel execution times based on a plurality of speculative threads; and
the analyzer/selector is to perform a greedy algorithm to select one or more speculative threads based on the plurality of execution times.
1. Technical Field
The present disclosure relates generally to information processing systems and, more specifically, to embodiments of a method and apparatus for selecting spawning pairs for speculative multithreading.
2. Background Art
In order to increase performance of information processing systems, such as those that include microprocessors, both hardware and software techniques have been employed. One approach that has been employed to improve processor performance is known as “multithreading.” In multithreading, an instruction stream is split into multiple instruction streams that can be executed concurrently. In software-only multithreading approaches, such as time-multiplex multithreading or switch-on-event multithreading, the multiple instruction streams are executed in alternation on the same shared processor.
Increasingly, multithreading is supported in hardware. For instance, in one approach, referred to as simultaneous multithreading (“SMT”), a single physical processor is made to appear as multiple logical processors to operating systems and user programs. Each logical processor maintains a complete set of the architecture state, but nearly all other resources of the physical processor, such as caches, execution units, branch predictors, control logic, and buses are shared. In another approach, processors in a multi-processor system, such as a chip multiprocessor (“CMP”) system, may each act on one of the multiple threads concurrently. In the SMT and CMP multithreading approaches, threads execute concurrently and make better use of shared resources than time-multiplex multithreading or switch-on-event multithreading.
For those systems, such as CMP and SMT multithreading systems, that provide hardware support for multiple threads, several independent threads may be executed concurrently. In addition, however, such systems may also be utilized to increase the throughput for single-threaded applications. That is, one or more thread contexts may be idle during execution of a single-threaded application. Utilizing otherwise idle thread contexts to speculatively parallelize the single-threaded application can increase speed of execution and throughput for the single-threaded application.
The present invention may be understood with reference to the following drawings in which like elements are indicated by like numbers. These drawings are not intended to be limiting but are instead provided to illustrate selected embodiments of a method and apparatus for selecting spawning pairs for a speculative multithreading processor.
Described herein are selected embodiments of a method, compiler and system for selecting spawning pairs for speculative multithreading. In the following description, numerous specific details such as thread unit architectures (SMT and CMP), SpMT processor architecture, number of thread units, variable names, stages for speculative thread execution, and the like have been set forth to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the embodiments may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring the embodiments discussed herein.
As used herein, the term “thread” is intended to refer to a sequence of one or more instructions. For the multithreading system embodiments described herein, the instructions of a thread may be executed by a thread unit of a processor, concurrently with one or more other threads executed by other thread units of the processor. For at least one embodiment, the thread units may be separate logical processors in a single processor core, such as an SMT processor core. Alternatively, each thread unit may be a separate processor core in a single chip package. For CMP embodiments, such as the embodiments 300, 700 illustrated in FIGS. 3 and 7, respectively, the thread units are separate processor cores in a single chip package.
For purposes of the discussion herein, it is assumed that the target processor for the disclosed method embodiments includes a plurality of thread units, and that each thread unit is equipped with hardware to support the spawning, validating, squashing and committing of speculative threads. Such processors are referred to herein as Speculative Multithreading (SpMT) processors. As is discussed above, for at least one embodiment, the thread units for an SpMT processor are separate processor cores in a CMP embodiment (see FIG. 3).
The method embodiments for analyzing and selecting spawning pairs, discussed herein, may thus be utilized for an SpMT processor that supports speculative multithreading. For at least one speculative multithreading approach, the execution time for a single-threaded application is reduced through the execution of one or more concurrent speculative threads. One approach for speculatively spawning additional threads to improve throughput for single-threaded code is discussed in commonly-assigned U.S. patent application Ser. No. 10/356,435, “Control-Quasi-Independent-Points Guided Speculative Multithreading”. Under such approach, single-threaded code is partitioned into threads that may be executed concurrently.
For at least one embodiment, a portion of an application's code (referred to herein as a code sequence) may be parallelized through the use of the concurrent speculative threads. A speculative thread, referred to as the spawnee thread, is spawned at a spawn point in response to a spawn instruction, such as a fork instruction. The thread that performed the spawn is referred to as the spawner thread.
The spawned thread executes instructions that are ahead, in sequential program order, of the code being executed by the thread that performed the spawn. The address at which the spawned thread is to begin execution (the “target” address) may be specified in the spawn instruction. For at least one embodiment, a thread unit (such as an SMT logical processor or a CMP core), separate from the thread unit executing the spawner thread, executes the spawnee thread. One skilled in the art will recognize that the embodiments discussed herein may be utilized for any multithreading approach, including SMT, CMP multithreading or other multiprocessor multithreading, or any other known multithreading approach that may encounter idle thread contexts.
A spawnee thread is thus associated, via a spawn instruction, with a spawn point in the spawner thread as well as with a point at which the spawnee thread should begin execution. The latter is referred to as a target point. (For at least one embodiment, the target point address may be identified as the beginning address for a particular basic block.) These two points together are referred to as a “spawning pair.” A potential speculative thread is thus defined by a spawning pair, which includes a spawn point in the static program where a new thread is to be spawned and a target point further along in the program where the speculative thread will begin execution when it is spawned. It should be noted that, for certain situations, such as when a spawning pair indicates a portion of code within a loop, a spawning pair may indicate more than one instance of the speculative thread.
Well-chosen spawning pairs can generate speculative threads that provide significant performance enhancement for otherwise single-threaded code.
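By way of illustration only, a spawning pair lends itself to representation as a small record holding its two program points. The following Python sketch is not part of the disclosed embodiments, and its names are merely illustrative:

    from typing import NamedTuple

    class SpawningPair(NamedTuple):
        """A potential speculative thread, defined by two static program points."""
        spawn_point: int   # basic block whose execution fires the spawn
        target_point: int  # basic block where the speculative thread begins

    # A pair spawned at block 104 whose spawnee starts at block 106.
    pair = SpawningPair(spawn_point=104, target_point=106)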
For purposes of the example illustrated in FIG. 1, the notation 106Sp refers to the target point instruction executed by the speculative thread 142 while the main thread 101 continues execution after the spawn point 104. If such speculative thread 142 begins concurrent execution at the target point 106Sp, while the main thread 101 continues single-threaded execution of the instructions after the spawn point 104 (but before the target point 106), then execution time between the spawn point 104 and the target point 106 may be decreased to the time between the spawn point 104 and the speculative thread's 142 execution of the target point 106Sp (see 144).
That is not to say that the spawned speculative thread 142 necessarily begins execution at the target point 106Sp immediately after the speculative thread 142 has been spawned. Indeed, for at least one embodiment, certain initialization and data dependence processing may occur before the spawned speculative thread 142 begins execution at the target point 106. Such processing is represented in FIG. 1 as overhead 144.
Generally, the discussion of FIG. 2 below describes the stages of processing for a spawned speculative thread.
The time it takes to execute the initialization processing 204 for a speculative thread is referred to herein as Init time 203. Init time 203 represents the overhead to create a new thread. For at least one embodiment of the methods for modeling execution time discussed herein, the overhead 144 (FIG. 1) corresponds to this Init time 203.
After the initialization stage 204, an optional slice stage 206 may occur. During the slice stage 206, live-in input values, upon which the speculative thread 142 is anticipated to depend, may be calculated. For at least one embodiment, live-in values are computed via execution of a “precomputation slice” during the slice stage 206. When a precomputation slice is executed, live-in values for a speculative thread are pre-computed using speculative precomputation based on backward dependency analysis. The slice stage 206 is optional in the sense that every speculative thread need not necessarily include a precomputation slice.
For at least one embodiment, the precomputation slice is executed, in order to pre-compute the live-in values for the speculative thread, before the main body of the speculative thread instructions is executed. This is illustrated in FIG. 2, where the slice stage 206 precedes the body stage 208.
The precomputation slice may be a subset of instructions from one or more previous threads, which is generated for the speculative thread based on backward dependency analysis. A “previous thread” may include the main non-speculative thread, as well as any other “earlier” (according to sequential program order) speculative thread. The subset of instructions for a precomputation slice may be copied from the input code sequence (see 416, FIG. 4).
In order to facilitate execution of a precomputation slice before execution of the thread body for a speculative thread, the target address indicated by a spawn instruction may be the beginning address of a sequence of precomputation slice instructions (“the precomputation slice”). The precomputation slice may be added to binary instructions associated with the input code sequence (see 416, FIG. 4).
Live-in calculations performed during the slice stage 206 may be particularly useful if the target processor for the speculative thread does not support synchronization among threads in order to correctly handle data dependencies. Details for at least one embodiment of a target processor are discussed in further detail below in connection with FIG. 3.
Brief reference is made to FIG. 11, which illustrates the regions of code associated with a speculative thread.
A speculative thread 1112 may include two portions. Specifically, the speculative thread 1112 may include a precomputation slice 1114 and a thread body 1116. During execution of the precomputation slice 1114, the speculative thread 1112 determines one or more live-in values in the infix region 1110 before starting to execute the thread body 1116 in the postfix region 1102. The instructions executed by the speculative thread 1112 during execution of the precomputation slice 1114 correspond to a subset (referred to as a “backward slice”) of instructions from the main thread in the infix region 1110 that fall between the spawn point 1108 and the target point 1104. This subset may include instructions to calculate data values upon which instructions in the postfix region 1102 depend. For at least one embodiment of the methods described herein, the time that it takes to execute a slice is referred to as slice time 205.
During execution of the thread body 1116, the speculative thread 1112 executes code from the postfix region 1102, which may be an intact portion of the main thread's original code.
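To make the notion of a backward slice concrete, the following simplified sketch computes, for straight-line code and register dependences only, the subset of infix-region instructions on which a set of postfix live-in values transitively depends. A practical slicer would also account for memory dependences and control flow; all names here are illustrative:

    def backward_slice(infix_instrs, postfix_live_ins):
        """Select the infix-region instructions (between spawn point and target
        point) that the postfix region's live-in values transitively depend on.

        infix_instrs: list of (defs, uses) register-name sets, in program order.
        postfix_live_ins: registers read before being written in the postfix.
        """
        needed = set(postfix_live_ins)
        kept = []
        # Walk the infix region backward, keeping producers of needed values.
        for idx in range(len(infix_instrs) - 1, -1, -1):
            defs, uses = infix_instrs[idx]
            if defs & needed:
                kept.append(idx)
                needed -= defs   # these values are now produced within the slice
                needed |= uses   # ...and the producer's own inputs become needed
        kept.reverse()
        return kept              # indices of instructions to copy into the p-slice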
Returning to FIG. 2, after the optional slice stage 206, the speculative thread executes its thread body during the body stage 208. The time at which the speculative thread begins execution of its thread body is referred to herein as the start time 214.
The time from the beginning of the first basic block of the thread body to the end of the last basic block of the thread body (i.e., to the beginning of the first basic block of the next thread) corresponds to the body time 215. The body time 215 represents the time it takes to execute the speculative thread's 1112 thread body 1116.
After the speculative thread has completed execution of its thread body 1116 during the body stage 208, the thread enters a wait stage 210. The time at which the thread has completed execution of the instructions of its thread body 1116 may be referred to as the end time 216. For at least one embodiment, end time 216 may be calculated as the sum of the start time 214 and the body time 215.
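Under the stage ordering described above, the per-thread times accumulate in a straightforward manner. A minimal sketch, assuming a slice time of zero for threads without a precomputation slice (names illustrative):

    def thread_times(spawn_time, init_time, slice_time, body_time):
        """Accumulate the per-stage times for one speculative thread instance."""
        start_time = spawn_time + init_time + slice_time  # body begins executing
        end_time = start_time + body_time                 # body finishes executing
        return start_time, end_time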
The wait stage 210 represents the time that the speculative thread 1112 must wait until it becomes the least speculative thread. The wait stage 210 reflects the assumption of an execution model in which speculative threads commit their results according to sequential program order. At this point, a discussion of an example embodiment of a target SpMT processor may be helpful in understanding the processing of the wait stage 210.
Reference is now made to FIG. 3, which illustrates at least one embodiment of a target SpMT processor 300.
For embodiments of the selection methods discussed herein (such as, for example, method 400 illustrated in FIG. 4), the target processor may be an SpMT processor such as the processor 300 of FIG. 3.
For at least one embodiment, such as that illustrated in FIG. 3, each of the thread units 304a-304n is a separate processor core within a single chip package 303.
While the CMP embodiments of processor 300 discussed herein refer to only a single thread per processor core 304, it should not be assumed that the disclosures herein are limited to single-threaded processors. The techniques discussed herein may be employed in any CMP system, including those that include multiple multi-threaded processor cores in a single chip package 303.
The thread units 304a-304n may communicate with each other via an interconnection network such as on-chip interconnect 310. Such interconnect 310 may allow register communication among the threads.
The topology of the interconnect 310 may be a multi-drop bus, a point-to-point network that directly connects each thread unit 304 to every other thread unit, or the like. In other words, any interconnection approach may be utilized. For instance, one of skill in the art will recognize that, for at least one alternative embodiment, the interconnect 310 may be based on a ring topology.
According to an execution model that is assumed for at least one embodiment of method 400 (FIG. 4), speculative threads are spawned, executed and committed as follows.
For at least one embodiment of the execution model assumed for an SpMT processor, the requirements to spawn a thread are: 1) there is a free thread unit 304 available, OR 2) there is at least one running thread that is more speculative than the thread to be spawned. That is, for the second condition, there is an active thread that is further away in sequential time from the “target point” for the speculative thread that is to be spawned. In this second case, the method 400 assumes an execution model in which the most speculative thread is squashed, and its freed thread unit is assigned to the new thread that is to be spawned.
Among the running threads, at least one embodiment of the assumed execution model only allows one thread (referred to as the “main” thread) to be non-speculative. When all previously-spawned threads have either completed execution or been squashed, then the next speculative thread becomes the non-speculative main thread. Accordingly, over time the current non-speculative “main” thread may execute on different thread units.
Each thread becomes non-speculative and commits in sequential order. A thread finishes when it reaches the beginning address (the “target” address) of another thread that is active. A speculative thread must wait (see wait stage 210, FIG. 2) until it becomes the least speculative thread before its results may be committed.
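For purposes of illustration, the spawn and squash rules of this execution model may be sketched as follows. The sketch assumes that active threads are kept ordered from least speculative (the main thread) to most speculative, and that speculation order follows the sequential program order of the threads' target points; it is a simplified rendering, not the disclosed implementation:

    def try_spawn(active, new_thread, num_thread_units):
        """Attempt to spawn per the assumed execution model.

        active: running threads ordered from least to most speculative.
        Returns True if new_thread is given a thread unit.
        """
        if len(active) < num_thread_units:
            pass  # condition 1: a free thread unit is available
        elif active[-1].target_point > new_thread.target_point:
            active.pop()  # condition 2: squash the most speculative thread
        else:
            return False  # no free unit and nothing more speculative to squash
        active.append(new_thread)
        # Keep threads ordered by speculation depth, so commits stay in order.
        active.sort(key=lambda t: t.target_point)
        return True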
As is stated above, speculative threads can speed the execution of otherwise sequential software code. As each thread is executed on a thread unit 304, the thread unit 304 updates and/or reads the values of architectural registers. The thread unit's register values are not committed to the architectural state of the processor 300 until the thread being executed by the thread unit 304 becomes the non-speculative thread. Accordingly, each thread unit 304 may include a local register file 306. In addition, processor 300 may include a global register file 308, which can store the committed architectural value for each of R architectural registers. Additional details regarding at least one embodiment of a processor that provides local register files 306 for each thread unit 304 may be found in co-pending patent application U.S. patent application Ser. No. 10/896,585, filed Jul. 21, 2004, and entitled “Multi-Version Register File For Multithreading Processors With Live-In Precomputation”.
Returning to FIG. 2, the wait stage 210 ends when the speculative thread becomes the non-speculative thread.
The speculative thread may then enter the commit stage 212, and the local register values for the thread unit 304 (FIG. 3) may be committed to the global register file 308.
The commit time 218 illustrated in FIG. 2 represents the time required to perform this commit processing.
The effectiveness of a spawning pair may depend on the control flow between the spawn point and the start of the speculative thread, as well as on the control flow after the start of the speculative thread, the aggressiveness of the compiler in generating the p-slice that precomputes the speculative thread's input values (discussed in further detail below), and the number of hardware contexts available to execute speculative threads. Determination of the true execution speedup due to speculative multithreading should take the interaction between various instances of the thread into account. Thus, the determination of how effective a potential speculative thread will be can be quite complex.
For at least one embodiment, the method 400 may be performed by a compiler (such as, for example, compiler 708 illustrated in FIG. 7).
For at least one alternative embodiment, the method 400 may be embodied in any other type of software, hardware, or firmware product, including a standalone product that selects spawning points. The method 400 may be performed in connection with a sequence of program instructions to be run on a processor that supports speculative multithreading (such as, for example, SpMT processor 300 illustrated in FIG. 3).
For at least one embodiment, the method 400 may be performed by a compiler 708 to analyze, at compile time, the expected benefits of a set of spawning pairs for a given sequence of program instructions. Generally, the method 400 models execution of the program instructions as they would be performed on the target SpMT processor, taking into account the behavior induced by the specified set of spawning pairs. The method 400 determines whether speculative thread(s) indicated by a spawning pair will improve the expected execution time for the given code sequence. If so, the method 400 selects the spawning pair and inserts an appropriate spawn instruction into the code sequence to facilitate spawning of the speculative thread(s) when the modified code sequence is executed. The method 400 thus may be utilized in order to speed the execution of an otherwise single-threaded code sequence on an SpMT processor.
Rather than a simple heuristic approach, at least one embodiment of the method 400 provides a more thorough analysis of the code sequence (via profiling information) and candidate spawning pairs in order to estimate the benefit of any set of candidate spawning pairs and to select effective spawning pairs. The analysis provided by the method 400 takes into account inter-thread dependences for speculative threads indicated by a set of spawn points.
For at least one embodiment, the pairset 414 includes one or more spawning pairs, with each spawning pair representing at least one potential speculative thread. (Of course, a given spawning pair may represent several speculative threads if, for instance, it is enclosed in a loop). A given spawning pair in the pairset 414 may include the following information: SP (spawn point) and TGT (target point). The SP indicates, for the speculative thread that is indicated by the spawning pair, the static basic block of the main thread program that fires the spawning of a speculative thread when executed. The TGT indicates, for the speculative thread indicated by the spawning pair, the static basic block that represents the starting point, in the main thread's sequential binary code, of the speculative thread associated with the SP. The SP and TGT may be in any combination of basic blocks associated with the input code sequence.
In addition, each spawning pair in the pairset 414 may also include precomputation slice information for the indicated speculative thread. The precomputation slice information provided for a spawning pair may include the following information. First, an estimated probability that the speculative thread, when executing the precomputation slice, will reach the TGT point (referred to as a start slice condition), and the average length of the p-slice in such cases. Second, an estimated probability that the speculative thread, when executing the p-slice, does not reach the TGT point (referred to as a cancel slice condition), and the average length of the p-slice in such cases. (For further discussion of how such information may be utilized, see the following discussion of filter conditions in connection with block 504 of FIG. 5.)
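Taken together, each entry of the pairset 414 might be represented by a record along the following lines. This is a hypothetical Python rendering whose fields mirror the SP, TGT, and slice statistics just described:

    from dataclasses import dataclass

    @dataclass
    class CandidatePair:
        """One pairset entry: a spawning pair plus its p-slice statistics."""
        spawn_point: int            # SP: basic block that fires the spawn
        target_point: int           # TGT: basic block where the spawnee starts
        start_slice_prob: float     # probability the p-slice reaches TGT
        start_slice_len: float      # average p-slice length in that case
        cancel_slice_prob: float    # probability the p-slice does not reach TGT
        cancel_slice_len: float     # average p-slice length in that case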
Regarding the third input, the representation of a sequence of program instructions, such input may be provided in any of several manners. For at least one embodiment, the sequence of program instructions provided as an input to the method 400 may be a subset of the instructions for a program, such as a section of code (a loop, for example) or a routine. Alternatively, the sequence of instructions may be a full program. For either of such embodiments, the input is reflected as a dotted line in FIG. 4.
For at least one other embodiment, rather than receiving the actual sequence of program instructions as an input, the method 400 may receive instead a program trace 404 that corresponds to the sequence of program instructions. The program trace 404 is optional, and is not needed if the input to the method 400 is a sequence of instructions 416. Similarly, the original code sequence 416 is optional, and is not needed if the program trace 404 is received by the method 400 as an input. Accordingly, the code sequence 416 and the program trace 404, as well as the edge profile 412 (discussed below) that may be utilized to generate the program trace 404, are all denoted with dotted lines in FIG. 4.
A program trace is a sequence of basic blocks that represents the dynamic execution of the given sequence of code. For at least one embodiment, the program trace that is provided as an input to the method 400 may be the full execution trace for the selected sequence of program instructions. For other embodiments, the program trace 404 that is provided as an input to the method 400 may be a subset of the full program trace for the target instructions. For example, via sampling techniques a subset of the full program trace may be chosen as an input, with the subset being representative of the whole program trace.
For at least one embodiment, the trace 404 may be generated for the code sequence 416 for the input instructions based upon profile information that has been generated for the binary sequence 416 during a previous pass of a compiler or other profile-information generator. For at least one embodiment, such profile information includes an edge profile 412. The edge profile 412 may include the probability of going from any basic block of the input program sequence to each of its successors. The edge profile 412 may also include the execution count for each basic block of the input program sequence.
For at least one embodiment, it is assumed that the length (number of instructions) for each basic block in the trace 404 is known, as well as the accumulated length for each basic block represented in the trace 404. It is further assumed that each instruction requires the same fixed amount of time for execution; therefore the length and accumulated length values for the basic blocks in the trace 404 may be understood to represent time values. Accordingly, the accumulated length of the last basic block in the trace may represent the total executed number of sequential instructions of the input program sequence. Of course, for other embodiments, the time needed to execute each basic block, as well as accumulated time, may be determined by other methods, such as profiling.
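Given the fixed per-instruction cost assumed above, basic-block lengths can stand in directly for time. The following sketch derives per-block start and end times from a trace, under an assumed (block id, length) representation; the names are illustrative:

    def accumulate_trace(trace):
        """Convert per-block instruction counts into cumulative time values,
        assuming each instruction takes one fixed time unit.

        trace: list of (block_id, num_instructions) in dynamic execution order.
        Returns (timeline, total), where timeline holds
        (block_id, start_time, end_time) tuples.
        """
        timeline, now = [], 0
        for block_id, length in trace:
            timeline.append((block_id, now, now + length))
            now += length
        return timeline, now  # `now` is the total sequential execution time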
As is known in the art, a greedy algorithm takes the best immediate, or local, solution while finding an answer. Greedy algorithms find the overall, or globally, optimal solution for some optimization problems, but may find less-than-optimal solutions for some instances of other problems. At block 406, utilization of a greedy approach does not necessarily provide the optimal solution. For each iteration, block 406 selects that spawning pair (if any) that, when added to the existing “selected” pairs, yields the greatest additional benefit. Such decision is considered to be local because the approach does not necessarily consider all possible combinations of remaining pairs in the pairset. Further details regarding greedy algorithms are presented in G. Brassard & P. Bratley, Fundamentals of Algorithmics (Prentice Hall, 1996), Chapter 6.
As a result of the operation of block 406, the method may select one or more spawning pairs that are predicted to speed the execution of the input sequence of program instructions. Processing then proceeds from block 406 to block 408.
At block 408, processing is performed to include the spawning pairs (if any) that were selected at block 406 into a binary file 420. Such processing 408 includes placing into the binary file 420 for the input sequence of program instructions a spawn instruction at the spawn point for each spawning pair that was selected at block 406. Such spawn instructions may indicate that the spawnee thread should begin execution at the target point indicated by the respective spawning pair, such that, when executed, a speculative thread will be spawned at the spawn point and begin execution at the target point.
Alternatively, for one or more of the selected spawning pairs (if any), input values (“live-ins”) may need to be computed for the speculative thread to be spawned. This is particularly true for embodiments of a target SpMT processor that does not synchronize in order to correctly handle data dependences among threads. In such case, the spawn instruction that is inserted into the binary file 420 at block 408 may indicate that the spawnee thread should begin execution at a precomputation slice associated with the speculative thread to be spawned.
For at least one embodiment, such precomputation slice may be appended to the end of the binary file 420 at block 408. Alternatively, the precomputation slice may be placed at the end of the routine indicated by the spawn point of the selected spawning pair. At the end of the precomputation slice, the last instruction of the slice may effect a branch to the target point indicated in the associated spawning pair. In this manner, live-in values for the speculative spawnee thread may be generated before the spawnee thread begins execution at the specified target point. Further discussion of the modification of a binary file to include a precomputation slice is set forth in co-pending patent application U.S. patent application Ser. No. 10/245,548, filed Sep. 17, 2002, and entitled “Post-Pass Binary Adaptation For Software-Based Speculative Precomputation”.
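A rough sketch of this binary modification appears below, with instructions abstracted as tuples. For simplicity, the sketch ignores the address shifts caused by inserting the spawn instruction, which a real code modifier would of course patch:

    def attach_slice_and_spawn(binary, spawn_point, target_point, slice_instrs):
        """Insert a spawn instruction and append the associated p-slice.

        binary: mutable list of abstract instructions, indexed by address.
        Returns the address at which the appended p-slice begins.
        """
        binary.insert(spawn_point, ("spawn", None))  # placeholder at spawn point
        slice_addr = len(binary)
        binary.extend(slice_instrs)                  # p-slice appended to the end
        binary.append(("branch", target_point))      # last slice instr: go to TGT
        binary[spawn_point] = ("spawn", slice_addr)  # spawnee starts at the slice
        return slice_addr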
From block 408, processing for the method 400 ends at block 422.
Generally, FIG. 5 illustrates further detail for at least one embodiment of the spawning pair selection processing of block 406 (FIG. 4).
During execution of the method 406, one or more candidate spawning pairs from the input pairset 414 may be selected. Accordingly, during initialization 504 the set of “selected” spawning pairs is initialized to a null set.
From the initialization block 504, processing proceeds to block 506. At block 506, a modeling algorithm is utilized to determine if the effect of any of the candidate spawning pairs is anticipated to decrease the execution time for the input code sequence. In order to make this determination, a modeling algorithm is used to model the execution time of the input code sequence, taking into account the effect of concurrent execution of the speculative thread identified by one of the spawning pairs. A greedy algorithm is utilized to temporarily select the spawning pair if the modeling algorithm indicates that the spawning pair is expected to decrease the execution time. Such processing is discussed in further detail below in connection with
Generally, at block 506 the method 406 will continue to evaluate other candidates in the pairset, and will select a new candidate, and de-select the previously-selected candidate, if the new candidate is modeled to provide an even smaller execution time. Such processing continues at block 506 until all candidate spawning pairs in the pairset have been considered.
If, after all candidates in the pairset have been considered, no spawning pair has been temporarily selected at block 506 (i.e., “Select” indicator is null), then processing ends at block 514. In such case, none of the spawning pairs in the pairset identify a speculative thread that is expected to provide a performance benefit, in terms of reduced execution time, for the input code sequence.
Otherwise, if a candidate spawning pair has been temporarily selected at block 506 (i.e., “Select” indicator is not null), processing proceeds to block 508. At block 508, the temporarily selected spawning pair is added to the set of selected spawning pairs (which was initialized to a null set at the initialization block 504). Then, at block 510 the selected spawning pair is deleted from the pairset so that the pairset now includes a smaller set of spawning pairs to be considered in the next pass of block 506. It is determined at block 512 whether additional candidates remain in the pairset. If not, processing ends at block 514. Otherwise, processing loops back to block 506. In this manner, the processing of the method 406 is thus repeated until either the pairset is empty (see block 512), or none of the spawning pairs is predicted to provide any additional benefit (see block 506).
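One way to render this outer selection loop in code is sketched below. It assumes a modeling oracle, model_exec_time, of the kind described for block 506, and a helper, best_candidate, corresponding to the per-candidate pass of blocks 604 through 612 that is sketched further below; both names are hypothetical:

    def select_spawning_pairs(pairset, model_exec_time):
        """Greedy outer loop: move the best remaining candidate into the
        selected set until no candidate improves the modeled execution time."""
        selected = []                          # initialized to the null set
        t_exec = model_exec_time(selected)     # baseline (sequential) time
        while pairset:
            best, t_exec = best_candidate(pairset, selected, t_exec,
                                          model_exec_time)
            if best is None:
                break                          # no remaining pair is beneficial
            selected.append(best)              # add to the "selected" set
            pairset.remove(best)               # delete from the pairset
        return selected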
Regarding block 504, an optional optimization can be performed during initialization 504 for some embodiments in order to streamline the analysis of spawning pairs in the pairset. For such embodiments, only those spawning pairs that are expected to provide at least a minimal benefit are included as part of the pairset to be considered at block 506. Any spawning pairs that are not expected to provide at least a minimum benefit (in terms of reduced execution time for the input code sequence) are filtered out and are not considered during the block 506 processing described above. Such optimization is particularly useful for embodiments that do not restrict spawning pairs to high-level structures. Without such restriction, the number of possible spawning pairs can be quite large. Accordingly, compile time (for embodiments in which the method 400 is performed by a compiler) may be unacceptably long. Filtering may reduce the number of spawning pairs that are considered during the method 400, and may concomitantly reduce compilation time.
Various filter conditions may be imposed during initialization 504 in order to determine if a spawning pair should be excluded from consideration at block 506. Such filter conditions may include any or all the following:
Block 604 is the first block in a group of blocks 604, 606, 608, 610, 612 that implement a greedy selection loop. At each iteration of the loop, a candidate spawning pair in the pairset is considered and its effect on the execution time of the input code sequence is analyzed.
At block 604, the estimated execution time for the input code sequence, t_exec_tmp, is calculated for the candidate spawning pair, “cand”. The estimated execution time may be determined at block 604 by modeling the run-time behavior of the target processor as it executes the input code sequence, taking into account the multithreading effects of already-selected spawning pair(s), if any, along with the effects of the candidate spawning pair. Such expected execution time may be modeled for a given target SpMT processor. An example of at least one embodiment of a modeling method that can be utilized for this purpose is disclosed in the copending patent application bearing U.S. patent application Ser. No. 10/933,076, entitled “Analyzer For Spawning Pairs In Speculative Multithreaded Processor,” filed Sep. 1, 2004. An execution time modeler (such as, for example, 855 of FIG. 8) may perform such modeling.
From block 604, processing proceeds to block 606. At block 606, it is determined whether the candidate spawning pair's estimated execution time, t_exec_tmp, indicates a performance benefit over the global execution time that takes into account only the spawning pairs (if any) in the “selected” set. Such evaluation may be made, for at least one embodiment, by determining whether t_exec_tmp is less than the global execution time, t_exec. If so, then processing proceeds to block 608. Otherwise, processing proceeds to block 610.
At block 608, the candidate spawning pair is temporarily selected, such that Select=cand. Processing then proceeds to block 610. At block 610, it is determined whether any additional candidates in the pairset remain to be considered. If so, processing proceeds to block 612. Otherwise, processing ends at block 614.
At block 612, “cand” is updated to reflect the next candidate spawning pair in the pairset. Processing then loops back to block 604.
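Continuing the hypothetical names introduced above, the per-candidate pass of blocks 604 through 612 might be rendered as:

    def best_candidate(pairset, selected, t_exec, model_exec_time):
        """One pass over the pairset: model each candidate on top of the
        already-selected pairs and keep the one with the best improvement."""
        select = None                                        # "Select" starts null
        for cand in pairset:                                 # block 612: next cand
            t_exec_tmp = model_exec_time(selected + [cand])  # block 604
            if t_exec_tmp < t_exec:                          # block 606
                select, t_exec = cand, t_exec_tmp            # block 608
        return select, t_exec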
In general, the discussion of FIGS. 4 through 6 above describes embodiments of a method for analyzing and selecting spawning pairs based on modeled execution times.
The discussion above further indicates that the method 400 follows a greedy approach to selecting spawning pairs from the candidate pairset. Spawning pairs are selected from the pairset (block 610), added to a “selected” set of spawning pairs (block 508), and deleted from the pairset (block 510), until negligible additional benefit is expected from the remaining spawning pairs in the pairset or until the pairset is empty, whichever occurs first.
For at least one embodiment, the greedy algorithm is performed such that, among all candidate pairs remaining in the pairset, the new pair chosen (if any) to be included in the “selected” set is the one that is expected to provide (based on the modeling method described above) the best improvement in execution time among all the spawning pairs in the candidate pairset. Again, such benefit is computed using a model that estimates the execution behavior of a set of pairs for a given number of thread units.
The main loop for the method 400 exits when the expected execution time, taking any remaining candidate spawning pair into account, is not expected to provide significant improvement over the estimated execution time that has been determined for the input code sequence, taking into account the effect of all other spawning pairs (if any) already in the “selected” set.
Embodiments of the methods discussed herein thus provide for determining the effect of a set of spawning pairs on the execution time for a sequence of program instructions for a particular multithreading processor and, further, for modifying an input program to include those speculative threads that are predicted to decrease execution time of the input code sequence. The input set of spawning pairs indicates potential concurrent speculative threads that may be spawned during execution of the sequence of program instructions and may thus reduce total execution time. The total execution time is determined by modeling the effects of speculative threads, which are indicated by the spawning pairs, on execution time for the sequence of program instructions.
Embodiments of the methods 400, 406, 600 disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs executing on programmable systems comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
The programs may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The programs may also be implemented in assembly or machine language, if desired. In fact, the method described herein is not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
The programs may be stored on a storage media or device (e.g., hard disk drive, floppy disk drive, read only memory (ROM), CD-ROM device, flash memory device, digital versatile disk (DVD), or other storage device) readable by a general or special purpose programmable processing system. The instructions, accessible to a processor in a processing system, provide for configuring and operating the processing system when the storage media or device is read by the processing system to perform the procedures described herein. Embodiments of the invention may also be considered to be implemented as a machine-readable storage medium, configured for use with a processing system, where the storage medium so configured causes the processing system to operate in a specific and predefined manner to perform the functions described herein.
An example of one such type of processing system is shown in FIG. 7 as system 700.
Processor 704 includes N thread units 304a-304n, where each thread unit 304 may be (but is not required to be) associated with a separate core. For purposes of this disclosure, N may be any integer > 1, including 2, 4 and 8. For at least one embodiment, the processor cores 304a-304n may share the memory system 750. The memory system 750 may include an off-chip memory 702 as well as a memory controller function provided by an off-chip interconnect 725. In addition, the memory system may include one or more caches (not shown).
Memory 702 may store instructions 740 and data 741 for controlling the operation of the processor 704. For example, instructions 740 may include a compiler program 708 that, when executed, causes the processor 704 to compile a program (not shown) that resides in the memory system 702. Memory 702 holds the program to be compiled, intermediate forms of the program, and a resulting compiled program. For at least one embodiment, the compiler program 708 includes instructions to model runtime execution of a sequence of program instructions, given a set of spawning pairs, for a particular multithreaded processor, and to select spawning pairs.
Memory 702 is intended as a generalized representation of memory and may include a variety of forms of memory, such as a hard drive, CD-ROM, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM) and related circuitry. Memory 702 may store instructions 740 and/or data 741 represented by data signals that may be executed by processor 704. The instructions 740 and/or data 741 may include code for performing any or all of the techniques discussed herein. For example, at least one embodiment of a method for determining an execution time is related to the use of the compiler 708 in system 700 to cause the processor 704 to model execution time, given one or more spawning pairs, as described above. The compiler may thus, given the spawn instructions indicated by the spawning pairs, model a multithreaded execution time for the given sequence of program instructions and may select one or more of the spawning pairs.
Turning to FIG. 8, the compiler 708 may include an analyzer/selector 850 and an execution time modeler 855.
The execution time modeler 855 may, when executed by the processor 704, perform at least one embodiment of a method for modeling execution time of a code sequence, taking into account the effect of one or more speculative threads. For at least one embodiment, the analyzer/selector 850 invokes the execution time modeler 855 to perform its processing. Again, further details for at least one embodiment of an example of a method that may be performed by the execution time modeler 855 to model expected execution time may be found in the copending patent application bearing U.S. patent application Ser. No. 10/933,076, entitled “Analyzer For Spawning Pairs In Speculative Multithreaded Processor,” filed Sep. 1, 2004.
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications can be made without departing from the present invention in its broader aspects. For example, the methods described herein may be utilized to generate a modified binary file when the input code is presented as an original binary file. For such embodiments, the methods described herein may be performed by a binary optimizer. The appended claims are to encompass within their scope all such changes and modifications that fall within the true scope of the present invention.
Garcia, Carlos, Gonzalez, Antonio, Sanchez, Jesus, Marcuello, Pedro, Madriles, Carlos, Rundberg, Peter