A conditional move instruction implemented in a processor by forming and processing two decoded instructions, and applications thereof. In an embodiment, the conditional move instruction specifies a first source operand, a second source operand, and a third operand that is both a source and a destination. If the value of the second operand is not equal to a specified value, the first decoded instruction moves the third operand to a completion buffer register. If the value of the second operand is equal to the specified value, the second decoded instruction moves the value of the first operand to the completion buffer register. When the decoded instruction that performed the move graduates, the contents of the completion buffer register are transferred to a register file register specified by the third operand.

Patent: 8078846
Priority: Sep 29 2006
Filed: Dec 18 2006
Issued: Dec 13 2011
Expiry: Feb 26 2027
Extension: 70 days
Entity: Large
Status: Expired (failure to pay maintenance fees)
14. A processor that implements a conditional move instruction, comprising:
an instruction decode and dispatch unit configured to receive the conditional move instruction that specifies a first source operand, a second source operand, and a third source operand that is both a source and a destination, and to output a first decoded instruction and a second decoded instruction;
a hardware execution unit, coupled to the instruction decode and dispatch unit, configured to execute the first decoded instruction and the second decoded instruction; and
a graduation unit configured to graduate either the first or the second decoded instruction,
wherein, if a first condition is not satisfied, the first decoded instruction is invalidated and the second decoded instruction is graduated such that the second decoded instruction causes the first source operand to be moved to an allocated completion buffer register, and
wherein, if the first condition is satisfied, the second decoded instruction is invalidated and the first decoded instruction is graduated such that the first decoded instruction causes the third source operand to be moved to the allocated completion buffer register.
8. A method for implementing a conditional move instruction, comprising:
receiving the conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination;
forming a first decoded instruction comprising the second and third operands from the conditional move instruction and a second decoded instruction comprising the first and second operands from the conditional move instruction;
allocating a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions;
executing the decoded instructions; and
graduating one of the decoded instructions,
wherein if a first condition is satisfied, the first decoded instruction is invalidated and the second decoded instruction is graduated such that the second decoded instruction causes the first operand to be moved to the allocated completion buffer register, and
wherein if the first condition is not satisfied, the second decoded instruction is invalidated and the first decoded instruction is graduated such that the first decoded instruction causes the third operand to be moved to the allocated completion buffer register.
5. A method for implementing a conditional move in a processor, comprising:
forming a first decoded instruction and a second decoded instruction from a conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination, wherein the first decoded instruction comprising the second and third operands from the conditional move instruction causes the processor to move the third operand to a completion buffer register if a first condition is satisfied, and the second decoded instruction comprising the first and second operands from the conditional move instruction causes the processor to move the first operand to the completion buffer register if the first condition is not satisfied;
allocating a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions;
executing the first decoded instruction and the second decoded instruction;
graduating the second decoded instruction and invalidating the first decoded instruction if the first condition is not satisfied; and
graduating the first decoded instruction and invalidating the second decoded instruction if the first condition is satisfied.
12. A method for implementing a conditional move in a processor, comprising:
forming a first decoded instruction and a second decoded instruction from a conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination, without stalling a decode stage of the processor, wherein the first decoded instruction comprising the second and third operands from the conditional move instruction causes the processor to move the third operand to a completion buffer register if a first condition is satisfied, and the second decoded instruction comprising the first and second operands from the conditional move instruction causes the processor to move the first operand to the completion buffer register if the first condition is not satisfied;
allocating a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions;
executing the first decoded instruction and the second decoded instruction; and
graduating one of the decoded instructions,
wherein, if the first condition is satisfied, the second decoded instruction is invalidated and the first decoded instruction is graduated, and
wherein, if the first condition is not satisfied, the first decoded instruction is invalidated and the second decoded instruction is graduated.
13. A processor that implements a conditional move instruction, comprising:
an instruction decode and dispatch unit configured to receive the conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination and to output a first decoded instruction comprising the second and third operands from the conditional move instruction and a second decoded instruction comprising the first and second operands from the conditional move instruction,
wherein the first decoded instruction causes the processor to move the third operand to a completion buffer register if a first condition is satisfied, and the second decoded instruction causes the processor to move the first operand to the completion buffer register if the first condition is not satisfied,
a hardware execution unit, coupled to the instruction decode and dispatch unit, configured to execute the first decoded instruction and the second decoded instruction; and
a graduation unit configured to graduate either the first or the second decoded instruction,
wherein, if the first condition is not satisfied, the second decoded instruction is graduated and the first decoded instruction is invalidated, and
wherein, if the first condition is satisfied, the first decoded instruction is graduated and the second decoded instruction is invalidated.
7. A method for implementing in a processor a conditional move instruction that specifies a first operand, a second operand, and a third operand, the method comprising:
forming a first decoded instruction comprising the second and third operands from the conditional move instruction and a second decoded instruction comprising the first and second operands from the conditional move instruction, wherein the first decoded instruction causes the processor to move the third operand to a completion buffer register if the second operand is not equal to a predetermined value, and the second decoded instruction causes the processor to move the first operand to the completion buffer register if the second operand is equal to the predetermined value;
allocating a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions;
if the second operand is equal to the predetermined value, graduating the second decoded instruction, thereby altering an architectural state of the processor according to the second decoded instruction, and invalidating the first decoded instruction; and
if the second operand is not equal to the predetermined value, graduating the first decoded instruction, thereby altering the architectural state of the processor according to the first decoded instruction, and invalidating the second decoded instruction.
11. A processor that implements a conditional move instruction, comprising:
an instruction decode and dispatch unit configured to receive the conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination, to output, without stalling the instruction decode and dispatch unit, a first decoded instruction comprising the second and third operands from the conditional move instruction and a second decoded instruction comprising the first and second operands from the conditional move instruction, and to allocate a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions,
wherein the first decoded instruction causes the processor to move the third operand to the completion buffer register if a first condition is satisfied, and the second decoded instruction causes the processor to move the first operand to the completion buffer register if the first condition is not satisfied,
a hardware execution unit, coupled to the instruction decode and dispatch unit, configured to execute the first decoded instruction and the second decoded instruction; and
a graduation unit configured to graduate either the first or the second decoded instruction,
wherein, if the first condition is not satisfied, the second decoded instruction is graduated and the first decoded instruction is invalidated, and
wherein, if the first condition is satisfied, the first decoded instruction is graduated and the second decoded instruction is invalidated.
1. A processor that implements a conditional move instruction, comprising:
an instruction decode and dispatch unit configured to receive the conditional move instruction comprising a first, second, and third operand, the third operand being both a source and a destination, to output a first decoded instruction comprising the second and third operands from the conditional move instruction and a second decoded instruction comprising the first and second operands from the conditional move instruction, and to allocate a register in a completion buffer as a destination register to temporarily store any result of the first and second decoded instructions,
wherein the first decoded instruction causes the processor to move the third operand to the completion buffer register if a first condition is satisfied, and the second decoded instruction causes the processor to move the first operand to the completion buffer register if the first condition is not satisfied,
wherein the first condition is evaluated by comparing the second operand from the conditional move instruction with a predetermined value,
a hardware execution unit, coupled to the instruction decode and dispatch unit, configured to execute the first decoded instruction and the second decoded instruction; and
a graduation unit configured to graduate either the first or the second decoded instruction,
wherein, if the first condition is not satisfied, the second decoded instruction is graduated and the first decoded instruction is invalidated, and
wherein, if the first condition is satisfied, the first decoded instruction is graduated and the second decoded instruction is invalidated.
2. The processor of claim 1, wherein the graduation unit is configured to, during graduation, transfer the first or third operand stored in the completion buffer register to a register specified by the third operand of a register file of the processor.
3. The processor of claim 1, wherein the hardware execution unit executes decoded instructions out-of-program-order.
4. The processor of claim 1, wherein the hardware execution unit is a load/store unit.
6. The method of claim 5, further comprising:
transferring the first or third operand from the completion buffer to a register specified by the third operand of a register file of the processor.
9. The method of claim 8, wherein the conditional move instruction specifies a plurality of operands and executing one of the decoded instructions comprises:
executing one of the decoded instructions based on one of the operands.
10. The method of claim 8, wherein the conditional move instruction specifies a plurality of operands and executing one of the decoded instructions comprises:
moving an identified one of the operands to a completion buffer register.

This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 60/853,314, filed on Sep. 29, 2006, titled “Twice Issued Conditional Move Instruction, And Applications Thereof”.

The present invention is generally directed to processors.

Reduced Instruction Set Computer (RISC) processors are well known. RISC processors have instructions that facilitate the use of a technique known as pipelining. Pipelining enables a processor to work on different steps of an instruction at the same time and thereby take advantage of parallelism that exists among the steps needed to execute an instruction. As a result, a processor can execute more instructions in a shorter period of time. Additionally, modern Complex Instruction Set Computer (CISC) processors often translate their instructions into micro-operations (i.e., instructions similar to those of a RISC processor) prior to execution to facilitate pipelining.

Instruction set architectures (ISA) for RISC processors limit the number of operands that can be operated upon by a single instruction. One way to increase the number of operands that can be operated upon by a single instruction is to add additional ports to a register file of the processor. Such an approach, however, is expensive both in terms of area and timing. An alternative approach is to stall the pipeline while an instruction is implemented. This approach is also expensive in terms of timing.

What is needed are techniques and apparatuses for implementing instructions that overcome the limitations noted above.

The present invention provides apparatuses, systems, and methods for implementing a conditional move instruction, and applications thereof. In an embodiment, a first decoded instruction and a second decoded instruction are formed from a conditional move instruction that specifies a first source operand, a second source operand, and a third operand that is both a source and a destination. If the value of the second operand is not equal to a specified value, the first decoded instruction moves the third operand to a completion buffer register. If the value of the second operand is equal to the specified value, the second decoded instruction moves the value of the first operand to the completion buffer register. When the decoded instruction that performed the move graduates, the contents of the completion buffer register are transferred to a register file register specified by the third operand.
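
For illustration only, the architectural effect of such a conditional move can be sketched in C as follows (the zero test and the register roles follow the example developed later; the function models the instruction's result, not the hardware that implements it):

#include <stdint.h>

/* Architectural effect of the conditional move: if the second
 * (condition) operand equals the specified value (zero in this
 * sketch), the first source operand replaces the destination;
 * otherwise the destination keeps its previous value.           */
uint32_t conditional_move(uint32_t rs, uint32_t rt, uint32_t rd)
{
    return (rt == 0) ? rs : rd;    /* RD = (RT == 0) ? RS : RD */
}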

Further features and advantages of the invention, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.

The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the relevant art(s) to make and use the invention.

FIG. 1A is a diagram that illustrates a processor according to an embodiment of the present invention.

FIG. 1B is a diagram that further illustrates the processor of FIG. 1A.

FIG. 2 is a diagram that illustrates an example manner in which a conditional move instruction is implemented in accordance with an embodiment of the present invention.

FIG. 3 is a diagram that illustrates an example system according to an embodiment of the present invention.

The features and advantages of the present invention will become more apparent from the detailed description set forth below when read in conjunction with the drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

The present invention provides apparatuses, systems, and methods for implementing a conditional move instruction, and applications thereof. In the specification, references to “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

FIG. 1A is a diagram of a processor 100 according to an embodiment of the present invention. Processor 100 is capable of implementing a conditional move instruction. Processor 100 preferably implements a load-store, reduced instruction set computer (RISC) architecture. The various components and features of processor 100 illustrated in FIG. 1A are described below.

While processor 100 is described herein as including several separate components, many of these components are optional components that are not present in each embodiment of the present invention, or components that may be combined, for example, so that the functionality of two components resides within a single component. Thus, the individual components shown for example in FIG. 1A are illustrative and not intended to limit the present invention.

As shown in FIG. 1A, processor 100 includes one or more execution units 102. In an embodiment, execution units 102 include an integer execution unit (IEU) 118 and a load/store unit (LSU) 108. IEU 118 handles arithmetic operations, such as logical operations, shift operations, add operations, and/or subtract operations. LSU 108 handles load/store operations. In a further embodiment, execution units 102 also include, for example, a multiply/divide unit (MDU) 120 to perform multiply and divide operations.

In an embodiment, execution units 102 interact with data stored in registers of a register file (RF) 130 and/or data stored in registers of one or more completion buffers (CB) 128. A multiplexer 124 is used to select data from RF 130 or CB 128. In an embodiment, a first completion buffer 128 includes 64-bit registers for storing data from integer execution unit 118 and multiply/divide unit 120. A second completion buffer 128 includes 32-bit registers for storing data from load/store unit 108. Optionally, one or more additional register file sets can be included to minimize context switching overhead, for example, during interrupt and/or exception processing.

Execution units 102 interface with an instruction dispatch unit (IDU) 106, a memory management unit (MMU) 110, and data cache 114.

Instruction fetch unit (IFU) 104 is responsible for providing instructions to instruction dispatch unit 106. In one embodiment, instruction fetch unit 104 includes control logic for instruction cache 112, an optional recoder for recoding compressed format instructions, an instruction buffer to decouple operation of instruction fetch unit 104 from execution units 102, and an interface to a scratch pad (not shown). In an embodiment, instruction fetch unit 104 performs dynamic branch prediction. Instruction fetch unit 104 interfaces with instruction dispatch unit 106, memory management unit 110, instruction cache 112, and bus interface unit (BIU) 116.

Instruction dispatch unit 106 is responsible for decoding instructions received from instruction fetch unit 104 and dispatching them to execution units 102 when their operands and required resources are available. In an embodiment, instruction dispatch unit 106 may receive up to two instructions in order from instruction fetch unit 104 per cycle. The instructions are assigned an instruction identification value and a completion buffer identification value (CBID). The CBID identifies a buffer location or entry in completion buffer 128 that can be used to hold results temporarily before they are committed to the architectural state of processor 100 by writing the results to register file 130.

Instruction dispatch unit 106 also performs operand renaming to facilitate forwarding of data. Renamed instructions are written into a decoded instruction buffer 113 (see FIG. 1B). The oldest instructions stored in the decoded instruction buffer 113 that have all their operands ready and meet all resource requirements are dispatched to an appropriate execution unit for execution. Instructions may be dispatched out-of-program-order to execution units 102. Dispatched instructions do not stall in the execution pipe, and they write their results into completion buffer 128.
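
The dispatch policy just described can be sketched as follows; the decoded-instruction record and its fields are hypothetical and stand in for the entries of decoded instruction buffer 113:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical decoded-instruction buffer entry. */
struct dib_entry {
    uint32_t id;              /* age tag: a lower value means an older instruction */
    bool     valid;           /* entry holds a renamed, undispatched instruction   */
    bool     operands_ready;  /* all source operands available                     */
    bool     resources_ready; /* required execution resources available            */
};

/* Select the oldest ready entry for dispatch; return its index, or -1 if none. */
int select_for_dispatch(const struct dib_entry *buf, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!buf[i].valid || !buf[i].operands_ready || !buf[i].resources_ready)
            continue;
        if (best < 0 || buf[i].id < buf[best].id)
            best = (int)i;
    }
    return best;
}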

In an embodiment, instruction dispatch unit 106 also keeps track of the progress of an instruction through pipeline stages, for example, within execution units 102 and updates the availability of operands in a rename map and in all dependent instructions that are in the decoded instruction buffer. Instruction dispatch unit 106 also writes the instruction identification, CBID, and related information values into structures in graduation unit 126.

Memory management unit 110 translates virtual addresses to physical addresses for memory access. In one embodiment, memory management unit 110 includes a translation lookaside buffer (TLB) and may include a separate instruction TLB and a separate data TLB. Memory management unit 110 interfaces with instruction fetch unit 104 and load/store unit 108.

Instruction cache 112 is an on-chip memory array organized as a multi-way set associative cache such as, for example, a 2-way set associative cache or a 4-way set associative cache. Instruction cache 112 is preferably virtually indexed and physically tagged, thereby allowing virtual-to-physical address translations to occur in parallel with cache accesses. In one embodiment, the tags include a valid bit and optional parity bits in addition to physical address bits. Instruction cache 112 interfaces with instruction fetch unit 104.
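
The advantage of a virtually indexed, physically tagged cache is that the set index is taken from virtual address bits, which are available immediately, while the tag compare uses the translated physical address, so the TLB lookup and the array access can overlap. A minimal sketch, with line size, set count, and associativity chosen arbitrarily rather than taken from instruction cache 112:

#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 32u      /* illustrative geometry only */
#define NUM_SETS   128u
#define NUM_WAYS   4u

struct cache_line {
    bool     valid;
    uint32_t phys_tag;      /* physical address bits above the index field */
};

static struct cache_line icache[NUM_SETS][NUM_WAYS];

/* Index with virtual address bits, then compare tags against the
 * translated physical address.                                     */
bool icache_hit(uint32_t vaddr, uint32_t paddr)
{
    uint32_t set = (vaddr / LINE_BYTES) % NUM_SETS;
    uint32_t tag = paddr / (LINE_BYTES * NUM_SETS);
    for (uint32_t way = 0; way < NUM_WAYS; way++)
        if (icache[set][way].valid && icache[set][way].phys_tag == tag)
            return true;
    return false;
}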

Data cache 114 is also an on-chip memory array organized as a multi-way set associative cache such as, for example, a 2-way set associative cache or a 4-way set associative cache. Data cache 114 is preferably virtually indexed and physically tagged, thereby allowing virtual-to-physical address translations to occur in parallel with cache accesses. Data cache 114 interfaces with load/store unit 108.

Bus interface unit 116 controls external interface signals for processor 100. In one embodiment, bus interface unit 116 includes a collapsing write buffer used to merge write-through transactions and gather writes from uncached stores.

Load/store unit 108 is responsible for handling load/store instructions to read/write data from data caches and/or memory. Load/store unit 108 is capable of handling loads and stores issued out-of-program-order.

Integer execution unit 118 executes integer instructions. It is capable of handling instructions issued out-of-program order. Integer execution unit 118 includes an arithmetic logic unit for performing arithmetic operations such as add, subtract, shift and logic operations. Integer execution unit 118 interfaces with and operates on data stored in completion buffer 128 and register file 130.

Multiply/divide unit 120 contains a pipeline for integer multiply and divide operations. This pipeline preferably operates in parallel with the integer execution pipeline in integer execution unit 118 and has a separate write port into completion buffer 128. In an embodiment, multiply/divide unit 120 looks ahead and informs instruction dispatch unit 106 that a divide operation is about to complete so that there are no bubbles in the multiply/divide unit pipeline.

Graduation unit 126 ensures instructions graduate and change the architectural state of processor 100 in-program order. Graduation unit 126 also releases buffers and resources used by instructions prior to their graduation.

FIG. 1B further illustrates the operation of processor 100. As illustrated in FIG. 1B, processor 100 performs four basic functions: instruction fetch; instruction decode and dispatch; instruction execution; and instruction graduation. These four basic functions are illustrative and not intended to limit the present invention.

Instruction fetch (represented in FIG. 1A by instruction fetch unit 104) begins when a PC selector 101 selects amongst a variety of program counter values and determines a value that is used to fetch an instruction from instruction cache 112. In one embodiment, the program counter value selected is the program counter value of a new program thread, the next sequential program counter value for an existing program thread, or a redirect program counter value associated with a branch instruction or a jump instruction. After each instruction is fetched, PC selector 101 selects a new value for the next instruction to be fetched.
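
The selection can be modeled as a simple priority choice; the ordering among the candidate program counter values below, and the 32-bit instruction step, are assumptions made for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Choose the next fetch address.  The priority (redirect first, then a
 * new thread's starting PC, then the next sequential value) is an
 * illustrative assumption.                                            */
uint32_t pc_select(uint32_t current_pc,
                   bool redirect_valid,   uint32_t redirect_pc,
                   bool new_thread_valid, uint32_t new_thread_pc)
{
    if (redirect_valid)
        return redirect_pc;        /* branch or jump target            */
    if (new_thread_valid)
        return new_thread_pc;      /* start of a new program thread    */
    return current_pc + 4;         /* next sequential instruction      */
}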

During instruction fetch, tags associated with an instruction to be fetched from instruction cache 112 are checked. In one embodiment, the tags contain precode bits for each instruction indicating instruction type. If these precode bits indicate that an instruction is a control transfer instruction, a branch history table is accessed and used to determine whether the control transfer instruction is likely to branch or likely not to branch.

In one embodiment, any compressed-format instructions that are fetched are recoded by an optional instruction recoder 103 into a format that can be decoded and executed by processor 100. For example, in one embodiment in which processor 100 implements both 16-bit instructions and 32-bit instructions, any 16-bit compressed-format instructions are recoded by instruction recoder 103 to form instructions having 32 bits. In another embodiment, instruction recoder 103 recodes both 16-bit instructions and 32-bit instructions to a format having more than 32 bits.

After optional recoding, instructions are written to an instruction buffer 105. In one embodiment, this stage can be bypassed and instructions can be dispatched directly to an instruction decoder 107.

Instruction decode and dispatch (represented in FIG. 1A by instruction dispatch unit 106) begins, for example, when one or more instructions are received from instruction buffer 105 and decoded by instruction decoder 107. In one embodiment, following resolution of a branch mis-prediction, the ability to receive instructions from instruction buffer 105 may be temporarily halted until selected instructions residing within the instruction execution portion and/or instruction graduation portion of processor 100 are purged.

In parallel with instruction decoding, operands are renamed. Register renaming map(s) located within instruction identification (ID) generator and operand renamer 109 are updated and used to determine whether required source operands are available, for example, in register file 130 and/or a completion buffer 128. A register renaming map is a structure that holds the mapping information between programmer visible architectural registers and internal physical registers of processor 100. Register renaming map(s) indicate whether data is available and where data is available. As will be understood by persons skilled in the relevant arts given the description herein, register renaming is used to remove instruction output dependencies and to ensure that there is a single producer of a given register in processor 100 at any given time. Source registers are renamed so that data is obtained from a producer at the earliest opportunity instead of waiting for the processor's architectural state to be updated.
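
A sketch of the rename-map lookup, using a hypothetical per-architectural-register record that says whether the most recent producer has committed to the register file or is still pending in a completion buffer entry:

#include <stdbool.h>
#include <stdint.h>

#define NUM_ARCH_REGS 32u

/* Hypothetical rename-map entry for one architectural register. */
struct rename_entry {
    bool    in_flight;  /* a producer of this register is still in the pipeline */
    bool    ready;      /* that producer has already written its result         */
    uint8_t cbid;       /* completion buffer entry that holds (or will hold) it */
};

static struct rename_entry rename_map[NUM_ARCH_REGS];

/* Decide where a source operand should be read from.  Returns true and
 * sets *cbid if the value must come from the completion buffer; returns
 * false if the register file already holds the latest value.            */
bool lookup_source(uint8_t arch_reg, uint8_t *cbid, bool *ready)
{
    const struct rename_entry *e = &rename_map[arch_reg];
    if (e->in_flight) {
        *cbid  = e->cbid;
        *ready = e->ready;
        return true;
    }
    *ready = true;
    return false;
}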

Also in parallel with instruction decoding, instruction identification (ID) generator and operand renamer 109 generates and assigns an instruction identification tag to each instruction. An instruction identification tag assigned to an instruction is used, for example, to determine the program order of the instruction relative to other instructions. In one embodiment, each instruction identification tag is a thread-specific sequentially generated value that uniquely determines the program order of instructions. The instruction identification tags can be used to facilitate graduating instructions in-program order, which were executed out-of-program order.

Each decoded instruction is assigned a completion buffer identification value or tag by a completion buffer allocater 111. The completion buffer identification value determines the location in completion buffer 128 where instruction execution units 102 can write calculated results for an instruction. In one embodiment, the assignment of completion buffer identification values is accomplished using a free list. The free list contains as many entries as the number of entries in completion buffer 128. The free list can be implemented, for example, using a bitmap in which each bit indicates whether the corresponding completion buffer entry is available (e.g., if the bit has a value of one) or unavailable (e.g., if the bit has a value of zero).
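
A minimal sketch of such a bitmap free list, following the convention stated above (a one bit marks an available entry); the entry count is illustrative:

#include <stdint.h>

#define CB_ENTRIES 64u
static uint64_t cb_free_bitmap = ~0ull;   /* one bit per entry; 1 = available */

/* Allocate a completion buffer entry: find a set bit, clear it, and
 * return its index as the CBID.  Returns -1 if no entry is free.     */
int cb_allocate(void)
{
    for (unsigned i = 0; i < CB_ENTRIES; i++) {
        if (cb_free_bitmap & (1ull << i)) {
            cb_free_bitmap &= ~(1ull << i);
            return (int)i;
        }
    }
    return -1;   /* buffer full: the new instruction must wait */
}

/* Return an entry to the free list when its instruction graduates. */
void cb_release(int cbid)
{
    cb_free_bitmap |= (1ull << (unsigned)cbid);
}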

Assigned completion buffer identification values are written into a graduation buffer 121. In one embodiment, completion buffer completion bits associated with newly renamed instructions are reset/cleared to indicate incomplete results. As instructions complete execution, their corresponding completion buffer completion bits are set, thereby enabling the instructions to graduate and release their associated completion buffer identification values. In one embodiment, control logic (not shown) ensures that one program thread does not consume more than its share of completion buffer entries.

Decoded instructions are written to a decoded instruction buffer 113. An instruction dispatcher 115 selects instructions residing in decoded instruction buffer 113 for dispatch to execution units 102. In embodiments, instructions can be dispatched for execution out-of-program-order to execution units 102. In one embodiment, instructions are selected and dispatched, for example, based on their age (ID tags) assuming that their operands are determined to be ready.

Instruction execution units 102 execute instructions as they are dispatched. During execution, operand data is obtained as appropriate from data cache 114, register file 130, and/or completion buffer 128. Multiplexer 124 may be used to obtain the operand data from register file 130 and/or completion buffer 128. A result calculated by instruction execution units 102 for a particular instruction is written to a location/entry of completion buffer 128 specified by the instruction's associated completion buffer identification value.

Instruction graduation (represented in FIG. 1A by instruction graduation unit 126) is controlled by a graduation controller 119. Graduation controller 119 graduates instructions in accordance with the completion buffer identification values stored in graduation buffer 121. When an instruction graduates, its associated result is transferred from completion buffer 128 to register file 130. In conjunction with instruction graduation, graduation controller 119 updates, for example, the free list of completion buffer allocater 111 to indicate a change in availability status of the graduating instruction's assigned completion buffer identification value.
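
In-order graduation can be sketched as follows; the arrays standing in for completion buffer 128, its completion bits, register file 130, and the free list are hypothetical, and the oldest instruction's CBID and destination register are assumed to be read from graduation buffer 121:

#include <stdbool.h>
#include <stdint.h>

#define CB_ENTRIES    64u
#define NUM_ARCH_REGS 32u

static uint32_t completion_buffer[CB_ENTRIES];
static bool     cb_complete[CB_ENTRIES];   /* set when an execution unit writes a result   */
static uint32_t register_file[NUM_ARCH_REGS];
static uint64_t cb_free_bitmap;            /* free list, as in the allocation sketch above */

/* Graduate the oldest instruction once its result is complete: commit
 * the completion buffer value to the register file and return the
 * CBID to the free list.                                              */
bool try_graduate(unsigned oldest_cbid, unsigned dest_reg)
{
    if (!cb_complete[oldest_cbid])
        return false;                      /* oldest result not ready yet: wait */
    register_file[dest_reg] = completion_buffer[oldest_cbid];
    cb_complete[oldest_cbid] = false;
    cb_free_bitmap |= (1ull << oldest_cbid);
    return true;
}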

FIG. 2 illustrates how processor 100 implements a conditional move instruction 210 in accordance with an embodiment of the present invention. Conditional move instruction 210 implements Equation 1 and the pseudo code shown in Table 1 below.
RD = (RT == 0) ? RS : RD  (Eq. 1)
wherein RD is both a source register and the destination register, RS is a source register whose value is moved into RD when the condition is met, and RT is a source register whose value is compared with zero.

TABLE 1
CONDITIONAL MOVE INSTRUCTION
*** Form two decoded instructions ***
*** Issue first decoded instruction ***
if (RT == 0){
  invalidate first decoded instruction
  }
if (RT != 0){
  write value of RD to completion buffer
  }
*** Issue second decoded instruction ***
if (RT != 0){
  invalidate second decoded instruction
  }
if (RT == 0){
  write value of RS to completion buffer
  }

As illustrated by FIG. 2, conditional move instruction 210 is retrieved by processor 100 during instruction fetch. Conditional move instruction 210 includes an opcode field 212, a first operand field 214, a second operand field 216, and a third operand field 218. In the example shown in FIG. 2, the first operand field 214 specifies the contents of register R1 as a first source operand. The second operand field 216 specifies the contents of register R2 as a second source operand. The third operand field 218 specifies the contents of register R3 as a third source operand, and it specifies register R3 as the destination register for the result of conditional move instruction 210.
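
The bit positions of fields 212 through 218 are not specified here; the sketch below assumes a hypothetical packed layout purely to show how an opcode and three five-bit register fields could be extracted from a 32-bit instruction word:

#include <stdint.h>

/* Hypothetical field layout for conditional move instruction 210. */
struct cmov_fields {
    uint8_t opcode;   /* field 212                                            */
    uint8_t rs;       /* field 214: first source operand (R1 in the example)  */
    uint8_t rt;       /* field 216: second source operand (R2 in the example) */
    uint8_t rd;       /* field 218: third source operand and destination (R3) */
};

struct cmov_fields decode_fields(uint32_t insn)
{
    struct cmov_fields f;
    f.opcode = (insn >> 26) & 0x3f;   /* the bit positions are assumptions */
    f.rs     = (insn >> 21) & 0x1f;
    f.rt     = (insn >> 16) & 0x1f;
    f.rd     = (insn >> 11) & 0x1f;
    return f;
}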

Conditional move instruction 210 is used to form two decoded instructions 230 and 240 during instruction decode and rename. The first decoded instruction 230 is formed by decoding the bits of opcode field 212 to form control bits. The RT source operand of conditional move instruction 210 (the value stored in register R2) is renamed using rename table 245. As shown in rename table 245, the required value is available in completion buffer register 4 (CB4). The RD source operand of conditional move instruction 210 (the value stored in register R3) is available in register R3, and thus operand renaming is not required. Finally, completion buffer register 10 (CB10) is allocated (e.g., by completion buffer allocater 111) as a destination register to temporarily store any result of the first decoded instruction.

The second decoded instruction 240 is formed in a manner similar to the first decoded instruction. As shown in FIG. 2, the second decoded instruction is formed by decoding the bits of opcode field 212 to form control bits. The RT source operand of conditional move instruction 210 (the value stored in register R2) is renamed using rename table 245. As shown in rename table 245, the required value is available in completion buffer register 4 (CB4). The RS source operand of conditional move instruction 210 (the value stored in register R1) is available in register R1, and thus operand renaming is not required. Finally, completion buffer register 10 (CB10) is allocated as a destination register to temporarily store any result of the second decoded instruction. This is the same completion buffer register allocated as the destination register to temporarily store any result of the first decoded instruction.
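
The two decoded instructions of this example can be summarized with hypothetical record types; note that both name CB4 as the renamed location of the condition operand and share CB10 as their temporary destination:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical operand descriptor: the value lives either in an
 * architectural register or in a completion buffer entry.          */
struct src_operand {
    bool    from_cb;    /* true: read a completion buffer entry     */
    uint8_t index;      /* register number or CBID                  */
};

struct decoded_insn {
    struct src_operand cond;    /* RT operand, renamed to CB4 in FIG. 2 */
    struct src_operand data;    /* value moved if this copy survives    */
    uint8_t dest_cbid;          /* shared destination entry, CB10       */
    bool    move_if_zero;       /* condition sense for this copy        */
};

/* First decoded instruction 230: moves RD (R3) when RT != 0. */
static const struct decoded_insn first_decoded = {
    .cond = { .from_cb = true,  .index = 4 },    /* CB4 */
    .data = { .from_cb = false, .index = 3 },    /* R3  */
    .dest_cbid = 10, .move_if_zero = false
};

/* Second decoded instruction 240: moves RS (R1) when RT == 0. */
static const struct decoded_insn second_decoded = {
    .cond = { .from_cb = true,  .index = 4 },    /* CB4 */
    .data = { .from_cb = false, .index = 1 },    /* R1  */
    .dest_cbid = 10, .move_if_zero = true
};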

Following instruction decode and rename, the two decoded instructions 230 and 240 are issued to an execution unit 102. In an embodiment, the decoded instructions 230 and 240 are issued to a load/store unit.

In an embodiment, neither the first decoded instruction nor the second decoded instruction is issued until all three source operands are available, either in the register file of the processor or in a completion buffer register. In an embodiment, once the operands are available, the first decoded instruction 230 is issued for execution.

As shown in FIG. 2, in an embodiment, if the value stored in CB4 equals a predetermined value, for example zero, the first decoded instruction is invalidated. If the value stored in CB4 is not equal to the predetermined value, the value stored in register R3 is written/moved to completion buffer register 10.

Sometime after decoded instruction 230 is issued, decoded instruction 240 is issued. Decoded instructions 230 and 240 need not be issued in consecutive cycles. As shown in FIG. 2, in an embodiment, if the value stored in CB4 is not equal to the predetermined value (e.g., zero), the second decoded instruction is invalidated. If the value stored in CB4 equals the predetermined value, the value stored in register R1 is written/moved to completion buffer register 10.

During instruction graduation, if first decoded instruction 230 is a valid instruction, the contents of CB10 are moved to register R3 upon graduation of first decoded instruction 230. If, however, first decoded instruction 230 is an invalid instruction, and second decoded instruction 240 is a valid instruction, the contents of CB10 are moved to register R3 upon graduation of second decoded instruction 240.
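
Putting the pieces together, execution and graduation of the two decoded instructions of FIG. 2 can be sketched as follows; the names are illustrative and the predetermined value is taken to be zero as in the example:

#include <stdbool.h>
#include <stdint.h>

static uint32_t cb[64];    /* completion buffer values (CB10 holds the result)    */
static uint32_t rf[32];    /* register file (R3 is the architectural destination) */

struct uop {
    uint32_t data_value;    /* value of R3 for instruction 230, R1 for 240 */
    bool     move_if_zero;  /* condition sense of this copy                */
    bool     valid;
    bool     complete;
};

/* Execute one decoded instruction: either invalidate it or move its
 * data value into the shared completion buffer entry (CB10).          */
void execute_uop(struct uop *u, uint32_t cond_value /* CB4 */, unsigned dest_cbid /* 10 */)
{
    bool cond_is_zero = (cond_value == 0);
    if (cond_is_zero != u->move_if_zero)
        u->valid = false;                 /* this copy loses and is invalidated */
    else
        cb[dest_cbid] = u->data_value;    /* this copy wins and writes CB10     */
    u->complete = true;
}

/* At graduation, whichever copy remains valid commits CB10 to R3. */
void graduate_uop(const struct uop *u, unsigned dest_cbid, unsigned dest_reg)
{
    if (u->valid && u->complete)
        rf[dest_reg] = cb[dest_cbid];
}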

It is to be appreciated that FIG. 2 is presented for illustrative purposes only, and not limitation. For example, operands other than registers R1, R2, and R3 may be specified by conditional move instruction 210 without deviating from the spirit and scope of the present invention. Further, the designation of the first and second decoded instructions is for convenience only and is not intended to limit the order in which the decoded instructions are issued for execution. For example, the second decoded instruction may, in some processor architectures, be issued for execution before the first decoded instruction.

FIG. 3 is a diagram of an example system 300 according to an embodiment of the present invention. System 300 includes a processor 302, a memory 304, an input/output (I/O) controller 306, a clock 308, and custom hardware 310. In an embodiment, system 300 is a system on a chip (SOC) in an application specific integrated circuit (ASIC).

Processor 302 is any processor that includes features of the present invention described herein and/or implements a method embodiment of the present invention. In one embodiment, processor 302 includes an instruction fetch unit, an instruction cache, an instruction decode and dispatch unit, one or more instruction execution unit(s), a data cache, an instruction graduation unit, a register file, and a bus interface unit similar to processor 100 described above.

Memory 304 can be any memory capable of storing instructions and/or data. Memory 304 can include, for example, random access memory and/or read-only memory.

Input/output (I/O) controller 306 is used to enable components of system 300 to receive and/or send information to peripheral devices. I/O controller 306 can include, for example, an analog-to-digital converter and/or a digital-to-analog converter.

Clock 308 is used to determine when sequential subsystems of system 300 change state. For example, each time a clock signal of clock 308 ticks, state registers of system 300 capture signals generated by combinatorial logic. In an embodiment, the clock signal of clock 308 can be varied. The clock signal can also be divided, for example, before it is provided to selected components of system 300.

Custom hardware 310 is any hardware added to system 300 to tailor system 300 to a specific application. Custom hardware 310 can include, for example, hardware needed to decode audio and/or video signals, accelerate graphics operations, and/or implement a smart sensor. Persons skilled in the relevant arts will understand how to implement custom hardware 310 to tailor system 300 to a specific application.

While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant computer arts that various changes can be made therein without departing from the scope of the invention. Furthermore, it should be appreciated that the detailed description of the present invention provided herein, and not the summary and abstract sections, is intended to be used to interpret the claims. The summary and abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventors.

For example, in addition to implementations using hardware (e.g., within or coupled to a Central Processing Unit (“CPU”), microprocessor, microcontroller, digital signal processor, processor core, System on Chip (“SOC”), or any other programmable or electronic device), implementations may also be embodied in software (e.g., computer readable code, program code and/or instructions disposed in any form, such as source, object or machine language) disposed, for example, in a computer usable (e.g., readable) medium configured to store the software. Such software can enable, for example, the function, fabrication, modeling, simulation, description, and/or testing of the apparatus and methods described herein. For example, this can be accomplished through the use of general programming languages (e.g., C, C++), hardware description languages (HDL) including Verilog HDL, VHDL, SystemC Register Transfer Level (RTL) and so on, or other available programs, databases, and/or circuit (i.e., schematic) capture tools. Such software can be disposed in any known computer usable medium including semiconductor, magnetic disk, optical disk (e.g., CD-ROM, DVD-ROM, etc.) and as a computer data signal embodied in a computer usable (e.g., readable) transmission medium (e.g., carrier wave or any other medium including digital, optical, or analog-based medium). As such, the software can be transmitted over communication networks including the Internet and intranets.

It is understood that the apparatus and method embodiments described herein may be included in a semiconductor intellectual property core, such as a microprocessor core (e.g., embodied in HDL) and transformed to hardware in the production of integrated circuits. Additionally, the apparatus and methods described herein may be embodied as a combination of hardware and software. Thus, the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Inventors: Rajagopalan, Vidya; Kishore, Karagada Ramarao; Jiang, Xing Yu; Ukanwa, Maria

Assignments
Dec 18 2006: MIPS Technologies, Inc. (assignment on the face of the patent)
Feb 16 2007: Ukanwa, Maria to MIPS Technologies, Inc. (assignment of assignors interest; Reel/Frame 019311/0973)
Feb 21 2007: Kishore, Karagada Ramarao to MIPS Technologies, Inc. (assignment of assignors interest; Reel/Frame 019311/0973)
Mar 06 2007: Jiang, Xing Yu to MIPS Technologies, Inc. (assignment of assignors interest; Reel/Frame 019311/0973)
Mar 12 2007: Rajagopalan, Vidya to MIPS Technologies, Inc. (assignment of assignors interest; Reel/Frame 019311/0973)
Aug 24 2007: MIPS Technologies, Inc. to Jefferies Finance LLC, as Collateral Agent (security agreement; Reel/Frame 019744/0001)
Dec 05 2008: Jefferies Finance LLC, as Collateral Agent to MIPS Technologies, Inc. (release by secured party; Reel/Frame 021985/0015)
Feb 06 2013: MIPS Technologies, Inc. to Bridge Crossing, LLC (assignment of assignors interest; Reel/Frame 030202/0440)
Jan 31 2014: Bridge Crossing, LLC to ARM Finance Overseas Limited (assignment of assignors interest; Reel/Frame 033074/0058)
Date Maintenance Fee Events
Jun 15 2015: M1551 - Payment of Maintenance Fee, 4th Year, Large Entity.
May 30 2019: M1552 - Payment of Maintenance Fee, 8th Year, Large Entity.
Jul 31 2023: REM - Maintenance Fee Reminder Mailed.
Jan 15 2024: EXP - Patent Expired for Failure to Pay Maintenance Fees.


Date Maintenance Schedule
Year 4: fee payment window opens Dec 13 2014; 6-month grace period (with surcharge) begins Jun 13 2015; patent expires Dec 13 2015 if the fee is not paid; unintentionally abandoned patent may be revived until Dec 13 2017.
Year 8: fee payment window opens Dec 13 2018; grace period begins Jun 13 2019; expiry Dec 13 2019; revival possible until Dec 13 2021.
Year 12: fee payment window opens Dec 13 2022; grace period begins Jun 13 2023; expiry Dec 13 2023; revival possible until Dec 13 2025.