Methods and apparatus provide for associating memory allocation table (MAT) entries with nodes in a binary tree such that the nodes and the entries are grouped into hierarchical levels, each entry including status information; associating the nodes and the entries with segments of a shared memory of a multi-processor system such that higher level nodes and entries are associated with larger numbers of segments of the shared memory and lower level nodes and entries are associated with smaller numbers of segments of the shared memory; initializing the MAT such that the status information of at least a plurality of entries indicates that the associated segment or segments of the shared memory are available for reservation; and selecting one entry in a group of entries in the MAT at a level corresponding to a desired size of the shared memory to be reserved.
1. A method, comprising:
associating entries of a memory allocation table (MAT) managing a shared memory in a multiprocessor system with nodes in a binary tree such that the nodes and the entries are grouped into hierarchical levels;
associating the nodes and the entries with segments of the shared memory of the multi-processor system such that higher level nodes and entries are associated with larger numbers of contiguous segments of the shared memory and lower level nodes and entries are associated with smaller numbers of contiguous segments of the shared memory;
selecting one or more of the contiguous segments of the shared memory by one or more processors of the multiprocessor system by evaluating status information of entries of the MAT corresponding to a desired size of the shared memory to be reserved followed by evaluating status information of one or more higher level entries; and
changing status of the one or more contiguous segments by the one or more processors from available to partially reserved to indicate the selection.
17. An apparatus comprising:
a plurality of parallel processors capable of operative communication with a shared memory, each processor including a local memory that is not a hardware cache memory, and an instruction execution pipeline,
wherein at least one of the processors is operable to:
read a memory allocation table (MAT) managing the shared memory from a storage medium, the MAT including a plurality of entries, each entry being associated with a respective node in a binary tree such that the nodes and the entries are grouped into hierarchical levels, and the respective nodes and the entries being associated with contiguous segments of a shared memory of a multi-processor system;
select one or more of the contiguous segments of the shared memory by evaluating status information of the entries of the MAT corresponding to a desired size of the shared memory to be reserved;
modify the MAT to indicate that the one or more selected segments of the shared memory have been reserved; and
write the modified MAT back to the storage medium.
24. A storage medium containing at least one software program capable of causing a multi-processor system to perform actions, comprising:
associating entries of a memory allocation table (MAT) managing a shared memory of the multi-processor system with nodes in a binary tree such that the nodes and the entries are grouped into hierarchical levels, each entry including status information;
associating the nodes and the entries with segments of the shared memory of the multi-processor system such that higher level nodes and entries are associated with larger numbers of contiguous segments of the shared memory and lower level nodes and entries are associated with smaller numbers of contiguous segments of the shared memory;
initializing the MAT such that the status information of at least a plurality of entries indicates that the associated segment or segments of the shared memory are available for reservation;
selecting by one or more processors of the multi-processor system at least one entry in a group of entries in the MAT at a level corresponding to a desired size of the shared memory to be reserved; and
changing status of the one or more contiguous segments by the one or more processors from available to partially reserved to indicate the selection.
2. The method of
3. The method of
4. The method of
5. The method of
a highest level node in the tree and an associated highest level entry in the MAT are associated with all of the segments of the shared memory; and
a plurality of lowest level nodes in the tree and an associated plurality of lowest level entries in the MAT are each associated with one segment of the shared memory.
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
using an object that has already been formed when the key table so indicates; and
selecting one or more segments of the shared memory to form the object when the key table indicates that such object has not been formed.
19. The apparatus of
20. The apparatus of
21. The apparatus of
22. The apparatus of
23. The apparatus of
25. The storage medium of
This application is a continuation of U.S. patent application Ser. No. 11/031,461, now U.S. Pat. No. 7,386,687, filed Jan. 7, 2005, the entire disclosure of which is hereby incorporated by reference.
The present invention relates to methods and apparatus for managing a shared memory in a multi-processor system in which portions of the memory may be reserved.
In recent years, there has been an insatiable desire for faster computer processing data throughputs because cutting-edge computer applications involve real-time, multimedia functionality. Graphics applications are among those that place the highest demands on a processing system because they require such vast numbers of data accesses, data computations, and data manipulations in relatively short periods of time to achieve desirable visual results. These applications require extremely fast processing speeds, such as many thousands of megabits of data per second. While some processing systems employ a single processor to achieve fast processing speeds, others are implemented utilizing multi-processor architectures. In multi-processor systems, a plurality of sub-processors can operate in parallel (or at least in concert) to achieve desired processing results.
In some existing multi-processor systems, a plurality of parallel processors may use a shared memory to store data. Memory management techniques are employed to prevent allocation of areas that are already being used and to permit allocation of unused areas. The conventional approach to managing the allocation of the shared memory involves a managing processor acting as an arbiter of the memory areas. This approach removes autonomy from the parallel processors and, therefore, decreases processing efficiency in some instances.
One or more embodiments of the present invention may provide for the parallel processors of a multi-processor system to control memory allocation by accessing a memory allocation table (MAT) from a shared memory of the system. The processors are further operable to search the table for unused segments of memory, and reserve one or more segments as needed. The invention also provides for an extremely compact MAT structure that can be copied from the shared memory in one DMA cycle, easily searched and easily updated.
The MAT is preferably a one dimensional array, where each sequential location in the array corresponds with a “node” and the contents of each location includes status bits (preferably 2 bits). The nodes of the MAT represent the nodes of a binary tree. The root node, node 1, is at level 0 (at the top of the tree) and represents the maximum allocation space of the shared memory. Intermediate nodes 2 and 3, which depend from node 1, are at level 1 and each represent ½ of the maximum allocation space. Intermediate nodes 4 and 5, which depend from node 2, and intermediate nodes 6 and 7, which depend from node 3, are at level 2 and each represent ¼ of the maximum allocation space. Assuming a four level tree (L=4), terminal nodes 8 and 9 (depending from node 4), terminal nodes 10 and 11 (depending from node 5), terminal nodes 12 and 13 (depending from node 6), and terminal nodes 14 and 15 (depending from node 7) are at level 3 and each represent ⅛ of the maximum allocation space.
The contents of each node location of the MAT preferably include two status bits, which may represent: N=partial reservation; I=available; U=used; and C=continued.
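By way of a non-limiting illustration, the four status values may be encoded as a 2-bit field per entry. The C sketch below shows one such encoding; the particular numeric values assigned to N, I, U, and C are assumptions made for illustration only, as the description does not fix them.

    /* Illustrative 2-bit status codes for each MAT entry.  The four states
     * come from the description above; the specific bit patterns (0-3) are
     * assumed for illustration. */
    enum mat_status {
        MAT_AVAILABLE = 0,  /* I: segment(s) available for reservation */
        MAT_PARTIAL   = 1,  /* N: partial reservation                  */
        MAT_USED      = 2,  /* U: used (reserved)                      */
        MAT_CONTINUED = 3   /* C: continued                            */
    };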
In keeping with the example above and assuming, for example, a maximum allocation size of 8 KB, each terminal node (i.e., nodes 8-15) represents a 1 KB segment of memory that may be allocated, which is also the granularity of the allocation. The size of the MAT is equal to the number of nodes (2^L − 1) times 2 bits, which in this case is 15 × 2 = 30 bits.
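A minimal sketch of the MAT as a one-dimensional array follows. The node numbering above implies binary-heap-style indexing (node n has children 2n and 2n+1, and its parent is INTEGER(n/2)); packing all thirty bits into a single 32-bit word and the helper names are illustrative assumptions rather than requirements of the description.

    #include <stdint.h>

    #define MAT_LEVELS    4                         /* L = 4 in the example */
    #define MAT_NODES     ((1u << MAT_LEVELS) - 1)  /* 2^L - 1 = 15 entries */
    #define MAT_SIZE_BITS (MAT_NODES * 2)           /* 15 * 2 = 30 bits     */

    /* The whole example MAT (30 bits) fits in one word; larger trees would
     * use an array of words, a packing detail assumed here. */
    typedef uint32_t mat_t;

    static inline unsigned mat_get(mat_t mat, unsigned node)  /* nodes are 1-based */
    {
        return (mat >> (2 * (node - 1))) & 0x3u;
    }

    static inline mat_t mat_set(mat_t mat, unsigned node, unsigned status)
    {
        unsigned shift = 2 * (node - 1);
        return (mat & ~((mat_t)0x3u << shift)) | ((mat_t)status << shift);
    }

    /* Tree navigation implied by the node numbering: node 1 is the root,
     * node n has children 2n and 2n+1, and the parent of node n is n/2. */
    static inline unsigned mat_parent(unsigned node) { return node / 2; }
    static inline unsigned mat_child(unsigned node)  { return 2 * node; }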
A processor seeking to obtain space (e.g., 1 KB of space) in the shared memory reads the MAT from shared memory (advantageously using a single DMA transfer). The processor converts the size of the space needed to a level in the tree using the following equation: level = log2 (Max space/requested space), which in the example above is level = log2 (8 KB/1 KB) = 3. This level corresponds to the terminal nodes 8-15 in the MAT. Starting at node 8, the processor searches for a status of (I), such as may be found, for example, at node 10. Next, the lineage of node 10 is tested to see if a larger area of the memory (e.g., two or more contiguous segments) was previously reserved. If not, then the segment of the shared memory associated with node 10 and entry 10 of the MAT may be reserved.
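The level computation and the scan of the group of entries at that level can be sketched as follows, using the status codes and helpers from the sketches above; MAX_SPACE is the 8 KB maximum of the running example, and the lineage test that follows a hit is sketched later in this description.

    #define MAX_SPACE (8u * 1024u)   /* maximum allocation size (8 KB example) */

    /* level = log2(MAX_SPACE / requested), e.g. log2(8 KB / 1 KB) = 3 */
    static unsigned mat_level_for_size(unsigned requested)
    {
        unsigned level = 0;
        for (unsigned span = MAX_SPACE; span > requested; span >>= 1)
            level++;
        return level;
    }

    /* Scan the group of entries at the computed level (nodes 2^level through
     * 2^(level+1) - 1, i.e. nodes 8-15 for level 3) for a status of I. */
    static int mat_first_available(mat_t mat, unsigned level)
    {
        unsigned first = 1u << level;
        unsigned last  = (1u << (level + 1)) - 1;
        for (unsigned node = first; node <= last; node++)
            if (mat_get(mat, node) == MAT_AVAILABLE)
                return (int)node;              /* e.g. node 10 in the example */
        return -1;                             /* nothing free at this level  */
    }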
In accordance with one or more further aspects of the present invention, methods and apparatus provide for: associating memory allocation table (MAT) entries with nodes in a binary tree such that the nodes and the entries are grouped into hierarchical levels; associating the nodes and the entries with segments of a shared memory of a multi-processor system such that higher level nodes and entries are associated with larger numbers of segments of the shared memory and lower level nodes and entries are associated with smaller numbers of segments of the shared memory; and selecting one or more segments of the shared memory by evaluating status information of entries of the MAT corresponding to a desired size of the shared memory to be reserved followed by evaluating status information of one or more higher level entries.
The status information of each entry includes at least: (i) an indicator of whether the associated segment or segments of the shared memory have been reserved or are available for reservation; and (ii) an indicator that the segment or segments of the shared memory associated with one or more lower level entries in a lineage of the given entry have been reserved.
A highest level node in the tree and an associated highest level entry in the MAT are preferably associated with all of the segments of the shared memory; and a plurality of lowest level nodes in the tree and an associated plurality of lowest level entries in the MAT are preferably each associated with one segment of the shared memory.
The MAT is preferably initialized such that the status information of at least a plurality of entries indicates that the associated segment or segments of the shared memory are available for reservation.
The function of selecting one or more segments of the shared memory may include computing a level in the MAT based on a desired size of the shared memory to be reserved. The level in the MAT may be computed to be approximately equal to log2 (M/D), where M is the maximum size of the shared memory available for reservation and D is the desired size of the shared memory to be reserved.
The methods and apparatus preferably further provide for selecting one of the entries associated with the computed level having status information indicating that the associated segment or segments of the shared memory are available for reservation. The methods and apparatus may further provide for evaluating one or more higher level entries in a lineage of the selected entry to determine whether the higher level entries have status information indicating that the associated segments of the shared memory are available for reservation. This function is preferably repeated for successively higher level entries in the lineage until status information of one of the higher level entries indicates that the associated segments of the shared memory are available for reservation.
A different one of the entries associated with the computed level may be selected when the determination indicates that one or more of the higher level entries in the lineage have status information indicating that the associated segments of the shared memory are not available for reservation.
The evaluation may include: continuing the evaluation of a sequentially higher level entry in the lineage when the status information of a current entry indicates that all segment(s) of the shared memory associated with the current entry are available for reservation. This evaluation is preferably repeated until the status information of the current entry indicates that a prior reservation was made for one or more memory segments associated with a lower level entry in the MAT but not all the memory segments associated with the current entry were reserved, whereby the evaluation of higher level entries in the lineage is terminated.
The status information of all the evaluated entries is preferably modified when: (i) the status information of all the evaluated entries indicate that all segment(s) of the shared memory associated with the respective evaluated entries are available for reservation, and (ii) the highest level entry in the MAT is reached prior to terminating the sequential evaluation. For example, the status information for each evaluated entry is modified to indicate that a prior reservation was made for one or more memory segments associated with a lower level entry in the MAT but not all the memory segments associated with the current entry were reserved.
The provisionally selected entry of the MAT may be abandoned when the evaluation of the status information of a current higher level entry in the lineage indicates that all segment(s) of the shared memory associated with the current entry are reserved. In this situation, another entry of the group of entries may be provisionally selected at the computed level having status information indicating that the one or more segments of the shared memory associated with the provisionally selected entry are available for reservation. Thereafter, the evaluation may be performed on one or more higher level entries in a lineage of the other provisionally selected entry to determine whether the status information indicates that the one or more segments of the shared memory associated with the other provisionally selected entry are available for reservation.
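The selection, lineage evaluation, retry, and status updates described in the preceding paragraphs can be tied together as in the following sketch, again assuming the 2-bit codes and helpers from the earlier sketches; handling of reservations that span multiple entries (the C status) is omitted.

    /* Reserve one entry at the given level.  Returns the reserved node index,
     * or -1 if no reservation is possible at this level. */
    static int mat_reserve(mat_t *mat, unsigned level)
    {
        unsigned first = 1u << level, last = (1u << (level + 1)) - 1;

        for (unsigned node = first; node <= last; node++) {
            if (mat_get(*mat, node) != MAT_AVAILABLE)
                continue;                        /* not a candidate            */

            /* Provisional selection: evaluate successively higher level
             * entries in the lineage. */
            unsigned anc = mat_parent(node);
            while (anc >= 1 && mat_get(*mat, anc) == MAT_AVAILABLE)
                anc = mat_parent(anc);

            if (anc >= 1 && mat_get(*mat, anc) == MAT_USED)
                continue;                        /* lineage already reserved:
                                                    abandon, try another entry */

            /* Stopped on an N ancestor, or reached past the root with every
             * ancestor available: commit.  Ancestors traversed while available
             * become N (partial), and the selected entry becomes U (used).   */
            for (unsigned a = mat_parent(node); a != anc; a = mat_parent(a))
                *mat = mat_set(*mat, a, MAT_PARTIAL);
            *mat = mat_set(*mat, node, MAT_USED);
            return (int)node;
        }
        return -1;
    }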
Other aspects, features, advantages, etc. will become apparent to one skilled in the art when the description of the invention herein is taken in conjunction with the accompanying drawings.
For the purposes of illustrating the various aspects of the invention, there are shown in the drawings forms that are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
With reference to the drawings, wherein like numerals indicate like elements, there is shown in
The processing system 100 includes a plurality of processors 102A, 102B, 102C, and 102D, it being understood that any number of processors may be employed without departing from the spirit and scope of the invention. The processing system 100 also includes a plurality of local memories 104A, 104B, 104C, 104D and a shared memory 106. At least the processors 102, the local memories 104, and the shared memory 106 are preferably (directly or indirectly) coupled to one another over a bus system 108 that is operable to transfer data to and from each component in accordance with suitable protocols.
Each of the processors 102 may be of similar construction or of differing construction. The processors may be implemented utilizing any of the known technologies that are capable of requesting data from the shared (or system) memory 106, and manipulating the data to achieve a desirable result. For example, the processors 102 may be implemented using any of the known microprocessors that are capable of executing software and/or firmware, including standard microprocessors, distributed microprocessors, etc. By way of example, one or more of the processors 102 may be a graphics processor that is capable of requesting and manipulating data, such as pixel data, including gray scale information, color information, texture data, polygonal information, video frame information, etc.
One or more of the processors 102 of the system 100 may take on the role as a main (or managing) processor. The main processor may schedule and orchestrate the processing of data by the other processors.
The system memory 106 is preferably a dynamic random access memory (DRAM) coupled to the processors 102 through a memory interface circuit (not shown). Although the system memory 106 is preferably a DRAM, the memory 106 may be implemented using other means, e.g., a static random access memory (SRAM), a magnetic random access memory (MRAM), an optical memory, a holographic memory, etc.
Each processor 102 preferably includes a processor core and an associated one of the local memories 104 in which to execute programs. These components may be integrally disposed on a common semi-conductor substrate or may be separately disposed as may be desired by a designer. The processor core is preferably implemented using a processing pipeline, in which logic instructions are processed in a pipelined fashion. Although the pipeline may be divided into any number of stages at which instructions are processed, the pipeline generally comprises fetching one or more instructions, decoding the instructions, checking for dependencies among the instructions, issuing the instructions, and executing the instructions. In this regard, the processor core may include an instruction buffer, instruction decode circuitry, dependency check circuitry, instruction issue circuitry, and execution stages.
Each local memory 104 is coupled to its associated processor core 102 via a bus and is preferably located on the same chip (same semiconductor substrate) as the processor core. The local memory 104 is preferably not a traditional hardware cache memory in that there are no on-chip or off-chip hardware cache circuits, cache registers, cache memory controllers, etc. to implement a hardware cache memory function. As on-chip space is often limited, the size of the local memory may be much smaller than the shared memory 106.
The processors 102 preferably provide data access requests to copy data (which may include program data) from the system memory 106 over the bus system 108 into their respective local memories 104 for program execution and data manipulation. The mechanism for facilitating data access may be implemented utilizing any of the known techniques, for example the direct memory access (DMA) technique. This function is preferably carried out by the memory interface circuit.
With reference to
With reference to
With reference to
It is noted that the MAT 110 need not include an excessive amount of data to achieve management of the mutex objects of the shared memory 106. Indeed, each entry need only include a relatively small number of bits, such as two bits, representing the status of the given entry. Thus, the MAT 110 need not utilize a significant amount of space in a storage medium, such as some portion of the shared memory 106. Advantageously, the relatively small number of bits needed to fully define the MAT 110 permits the entire MAT 110 to be transferred between the shared memory 106 and the processors in one DMA transfer. Those skilled in the art will appreciate that conventional techniques for establishing an allocation table result in tables of very large size, which are unlikely to be transferable in one DMA cycle.
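A hedged sketch of the resulting read-modify-write cycle follows. The dma_get, dma_put, mat_lock, and mat_unlock routines are hypothetical placeholders for whatever transfer and synchronization primitives the system actually provides; mat_t and mat_reserve come from the sketches elsewhere in this description.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical transfer/synchronization primitives, assumed for illustration. */
    extern void dma_get(void *local, uint64_t shared_addr, size_t len);
    extern void dma_put(const void *local, uint64_t shared_addr, size_t len);
    extern void mat_lock(void);
    extern void mat_unlock(void);

    /* Copy the MAT in with one DMA transfer, search and mark it locally, and
     * copy it back with one DMA transfer. */
    int reserve_segment(uint64_t mat_addr, unsigned level)
    {
        mat_t mat;                               /* entire MAT fits in one word */
        int node;

        mat_lock();
        dma_get(&mat, mat_addr, sizeof mat);
        node = mat_reserve(&mat, level);
        if (node >= 0)
            dma_put(&mat, mat_addr, sizeof mat);
        mat_unlock();
        return node;                             /* reserved node, or -1 */
    }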
With reference to
Assuming that there are only four levels in the tree, nodes 8-15 are terminal nodes, meaning that there are no further nodes depending therefrom. Terminal nodes 8 and 9 depend from node 4, terminal nodes 10 and 11 depend from node 5, terminal nodes 12 and 13 depend from node 6, and terminal nodes 14 and 15 depend from node 7. These nodes are associated with entries 8, 9, 10, 11, 12, 13, 14, and 15, respectively, of the MAT 110. Each of these nodes represents one eighth of the maximum allocation space (in this example, one segment of the shared memory 106).
In keeping with the example above, and assuming, for example, a maximum allocation size of 8 KB, each terminal node (e.g., nodes 8-15) represents a 1 KB segment of the shared memory 106 that may be reserved (or allocated), which is to say that the granularity of the allocation is 1 KB. Thus, the size of the MAT 110 is approximately equal to the number of nodes (entries) times the number of bits representing the status information for each entry. The number of nodes (entries) may be computed utilizing the following formula: 2^L − 1, where L is the number of levels in the binary tree. Thus, in the example discussed thus far, the size of the MAT 110 is 15 × 2 = 30 bits.
Reference is now made to
At action 304, the computed level of the MAT 110 is associated with a number of entries within the MAT 110 in order to form a group of entries from which one entry is selected. With reference to
It is preferred that the selection of one entry within the group is only a provisional selection in that further processing is to be carried out before the selection is finalized and the associated segment of the shared memory 106 is reserved. For the purposes of illustration, it is assumed that one such entry is provisionally selected; under these circumstances, the evaluation of a sequentially higher level entry is preferably continued.
Turning to
With reference to
With reference to
With reference to
With reference to
The MAT 110 may be revised to reflect that no segment of the shared memory is associated with entry 15. Entry 15 may be “provisionally” selected. The next higher level entry in the MAT 110 from entry 15 is parent node=INTEGER (15/2)=entry 7. The status information of entry 7 of the MAT 110 is preferably evaluated. Turning to
The status information of entry 15 of the MAT 110 is preferably modified from I to U (see
With reference to
Thus, another entry in the group is provisionally selected, e.g., entry 11. This yields the same result: parent entry 5 has a status of U. Thus, yet another entry in the group is provisionally selected, e.g., entry 12. The parent entry is INTEGER (12/2)=6. The status information of entry 6 is evaluated and found to be N, indicating that at least one of the segments associated with a lower level entry associated with entry 6 has been reserved but not all of the memory segments associated with entry 6 were reserved. As the first occurrence of status N ensures that the segment of the shared memory 106 associated with the provisionally selected entry 12 is available for reservation, the evaluation of higher level entries in the lineage may be terminated. Next, the status information associated with entry 12 is changed from I to U (see
With reference to
In accordance with one or more further embodiments of the present invention, and with reference to
With reference to
Turning to
At action 404, a determination is made as to whether the desired object has already been reserved. This preferably entails reviewing the object names within the key table 112 to determine whether any associated nodes have already been reserved. For example, if a given processor 102 (or task) seeks to use OBJECT 2, then it searches for that name under the object names within the key table 112 and checks the associated node names to determine whether any nodes (or segments) of the shared memory 106 have been reserved. In this example, node 5 has been reserved with respect to OBJECT 2. Notably, nodes 10 and 11, which are associated with node 5 at a lower level, may also be implicated when node 5 is reserved. If the result of the determination at action 404 is in the affirmative, then the process flow preferably advances to action 410, where the MAT 110 and the key table 112 are written back into the shared memory 106 and unlocked. Thereafter, the processor seeking to utilize OBJECT 2 may do so by utilizing the segments associated with nodes 5, 10, and 11 of the shared memory 106.
On the other hand, if the result of the determination at action 404 is in the negative, then the process flow preferably advances to action 406, where actions 302-308 of
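The exact layout of the key table 112 is not spelled out above, so the following sketch assumes a simple, zero-initialized list of object-name/node pairs; the structure and the find_or_form_object name are illustrative only. It mirrors the check at action 404: use the object if the key table says it has already been formed, otherwise reserve segments through the MAT (per actions 302-308) and record the result.

    #include <string.h>

    #define MAX_OBJECTS 16            /* assumed capacity of the key table */

    struct key_entry {
        char     name[16];            /* object name, e.g. "OBJECT 2"             */
        unsigned node;                /* MAT node reserved for it; 0 = not formed */
    };

    struct key_table {
        struct key_entry entry[MAX_OBJECTS];   /* assumed zero-initialized */
    };

    /* Return the MAT node already reserved for the named object, or reserve a
     * new one (mat_reserve from the earlier sketch) and record it. */
    static int find_or_form_object(struct key_table *kt, mat_t *mat,
                                   const char *name, unsigned level)
    {
        int i, node;

        for (i = 0; i < MAX_OBJECTS; i++)
            if (kt->entry[i].node != 0 && strcmp(kt->entry[i].name, name) == 0)
                return (int)kt->entry[i].node;     /* already formed: reuse it */

        node = mat_reserve(mat, level);            /* not formed: reserve space */
        if (node < 0)
            return -1;
        for (i = 0; i < MAX_OBJECTS; i++) {
            if (kt->entry[i].node == 0) {
                strncpy(kt->entry[i].name, name, sizeof kt->entry[i].name - 1);
                kt->entry[i].name[sizeof kt->entry[i].name - 1] = '\0';
                kt->entry[i].node = (unsigned)node;
                break;
            }
        }
        return node;
    }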
A description of a preferred computer architecture for a multi-processor system will now be provided that is suitable for carrying out one or more of the features discussed herein. In accordance with one or more embodiments, the multi-processor system may be implemented as a single-chip solution operable for stand-alone and/or distributed processing of media-rich applications, such as game systems, home terminals, PC systems, server systems and workstations. In some applications, such as game systems and home terminals, real-time computing may be a necessity. For example, in a real-time, distributed gaming application, one or more of networking, image decompression, 3D computer graphics, audio generation, network communications, physical simulation, and artificial intelligence processes have to be executed quickly enough to provide the user with the illusion of a real-time experience. Thus, each processor in the multi-processor system must complete tasks in a short and predictable time.
To this end, and in accordance with this computer architecture, all processors of a multi-processing computer system are constructed from a common computing module (or cell). This common computing module has a consistent structure and preferably employs the same instruction set architecture. The multi-processing computer system can be formed of one or more clients, servers, PCs, mobile computers, game machines, PDAs, set top boxes, appliances, digital televisions and other devices using computer processors.
A plurality of the computer systems may also be members of a network if desired. The consistent modular structure enables efficient, high speed processing of applications and data by the multi-processing computer system, and if a network is employed, the rapid transmission of applications and data over the network. This structure also simplifies the building of members of the network of various sizes and processing power and the preparation of applications for processing by these members.
With reference to
The PE 500 can be constructed using various methods for implementing digital logic. The PE 500 preferably is constructed, however, as a single integrated circuit employing a complementary metal oxide semiconductor (CMOS) on a silicon substrate. Alternative materials for substrates include gallium arsenide, gallium aluminum arsenide and other so-called III-B compounds employing a wide variety of dopants. The PE 500 also may be implemented using superconducting material, e.g., rapid single-flux-quantum (RSFQ) logic.
The PE 500 is closely associated with a shared (main) memory 514 through a high bandwidth memory connection 516. Although the memory 514 preferably is a dynamic random access memory (DRAM), the memory 514 could be implemented using other means, e.g., as a static random access memory (SRAM), a magnetic random access memory (MRAM), an optical memory, a holographic memory, etc.
The PU 504 and the sub-processing units 508 are preferably each coupled to a memory flow controller (MFC) including direct memory access (DMA) functionality, which, in combination with the memory interface 511, facilitates the transfer of data between the DRAM 514 and the sub-processing units 508 and the PU 504 of the PE 500. It is noted that the DMAC and/or the memory interface 511 may be integrally or separately disposed with respect to the sub-processing units 508 and the PU 504. Indeed, the DMAC function and/or the memory interface 511 function may be integral with one or more (preferably all) of the sub-processing units 508 and the PU 504. It is also noted that the DRAM 514 may be integrally or separately disposed with respect to the PE 500. For example, the DRAM 514 may be disposed off-chip as is implied by the illustration shown or the DRAM 514 may be disposed on-chip in an integrated fashion.
The PU 504 can be, e.g., a standard processor capable of stand-alone processing of data and applications. In operation, the PU 504 preferably schedules and orchestrates the processing of data and applications by the sub-processing units. The sub-processing units preferably are single instruction, multiple data (SIMD) processors. Under the control of the PU 504, the sub-processing units perform the processing of these data and applications in a parallel and independent manner. The PU 504 is preferably implemented using a PowerPC core, which is a microprocessor architecture that employs reduced instruction-set computing (RISC) technique. RISC performs more complex instructions using combinations of simple instructions. Thus, the timing for the processor may be based on simpler and faster operations, enabling the microprocessor to perform more instructions for a given clock speed.
It is noted that the PU 504 may be implemented by one of the sub-processing units 508 taking on the role of a main processing unit that schedules and orchestrates the processing of data and applications by the sub-processing units 508. Further, there may be more than one PU implemented within the processor element 500.
In accordance with this modular structure, the number of PEs 500 employed by a particular computer system is based upon the processing power required by that system. For example, a server may employ four PEs 500, a workstation may employ two PEs 500 and a PDA may employ one PE 500. The number of sub-processing units of a PE 500 assigned to processing a particular software cell depends upon the complexity and magnitude of the programs and data within the cell.
The sub-processing unit 508 includes two basic functional units, namely an SPU core 510A and a memory flow controller (MFC) 510B. The SPU core 510A performs program execution, data manipulation, etc., while the MFC 510B performs functions related to data transfers between the SPU core 510A and the DRAM 514 of the system.
The SPU core 510A includes a local memory 550, an instruction unit (IU) 552, registers 554, one or more floating point execution stages 556 and one or more fixed point execution stages 558. The local memory 550 is preferably implemented using single-ported random access memory, such as an SRAM. Whereas most processors reduce latency to memory by employing caches, the SPU core 510A implements the relatively small local memory 550 rather than a cache. Indeed, in order to provide consistent and predictable memory access latency for programmers of real-time applications (and other applications as mentioned herein), a cache memory architecture within the SPU 508 is not preferred. The cache hit/miss characteristics of a cache memory result in volatile memory access times, varying from a few cycles to a few hundred cycles. Such volatility undercuts the access timing predictability that is desirable in, for example, real-time application programming. Latency hiding may be achieved in the local memory SRAM 550 by overlapping DMA transfers with data computation. This provides a high degree of control for the programming of real-time applications. As the latency and instruction overhead associated with DMA transfers exceed the latency of servicing a cache miss, the SRAM local memory approach achieves an advantage when the DMA transfer size is sufficiently large and is sufficiently predictable (e.g., a DMA command can be issued before data is needed).
A program running on a given one of the sub-processing units 508 references the associated local memory 550 using a local address; however, each location of the local memory 550 is also assigned a real address (RA) within the overall system's memory map. This allows Privilege Software to map a local memory 550 into the Effective Address (EA) of a process to facilitate DMA transfers between one local memory 550 and another local memory 550. The PU 504 can also directly access the local memory 550 using an effective address. In a preferred embodiment, the local memory 550 contains 556 kilobytes of storage, and the capacity of registers 554 is 128×128 bits.
The SPU core 510A is preferably implemented using a processing pipeline, in which logic instructions are processed in a pipelined fashion. Although the pipeline may be divided into any number of stages at which instructions are processed, the pipeline generally comprises fetching one or more instructions, decoding the instructions, checking for dependencies among the instructions, issuing the instructions, and executing the instructions. In this regard, the IU 552 includes an instruction buffer, instruction decode circuitry, dependency check circuitry, and instruction issue circuitry.
The instruction buffer preferably includes a plurality of registers that are coupled to the local memory 550 and operable to temporarily store instructions as they are fetched. The instruction buffer preferably operates such that all the instructions leave the registers as a group, i.e., substantially simultaneously. Although the instruction buffer may be of any size, it is preferred that it is of a size not larger than about two or three registers.
In general, the decode circuitry breaks down the instructions and generates logical micro-operations that perform the function of the corresponding instruction. For example, the logical micro-operations may specify arithmetic and logical operations, load and store operations to the local memory 550, register source operands and/or immediate data operands. The decode circuitry may also indicate which resources the instruction uses, such as target register addresses, structural resources, function units and/or busses. The decode circuitry may also supply information indicating the instruction pipeline stages in which the resources are required. The instruction decode circuitry is preferably operable to substantially simultaneously decode a number of instructions equal to the number of registers of the instruction buffer.
The dependency check circuitry includes digital logic that performs testing to determine whether the operands of a given instruction are dependent on the operands of other instructions in the pipeline. If so, then the given instruction should not be executed until such other operands are updated (e.g., by permitting the other instructions to complete execution). It is preferred that the dependency check circuitry determines dependencies of multiple instructions dispatched from the decode circuitry simultaneously.
The instruction issue circuitry is operable to issue the instructions to the floating point execution stages 556 and/or the fixed point execution stages 558.
The registers 554 are preferably implemented as a relatively large unified register file, such as a 128-entry register file. This allows for deeply pipelined high-frequency implementations without requiring register renaming to avoid register starvation. Renaming hardware typically consumes a significant fraction of the area and power in a processing system. Consequently, advantageous operation may be achieved when latencies are covered by software loop unrolling or other interleaving techniques.
Preferably, the SPU core 510A is of a superscalar architecture, such that more than one instruction is issued per clock cycle. The SPU core 510A preferably operates as a superscalar to a degree corresponding to the number of simultaneous instruction dispatches from the instruction buffer, such as between 2 and 3 (meaning that two or three instructions are issued each clock cycle). Depending upon the required processing power, a greater or lesser number of floating point execution stages 556 and fixed point execution stages 558 may be employed. In a preferred embodiment, the floating point execution stages 556 operate at a speed of 32 billion floating point operations per second (32 GFLOPS), and the fixed point execution stages 558 operate at a speed of 32 billion operations per second (32 GOPS).
The MFC 510B preferably includes a bus interface unit (BIU) 564, a memory management unit (MMU) 562, and a direct memory access controller (DMAC) 560. With the exception of the DMAC 560, the MFC 510B preferably runs at half frequency (half speed) as compared with the SPU core 510A and the bus 512 to meet low power dissipation design objectives. The MFC 510B is operable to handle data and instructions coming into the SPU 508 from the bus 512, to provide address translation for the DMAC, and to perform snoop operations for data coherency. The BIU 564 provides an interface between the bus 512 and the MMU 562 and DMAC 560. Thus, the SPU 508 (including the SPU core 510A and the MFC 510B) and the DMAC 560 are connected physically and/or logically to the bus 512.
The MMU 562 is preferably operable to translate effective addresses (taken from DMA commands) into real addresses for memory access. For example, the MMU 562 may translate the higher order bits of the effective address into real address bits. The lower-order address bits, however, are preferably untranslatable and are considered both logical and physical for use to form the real address and request access to memory. In one or more embodiments, the MMU 562 may be implemented based on a 64-bit memory management model, and may provide 2^64 bytes of effective address space with 4K-, 64K-, 1M-, and 16M-byte page sizes and 256 MB segment sizes. Preferably, the MMU 562 is operable to support up to 2^65 bytes of virtual memory, and 2^42 bytes (4 terabytes) of physical memory for DMA commands. The hardware of the MMU 562 may include an 8-entry, fully associative SLB, a 256-entry, 4-way set associative TLB, and a 4×4 Replacement Management Table (RMT) for the TLB, used for hardware TLB miss handling.
The DMAC 560 is preferably operable to manage DMA commands from the SPU core 510A and one or more other devices such as the PU 504 and/or the other SPUs. There may be three categories of DMA commands: Put commands, which operate to move data from the local memory 550 to the shared memory 514; Get commands, which operate to move data into the local memory 550 from the shared memory 514; and Storage Control commands, which include SLI commands and synchronization commands. The synchronization commands may include atomic commands, send signal commands, and dedicated barrier commands. In response to DMA commands, the MMU 562 translates the effective address into a real address and the real address is forwarded to the BIU 564.
The SPU core 510A preferably uses a channel interface and data interface to communicate (send DMA commands, status, etc.) with an interface within the DMAC 560. The SPU core 510A dispatches DMA commands through the channel interface to a DMA queue in the DMAC 560. Once a DMA command is in the DMA queue, it is handled by issue and completion logic within the DMAC 560. When all bus transactions for a DMA command are finished, a completion signal is sent back to the SPU core 510A over the channel interface.
The PU core 504A may include an L1 cache 570, an instruction unit 572, registers 574, one or more floating point execution stages 576 and one or more fixed point execution stages 578. The L1 cache provides data caching functionality for data received from the shared memory 106, the processors 102, or other portions of the memory space through the MFC 504B. As the PU core 504A is preferably implemented as a superpipeline, the instruction unit 572 is preferably implemented as an instruction pipeline with many stages, including fetching, decoding, dependency checking, issuing, etc. The PU core 504A is also preferably of a superscalar configuration, whereby more than one instruction is issued from the instruction unit 572 per clock cycle. To achieve a high processing power, the floating point execution stages 576 and the fixed point execution stages 578 include a plurality of stages in a pipeline configuration. Depending upon the required processing power, a greater or lesser number of floating point execution stages 576 and fixed point execution stages 578 may be employed.
The MFC 504B includes a bus interface unit (BIU) 580, an L2 cache memory 582, a non-cachable unit (NCU) 584, a core interface unit (CIU) 586, and a memory management unit (MMU) 588. Most of the MFC 504B runs at half frequency (half speed) as compared with the PU core 504A and the bus 108 to meet low power dissipation design objectives.
The BIU 580 provides an interface between the bus 108 and the L2 cache 582 and NCU 584 logic blocks. To this end, the BIU 580 may act as a Master as well as a Slave device on the bus 108 in order to perform fully coherent memory operations. As a Master device it may source load/store requests to the bus 108 for service on behalf of the L2 cache 582 and the NCU 584. The BIU 580 may also implement a flow control mechanism for commands which limits the total number of commands that can be sent to the bus 108. The data operations on the bus 108 may be designed to take eight beats and, therefore, the BIU 580 is preferably designed around 128 byte cache-lines and the coherency and synchronization granularity is 128 KB.
The L2 cache memory 582 (and supporting hardware logic) is preferably designed to cache 512 KB of data. For example, the L2 cache 582 may handle cacheable loads/stores, data pre-fetches, instruction fetches, instruction pre-fetches, cache operations, and barrier operations. The L2 cache 582 is preferably an 8-way set associative system. The L2 cache 582 may include six reload queues matching six (6) castout queues (e.g., six RC machines), and eight (64-byte wide) store queues. The L2 cache 582 may operate to provide a backup copy of some or all of the data in the L1 cache 570. Advantageously, this is useful in restoring state(s) when processing nodes are hot-swapped. This configuration also permits the L1 cache 570 to operate more quickly with fewer ports, and permits faster cache-to-cache transfers (because the requests may stop at the L2 cache 582). This configuration also provides a mechanism for passing cache coherency management to the L2 cache memory 582.
The NCU 584 interfaces with the CIU 586, the L2 cache memory 582, and the BIU 580 and generally functions as a queueing/buffering circuit for non-cacheable operations between the PU core 504A and the memory system. The NCU 584 preferably handles all communications with the PU core 504A that are not handled by the L2 cache 582, such as cache-inhibited load/stores, barrier operations, and cache coherency operations. The NCU 584 is preferably run at half speed to meet the aforementioned power dissipation objectives.
The CIU 586 is disposed on the boundary of the MFC 504B and the PU core 504A and acts as a routing, arbitration, and flow control point for requests coming from the execution stages 576, 578, the instruction unit 572, and the MMU unit 588 and going to the L2 cache 582 and the NCU 584. The PU core 504A and the MMU 588 preferably run at full speed, while the L2 cache 582 and the NCU 584 are operable for a 2:1 speed ratio. Thus, a frequency boundary exists in the CIU 586 and one of its functions is to properly handle the frequency crossing as it forwards requests and reloads data between the two frequency domains.
The CIU 586 is comprised of three functional blocks: a load unit, a store unit, and a reload unit. In addition, a data pre-fetch function is performed by the CIU 586 and is preferably a functional part of the load unit. The CIU 586 is preferably operable to: (i) accept load and store requests from the PU core 504A and the MMU 588; (ii) convert the requests from full speed clock frequency to half speed (a 2:1 clock frequency conversion); (iii) route cachable requests to the L2 cache 582, and route non-cachable requests to the NCU 584; (iv) arbitrate fairly between the requests to the L2 cache 582 and the NCU 584; (v) provide flow control over the dispatch to the L2 cache 582 and the NCU 584 so that the requests are received in a target window and overflow is avoided; (vi) accept load return data and route it to the execution stages 576, 578, the instruction unit 572, or the MMU 588; (vii) pass snoop requests to the execution stages 576, 578, the instruction unit 572, or the MMU 588; and (viii) convert load return data and snoop traffic from half speed to full speed.
The MMU 588 preferably provides address translation for the PU core 504A, such as by way of a second level address translation facility. A first level of translation is preferably provided in the PU core 504A by separate instruction and data ERAT (effective to real address translation) arrays that may be much smaller and faster than the MMU 588.
In a preferred embodiment, the PU 504 operates at 4-6 GHz, 10F04, with a 64-bit implementation. The registers are preferably 64 bits long (although one or more special purpose registers may be smaller) and effective addresses are 64 bits long. The instruction unit 572, registers 574 and execution stages 576 and 578 are preferably implemented using PowerPC technology to achieve the (RISC) computing technique.
Additional details regarding the modular structure of this computer system may be found in U.S. Pat. No. 6,526,491, the entire disclosure of which is hereby incorporated by reference.
In accordance with at least one further aspect of the present invention, the methods and apparatus described above may be achieved utilizing suitable hardware, such as that illustrated in the figures. Such hardware may be implemented utilizing any of the known technologies, such as standard digital circuitry, any of the known processors that are operable to execute software and/or firmware programs, one or more programmable digital devices or systems, such as programmable read only memories (PROMs), programmable array logic devices (PALs), etc. Furthermore, although the apparatus illustrated in the figures are shown as being partitioned into certain functional blocks, such blocks may be implemented by way of separate circuitry and/or combined into one or more functional units. Still further, the various aspects of the invention may be implemented by way of software and/or firmware program(s) that may be stored on suitable storage medium or media (such as floppy disk(s), memory chip(s), etc.) for transportability and/or distribution.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims.
Inoue, Keisuke, Yasue, Masahiro
References Cited:
U.S. Pat. No. 6,185,654 (priority Jul. 17, 1998), Hewlett Packard Enterprise Development LP, "Phantom resource memory address mapping system."
U.S. Pat. No. 6,526,491 (priority Mar. 22, 2001), Sony Computer Entertainment Inc., "Memory protection system and method for computer architecture for broadband networks."
U.S. Pat. No. 6,628,668 (priority Mar. 16, 1999), Fujitsu Network Communications, Inc., "Crosspoint switch bandwidth allocation management."
U.S. Pat. No. 6,874,074 (priority Nov. 13, 2000), Wind River Systems, Inc., "System and method for memory reclamation."
U.S. Patent Application Publication No. 2002/0059503.