A multi-processor system (10) includes a plurality of processors (12). Each processor (12) has an integrated memory (16) operable to provide, receive, and store data, an integrated memory controller (30) that controls read and write access to the integrated memory (16), and an integrated memory directory (18) operable to maintain a plurality of memory references to data within the integrated memory (16). The multi-processor system (10) also includes an external switch (14) coupled to each of the plurality of processors (12). The external switch (14) passes data to and from any of the plurality of processors (12) and includes an external directory (22). The external directory (22) provides each of the plurality of processors (12) with a memory reference to remote data that is not provided within its own integrated memory directory (18).
1. A multi-processor system, comprising:
a plurality of processors, each processor including an integrated memory operable to provide/receive/store data, each processor including an integrated memory controller operable to control access to the integrated memory, each processor including an integrated memory directory operable to maintain a plurality of memory references to data within the integrated memory;
an external switch coupled to each of the plurality of processors, the external switch operable to pass data to and from any of the plurality of processors, the external switch including an external directory, the external directory operable to provide a memory reference for each of the plurality of processors to remote data that is not provided within its own integrated memory directory;
wherein the plurality of memory references are represented by bit tags, state bits, pointer/vector bits, and ECC bits.
2. The multi-processor system of
3. The multi-processor system of
4. A method of accessing data in a multi-processor system, comprising:
storing information in a local memory;
maintaining a list of memory references to the information in the local memory;
generating a request for data;
determining whether the data is associated with information stored in the local memory and has a memory reference;
forwarding the request to an external switch in response to the data not having a memory reference, the data not having a memory reference to the local memory being data stored in a remote memory;
identifying a memory reference for the data in response to the request;
obtaining the data from the remote memory via the external switch in response to the identified memory reference;
representing the identified memory reference with bit tags, pointer/vector bits, state bits, and ECC bits.
5. A processor in a multi-processor system, comprising:
a local memory integrated in the processor and operable to provide/receive/store data;
a memory controller integrated in the processor and operable to control access to and from the local memory;
a memory directory integrated in the processor and operable to maintain memory references to data within the local memory, the memory directory operable to generate a data request for data not having a memory reference;
a network interface integrated in the processor and operable to provide the data request to an external directory external to the processor, the network interface operable to provide a memory reference generated by the external directory to the memory directory;
wherein the memory references are represented with bit tags, state bits, pointer/vector bits, and ECC bits.
Controlling access to memory in a multi-processor system is a difficult process, especially when many processors share data in memory. Typically, each processor maintains a small cache of its most frequently used data for quick access so that time-consuming requests to the common system memory may be avoided. However, the cache of each processor must be updated with changes to its associated data that are reflected in the common system memory. One technique for updating processor caches is to couple each processor to a snoopy bus. A request for access to data by a requesting processor is broadcast to the other processors over the snoopy bus. Each processor "snoops" its cache to determine whether it holds the most recent copy of the requested data. If a processor does have the most recent copy, that processor provides the data to the requesting processor. If no processor has the most recent copy, a memory access is required to fulfill the request. If a processor updates a memory location, the update is broadcast over the snoopy bus to the other processors in the system. Each processor checks its cache for data corresponding to the updated memory location and, if present, either removes that data and its memory location from the cache or updates the cache with the new information. This snoopy bus technique is effective for a small number of processors within a computer system but becomes ineffective for computer systems having hundreds of processors.
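The broadcast-and-snoop behavior just described can be pictured with a short C sketch; it is only an illustration of the general technique, and the cache layout, sizes, and names below are assumptions rather than details taken from this patent.

```c
/* Minimal sketch of snoopy-bus invalidation, assuming a direct-mapped
 * cache per processor: when a write is broadcast, every cache checks
 * for the affected line and drops its stale copy. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define LINES_PER_CACHE 256
#define LINE_SIZE       64

typedef struct {
    uint64_t tag;    /* which memory line is cached here */
    bool     valid;  /* line holds usable data           */
} cache_line_t;

typedef struct {
    cache_line_t lines[LINES_PER_CACHE];
} cpu_cache_t;

/* A write to 'addr' is broadcast on the bus: each processor snoops its own
 * cache and invalidates any copy of the affected line. */
void snoop_write_broadcast(cpu_cache_t caches[], size_t ncpus, uint64_t addr)
{
    uint64_t line = addr / LINE_SIZE;
    for (size_t cpu = 0; cpu < ncpus; cpu++) {
        size_t idx = line % LINES_PER_CACHE;          /* direct-mapped lookup */
        if (caches[cpu].lines[idx].valid && caches[cpu].lines[idx].tag == line)
            caches[cpu].lines[idx].valid = false;     /* stale copy removed   */
    }
}
```

The cost that motivates the directory approach is visible in the loop: every broadcast touches every cache, so bus traffic grows with the number of processors.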
Another technique is to provide a directory-based memory configuration. In a directory-based memory, a directory maintains a directory entry corresponding to every entry in memory. The directory entry specifies whether the associated data in memory is valid and where the most recent copy of the data may be accessed. The directory-based configuration avoids coupling all of the processors in the computer system together and spares processors from handling the broadcast requests found in snoopy bus designs; communication need only occur with the processor having the most recent copy of the data. The size of the directory is the constraint on this configuration, since the directory becomes too large to support the number of processors and memories in a large computer system. Therefore, it is desirable to provide a memory access control mechanism for computer systems with a large number of processors.
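To make the directory idea concrete, here is a minimal sketch assuming a conventional three-state entry (unowned, shared, exclusive); the encoding and names are illustrative and are not the specific directory format of this patent.

```c
/* Minimal sketch of a directory-based scheme: one entry per memory line
 * records whether memory holds the current data or which node does. */
#include <stdint.h>

typedef enum {
    DIR_UNOWNED,    /* memory holds the only valid copy    */
    DIR_SHARED,     /* one or more nodes hold clean copies */
    DIR_EXCLUSIVE   /* a single node holds a modified copy */
} dir_state_t;

typedef struct {
    dir_state_t state;
    uint16_t    owner;    /* node to ask when state is DIR_EXCLUSIVE   */
    uint64_t    sharers;  /* one bit per node when state is DIR_SHARED */
} dir_entry_t;

/* Decide where a read request for a line must be sent: to the exclusive
 * owner if one exists, otherwise to the line's home memory. */
uint16_t dir_read_target(const dir_entry_t *e, uint16_t home_node)
{
    return (e->state == DIR_EXCLUSIVE) ? e->owner : home_node;
}
```

Because only the home node and, at most, one owner are contacted, no broadcast is needed; the trade-off, as noted above, is that the table grows with the size of memory.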
From the foregoing, it may be appreciated that a need has arisen for providing a multi-processor system with processors having integrated memories and memory directories linked together through an external directory. In accordance with the present invention, a multi-processor system and method of accessing data therein are provided that substantially eliminate or reduce disadvantages and problems of conventional multi-processor systems.
According to an embodiment of the present invention, there is provided a multi-processor system that includes a plurality of processors, wherein each processor includes an integrated memory, an integrated memory controller, and an integrated memory directory. The integrated memory provides, receives, and stores data. The integrated memory controller controls access to and from the integrated memory. The integrated memory directory maintains a plurality of memory references to data within the integrated memory. The multi-processor system also includes an external switch coupled to each of the plurality of processors. The external switch passes data to and from any of the plurality of processors. The external switch includes an external directory. The external directory provides each of the plurality of processors with a memory reference to remote data that is not provided within its own integrated memory directory.
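As a rough picture of the arrangement summarized above, the following C sketch lists the named components; the types, field names, and processor count are hypothetical placeholders rather than details from the patent.

```c
/* Hypothetical layout of the described system: each processor integrates
 * memory, a memory controller, and a memory directory, and an external
 * switch with an external directory links the processors together. */
#include <stdint.h>

#define NUM_PROCESSORS 64        /* illustrative system size */

struct dir_entry;                /* directory entry format, defined elsewhere */

typedef struct {
    uint8_t          *integrated_memory;   /* provides, receives, and stores data      */
    void             *memory_controller;   /* controls access to the integrated memory */
    struct dir_entry *memory_directory;    /* references to data in the local memory   */
} processor_t;

typedef struct {
    processor_t      *ports[NUM_PROCESSORS];  /* every processor couples to the switch  */
    struct dir_entry *external_directory;     /* remote references the processors lack  */
} external_switch_t;
```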
The present invention provides various technical advantages over conventional multi-processor systems. For example, one technical advantage is to integrate memory, memory control, and memory directory into a processor. Another technical advantage is the ability to extend the integrated memory directory capability with external support in order to implement large cache coherent multi-processor systems. Yet another technical advantage is to remove large system directory policy decisions from the individual processor in the system. Still another technical advantage is to provide a directory protocol that can be used with commodity processors having integrated memories and directories. Other technical advantages may be readily ascertainable by those skilled in the art from the following figures, description, and claims.
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings, wherein like reference numerals represent like parts, in which:
In operation, memory directory 18 of a particular processor 12 includes memory references to data stored within its corresponding memory 16. For smaller multi-processor systems, memory directory 18 may also include memory references to data stored in a remote memory 16 associated with a different processor 12 within a local regional group. As memory sizes and systems become larger, an individual memory directory 18 of a particular processor 12 may not be able to hold a memory reference to all data in the system that the particular processor 12 desires to access. To alleviate this situation, external directory 22 of external switch 14 includes a capability to retrieve memory references to data in memories remote from the particular processor 12.
When the particular processor 12 desires to access data from a remote memory 16, its memory directory 18 determines that it does not have a memory reference to the desired data. Memory directory 18 generates a data request that is sent to external directory 22 in external switch 14. External directory 22 processes the request and generates a memory reference to the desired data. External switch 14 uses the generated memory reference to retrieve the desired data and provide it to the requesting processor 12.
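The miss-and-forward flow just described can be summarized in a short C sketch. The helper functions stand in for hardware behavior and are assumptions made for illustration, not interfaces defined by the patent.

```c
/* Sketch of the access flow: consult the integrated memory directory first,
 * and fall back to the external directory in the switch when no local
 * memory reference exists. All helpers below are hypothetical stand-ins. */
#include <stdbool.h>
#include <stdint.h>

bool     local_directory_lookup(uint64_t addr, uint64_t *mem_ref);  /* memory directory 18   */
uint64_t external_directory_resolve(uint64_t addr);                 /* external directory 22 */
void     fetch_local(uint64_t mem_ref, void *buf);                  /* read local memory 16  */
void     fetch_via_switch(uint64_t mem_ref, void *buf);             /* pull remote data      */

void access_data(uint64_t addr, void *buf)
{
    uint64_t mem_ref;

    if (local_directory_lookup(addr, &mem_ref)) {
        fetch_local(mem_ref, buf);            /* local reference found */
        return;
    }

    /* No local reference: the data lives in a remote memory, so the request
     * is forwarded to the external directory, which generates a reference
     * that the switch then uses to retrieve the data. */
    mem_ref = external_directory_resolve(addr);
    fetch_via_switch(mem_ref, buf);
}
```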
Memory directory 18 preferably holds memory references to the data that has been most recently accessed. If data is requested by the particular processor 12 and that data resides in its associated memory 16, then memory directory 18 generates a memory reference to the new data. If memory directory 18 is fully occupied with memory references, then memory directory 18 may overwrite the memory reference to the data that has not been accessed for the longest period of time with the newly generated memory reference. External directory 22 may operate in a similar manner, maintaining memory references to the most recently accessed data from among the plurality of processors 12 and generating a new memory reference only for a request to data not currently represented by a memory reference within external directory 22. Though not necessary, memory references within each memory directory 18 may be represented in a similar manner as memory references in external directory 22.
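A minimal sketch of that least-recently-used replacement, assuming one access timestamp per entry (a data layout chosen for illustration, not taken from the patent), might look as follows.

```c
/* Illustrative least-recently-used replacement for a full memory directory:
 * the entry that has gone longest without access is overwritten by the
 * newly generated memory reference. */
#include <stddef.h>
#include <stdint.h>

#define DIR_ENTRIES 131072       /* 2^17 entries, matching the sizing example below */

typedef struct {
    uint64_t mem_ref;            /* the stored memory reference         */
    uint64_t last_access;        /* timestamp of the most recent access */
} dir_slot_t;

/* Insert a new reference, evicting the least recently used slot when full. */
void directory_insert(dir_slot_t dir[], uint64_t mem_ref, uint64_t now)
{
    size_t victim = 0;
    for (size_t i = 1; i < DIR_ENTRIES; i++) {
        if (dir[i].last_access < dir[victim].last_access)
            victim = i;          /* older access time: better eviction candidate */
    }
    dir[victim].mem_ref = mem_ref;
    dir[victim].last_access = now;
}
```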
The size of memory directory 18 may vary according to the size of its associated memory 16. For example, a processor 12 holding eight megabytes of cache with sixty-four byte lines in a four-to-one ratio may use 2^17 entries. Using a four gigabyte dynamic random access memory for memory 16, memory references may be represented by a thirteen-bit tag, two state bits, four pointer/vector bits, and two error correction code (ECC) bits. With twenty-one bits per entry and 2^17 entries, memory directory 18 has a size of 2.6 megabits. As another example, a processor 12 holding thirty-two megabytes of cache with one hundred twenty-eight byte lines in a four-to-one ratio may use 2^18 entries. Using an eight gigabyte dynamic random access memory for memory 16, memory references may be represented by a twelve-bit tag, two state bits, four pointer/vector bits, and two ECC bits. With twenty bits per entry and 2^18 entries, memory directory 18 has a size of 5 megabits.
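The arithmetic behind those two figures can be checked directly; the C program below simply restates the entry counts and field widths from the paragraph and multiplies them out.

```c
/* Worked check of the directory sizing examples above:
 * 2^17 entries * 21 bits per entry ~= 2.6 megabits,
 * 2^18 entries * 20 bits per entry  = 5   megabits.  */
#include <stdio.h>

int main(void)
{
    /* Example 1: 2^17 entries, 13-bit tag + 2 state + 4 pointer/vector + 2 ECC. */
    unsigned long long entries1 = 1ULL << 17;
    unsigned bits1 = 13 + 2 + 4 + 2;               /* 21 bits per entry */

    /* Example 2: 2^18 entries, 12-bit tag + 2 state + 4 pointer/vector + 2 ECC. */
    unsigned long long entries2 = 1ULL << 18;
    unsigned bits2 = 12 + 2 + 4 + 2;               /* 20 bits per entry */

    printf("directory 1: %.2f megabits\n", (double)(entries1 * bits1) / (1 << 20)); /* ~2.63 */
    printf("directory 2: %.2f megabits\n", (double)(entries2 * bits2) / (1 << 20)); /*  5.00 */
    return 0;
}
```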
With the presence of external directory 22, each memory directory 18 may be set up to track the cached memory references of its local memory 16, while external directory 22 may be set up to track remote cached memory references for the processors 12. Through the use of memory directories 18 at each processor 12 and external directories 22 in a large multi-processor system 10 environment, cache coherency is provided to ensure that all processors 12 have an accurate view of the entire system memory. Requests for memory may even be passed from one external switch 14 to another to further extend the memory and access mechanism of multi-processor system 10.
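One way to picture that division of labor, under the assumption of a simple line-interleaved address-to-home mapping (a mapping chosen only for illustration; the patent does not specify one), is to classify each address and pick the directory that tracks it.

```c
/* Illustrative split of responsibility: the integrated memory directory
 * tracks lines homed in the local memory, while requests for lines homed
 * on other processors are routed through the external directory. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_PROCESSORS 64
#define LINE_SIZE      64

unsigned home_node(uint64_t addr)
{
    return (unsigned)((addr / LINE_SIZE) % NUM_PROCESSORS);  /* line-interleaved homes */
}

/* True when this processor's integrated directory is responsible for the
 * line; otherwise the external directory in the switch handles the request. */
bool tracked_locally(uint64_t addr, unsigned my_node)
{
    return home_node(addr) == my_node;
}
```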
Thus, it is apparent that there has been provided, in accordance with the present invention, a multi-processor system and method of accessing data therein that satisfy the advantages set forth above. Although the present invention has been described in detail, various changes, substitutions, and alterations may be readily ascertainable by those skilled in the art and may be made herein without departing from the spirit and scope of the present invention as defined by the following claims.
Kuskin, Jeffrey S., Galles, Michael B.
Patent | Priority | Assignee | Title |
7089372, | Dec 01 2003 | International Business Machines Corporation | Local region table for storage of information regarding memory access by other nodes |
7870365, | Jul 07 2008 | Ovics | Matrix of processors with data stream instruction execution pipeline coupled to data switch linking to neighbor units by non-contentious command channel / data channel |
7958341, | Jul 07 2008 | Ovics | Processing stream instruction in IC of mesh connected matrix of processors containing pipeline coupled switch transferring messages over consecutive cycles from one link to another link or memory |
8131975, | Jul 07 2008 | Ovics | Matrix processor initialization systems and methods |
8145880, | Jul 07 2008 | Ovics | Matrix processor data switch routing systems and methods |
8327114, | Jul 07 2008 | Ovics | Matrix processor proxy systems and methods |
9280513, | Jul 07 2008 | Ovics | Matrix processor proxy systems and methods |
Patent | Priority | Assignee | Title |
5303362, | Mar 20 1991 | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | Coupled memory multiprocessor computer system including cache coherency management protocols |
5394555, | Dec 23 1992 | BULL HN INFORMATION SYSTEMS INC | Multi-node cluster computer system incorporating an external coherency unit at each node to insure integrity of information stored in a shared, distributed memory |
5522058, | Aug 11 1992 | Kabushiki Kaisha Toshiba | Distributed shared-memory multiprocessor system with reduced traffic on shared bus |
5699551, | Dec 01 1989 | ARM Finance Overseas Limited | Software invalidation in a multiple level, multiple cache system |
5802578, | Jun 12 1996 | International Business Machines Corporation | Multinode computer system with cache for combined tags |
5829052, | Dec 28 1994 | Intel Corporation | Method and apparatus for managing memory accesses in a multiple multiprocessor cluster system |
5890217, | Mar 20 1995 | Fujitsu Limited; PFU Limited | Coherence apparatus for cache of multiprocessor |
5944780, | May 05 1997 | AT&T Properties, LLC; AT&T INTELLECTUAL PROPERTY II, L P | Network with shared caching |
6088769, | Oct 01 1996 | International Business Machines Corporation | Multiprocessor cache coherence directed by combined local and global tables |
6092155, | Jul 10 1997 | International Business Machines Corporation | Cache coherent network adapter for scalable shared memory processing systems |
6148378, | May 26 1997 | Bull S.A. | Process for operating a machine with non-uniform memory access and cache coherency and a machine for implementing the process |
EP817069, | |||
EP881579, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Sep 07 1999 | KUSKIN, JEFFREY S | Silicon Graphics, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010325 | /0832 | |
Sep 21 1999 | GALLES, MICHAEL B | Silicon Graphics, Inc | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010325 | /0832 | |
Oct 15 1999 | Silicon Graphics, Inc. | (assignment on the face of the patent) | / | |||
Apr 12 2005 | SILICON GRAPHICS, INC AND SILICON GRAPHICS FEDERAL, INC EACH A DELAWARE CORPORATION | WELLS FARGO FOOTHILL CAPITAL, INC | SECURITY AGREEMENT | 016871 | /0809 | |
Oct 17 2006 | Silicon Graphics, Inc | General Electric Capital Corporation | SECURITY INTEREST SEE DOCUMENT FOR DETAILS | 018545 | /0777 | |
Sep 26 2007 | General Electric Capital Corporation | MORGAN STANLEY & CO , INCORPORATED | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 019995 | /0895 | |
May 08 2009 | Silicon Graphics, Inc | SILICON GRAPHICS INTERNATIONAL, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 034804 | /0446 | |
May 08 2009 | SILICON GRAPHICS, INC ET AL | SILICON GRAPHICS INTERNATIONAL, CORP | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 027727 | /0020 | |
May 13 2009 | SILICON GRAPHICS INTERNATIONAL, INC | SGI INTERNATIONAL, INC | CHANGE OF NAME SEE DOCUMENT FOR DETAILS | 034804 | /0661 | |
Feb 08 2012 | SGI INTERNATIONAL, INC | SILICON GRAPHICS INTERNATIONAL, CORP | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 027727 | /0020 | |
Aug 08 2012 | SGI INTERNATIONAL, INC | Silicon Graphics International Corp | MERGER SEE DOCUMENT FOR DETAILS | 034804 | /0437 | |
Jan 27 2015 | Silicon Graphics International Corp | MORGAN STANLEY SENIOR FUNDING, INC | SECURITY INTEREST SEE DOCUMENT FOR DETAILS | 035200 | /0722 | |
Nov 01 2016 | MORGAN STANLEY SENIOR FUNDING, INC , AS AGENT | Silicon Graphics International Corp | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 040545 | /0362 | |
May 01 2017 | Silicon Graphics International Corp | Hewlett Packard Enterprise Development LP | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 044128 | /0149 |
Date | Maintenance Fee Events |
May 17 2004 | ASPN: Payor Number Assigned. |
May 18 2007 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
May 18 2011 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
May 18 2015 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity. |
Date | Maintenance Schedule |
Nov 18 2006 | 4 years fee payment window open |
May 18 2007 | 6 months grace period start (w surcharge) |
Nov 18 2007 | patent expiry (for year 4) |
Nov 18 2009 | 2 years to revive unintentionally abandoned end. (for year 4) |
Nov 18 2010 | 8 years fee payment window open |
May 18 2011 | 6 months grace period start (w surcharge) |
Nov 18 2011 | patent expiry (for year 8) |
Nov 18 2013 | 2 years to revive unintentionally abandoned end. (for year 8) |
Nov 18 2014 | 12 years fee payment window open |
May 18 2015 | 6 months grace period start (w surcharge) |
Nov 18 2015 | patent expiry (for year 12) |
Nov 18 2017 | 2 years to revive unintentionally abandoned end. (for year 12) |