Technologies are generally described for a system for sending a data block stored in a cache. In some examples described herein, a system may comprise a first processor in a first tile. The first processor is effective to generate a request for a data block, the request including a destination identifier identifying a destination tile for the data block, the destination tile being distinct from the first tile. Some example systems may further comprise a second tile effective to receive the request, the second tile effective to determine a data tile including the data block, the second tile further effective to send the request to the data tile. Some example systems may still further comprise a data tile effective to receive the request from the second tile, the data tile effective to send the data block to the destination tile.
19. A method for sending a data block stored in a cache, the method comprising:
generating a request for the data block in a data tile by a first processor in a first tile, the request including a destination identifier that identifies a destination tile for the data block, the destination tile and data tile being different from the first tile;
receiving the request by a second tile from the first tile, the second tile being different from the destination tile; and
processing, by a second processor, the request received from the first tile so that the data block is sent to the destination tile.
1. A system effective to send a data block stored in a cache, the system comprising:
a first processor in a first tile, the first processor effective to generate a request for the data block, wherein the request includes a destination identifier that identifies a destination tile for the data block, the destination tile being different from the first tile and the data block being located in a tile different from the first tile; and
a second tile different from the destination tile and effective to receive the request from the first tile, the second tile effective to process the request from the first tile so that the data block is sent to the destination tile.
34. A method for requesting a data block in a die, the die including a first tile with a first processor and a first memory wherein the first memory is effective to include a table wherein the table is a subset of a directory for the die, and the die including a second tile, the second tile including a second processor and a second memory, and the second memory is effective to include a directory for the die, wherein the directory includes an index of tiles in the die and blocks stored in respective tiles, the method comprising:
generating a request by a first processor for a data block in a data tile different from the first tile;
searching the table in the first tile to determine if the data block is present in the table; and
when the data block is determined to be indexed in the table in association with the data tile, sending a request, by the first processor, from the first tile to the data tile.
29. A system effective to request a data block in a die, the system comprising:
a first tile, wherein the first tile includes a first processor and a first memory, wherein the first memory is effective to be in communication with the first processor, and wherein the first memory is effective to include a table, wherein the table is a subset of a directory for the die; and
a second tile, wherein the second tile is effective to be in communication with the first tile, the second tile includes a second processor and a second memory, and the second memory is effective to include the directory for the die, wherein the directory includes an index of tiles in the die and blocks stored in the respective tiles; wherein
the first processor is effective to
generate a request for a data block in a data tile different from the first tile;
search the table in the first tile to determine if the data block is indexed in the table; and
when the data block is determined to be indexed in the table in association with the data tile, the first processor is effective to send the request to the data tile.
2. The system of
3. The system of
4. The system of
the second tile is further effective to send the request to a data tile, the data tile including the data block;
the destination tile is different from the data tile; and
the data tile is effective to receive the request from the second tile, the data tile is further effective to send the data block to the destination tile identified by the destination identifier.
5. The system of
6. The system of
the request is a first request;
the first processor is further effective to generate a second request for the data block;
the first processor is effective to search the table to determine if the destination identifier is in the table;
when the destination identifier is determined to be in the table, the first processor is effective to send the second request from the first tile to the destination tile; and
when the destination identifier is determined to be absent from the table, the first processor is effective to send the second request from the first tile to the second tile.
7. The system of
9. The system of
10. The system of
11. The system of
the request is a first request;
the first processor is further effective to generate a second request for the data block;
the first processor is effective to send the second request from the first tile to the second tile;
the second tile is further effective to search the location hint table in the second tile to determine if the destination identifier is in the location hint table;
when the destination identifier is determined to be in the location hint table, the second tile is effective to send a location hint from the second tile to the first tile; and
the second tile is effective to search the directory and send the second request to the destination tile.
12. The system of
the request is a first request;
the first processor is further effective to generate a second request for the data block;
the first processor is effective to send the second request from the first tile to the second tile;
the second tile is further effective to search the location hint table to determine if the destination identifier is in the location hint table;
when the destination identifier is determined to be in the location hint table, the second tile is effective to send the second request from the directory tile to the destination tile;
when the destination identifier is determined to be absent from the location hint table, the second tile is effective to search the directory and send the second request from the second tile to the destination tile.
13. The system of
14. The system of
the data block is a first data block;
the first data block has a first priority level with respect to a probability of being accessed;
a second data block stored in the first tile has a second priority level with respect to a probability of being accessed; and
the first and second priority levels are different.
15. The system of
the first tile is in a first die;
the destination tile is in a second die; and
the first and second dies are different.
17. The system of
18. The system of
20. The method of
21. The method of
22. The method of
receiving, by the data tile, the request from the second tile; and
sending, by the data tile, the data block to the destination tile identified by the destination identifier.
23. The method of
24. The method of
generating a second request by the first processor for the data block;
searching, by the first processor, the table to determine if the destination identifier is present in the table; and
when the destination identifier is determined to be present in the table, sending by the first processor the second request from the first tile to the destination tile.
25. The method of
generating a second request by the first processor for the data block;
searching, by the first processor, the table to determine if the destination identifier is in the table; and
when the destination identifier is determined to be absent from the table, sending by the first processor the second request from the first tile to the second tile.
26. The method of
27. The method of
28. The method of
searching, by the second processor, the location hint table to determine if the destination identifier is in the location hint table;
when the destination identifier is determined to be in the location hint table, sending, by the second processor, a location hint from the second tile to the first tile; and
searching, by the second processor, the directory, and sending by the second processor the second request from the second tile to the destination tile.
30. The system of
31. The system of
32. The system of
the data block is a first data block;
the first data block has a first priority level with respect to a probability of being accessed;
a second data block stored in the first tile has a second priority level with respect to a probability of being accessed; and
the first and second priority levels are different.
33. The system of
35. The method of
36. The method of
37. The method of
the data block is a first data block;
the first data block has a first priority level with respect to a probability of being accessed;
a second data block stored in the first tile has a second priority level with respect to a probability of being accessed; and
the first and second priority levels are different.
38. The method of
The present application is related to and claims the benefit of U.S. patent application Ser. No. 12/649,659, entitled DATA STORAGE AND ACCESS IN MULTI-CORE PROCESSOR ARCHITECTURES, naming Yan Solihin as inventor, filed Dec. 30, 2009, now issued as U.S. Pat. No. 8,244,986. All subject matter of the listed application and of any and all parent, grandparent, great-grandparent, etc. applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In multi-core processor architectures, multiple processor cores may be included in a single integrated circuit die or on multiple integrated circuit dies that are arranged in a single chip package. A cache may be used to store data for access by one or more of the processor cores. The data can be a subset of data stored in a larger memory that is typically located outside of the die. Each processor may be provided with a cache that stores data for the processor. As a single piece of data may be stored in multiple caches, a cache coherence protocol may be configured to keep track of the data stored in multiple caches. The cache coherence protocol is configured to help ensure that multiple requests for data consistently result in the same data being returned.
The foregoing and other features of this disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings, in which:
all arranged according to at least some embodiments presented herein.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.
This disclosure is generally drawn, inter alia, to methods, apparatus, systems, devices, and computer program products related to performing data storage and access in multi-processor architectures.
Briefly stated, technologies are generally described for a system for sending a data block stored in a cache. In some examples described herein, a system may comprise a first processor in a first tile. The first processor is effective to generate a request for a data block, the request including a destination identifier identifying a destination tile for the data block, the destination tile being distinct from the first tile. Some example systems may further comprise a second tile effective to receive the request, the second tile effective to determine a data tile including the data block, the second tile further effective to send the request to the data tile. Some example systems may still further comprise a data tile effective to receive the request from the second tile, the data tile effective to send the data block to the destination tile.
As is described in more detail below, when processor 112 requests to access a data block, such as write to or read from a data block that is stored in another tile, processor 112 may initiate a request that the data block be stored in a cache in a neighbor or adjacent tile close to tile 118. Thereafter, when processor 112 in tile 118 initiates another request to read from or write to the stored data block, the stored data block can be more quickly accessed.
An example near location hint table 117 may include an index of caches in neighboring tiles that are each storing data used by processor 112 in tile 118. Processor 112 may be configured to search near location hint table 117 to see if requested data blocks are stored in a cache of a neighboring tile.
Directory 114 may be a directory that identifies (e.g., indexes) a location associated with each data block that is stored in all of the tiles of die 102. Directory 114 may be located in a single tile on die 102 or distributed among many or all tiles. If the requested data block is not indexed in near location hint table 117, processor 112 may search directory 114, which may be in another tile, for the requested data block.
As illustrated in
Directory 114 may be distributed among tiles or centralized in one tile. If directory 114 is distributed, for example, a first range of addresses (such as 0x0000-0x1000) may be stored in a first tile, a second range of addresses (such as 0x1001-0x2000) stored in a second tile, etc. Directory 114 in the figure thus may illustrate a first portion of an entire die directory where that first portion may be stored in tile 118 and additional portions may be stored in other tiles such as tiles 130, 131, 132, etc. Near location hint table 117, location hint structure 116, and directory 114 may be, for example, data structures accessible by processor 112 or may be registers within processor 112.
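As a concrete illustration of the address-range distribution just described, the following minimal Python sketch (not taken from the patent; the class, field, and function names are assumptions) models a directory split into per-tile slices together with a helper that picks the directory tile responsible for a given block address.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DirectorySlice:
    """Portion of the die-wide directory held by one directory tile."""
    addr_lo: int
    addr_hi: int                                  # inclusive upper bound of the range
    entries: dict = field(default_factory=dict)   # block address -> set of tile ids holding it

    def covers(self, addr: int) -> bool:
        return self.addr_lo <= addr <= self.addr_hi

@dataclass
class Tile:
    tile_id: int
    cache: dict = field(default_factory=dict)                 # block address -> data
    near_location_hints: dict = field(default_factory=dict)   # block address -> neighbor tile id
    directory_slice: Optional[DirectorySlice] = None           # only directory tiles carry a slice

def home_directory_tile(addr: int, tiles: list) -> Tile:
    """Return the tile whose directory slice covers this block address."""
    for tile in tiles:
        if tile.directory_slice is not None and tile.directory_slice.covers(addr):
            return tile
    raise LookupError("no directory slice covers address 0x%x" % addr)

# Example: the directory split across two tiles, as in the address ranges above.
tiles = [Tile(0, directory_slice=DirectorySlice(0x0000, 0x1000)),
         Tile(1, directory_slice=DirectorySlice(0x1001, 0x2000)),
         Tile(2)]
assert home_directory_tile(0x0500, tiles).tile_id == 0
assert home_directory_tile(0x1500, tiles).tile_id == 1
```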
In the examples presented in
In some examples, directory tile 143 may be configured to receive request 180, may search directory 114 in directory tile 143 and may determine a location where data block X is stored—for example cache 157. Tile 143 may be configured to then send request 180 to cache 157 in data tile 137. Tile 137 may be configured to send data block X to cache 153 in destination tile 133, from which data may be forwarded to the requesting processor 112 in tile 118. Transmission (or sending) of data block X to tile 133 may include a number of operations such as, for example, copying at least a part of the data block X, moving at least a part of the data block X, or some combination of copying or moving over time, etc. Moreover, read or write operations may include additional technical requirements for data communications such as asserting a read or write request line when address and/or data is available for communication over a bus, or by sending messages over an interconnection network, where the read/write operations may be either synchronous (e.g., clock based) or asynchronous (e.g., read/write acknowledge based).
In some examples, requesting tile 118 may be configured to make a similar request 180a for data block Y. A directory for data block Y may be stored in cache 152 of directory tile 132. Request 180a may include a destination identifier of neighboring cache 150 or 154. Tile 132 may be configured to receive request 180a, search directory 114 of tile 132, and send request 180a to data tile 135 where data block Y is stored. Tile 135 may be configured to send 184 data block Y to either or both of caches 150, 154 in destination tiles 130, 134. From there, data may be forwarded to the requesting processor 112 in tile 118.
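The request flow walked through above may be pictured with a hedged, self-contained sketch: the requesting tile names a destination tile in its request, the directory tile resolves which tile currently holds the block, and that data tile copies the block into the destination tile's cache before the data is forwarded to the requesting processor. The plain dictionaries and the function name are illustrative assumptions, not the claimed structures.

```python
def forward_block(request, directory, caches):
    """
    request:   dict with 'block' (address), 'requester' and 'destination' tile ids
    directory: dict mapping block address -> set of tile ids currently caching the block
    caches:    dict mapping tile id -> {block address: data}
    """
    holders = directory[request["block"]]                      # directory tile lookup
    data_tile = next(iter(holders))                            # pick one holder for simplicity
    data = caches[data_tile][request["block"]]                 # data tile reads the block
    caches[request["destination"]][request["block"]] = data    # block lands in the destination tile
    holders.add(request["destination"])                        # directory now also names the new location
    return data                                                # forwarded to the requesting processor

# Mirrors the walkthrough: tile 118 asks for block X to be delivered to tile 133.
caches = {118: {}, 133: {}, 137: {"X": "payload"}}
directory = {"X": {137}}
forward_block({"block": "X", "requester": 118, "destination": 133}, directory, caches)
assert "X" in caches[133] and 133 in directory["X"]
```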
In the examples presented in
Near location hint table 117 may track data location hints at a granularity that is the same as or different from a cache block. Near location hint table 117 can track a location of a cache block, a group of related cache blocks, a page, or other granularities. Table 117 is indexed by the data address based on the tracking granularity, and points to the destination tile that may store the data. Larger granularities may reduce the table size needed for tracking, while smaller granularities allow more flexible placement of data in different destination tiles. Coherence correctness may be maintained by the directory and a cache coherence protocol. In some examples, if an entry in table 117 has stale information, such as when data is no longer in an indexed destination tile, performance may be impacted while coherence correctness may not be affected. Entries in the table may be replaced if the table overflows. If an entry in table 117 points to a neighbor tile as a destination for data at a granularity larger than a cache block, some blocks of the data may still be in the local cache, while other blocks may be located in the destination neighbor tile. When a block belonging to that data is evicted from the local cache and the processor later requests it, the block may be fetched again and placed in the destination tile. Thus, over time, blocks of the particular data will move from the local cache to the destination tile.
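The granularity point can be made concrete with a short sketch, under the assumption that the hint table is keyed by the block address shifted down to the tracking granularity; the constants and function names below are illustrative, not taken from the patent.

```python
BLOCK_BITS = 6    # 64-byte cache blocks (illustrative values)
PAGE_BITS = 12    # 4 KiB pages

def hint_key(addr: int, granularity_bits: int) -> int:
    """Drop the low-order bits so every address in the same region shares one entry."""
    return addr >> granularity_bits

def lookup_hint(hint_table: dict, addr: int, granularity_bits: int):
    """Return the hinted destination tile id for this address, or None on a miss.

    A stale or missing hint only costs performance; coherence correctness is
    maintained by the directory and the cache coherence protocol, as noted above.
    """
    return hint_table.get(hint_key(addr, granularity_bits))

# Page-granularity tracking: one entry covers every block in the page.
hints = {hint_key(0x4000, PAGE_BITS): 133}
assert lookup_hint(hints, 0x4040, PAGE_BITS) == 133    # different block, same page
assert lookup_hint(hints, 0x5000, PAGE_BITS) is None   # miss: fall back to the directory
# Block-granularity tracking: the neighboring block no longer matches the entry.
assert lookup_hint({hint_key(0x4000, BLOCK_BITS): 133}, 0x4040, BLOCK_BITS) is None
```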
Tile 143 may be configured to update directory 114 and location hint structure 116 to indicate that X is now stored in tile 133. Tile 132 may be configured to update directory 114 and location hint structure 116 to indicate that Y is now stored in tiles 130 and 134. An entry in location hint structure 116 conveys location hint information to a processor that needs to read a block. In response to this information, the processor in the requesting tile may be configured to cache the data locally. Alternatively, if the destination tile is nearby, the processor can be configured to request storage of the data in the destination tile's cache. The processor can be configured to update its own near location hint table so that the processor searches this destination tile for future requests before querying the directory. The location hint structure may be used for performance improvement and need not contain precise information. The location hint structure may be configured to keep track of all locations for given data, or it may track only some locations. The directory keeps track of all possible locations for a given data block. If system 100 is designed to track the location of a data block at block granularity, the directory can be used to provide such information. If system 100 is designed to track the location of a group of blocks, a page, or another granularity, then the location hint structure may be used.
In an example, when a tile such as tile 118 requests data block X from tile 143, tile 143 may be configured to first search location hint structure 116 for a location of data block X before searching the entire directory 114. The requesting tile (tile 118 or another tile) then has the option of searching in the tile indicated by location hint structure 116 or searching in directory 114. Location hint structure 116 may be within directory 114, outside directory 114, inside die 102 or outside die 102.
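A minimal sketch of the lookup order at the directory tile described in the preceding two paragraphs follows, assuming a small hint structure that is consulted before the full directory; the names and return conventions are hypothetical.

```python
def locate_block(block_addr, location_hints, directory):
    """
    location_hints: dict key -> hinted tile id (best effort, possibly imprecise)
    directory:      dict block address -> set of tile ids (authoritative)

    Returns (tile_id, from_hint) so the caller can also send a location hint back
    to the requesting tile when the cheap lookup succeeded.
    """
    hinted = location_hints.get(block_addr)
    if hinted is not None:
        return hinted, True                   # fast path: answered by the hint structure
    holders = directory.get(block_addr)
    if holders:
        return next(iter(holders)), False     # slower path: full directory search
    return None, False                        # block is not cached on the die

assert locate_block("X", {"X": 133}, {"X": {137}}) == (133, True)
assert locate_block("X", {}, {"X": {137}}) == (137, False)
```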
In some examples, caches 150, 153, 154 may be used by tile 118 to store data blocks that may not be able to fit in cache 110 of tile 118 or to store data blocks with a different priority level from the data block stored in cache 110. For example, a block with a higher probability of being accessed in the near future may have a higher priority. Blocks with higher priority than others may be placed as near to the processor as possible to minimize the latency of accessing them. For example, blocks with the highest priority levels may be placed in the local cache of the requesting tile, and blocks with slightly lower priority levels may be placed in the closest neighbor tiles. Having data stored in tiles physically close to tile 118 may yield many benefits. In one example, the placement of data blocks in physical proximity to tile 118 can reduce data access times for processor 112 of tile 118. In another example, the placement of data blocks may reduce overall die traffic on die 102, and/or may minimize cache misses, thereby allowing data to be accessed that would otherwise become stale or paged out. Tile 118 may be configured to continuously request that data be sent to caches other than the cache of tile 118 by sending a destination identifier that does not correspond to the cache of tile 118. In some examples, tile 118 may be configured to request that data blocks be sent to cache 110 by making the destination identifier correspond to cache 110. In some examples, the requesting processor may be inhibited from requesting that more than a threshold amount of data be stored in a cache of a different tile. In this way, local data needed by the processor in this different tile is not negatively impacted.
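One way to picture the priority-based placement and the per-neighbor threshold mentioned above is the following sketch; the priority scale, the budget parameter, and the function name are assumptions made for illustration rather than the patent's policy.

```python
def choose_destination(priority, local_free, neighbors, used_in_neighbor, budget):
    """
    priority:         larger value = more likely to be accessed soon
    local_free:       free block slots in the requesting tile's own cache
    neighbors:        neighbor tile ids ordered nearest first
    used_in_neighbor: dict neighbor tile id -> blocks this tile already placed there
    budget:           threshold of remote blocks allowed per neighbor tile
    """
    if priority >= 2 and local_free > 0:
        return "local"                               # hottest data stays in the local cache
    for tile_id in neighbors:                        # nearest neighbor considered first
        if used_in_neighbor.get(tile_id, 0) < budget:
            return tile_id                           # spill without crowding out that neighbor
    return "local" if local_free > 0 else "evict"    # all budgets spent: keep local or evict

assert choose_destination(3, 4, [130, 133], {}, budget=8) == "local"
assert choose_destination(1, 0, [130, 133], {130: 8}, budget=8) == 133
```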
As discussed previously, when tile 118 is required to access data block X, tile 118 may be configured to search near location hint table 117 for the cache storing data block X. If tile 118 cannot find an indication regarding the cache storing data block X in near location hint table 117, tile 118 may be configured to send a request to the directory for data block X. Similarly, if tile 118 made a request to the tile indicated in near location hint table 117 and the pertinent data block is not stored in the indicated tile, tile 118 may be configured to send a request to the directory for data block X. Near location hint table 117 need not affect cache coherence implementations using directory 114.
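The requester-side order of operations just described can be sketched as follows, assuming the directory path is modeled by a callback; the helper names are hypothetical.

```python
def fetch_block(addr, hint_table, caches, request_from_directory):
    """
    hint_table:             block address -> hinted neighbor tile id
    caches:                 tile id -> {block address: data}
    request_from_directory: callable(addr) -> data, modeling the directory path
    """
    hinted_tile = hint_table.get(addr)
    if hinted_tile is not None:
        data = caches.get(hinted_tile, {}).get(addr)
        if data is not None:
            return data                       # hint was fresh: short trip to a nearby cache
        del hint_table[addr]                  # stale hint: drop it; correctness is unaffected
    return request_from_directory(addr)       # miss or stale hint: ask the directory tile

# Example: a stale hint for tile 130 falls back to the directory path.
caches = {130: {}}
assert fetch_block("X", {"X": 130}, caches, lambda addr: "payload") == "payload"
```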
Among other possible benefits, system 100 may allow each tile to use a cache larger than what is present in each tile. An aggregated cache may comprise caches from a plurality of tiles that may or may not be neighbors of the tile. The aggregated cache may mean that fewer cache misses occur and that fewer requests for data blocks are sent off the die. A larger aggregated cache may also mean that more data may be stored in a cache and that there is less chance of requested data evicting other needed data stored in a cache. A larger aggregated cache also may mean that multiple pieces of data need not compete for space in a single, perhaps smaller, cache.
As caches of neighbor tiles may be used, access times may remain low. A system in accordance with at least some principles of this disclosure may be implemented by making minor changes to existing cache coherence protocols. If information becomes stale, a cache directory may still be used to locate cached data. Access latency to data in cache and on-chip network traffic may be reduced as a result of data being stored in cache that is in close physical proximity to requesting processors and/or because processors may less frequently access cache directories at tiles distributed throughout the die.
At block S2 a requesting tile can be adapted to send or initiate a request for a data block to a directory tile. The request may include a destination identifier indicating a tile for the requesting tile to store the data block. The directory tile includes a directory for the requested data block. In some examples, the destination tile may be in a tile that is a neighbor (e.g., a tile that is adjacent to or in close physical proximity therewith) of the requesting tile. Processing may continue from block S2 to block S4. Processing may also continue from block S2 to block S6.
At block S4 the processor in the requesting tile may be adapted to update a near location hint table to indicate that the data block may be stored in the cache of a tile corresponding to the destination identifier.
At block S6 the directory tile may be arranged to receive the request from the requesting tile. Processing may continue from block S6 to block S8.
At block S8 the directory tile may be arranged to search the directory for the appropriate data tile storing the data block and may send the received request to the data tile. Processing may continue from block S8 to block S10. Processing may also continue from block S8 to block S14.
At block S10 the directory tile may be adapted to update the directory to indicate that the data block is now stored in the tile corresponding to the destination identifier. Processing may continue from block S10 to block S12.
At block S12 the directory tile may be adapted to update a location hint structure to indicate that the data block is now stored in the tile corresponding to the destination identifier. The location hint structure can be used to determine a location of the data block before the directory is searched.
At block S14 the data tile may be adapted to receive the request from the directory tile. Processing may continue from block S14 to block S16.
At block S16 the data tile may be adapted to send the data block to the destination tile and to the requesting tile. Processing may continue from block S16 to blocks S18 and S22. At block S22 the requesting processor may be adapted to receive the data block.
At block S18 the destination tile may be adapted to receive the data block. Processing may continue from block S18 to block S20.
At block S20 the destination tile may be adapted to store the data block in a cache.
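Blocks S2 through S22 described above can be traced end to end in a short sketch; the plain-dictionary structures and the function name are assumptions, and the comments map each step back to the block numbers in the text.

```python
def run_flow(block_addr, requester, destination, tiles, directory, location_hints):
    # S2/S4: the requesting tile issues the request and records the expected location.
    request = {"block": block_addr, "requester": requester, "destination": destination}
    tiles[requester]["hints"][request["block"]] = request["destination"]

    # S6/S8: the directory tile receives the request and finds the data tile.
    data_tile = next(iter(directory[request["block"]]))

    # S10/S12: the directory and location hint structure now name the destination tile.
    directory[request["block"]].add(request["destination"])
    location_hints[request["block"]] = request["destination"]

    # S14/S16: the data tile sends the block to the destination tile and the requester.
    data = tiles[data_tile]["cache"][request["block"]]

    # S18/S20: the destination tile receives and stores the block.
    tiles[request["destination"]]["cache"][request["block"]] = data

    # S22: the requesting processor receives the data block.
    return data

tiles = {118: {"cache": {}, "hints": {}},
         133: {"cache": {}, "hints": {}},
         137: {"cache": {"X": "payload"}, "hints": {}}}
directory = {"X": {137}}
location_hints = {}
assert run_flow("X", 118, 133, tiles, directory, location_hints) == "payload"
assert "X" in tiles[133]["cache"] and location_hints["X"] == 133
```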
At block S40, if the requesting tile does not find a data block in its local cache, the requesting tile may be configured to request access to a data block stored in a cache outside of the requesting tile. Processing may continue from block S40 to block S42.
At block S42, the requesting tile may be configured to search a near location hint table to see if the data block is indexed in the table. Processing may continue from block S42 to block S44.
At block S44, a query can be made whether the requested data block is indexed in the near location hint table. When the data block is determined to be indexed in the near location hint table (“Yes”), processing may continue from block S44 to block S46. In some examples, blocks S45 and S46 may be performed in parallel in order to minimize the latency of accessing the directory in case the data block is not found in the data tile. Otherwise, when a determination is made that the data block is not indexed in the near location hint table (“No”), or if the location indexed in the table does not have the data block, processing may continue from block S44 to block S45.
At block S46, the requesting tile may be adapted to send a request to the pertinent data tile. Processing may continue from block S46 to block S58.
At block S45, the requesting tile may be adapted to send the request to a directory tile including a directory for the data block. Processing may continue from block S45 to block S48.
At block S48, the directory tile may be adapted to receive the request. Processing may continue from block S48 to block S50.
At block S50, the directory tile may be adapted to search a location hint structure for the data block. Processing may continue from block S50 to block S52.
At block S52, a query may be made to the location hint structure to determine whether the data block is indexed in the location hint structure. When the data block is determined to be indexed in the location hint structure (“Yes”), processing may continue from block S52 to block S53. At block S53, the directory tile may be adapted to send a location hint to the requesting tile. Block S53 may be followed by block S64, where the requesting tile may be effective to update the near location hint table. Processing may continue from block S52 to block S54.
At block S54, the directory tile may be adapted to search the directory for the data tile. Processing may continue from block S54 to block S56.
At block S56, the request may be sent from the directory tile to the data tile. Processing may continue from block S56 to block S58.
At block S58, the data tile may be adapted to receive the request from the directory tile. Processing may continue from block S58 to block S60.
At block S60, the data tile may be adapted to send the data block to the requesting tile. Processing may continue from block S60 to block S62.
At block S62, the requesting tile may be adapted to receive the data block from the data tile.
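Blocks S40 through S62 above can likewise be traced in a short sketch: on a local miss the requester tries its near location hint table, and otherwise the directory tile answers, optionally sending back a location hint for next time. All names below are assumptions, and the comments tie each branch to the block numbers in the text.

```python
def read_block(addr, requester_hints, caches, directory_hints, directory):
    # S40/S42/S44: local miss, so consult the near location hint table.
    hinted = requester_hints.get(addr)
    if hinted is not None and addr in caches.get(hinted, {}):
        return caches[hinted][addr]               # S46/S58/S60/S62: direct trip to the data tile

    # S45/S48/S50/S52: the request goes to the directory tile, which checks its hint structure.
    data_tile = directory_hints.get(addr)
    if data_tile is not None:
        requester_hints[addr] = data_tile         # S53/S64: location hint sent back and recorded
    else:
        data_tile = next(iter(directory[addr]))   # S54: full directory search

    # S56/S58/S60/S62: the request reaches the data tile and the block returns to the requester.
    return caches[data_tile][addr]

caches = {137: {"X": "payload"}}
directory = {"X": {137}}
hints = {}
assert read_block("X", hints, caches, {}, directory) == "payload"          # directory path
assert read_block("X", hints, caches, {"X": 137}, directory) == "payload"  # hinted path
```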
At operation 502 a requesting tile can be adapted to send or initiate a request for a data block to a directory tile. The request may include a destination identifier indicating a tile for the requesting tile to store the data block. The directory tile includes a directory for the requested data block. In some examples, the destination tile may be in a tile that is a neighbor of the requesting tile. Processing may continue from operation 502 to operation 504.
At operation 504, the processor in the requesting tile may update a near location hint table to indicate that the data block may be stored in the cache of a tile corresponding to the destination identifier. Processing may also continue from operation 502 to operation 506.
At operation 506, the directory tile may be arranged to receive the request. Processing may continue from operation 506 to operation 508.
At operation 508, the directory tile may be arranged to search the directory for the appropriate data tile storing the data block and may send the request to the data tile. Processing may continue from operation 508 to operation 510 and operation 514.
At operation 510, the directory tile may be adapted to update the directory to indicate that the data block is now stored in the tile corresponding to the destination identifier. Processing may continue from operation 510 to operation 512.
At operation 512, the directory tile may be adapted to update a location hint structure to indicate that the data block is now stored in the tile corresponding to the destination identifier. The location hint structure can be used to determine a location of the data block before the directory is searched.
At operation 514, the data tile may be adapted to receive the request from the directory tile. Processing may continue from operation 514 to operations 516 and 517.
At operation 517 the data tile may be adapted to send the data block to the requesting tile. Processing may continue from operation 517 to operation 522 where the requesting processor may be adapted to receive the data block.
At operation 516 the data tile may be adapted to send the data block to the destination tile. Processing may continue from operation 516 to operation 518.
At operation 518, the destination tile may be adapted to receive the data block. Processing may continue from operation 518 to operation 520.
At operation 520, the destination tile may be adapted to store the data block in a cache.
At operation 540, a requesting tile may be configured to request access to a data block stored in a cache outside of the requesting tile. Processing may continue from operation 540 to operation 542.
At operation 542, the requesting tile may be configured to search a near location hint table to see if the data block is indexed in the table. Processing may continue from operation 542 to operation 544.
At operation 544, a query can be made whether the requested data block is indexed in the near location hint table. When the data block is determined to be indexed in the near location hint table (“Yes”), processing may continue from operation 544 to operation 546.
Otherwise, when a determination is made that the data block is not indexed in the near location hint table (“No”), or if the location indexed in the table does not have the data block, processing may continue from operation 544 to operation 545.
At operation 546, the requesting tile may be adapted to send a request to the pertinent data tile. Processing may continue from operation 546 to operation 558.
At operation 545, the requesting tile may be adapted to send the request to a directory tile including a directory for the data block. Processing may continue from operation 545 to operation 548.
At operation 548, the directory tile may be adapted to receive the request from the requesting tile. Processing may continue from operation 548 to operation 550.
At operation 550, the directory tile may be adapted to search a location hint structure for the data block. Processing may continue from operation 550 to operation 552.
At operation 552, a query may be made to determine whether the data block is indexed in the location hint structure. When the data block is determined to be indexed in the location hint structure (“Yes”), processing may continue from operation 552 to operation 553. At operation 553, the directory tile may be adapted to send a location hint to the requesting tile. Operation 553 may be followed by operation 564, where the requesting tile may be effective to update the near location hint table.
Processing may continue from operation 552 to operation 554.
At operation 554, the directory tile may be adapted to search the directory for the data tile. Operation 554 can be performed in parallel with operations 548 and 550. Processing may continue from operation 554 to operation 556.
At operation 556, the request received by the directory tile at operation 548 may be sent to the data tile identified at operation 554. Processing may continue from operation 556 to operation 558.
At operation 558, the data tile may be adapted to receive the request from the directory tile. Processing may continue from operation 558 to operation 560.
At operation 560, the data tile may be adapted to send the data block to the requesting tile. Processing may continue from operation 560 to operation 562.
At operation 562, the requesting tile may be adapted to receive the data block from the data tile.
In some implementations, signal bearing medium 302 may encompass a computer-readable medium 306, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, memory, etc. In some implementations, signal bearing medium 302 may encompass a recordable medium 308, such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, etc. In some implementations, signal bearing medium 302 may encompass a communications medium 310, such as, but not limited to, a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). Thus, for example, program product 300 may be conveyed to one or more modules of the system 100 by an RF signal bearing medium 302, where the signal bearing medium 302 is conveyed by a wireless communications medium 310 (e.g., a wireless communications medium conforming with the IEEE 802.11 standard).
Depending on the desired configuration, processor 404 may be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 404 may include one or more levels of caching, such as a level one cache 410 and a level two cache 412, a processor core 414, and registers 416. An example processor core 414 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. An example memory controller 418 may also be used with processor 404, or in some implementations memory controller 418 may be an internal part of processor 404.
Depending on the desired configuration, system memory 406 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. System memory 406 may include an operating system 420, one or more applications 422, and program data 424. Application 422 may include a data storage and access in a multi-processor architecture algorithm 426 that is arranged to perform the functions as described herein including those described with respect to system 100 of
Computing device 400 may have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 402 and any required devices and interfaces. For example, a bus/interface controller 430 may be used to facilitate communications between basic configuration 402 and one or more data storage devices 432 via a storage interface bus 434. Data storage devices 432 may be removable storage devices 436, non-removable storage devices 438, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
System memory 406, removable storage devices 436 and non-removable storage devices 438 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 400. Any such computer storage media may be part of computing device 400.
Computing device 400 may also include an interface bus 440 for facilitating communication from various interface devices (e.g., output devices 442, peripheral interfaces 444, and communication devices 446) to basic configuration 402 via bus/interface controller 430. Example output devices 442 include a graphics processing unit 448 and an audio processing unit 450, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 452. Example peripheral interfaces 444 include a serial interface controller 454 or a parallel interface controller 456, which may be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 458. An example communication device 446 includes a network controller 460, which may be arranged to facilitate communications with one or more other computing devices 462 over a network communication link via one or more communication ports 464.
The network communication link may be one example of a communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
Computing device 400 may be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions. Computing device 400 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods, reagents, compounds, compositions, or biological systems, which can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as “up to,” “at least,” “greater than,” “less than,” and the like include the number recited and refer to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.