A processor may support a two-dimensional (2-D) gather instruction and a 2-D cache. The processor may perform the 2-D gather instruction to access one or more sub-blocks of data from a 2-D image stored in a memory coupled to the processor. The 2-D cache may store the sub-blocks of data in multiple cache lines. Further, the 2-D cache may support access to more than one cache line while preserving a 2-D structure of the 2-D image.
1. A processor comprising:
a decode unit to decode a two-dimensional (2-D) gather instruction;
an execution unit to perform the 2-D gather instruction to access one or more sub-blocks of data from a 2-D image stored in a memory coupled to the processor; and
a 2-D cache to store the one or more sub-blocks of data in multiple cache lines, support access to more than one cache line in a single processing cycle, preserve a two-dimensional structure of the 2-D image, and support mapping of the one or more sub-blocks to avoid read conflicts, wherein the 2-D cache includes a plurality of sets, a plurality of ways and an access logic, wherein the access logic is to
determine a set of the plurality of sets onto which a first coordinate of an image pixel of the one or more sub-blocks is mapped, wherein the set is determined using a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache; and
determine a way of the plurality of ways onto which a second coordinate of the image pixel within the one or more sub-blocks is mapped, wherein the way is determined using a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache.
7. A method in a processor comprising:
decoding a two-dimensional (2-D) gather instruction;
performing the 2-D gather instruction to access one or more sub-blocks of data from a 2-D image stored in a memory coupled to the processor;
storing the one or more sub-blocks of data in multiple cache lines of a 2-D cache, wherein the 2-D cache includes a plurality of sets and a plurality of ways;
supporting access to more than one cache line in a single processing cycle;
preserving a two-dimensional structure of the 2-D image in the 2-D cache;
mapping the one or more sub-blocks onto the plurality of sets and the plurality of ways of the 2-D cache to avoid read conflicts;
determining a set of the plurality of sets onto which a first coordinate of an image pixel of the one or more sub-blocks is mapped, wherein the set is determined using a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache; and
determining a way of the plurality of ways onto which a second coordinate of the image pixel within the one or more sub-blocks is mapped, wherein the way is determined using a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache.
12. A system comprising:
a memory;
a machine readable storage medium;
a plurality of input-output devices; and
a processor comprising a plurality of cores and a plurality of caches, including a two-dimensional (2-D) cache, wherein the 2-D cache includes a plurality of sets and a plurality of ways, and wherein the processor is to
decode a 2-D gather instruction;
perform the 2-D gather instruction to access one or more sub-blocks of data from a 2-D image stored in the memory;
store the one or more sub-blocks of data in multiple cache lines of the 2-D cache;
support access to more than one cache line in a single processing cycle;
preserve a two-dimensional structure of the 2-D image;
support mapping the one or more sub-blocks onto the plurality of sets and the plurality of ways of the 2-D cache to avoid read conflicts;
determine a set of the plurality of sets onto which a first coordinate of an image pixel of the one or more sub-blocks is mapped, wherein the set is determined using a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache; and
determine a way of the plurality of ways onto which a second coordinate of the image pixel within the one or more sub-blocks is mapped, wherein the way is determined using a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache.
2. The processor of
the execution unit is further to load the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction; and
the 2-D gather instruction is to specify a pointer to the 2-D image, a plurality of coordinates to identify the one or more sub-blocks, and a number of elements in the one or more sub-blocks.
3. The processor of
identifying a memory block within the 2-D cache using the set and the way of the 2-D cache;
retrieving content of a tag field within the memory block; and
determining whether data stored in the memory block is evicted.
4. The processor of
a read/write logic to access the tag field from the memory block and determine whether contents of the tag field are non-evicted; and
a shuffle logic to rearrange the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen.
5. The processor of
6. The processor of
8. The method of
loading the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction;
specifying a pointer to the 2-D image in response to execution of the 2-D gather instruction; and
identifying the one or more sub-blocks and a number of elements in the one or more sub-blocks using a plurality of coordinates.
9. The method of
identifying a memory block within the 2-D cache using the set and the way of the 2-D cache;
retrieving content of a tag field within the memory block; and
determining whether data stored in the memory block is evicted.
10. The method of
accessing the tag field from the memory block and determining if content of the tag field is non-evicted; and
rearranging the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen.
11. The method of
13. The system of
load the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction;
specify a pointer to the 2-D image in response to performing the 2-D gather instruction; and
identify the one or more sub-blocks and a number of elements in the one or more sub-blocks using a plurality of coordinates.
14. The system of
identifying a memory block within the 2-D cache using the set and the way of the 2-D cache,
retrieving content of a tag field within the memory block, and
determining whether data stored in the memory block is evicted.
15. The system of
support accessing the tag field from the memory block and determining if content of the tag field is non-evicted; and
support rearranging the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen.
16. The system of
17. The system of
The present application is a continuation of and claims priority to U.S. patent application Ser. No. 13/220,402 filed on Aug. 29, 2011.
As semiconductor technology continues to scale, more and more functionality is being integrated into processors. For example, such processors may be capable of performing graphics and media applications in addition to performing conventional tasks. A majority of media processing algorithms use a "1-D or 2-D region" variation of gather. While such a gather loads a row or line (1×m), a column (m×1), or a matrix (m×n) (for example, (2×2), (4×4), or (8×2)), the generic vgather translates this "block load" into 16 offsets, and the information in the image structure (row length) is lost.
The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
The following description describes embodiments of a two-dimensional (2-D) cache and a 2-D gather instruction. In the following description, numerous specific details, such as logic implementations, resource partitioning or sharing or duplication implementations, types and interrelationships of system components, and logic partitioning or integration choices, are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate-level circuits, and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
References in the specification to "one embodiment", "an embodiment", or "an example embodiment" indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other similar signals. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, and other devices executing the firmware, software, routines, and instructions.
In one embodiment, the instruction set may comprise a special gather instruction, which may be referred to as a 2-D gather instruction. In one embodiment, the 2-D gather instruction may retain the two-dimensional image structure or the image information related to the 2-D image structure. In one embodiment, the 2-D cache may use the image information for a special cache filling policy, which may result in higher gather performance and lower latency as compared to a generic gather instruction. A generic gather may load (or block load) up to 2 or 4 double-precision floating-point values from the memory address, and the generic vgather translates the "block load" into 16 offsets, so the information on the image structure (i.e., row length) is lost.
To overcome the above disadvantage of losing the image structure, in one embodiment, a 2-D gather instruction, which may retain the image and region parameters, is disclosed. In one embodiment, the 2-D gather instruction may perform a double-stride gather, which may load a 2-D region such as (1×16), (2×8), (4×4), (8×2), or (16×1) from the 2-D image.
In one embodiment, the 2-D cache is based on the idea of 2-D locality. In one embodiment, if a program loads some pixel (x, y) from an image 'A' stored in a memory, then there may be a high likelihood that the pixels around the pixel (x, y) will be used soon. Also, there may be a high likelihood that the pixels around the pixel (x, y) will be used multiple times. In one embodiment, to take advantage of the 2-D locality, a number of small rectangular windows 'W' of the large image in the memory may be maintained in the cache.
In one embodiment, a 2-D cache fill policy may be used to fill the cache with the image information stored in the memory. In one embodiment, the 2-D window 'W' (i.e., image information) may be mapped onto the 2-D cache so as to avoid possible read conflicts for the 2-D region loads (for example, (1×16), (2×8), (4×4), (8×2), or (16×1)). In one embodiment, the image element (x, y) may be mapped onto the set and way of the cache, respectively, based on Equations (1) and (2) below:
Set = X mod Num_of_Sets    Equation (1)
Way = Y mod Num_of_Ways    Equation (2)
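As a rough illustration, the mapping of Equations (1) and (2) can be modeled in C as below; the set and way counts are assumptions chosen only for the example, not values fixed by this description:

```c
#include <stdint.h>

/* Assumed cache geometry, for illustration only. */
#define NUM_OF_SETS 64
#define NUM_OF_WAYS 32

/* Equation (1): the set is the x coordinate modulo the number of sets. */
static inline unsigned map_set(unsigned x) { return x % NUM_OF_SETS; }

/* Equation (2): the way is the y coordinate modulo the number of ways. */
static inline unsigned map_way(unsigned y) { return y % NUM_OF_WAYS; }
```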
In one embodiment, the 2-D cache lookup may include two tasks—1) to identify the location in the cache comprising the correct data; and 2) to arrange the data in an order, which may correspond to the order of the addresses in the 2-D gather instruction. In one embodiment, the location in the cache (comprising the correct data) may be identified by comparing the address generated by the address generation unit with the tag associated with each set. In one embodiment, the data in the identified locations may be arranged in an order to correspond to an order of the addresses in the 2-D gather instruction.
An embodiment of a processor 100, which may support a 2-D cache and a 2-D gather instruction, is illustrated in
In one embodiment, the pre-fetch unit 110 may fetch instructions from the memory 101 while other instructions, which were fetched earlier, are being executed. The instructions so fetched may be stored in the instruction cache 120. The instruction translation look-aside buffer (ITLB) 122 may be used to translate virtual addresses to physical addresses. The instructions are then provided to the decode unit 140, which may decode the macro-instructions into multiple micro-operations. The micro-operations (uops) may then be sent to the reservation station 150, which may dispatch them to one or more of the execution units 170, the vertex processing block 191, or the texture processing block 193. In one embodiment, the instructions may be dispatched to one of the units 170, 191, or 193 based on the type of the instruction. For example, if the processing relates to graphics data, the instruction may be performed by the vertex processing block 191 and the texture processing block 193; if it relates to non-graphics data, the instruction may be performed by the execution unit 170. In one embodiment, the instructions may be performed in an out-of-order fashion, and the re-order buffer 185 may store the results of such execution in an order that retains the original program order.
In one embodiment, the 2-D gather instruction, which may be used to load the 2-D region from the 2-D image into the data cache 180, may be as given by Equation (3) below:
Zmm1=2-D_gather_16(pImage, rowWidth, blockX, blockY, blockW, blockH, strideX, strideY)    Equation (3)
Structurally, the 2-D gather instruction may have some similarity with the generic vgather instruction, which may be as given in Equation (4) below:
Zmm1=vgather(pBase, offset0, . . . , offset15)    Equation (4)
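For illustration, the behavior of Equation (3) may be sketched as a scalar reference loop in C; the element type, the interpretation of the stride parameters, and the 16-element result size are assumptions, and the hardware instruction would produce the result in a single vector register:

```c
#include <stdint.h>

/* Scalar reference sketch of the 2-D gather of Equation (3) (assumed
 * semantics): walk a blockW x blockH region of the image starting at
 * (blockX, blockY), stepping by strideX/strideY, and collect up to 16
 * elements in row-major order. */
void gather_2d_16(uint32_t dst[16], const uint32_t *pImage, int rowWidth,
                  int blockX, int blockY, int blockW, int blockH,
                  int strideX, int strideY)
{
    int i = 0;
    for (int row = 0; row < blockH && i < 16; row++)
        for (int col = 0; col < blockW && i < 16; col++, i++)
            dst[i] = pImage[(blockY + row * strideY) * rowWidth +
                            (blockX + col * strideX)];
}
```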
Further, the 2-D cache structure, the 2-D cache filling policy, and the 2-D cache look-up are described in detail below with reference to
In one embodiment, the 2-D cache 180 may be viewed as a combination of multiple memory blocks, each of which may be uniquely identified by a combination of the identifiers of a set and a way. In one embodiment, the 2-D cache 180 may include N sets (set 0 to set N) and M ways (way 0 to way M). In one embodiment, each memory block within the 2-D cache is uniquely identified by the identifiers of the way and the set.
In one embodiment, the 2-D cache may be viewed as a sliding window that may slide over the windows (i.e., groups of pixels) in the image stored in the memory 101. In one embodiment, the 2-D cache 180 may store image information of one or more windows such as 204 and 208. In one embodiment, at a first time point the 2-D cache 180 may store the pixels covered by the windows 204 and 208 in the sets and ways. In another embodiment, the 2-D cache 180 may store the pixels covered by the window 204 and then slide to cover the pixels of the window 208.
Likewise, the 2-D cache 180 may store pixels covered by a first set of windows and then slide to store the pixels covered by a second set of windows. In one embodiment, the pixels in the window 204 in the main memory 101 may be mapped into memory blocks in the 2-D cache 180, and each memory block may be identified by a unique combination of the set number and the way number. For example, the memory block 300 may be uniquely identified by a combination of the set number (N=0) and the way number (M=0). Similarly, the memory block 312 may be uniquely identified by a combination of the set number (N=1) and the way number (M=2).
In one embodiment, the 2-D cache 180 may adopt a 2-D cache filling policy to fill the memory blocks within the 2-D cache 180. In one embodiment, the 2-D cache includes N sets and M ways and is two-dimensional. In one embodiment, the 2-D window 'W' such as 204 and/or 208 in the memory 101 may be mapped onto the 2-D cache 180 so as to avoid possible read conflicts for the 2-D region loads (for example, (1×16), (2×8), (4×4), (8×2), or (16×1)). In one embodiment, the image element (x, y) may be mapped onto the set and way of the cache, respectively, based on Equations (1) and (2) above. For example, the mapping or cache filling may be implemented as Set = address[6..11] and Way = Row mod Num_of_Ways.
For a 2-D cache with 32 ways, the above example of filling the cache may result in a cache filling depicted in
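A minimal sketch of this example fill implementation is shown below; taking the set index from address bits [6..11] assumes a 64-byte cache line and 64 sets, both of which are illustrative assumptions rather than values fixed by this description:

```c
#include <stdint.h>

/* Set index from address bits [6..11] (assumes a 64-byte line, 64 sets). */
static inline unsigned fill_set(uint64_t byte_address)
{
    return (unsigned)((byte_address >> 6) & 0x3F);
}

/* Way index from the image row number, as in Way = Row mod Num_of_Ways. */
static inline unsigned fill_way(unsigned row, unsigned num_of_ways)
{
    return row % num_of_ways;
}
```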
In one embodiment, the mapping of the two-dimensional (2-D) image using the 2-D gather instructions allows for a maximum of 2 iterations. For example, the 2-D gather instruction may gather data from a line (1×16), column (16×1), matrices (8×2), (4×4), and (2×8) and the maximum iterations involved may be equal to 2, 1, 2, 2, and 2 processing cycles, respectively.
In one embodiment, the address generation unit 160 may generate an address A1, and at least some of the bits (a1, a2, a3, . . . , ak) of the address A1 may be provided as a first input to the XNOR logic gates 630-1 to 630-P. In one embodiment, the bits in the tag may be provided as a second input to the XNOR logic gates 630-1 to 630-P. In one embodiment, if there is a position-wise match between the bits in the tag and the bits in the address (i.e., if the bit values provided to the XNOR gate are the same), the output generated by each of the XNOR gates 630-1 to 630-P may be logic 1. In one embodiment, if the outputs of all the XNOR gates 630-1 to 630-P are equal to logic 1, the output generated by the AND gate 640 may be equal to logic 1 as well. In one embodiment, the tag array 600 may thus determine the memory block whose tag is equal to the address generated by the address generation unit 610.
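In software terms, the XNOR-and-AND comparison reduces to an equality test between the tag and the relevant address bits; a minimal sketch follows, with the tag width assumed to be 32 bits:

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the tag comparison: each XNOR output is 1 only where the tag
 * and address bits agree, and the final AND is 1 only when every bit
 * position agrees, i.e. when the tag equals the address bits. */
static inline bool tag_matches(uint32_t tag, uint32_t addr_bits)
{
    uint32_t xnor = ~(tag ^ addr_bits);  /* per-bit XNOR outputs */
    return xnor == UINT32_MAX;           /* AND across all bit positions */
}
```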
After identifying the memory blocks such as 401-00, 401-05, 401-15, 401-22, 401-31, or 401-33, the content or the image information in the memory blocks may be provided to the read/write logic 720 and the shuffle unit 750. In one embodiment, the read/write logic 720 may access the tag portions of the memory blocks 401-00, 401-05, 401-15, 401-22, 401-31, or 401-33 and determine if the tags are still relevant (i.e., not evicted or replaced). In one embodiment, the shuffle unit 750 may rearrange the data in the non-evicted memory blocks in an order of the addresses provided by the 2-D gather instruction.
In one embodiment, the access logic 370 may access more than one cache line, which may include non-evicted data. In one embodiment, the 2-D cache 180 may support access of up to 16 separate cache lines per single processing cycle, unlike prior-art caches, which may allow one cache line to be accessed per processing cycle. In one embodiment, the data stored in the relevant memory blocks within these cache lines may be extracted by the access logic 370 and arranged by the shuffle unit 750 to generate the 2-D gather data. As a result, the 2-D cache 180 may access more than one way per port, for example, if multiple elements are stored in the same physical bank but within different sets. In one embodiment, the cache filling technique and the 2-D gather technique described above may minimize bank conflicts during the 2-D region loads.
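As a rough software model of the lookup-and-shuffle step, the sketch below searches candidate memory blocks for each requested address, skips evicted entries, and places the data in the order the 2-D gather instruction requested; the block layout and the 16-element request size are assumptions made only for this illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed layout of a 2-D cache memory block for this illustration. */
struct mem_block {
    uint32_t tag;       /* address tag */
    bool     evicted;   /* true if the entry has been evicted or replaced */
    uint32_t data;      /* cached image data */
};

/* Place the data for each requested address into dst in request order,
 * choosing only non-evicted blocks; returns the number of hits. */
int shuffle_gather(uint32_t dst[16], const uint32_t addrs[16],
                   const struct mem_block *blocks, int num_blocks)
{
    int hits = 0;
    for (int i = 0; i < 16; i++) {
        for (int b = 0; b < num_blocks; b++) {
            if (!blocks[b].evicted && blocks[b].tag == addrs[i]) {
                dst[i] = blocks[b].data;   /* data lands in address order */
                hits++;
                break;
            }
        }
    }
    return hits;
}
```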
The operation of the 2-D gather instruction and the 2-D cache is described with reference to the 2-D data cache 180, for example. However, the techniques described above may be performed in other caches, such as the L2 cache 190, or in any other cache or memory as well.
The processor 802 that operates the computer system 800 may be one or more processor cores coupled to logic 830. In one embodiment, the processor 810 may comprise a central processing unit 803 and a memory subsystem (MSS) 804. In one embodiment, the CPU 802 or the GPU 803 may perform the 2-D gather instruction described above, and the cache 806 may support the 2-D cache structure, 2-D cache filling, and the 2-D gather techniques described above.
The logic 830, for example, could be chipset logic in one embodiment. The logic 830 is coupled to the memory 820, which can be any kind of storage, including optical, magnetic, or semiconductor storage. The I/O devices 860 may allow the computer system 800 to interface with the devices such as network devices or users of the computer system 800.
Certain features of the invention have been described with reference to example embodiments. However, the description is not intended to be construed in a limiting sense. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains are deemed to lie within the spirit and scope of the invention.
Inventors: Boris Ginzburg; Oleg Margulis