A plurality of bits are added to virtual and physical memory addresses to specify the level at which data is stored in a multi-level cache hierarchy. When data is to be written to cache, each cache level determines whether it is permitted to store the data. Storing data at the appropriate cache level addresses the problem of cache thrashing.
1. A method of dynamically controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising:
determining a cache level attribute for an access operation by at least one of a plurality of processors during runtime using a processor, wherein said cache level attribute comprises a plurality of bits attached permanently to a virtual address and/or a physical address of data and said determination comprises:
determining a cache level at which data is to be cached upon creation of the virtual address, wherein the step of determining the cache level includes:
determining which of the plurality of processors require access to data; and
setting the cache level in dependence on said determination; and
generating the cache level attribute corresponding to the determined cache level; and
dynamically controlling processor access to the cache memory level based on the cache level attribute.
2. A method according to
3. A method according to
4. A method according to
5. A method according to
6. A method of determining the cache level attribute for controlling processor access to a cache memory level in a multi-level cache hierarchy as claimed in
determining the cache level attribute for a memory page; and
associating the cache level attribute with the memory page.
7. A method according to
determining which of the plurality of processors require access to the memory page; and
setting the cache level attribute in dependence on said determination.
8. A method according to
10. A method according to
11. A method according to
12. A method according to
13. A cache memory module comprising cache memory and a cache controller, the cache controller being configured to:
determine a cache level attribute for an access operation by at least one of a plurality of processors during runtime, wherein said cache level attribute comprises a plurality of bits attached permanently to a virtual address and/or a physical address of data and said determination comprises:
determining a cache level at which data is to be cached upon creation of the virtual address, wherein the step of determining the cache level includes:
determining which of the plurality of processors require access to data; and
setting the cache level in dependence on said determination; and
generating the cache level attribute corresponding to the determined cache level; and
dynamically control processor access to the cache memory based on the cache level attribute.
14. A system for controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising:
a plurality of processors;
a main memory;
means for determining a cache level attribute for an access operation by at least one of the plurality of processors during runtime, wherein said cache level attribute comprises a plurality of bits attached permanently to a virtual address and/or a physical address of data and said means for determining comprises:
means for determining a cache level at which data is to be cached upon creation of the virtual address, wherein the means for determining the cache level includes:
determining which of the plurality of processors require access to data; and
setting the cache level in dependence on said determination; and
means for generating the cache level attribute corresponding to the determined cache level; and
means for dynamically controlling processor access to the cache memory level based on the cache level attribute.
15. A system according to claim 14, including a single processor and/or multiple processors for accessing the cache memory levels.
17. A system according to
means for determining a cache level at which data is to be cached; and
means for generating a cache level attribute corresponding to the determined level.
18. A method of dynamically controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising:
determining a cache level attribute for an access operation by at least one of a plurality of processors during runtime using a processor, wherein said cache level attribute comprises a plurality of bits attached permanently to a table that relates virtual addresses and physical addresses of a memory page of data and said determination comprises:
determining a cache level at which data is to be cached upon creation of the memory page, wherein the step of determining the cache level includes:
determining which of the plurality of processors require access to data; and
setting the cache level in dependence on said determination; and
determining the cache level attribute corresponding to the determined cache level; and
dynamically controlling processor access to the cache memory level based on the cache level attribute.
This application claims priority from Indian patent application IN1313/CHE/2005, filed on Sep. 16, 2005. The entire content of the aforementioned application is incorporated herein by reference.
Cache memory is a standard feature of modern processors, permitting faster access to data than access to the main memory (RAM). However, in multiprocessor systems, cache coherency is an issue: the caches associated with the individual processors all back the same shared memory resource, giving rise to the potential for discrepancies between the data stored in different caches. Multiprocessor systems conventionally guarantee cache coherency using protocols such as MESI (Modified, Exclusive, Shared, Invalid). The purpose of such protocols is to ensure that a processor has exclusive access to a memory location for performing a read or write operation.
If two or more processors write to the same memory location, each processor must wait for one additional memory cycle, namely the cycle that validates its own cached copy and invalidates the copies in the other processors' caches. This potentially creates a 'ping-pong' scenario in which accessing cached memory becomes slower than uncached memory access. The overheads for ensuring exclusive access on architectures such as NUMA (Non-Uniform Memory Access) are even greater. One solution to this problem is to prevent such memory pages from being cached at all. However, this solution has its own drawbacks: for example, hyper-threading processors may share caches, even at the cache level closest to the processor, where the data could have been cached without coherency conflicts.
Another problem in conventional systems, similar to the ping-pong scenario, is false cache sharing. This may occur, for example, where different data items are always loaded into the same cache line, but two or more processors require concurrent access to those items.
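As an illustrative sketch (not drawn from the original text), the following C fragment shows the false-sharing pattern just described, together with the conventional padding workaround; the 64-byte line size and the structure names are assumptions:

#include <stdint.h>

/* Two logically independent counters that fall in the same cache line:
 * writes by one processor invalidate the other processor's cached copy,
 * even though the processors never touch the same data item. */
struct counters {
    uint64_t hits;     /* updated only by processor A */
    uint64_t misses;   /* updated only by processor B, but shares the line */
};

/* Padding the first counter out to a full (assumed) 64-byte cache line
 * keeps the two items in separate lines and avoids the invalidation
 * ping-pong, at the cost of some memory. */
struct padded_counters {
    uint64_t hits;
    uint8_t pad[56];   /* 64 - sizeof(uint64_t) */
    uint64_t misses;
};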
Yet another problem with cache control in conventional systems is that an application's important cache-sensitive data may be overwritten by its own access to other less important data, or by accesses to other data made by less important applications.
The invention will now be described by way of example only with reference to the accompanying drawings.
According to an embodiment of the disclosure, there is provided a method of controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising determining a cache level attribute for an access operation and controlling access to the cache memory level based on the cache level attribute.
By permitting cache control based on a cache level attribute, fine-grained control of the cache level at which data is stored can be achieved, for both single and multiprocessor systems. In a multiprocessor system, this can reduce cache thrashing caused by concurrent access from different processors. In a single processor system, it makes it possible to reserve cache areas for important data and to disallow replacement of important data by data that is less important or only infrequently accessed. The step of controlling access may comprise comparing the cache memory level with the value of the cache attribute and controlling access based on said comparison: each cache level examines the value of the attribute to determine whether it should permit or disallow data to be stored at that level, as sketched below. The cache level attribute may comprise two or more bits attached to a virtual or physical address, or may comprise an entry in the page table and TLB associated with a virtual/physical address mapping.
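A minimal sketch of that per-level check, assuming the encoding used later in Table 1, where an attribute value of 0 means uncached and a non-zero value names the single level at which data may be stored (the function name is illustrative):

#include <stdbool.h>
#include <stdint.h>

/* A level-N cache controller permits a fill only when the attribute
 * names that level; a value of 0 means the data bypasses all caches. */
static bool level_may_store(unsigned cache_level, uint8_t level_attr)
{
    return level_attr != 0 && level_attr == cache_level;
}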
According to another embodiment, there is also provided a method of setting a cache level attribute for controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising determining a cache level attribute for a memory page; and associating the cache level attribute with the memory page.
The step of determining the cache level attribute may include determining which of a plurality of processors require access to the memory page and setting the cache level attribute in dependence on said determination. The method may include setting the cache level attribute in dependence on the importance of a process or on the frequency of access to a process.
According to yet another embodiment, there is further provided a cache memory module comprising cache memory and a cache controller, the cache controller being configured to determine a cache level attribute for an access operation and to control access to the cache memory based on the cache level attribute.
According to still another embodiment, there is still further provided a system for controlling processor access to a cache memory level in a multi-level cache hierarchy, comprising means for determining a cache level attribute for an access operation; and means for controlling access to the cache memory level based on the cache level attribute.
The internal and external caches 5, 11, main memory 6 and hard disk 7 are shown in the accompanying drawings.
Modern operating systems such as HP-UX™ employ a virtual memory management system. When a program is compiled, the compiler generates virtual addresses for the program code that represent locations in memory. However, to enable execution by a CPU, the data and instructions of the associated processes, or of the threads of execution within a process, must reside in the main memory 6, also referred to as the physical memory, at the time of execution. The virtual addresses must therefore be mapped to physical addresses within the physical memory prior to execution.
Since each process can be allocated a virtual address space irrespective of the availability of physical memory, a virtual memory management system permits the total size of user processes to exceed the size of physical memory. Portions of the virtual address space are mapped to physical memory as required, using a technique known as demand paging.
A page is the smallest unit of physical memory that can be mapped to a virtual address; on the HP-UX™ system, for example, the page size is 4 KB. Virtual pages are referred to by a virtual page number (VPN), while physical pages are referred to by a physical page number (PPN).
The operating system maintains a table in physical memory 6 referred to as a page directory, or page table 30. This contains a complete listing of all pages currently in physical memory 6 and their corresponding virtual page numbers. The TLB 12 can be considered as a cache of the page table 30 and stores a subset of the mappings in the page table. When a processor 2, 3, 4 wishes to access a memory page, it first looks in the TLB 12 using the virtual page number as an index. If a physical page number is found in the TLB 12, which is referred to as a TLB hit, the processor knows that the required page is in the main memory 6.
If the page number is not found, which is referred to as a TLB miss, the page table 30 is checked to see if the required page exists there. If it does, which is referred to as a PDIR or page table hit, the physical page number is loaded into the TLB 12. If it does not exist, which is generally referred to as a PDIR miss or page fault, this indicates that the required page does not exist in main memory 6, and needs to be brought into memory from the hard disk 7. The process of bringing a page from the hard disk 7 into the main memory 6 is dealt with by a software page handler 32 and causes corresponding VPN/PPN entries to be made in the page table 30 and TLB 12, as is well known in conventional systems.
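The lookup sequence just described can be sketched as follows; the helper routines are assumptions standing in for the hardware TLB, the page table walk and the software page handler 32:

/* Platform hooks, declared here only for the sketch; each lookup
 * returns a physical page number, or -1 on a miss. */
long tlb_lookup(unsigned long vpn);
long page_table_lookup(unsigned long vpn);
long page_in_from_disk(unsigned long vpn);   /* software page handler */
void tlb_insert(unsigned long vpn, unsigned long ppn);
void page_table_insert(unsigned long vpn, unsigned long ppn);

long translate(unsigned long vpn)
{
    long ppn = tlb_lookup(vpn);
    if (ppn >= 0)
        return ppn;                            /* TLB hit */

    ppn = page_table_lookup(vpn);
    if (ppn >= 0) {
        tlb_insert(vpn, (unsigned long)ppn);   /* PDIR hit: refill the TLB */
        return ppn;
    }

    ppn = page_in_from_disk(vpn);              /* PDIR miss: page fault */
    page_table_insert(vpn, (unsigned long)ppn);
    tlb_insert(vpn, (unsigned long)ppn);
    return ppn;
}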
Access to data in a cache 31 follows similar principles. Cache memory is essentially organised as a number of equal-sized blocks, called cache lines, for storing data. Every cache line has an associated cache tag, which describes its contents and is used to determine whether the desired data is present; the tag includes a physical page number identifying the page in main memory where the data resides. When the processor wishes to read from or write to a location in main memory, it first checks whether the memory location is in the cache, by comparing the address of the memory location with all tags in the cache that might contain that address.
For example, for a virtually indexed cache 31, a virtual address is used to access a physical page number (PPN(Cache)) stored in the cache 31 via a cache controller 33.
If the physical page number (PPN(Cache)) corresponds to the physical page number generated by the TLB 12 (PPN(TLB)), this means that the required data is available in the cache 31, referred to as a cache hit. In the case of a cache hit, the processor can read or write the data in the cache line.
If the page numbers do not match, a cache miss occurs and the data is loaded into the cache 31 from main memory 6 for subsequent rapid access by the processor. The comparison with the PPN generated by the TLB 12 is required since blocks from many different locations in memory can be legitimately mapped to the same cache location.
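A sketch of the hit test for such a virtually indexed, physically tagged cache; the structure layout and the 64-byte line size are assumptions for illustration:

struct cache_line {
    unsigned long ppn_tag;     /* PPN(Cache): page of the cached data */
    unsigned char data[64];    /* assumed 64-byte line */
};

/* A hit requires the tag's physical page number to match the one the
 * TLB produced, precisely because blocks from many different memory
 * locations can legitimately map to the same cache index. */
static int is_cache_hit(const struct cache_line *line, unsigned long ppn_tlb)
{
    return line->ppn_tag == ppn_tlb;
}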
The system of the drawings includes first and second CPUs 41, 42, each having its own L1 cache, together with an L2 cache that is shared between them; concurrent access by both CPUs to the same data can then cause thrashing of the L1 caches.
The inventor has appreciated that, in the situation outlined above, cache thrashing can be prevented by storing data at the L2 cache level, rather than at L1. Since the L2 cache is shared between the first and second CPUs 41, 42, cache coherency issues do not arise. A system according to an embodiment of the disclosure provides a mechanism for determining the cache level at which data is to be stored so as to reduce the problem of cache thrashing. The cache level is specified by an attribute, referred to herein as a cache hierarchy hint (CHH).
Table 1 illustrates the possible CHH values and their effect for the two-level cache hierarchy described above.
TABLE 1

CHH value        Effect
0                Uncached
1                Cache at L1
2                Cache at L2
In one example of the technique disclosed, the cache hierarchy hint bits are attached to memory pages.
For example, in a 64-bit system that supports 45-bit physical addressing and a 55-bit virtual address, bits 45 and 46 of the physical address 51 and bits 55 and 56 of the virtual address 52 are used to store the CHH bits.
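With that layout (and bit 0 denoting the least significant bit), the CHH bits can be manipulated with ordinary bit operations; the macro names below are illustrative assumptions:

/* Extract the two CHH bits from a virtual or physical address. */
#define CHH_FROM_VA(va)  (((unsigned long long)(va) >> 55) & 0x3ULL)
#define CHH_FROM_PA(pa)  (((unsigned long long)(pa) >> 45) & 0x3ULL)

/* Return the physical address with its CHH bits replaced by chh. */
#define PA_WITH_CHH(pa, chh) \
    (((unsigned long long)(pa) & ~(0x3ULL << 45)) | \
     (((unsigned long long)(chh) & 0x3ULL) << 45))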
A caching protocol for the CHH scheme shown in Table 1 proceeds as follows.
When data is to be written to the cache, for example, following a previous cache miss, the write cycle starts at the L1 cache, for example at a first L1 cache 43 associated with the first CPU 41 (step s1). The first L1 cache controller 44 retrieves the CHH value associated with the page to be written by examining the CHH bits (bits 55 and 56) of the virtual address (step s2). If the CHH value specifies that the data is to be written to the L1 cache (step s3), in this example the value being '1', then the data is stored in the L1 cache (step s4). If writing to the L1 cache is not permitted, the procedure moves to the next cache level (step s5), assuming there are more cache levels. For example, at level L2, the L2 cache controller 48 checks if the CHH value is 2. If it is, the data is stored at level L2 (step s4). If there are no more cache levels (step s5), the data is stored in main memory (step s6). In this example, this would occur if the CHH value is '0'.
The same principle applies to hierarchies with more than two cache levels, for example one in which an additional L3 cache is shared at a level below L2.
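Generalised to N levels, the write cycle of steps s1 to s6 can be sketched as follows; the two store routines are assumptions standing in for the cache controllers and the memory controller:

void store_in_cache_level(unsigned level, unsigned long long va,
                          const void *data);
void store_in_main_memory(unsigned long long va, const void *data);

void cached_write(unsigned long long va, const void *data, unsigned n_levels)
{
    unsigned chh = (unsigned)((va >> 55) & 0x3ULL);  /* CHH bits of the VA */

    /* Walk outward from L1 (steps s1, s5) until the level named by the
     * CHH value is reached (steps s3, s4). */
    for (unsigned level = 1; level <= n_levels; level++) {
        if (chh == level) {
            store_in_cache_level(level, va, data);
            return;
        }
    }
    store_in_main_memory(va, data);   /* step s6: CHH == 0, uncached */
}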
The CHH values for such a three-level cache hierarchy scheme are set out in Table 2 below.
TABLE 2

CHH value        Effect
0                Uncached
1                Cache at L1
2                Cache at L2
3                Cache at L3
It will be apparent to a person skilled in the art that the scheme can be extended to any number of cache levels by extending the number of bits used to hold the CHH values. For example, three CHH bits give eight values and thus permit up to seven cache levels, with the value 0 still reserved for uncached data.
The way in which the CHH bits are created is illustrated in the drawings.
In another example, the cache hierarchy is associated with a single processor. In this case, the memory pages allocated to an important process can be associated with a higher level cache, for example L1, while pages that are associated with less important processes are associated with a lower level cache, for example L2 or L3. The system can therefore prevent data associated with an important process from being replaced in the cache by data associated with less important processes, or processes that will be accessed much less frequently.
An algorithm for inserting the translations is set out below, expressed as C for concreteness; the page table and TLB insertion routines stand for the platform's own:

#define PAGE_SIZE 4096ULL   /* 4 KB pages, per the HP-UX example above */

/* Insert_Translation: copy the CHH bits from the virtual address into
 * the physical address, then insert the VPN/PPN mapping. */
void insert_translation(unsigned long long virtual_address,
                        unsigned long long physical_address)
{
    unsigned long long virtual_page_no = virtual_address / PAGE_SIZE;

    /* Extract bits 55 and 56 from the virtual address. */
    unsigned long long chh_value = (virtual_address >> 55) & 0x3ULL;

    /* Replace bits 45 and 46 of the physical address with the CHH bits. */
    unsigned long long new_physical_address =
        (physical_address & ~(0x3ULL << 45)) | (chh_value << 45);

    unsigned long long new_physical_page_no =
        new_physical_address / PAGE_SIZE;

    /* Insert <virtual_page_no, new_physical_page_no> into the page
     * table and the TLB. */
    page_table_insert(virtual_page_no, new_physical_page_no);
    tlb_insert(virtual_page_no, new_physical_page_no);
}
Although the above describes making the CHH values part of the physical address, it will be appreciated that, where the cache is virtually indexed, the cache controller can take the CHH bits directly from the virtual address.
The occurrence of cache misses on previously accessed data can help the operating system to decide whether to downgrade the CHH value of a page or not.
For example, application profilers that are usually bundled with operating systems can help to assess various application performance bottlenecks including cache misses.
Profiler data gathered while applications run under the control of a profiler can record cache misses at the various levels, and some profilers are capable of identifying the memory locations that triggered those misses. Based on this data, analysis can show whether the memory locations experienced cache thrashing because of a false cache hierarchy issue, i.e. because data was cached at the incorrect cache level. If cache misses are the result of false cache hierarchy problems, one solution is to change the program source code and adjust the CHH value statically; another is to change the CHH value on-the-fly. Algorithms to implement static and dynamic CHH adjustment are given below.
For example, the operating system uses the algorithm below to dynamically adjust the CHH values for pages in the address range passed to it, to correspond to the CHH value of the required cache level.
/* Change_CHH: adjust the CHH value for every page in an address range.
 * The translation lookup/update and invalidation routines stand for the
 * platform's own. */
void change_chh(unsigned long long start, unsigned long long length,
                unsigned long long chh_val)
{
    for (unsigned long long va = start & ~(PAGE_SIZE - 1ULL);
         va < start + length; va += PAGE_SIZE) {
        /* Insert the CHH bits into the virtual and physical addresses. */
        unsigned long long new_va = (va & ~(0x3ULL << 55)) | (chh_val << 55);
        unsigned long long pa = lookup_physical(va);
        unsigned long long new_pa = (pa & ~(0x3ULL << 45)) | (chh_val << 45);

        /* Modify the TLB and page table entries. */
        update_translation(new_va, new_pa);

        /* Invalidate caches if required, so that subsequent caching is
         * done with respect to chh_val. */
        invalidate_caches_if_required(va, chh_val);
    }
}
Cache invalidation may be required to purge cached data from higher level cache when CHH_val is set to indicate caching in a lower level cache.
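As a hypothetical illustration of the dynamic case, an operating system might drive Change_CHH from profiler miss counts along the following lines; the structure, threshold and field names are assumptions, not from the original text:

#define MISS_THRESHOLD 10000UL   /* assumed tuning value */

struct page_stats {
    unsigned long long va_start;   /* page-aligned start of the range */
    unsigned long long length;
    unsigned long long chh;        /* current CHH value */
    unsigned long miss_count;      /* misses reported by the profiler */
};

/* If a cached page keeps missing at its current level, for example
 * because of thrashing between processors, move it one level further
 * from the processor (up to the outermost level n_levels). */
void maybe_downgrade(struct page_stats *ps, unsigned long long n_levels)
{
    if (ps->chh != 0 && ps->chh < n_levels &&
        ps->miss_count > MISS_THRESHOLD) {
        change_chh(ps->va_start, ps->length, ps->chh + 1);
        ps->miss_count = 0;
    }
}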
The above description sets out the way in which the operating system may implicitly deal with allocation of cache based on a cache level attribute. In another example of the disclosure, a programmer is able to explicitly provide cache level attributes at memory allocation time, thereby providing for static adjustment of CHH values. For example, a memory allocation routine such as malloc( ) can be modified to accept a CHH value, so that a prototype C function would look like:
void *malloc(int size, int chh)
where chh is an integer holding the CHH value.
As above, the kernel 21 would mark its physical page with the passed CHH values at the time of creating a mapping for this virtual address.
Similarly, once memory is allocated, an API would be available to modify existing CHH values, in a form such as:
int set_cache_hierarchy_hint(void *start_address, int size, int new_chh)
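A hypothetical call sequence using these two interfaces, with illustrative sizes and CHH values:

/* Allocate a buffer to be cached no closer to the processor than L2
 * (CHH = 2), then later demote the same range to L3. */
int *buf = (int *) malloc(1024 * sizeof(int), 2);
/* ... use buf ... */
set_cache_hierarchy_hint(buf, 1024 * sizeof(int), 3);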
While examples of the disclosure have been described using a particular scheme for setting CHH values to correspond to cache levels, any scheme that maps CHH values to cache levels could be used. For example, in a two-bit scheme, '11' could indicate caching at the highest level, namely L1, '10' caching at level L2, and '01' caching at level L3. All other possible permutations are also encompassed.
The technique disclosed has been primarily described in terms of the CHH bits being attached to memory pages, either using unused or reserved bits of physical and virtual addresses or using extra bits added to the physical or virtual address. In other examples of the technique disclosed, the CHH bits are not attached to the memory pages but are accessible by the cache levels in other ways. For example, the CHH bits are stored into reserved bits of the page table and TLB, rather than forming part of the address itself.
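One illustrative layout for that alternative, with the CHH value carried in reserved bits of each entry rather than in the address; the field widths follow the 55-bit virtual and 45-bit physical addressing with 4 KB pages assumed earlier, but the structure itself is an assumption:

struct tlb_entry {
    unsigned long long vpn : 43;   /* 55-bit VA less 12 bits of page offset */
    unsigned long long ppn : 33;   /* 45-bit PA less 12 bits of page offset */
    unsigned long long chh : 2;    /* cache hierarchy hint */
    unsigned long long valid : 1;
};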
While examples of the technique disclosed have primarily been described with reference to multiprocessor systems, the technique is applicable to single processor systems, where it permits fine-grained control of the cache level at which data is to be cached.
Other embodiments or modifications to the above embodiments falling within the scope of the appended claims would be apparent to the skilled person.