A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache.
23. A method of caching address translations in a memory architecture, comprising:
storing translations between a first address domain and a second address domain to a first logical portion of a cache;
storing translations between the second address domain and a third address domain to a second logical portion of the cache;
defining a boundary between the first and second logical portions, the boundary indicating a location within the cache, the location being defined by a value stored at a register; and
matching an address request against the cache and outputting a corresponding address result.
1. A circuit comprising:
a cache configured to store translations between address domains, the cache addressable as a first logical portion and a second logical portion, the first logical portion configured to store translations between a first address domain and a second address domain, the second logical portion configured to store translations between the second address domain and a third address domain;
a processor configured to match an address request against the cache and output a corresponding address result; and
a register configured to define a boundary between the first and second logical portions, the boundary indicating a location within the cache, the location being defined by a value stored at the register.
45. A circuit comprising:
a translation lookaside buffer (TLB) configured to store translations between address domains, the TLB addressable as a guest TLB and a root TLB, the guest TLB configured to store translations between a guest virtual address (GVA) domain and a guest physical address (GPA) domain, the root TLB configured to store translations between the GPA domain and a root physical address (RPA) domain, each entry in the cache including a bit indicating whether the entry is a member of the guest TLB or the root TLB;
a processor configured to match an address request against the cache and output a corresponding address result; and
a register configured to define a boundary between the guest TLB and the root TLB, the boundary indicating a location within the cache, the location being defined by a value stored at the register.
2. The circuit of
3. The circuit of
4. The circuit of
5. The circuit of
6. The circuit of
7. The circuit of
8. The circuit of
9. The circuit of
10. The circuit of
11. The circuit of
12. The circuit of
13. The circuit of
14. The circuit of
15. The circuit of
16. The circuit of
17. The circuit of
18. The circuit of
19. The circuit of
20. The circuit of
21. The circuit of
22. The circuit of
24. The method of
25. The method of
26. The method of
27. The method of
28. The method of
29. The method of
30. The method of
31. The method of
32. The method of
33. The method of
34. The method of
35. The method of
36. The method of
37. The method of
38. The method of
39. The method of
40. The method of
storing translations between the third addressing domain and a fourth addressing domain; and
defining a boundary between the second and third logical portions.
41. The method of
42. The method of
43. The method of
44. The method of
In computer systems, virtualization is a process by which computing resources, such as a hardware platform, an operating system, or memory, are simulated by a computer system, referred to as a host machine. A typical host machine operates a hypervisor, which is software or hardware that creates and runs virtual machines, also referred to as guest machines. Through hardware virtualization, the hypervisor provides each guest machine with a virtual hardware operating platform. By interfacing with the virtual operating platform, the guest machines access the computing resources of the host machine to execute their respective operations. As a result, a single host machine can support multiple operating systems or other software simultaneously through virtualization.
In a typical host machine, the virtual operating platform is presented to the guest machines as a “real” hardware platform, meaning that the virtual nature of the hardware platform should not be discernible to the guest machines. Further, the host machine should avoid conflicts between guest machines in accessing computing resources. To accomplish these goals, the host machine may implement a translation scheme between the guest software and the physical host resources. With regard to memory resources, for example, the host machine may support virtual address spaces that are presented to respective guest machines. The virtual address space appears, to the guest machine, as a “real” (physical) address space. However, the host machine translates between the virtual address spaces and a physical address space corresponding to the memory of the host machine. As a result, the host machine can manage memory resources for multiple guest machines.
Example embodiments of the present invention provide systems and methods for caching translations between address spaces in a virtualization environment. A circuit may include a cache configured to store translations between address domains, where the cache is addressable as a first logical portion and a second logical portion. The first logical portion is configured to store translations between a first address domain and a second address domain, and the second logical portion is configured to store translations between the second address domain and a third address domain. A processor is configured to match an address request against the cache and output a corresponding address result. Further, a register is configured to define a boundary between the first and second logical portions.
In further embodiments, the processor can match an address request in the first address domain against entries in the first logical portion to determine a corresponding entry having an address in the second address domain. The processor may also match the entry determined in the first logical portion against entries in the second logical portion to determine a corresponding entry having an address in the third address domain, the address result including the address in the third address domain. The processor may further match an address request in the second address domain against entries in the second logical portion to determine a corresponding entry having an address in the third address domain, the address result including the address in the third address domain.
In still further embodiments, at least a subset of entries in the cache may include an index identifier, and a decoder may be included to locate an entry in the cache based on the index identifier. The address request can include an indication of the index identifier, and the index identifier can be configured in a sequence of index identifiers, the order of the sequence being dependent on a source of the address request.
In yet still further embodiments, the location of the boundary may vary according to a value stored in the register, and a size of at least one of the first and second logical portions varies according to the value stored in the register. The processor may update entries in the cache in response to the size of the at least one of the first and second logical portions being varied.
The first address domain may be a guest virtual address domain, the second address domain may be a guest physical address domain, and the third address domain may be a root physical address domain. Each entry in the cache can include a bit indicating whether the entry is a member of the first logical portion or the second logical portion, and the address request can include an indication of the bit corresponding to a requested entry.
In still further embodiments, the processor may be configured to suppress an exception resulting from multiple matching entries during a given time period, which can occur, for example, when transitioning between sources sending the address request. Each entry in the cache can include a bit indicating a source associated with the entry. The processor may control access to the first portion and the second portion based on a source of the address request.
In yet further embodiments, the cache may be addressable as a third logical portion that is configured to store translations between the third address domain and a fourth address domain. The register may define a boundary between the second and third logical portions. The processor may search a selected one or more of the logical portions based on an indication, such as one or more bits, in the address request.
In further embodiments, upon detecting a missing entry in the cache for the address request, the processor may output a request for a translation corresponding to the missing entry. Upon receiving the translation corresponding to the missing entry, the processor may enter the translation into the cache at, for example, a randomly-determined entry of the cache. The first logical portion can include a first index and the second logical portion can include a second index, the second index being an inversion of the first index.
In still further embodiments, a circuit may include a translation lookaside buffer (TLB) configured to store translations between address domains, the TLB addressable as a guest TLB and a root TLB. The guest TLB may store translations between a guest virtual address (GVA) domain and a guest physical address (GPA) domain, while the root TLB may store translations between the GPA domain and a root physical address (RPA) domain. The root TLB may also store translations between the RVA (root virtual address) domain and the RPA domain. Each entry in the cache may include a bit indicating whether the entry is a member of the guest TLB or the root TLB. The circuit may further include a processor configured to match an address request against the cache and output a corresponding address result, as well as a register configured to define a boundary between the guest TLB and the root TLB.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). The GPA space refers to a partition of the physical memory allocated to the specified guest machine. However, the GVA space (rather than the GPA space) is presented to each guest machine in order to allow greater flexibility in memory allocation. For example, the GVA space of a given guest machine may be larger than the physical memory partition allocated to it, and data may be stored to a hard disk when the memory partition is at capacity.
A software system that manages one or more guest operating systems, such as a hypervisor, translates GPAs to corresponding root physical addresses (RPAs). The RPAs, also referred to as physical system addresses or machine addresses, indicate the location of the physical memory of the host computer. Thus, to complete a memory access by a guest machine, two translations occur: a GVA is first translated to a GPA, and the GPA is then translated to a RPA.
Addresses are initially translated between addressing domains (e.g., virtual address to physical address) by reading (“walking”) a page table storing the relevant address relations. A translation lookaside buffer (TLB) is employed to cache such translations. Once a given translation is cached, the TLB is accessed during future memory accesses requiring that translation, thereby avoiding the need for a further page table walk. In some virtualized systems, a TLB may cache translations from a GVA to a RPA. Alternatively, two physical TLBs may be employed: a first TLB storing GVA-to-GPA translations, and a second TLB storing GPA-to-RPA translations.
A memory controller 108, which may include hardware and a software portion of the guest operating platform, interfaces with the guests 104a-n to access the system memory 150. In order to access the system memory 150, the memory controller 108 first accesses the merged translation lookaside buffer (MTLB) 110. The MTLB 110 may include a single physical cache, buffer, segment register, system register, or other storage unit that is logically addressable as two distinct TLBs: a guest TLB (GTLB) 120 (a “virtual tag section”) and a root TLB (RTLB) 130 (a “physical tag section”). The GTLB 120 stores GVA-to-GPA translations, and the RTLB 130 stores GPA-to-RPA translations. The MTLB 110 may therefore appear to other components as two distinct logical TLBs while sharing a single physical structure.
During a guest memory access, the memory controller 108 may receive a GVA from a guest 104a-n, which it then matches against entries in the GTLB 120 to determine a corresponding GPA. If a match is found, then the memory controller 108 matches the located GPA against entries in the RTLB 130 to determine a corresponding RPA. With the matching RPA, the memory controller 108 accesses the indicated entry of the system memory 150 for a read or write operation by the guest 104a-n.
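To make the two-stage flow concrete, the following C sketch models it at the page-number level. The helpers gtlb_lookup and rtlb_lookup are hypothetical interfaces standing in for the two logical sections of the MTLB 110 (declarations only), and the status codes are illustrative; this is a simplified model, not the hardware implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical lookups into the two logical sections of the MTLB
 * (declarations only; assumed interfaces for this sketch). */
bool gtlb_lookup(uint64_t gva_page, uint64_t *gpa_page);   /* GVA -> GPA */
bool rtlb_lookup(uint64_t gpa_page, uint64_t *rpa_page);   /* GPA -> RPA */

typedef enum {
    XLATE_OK,          /* RPA produced; the memory access can proceed       */
    XLATE_GTLB_MISS,   /* guest (or hardware on its behalf) refills the GTLB */
    XLATE_RTLB_MISS    /* hypervisor refills the RTLB                        */
} xlate_status_t;

/* Two-stage guest translation: GVA page -> GPA page -> RPA page. */
xlate_status_t translate_guest_access(uint64_t gva_page, uint64_t *rpa_page)
{
    uint64_t gpa_page;

    if (!gtlb_lookup(gva_page, &gpa_page))
        return XLATE_GTLB_MISS;
    if (!rtlb_lookup(gpa_page, rpa_page))
        return XLATE_RTLB_MISS;
    return XLATE_OK;
}
```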
Entries in the GTLB 120 may initially be added by the guests 104a-n, which access the page tables 140 stored at the system memory 150. The page tables 140 store relations between the GVA, GPA and RPA spaces, and may be “walked” to determine address translations between those address spaces. Thus, the guests 104a-n may walk the page tables 140 to determine a GVA-to-GPA translation, and then access the GTLB 120 via a GTLB index to store the translation at the GTLB 120. Likewise, entries in the RTLB 130 may initially be added by the hypervisor 105, which accesses the page tables 140 stored at the system memory 150. The hypervisor may walk the page tables 140 to determine a GPA-to-RPA translation, and then access the RTLB 130 via a RTLB index to store the translation at the RTLB 130. Entries into the GTLB 120 and RTLB 130 may be added, as described above, in response to a reported “miss” by the memory controller 108 in a translation lookup at the GTLB 120 or RTLB 130.
Configuration of the MTLB 110, as well as operation of the system 100 during memory access and populating the MTLB 110, is described in further detail below with reference to
A programmable register (not shown), stored at the MTLB 110 or external to the MTLB 110, defines the position of the TLB partition 212 dividing the GTLB 120 and RTLB 130. Accordingly, all entries with physical indices lower than the partition 212 comprise the RTLB 130, and all entries with physical indices equal to or greater than the partition comprise the GTLB 120. The flexible partitioning between the GTLB 120 and RTLB 130 allows a system (e.g., system 100) to optimize the size of these structures, given the fixed number of total translation entries. The size of the TLBs 120, 130 can be changed at run-time. In addition, if the computer system is used in a non-virtualized environment, the partition 212 can be set such that the Root TLB takes up the entire MTLB cache 206. For such a configuration, a value may be reserved to represent 0 entries in the GTLB 120.
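As a rough model of this partitioning, the sketch below treats the partition register as an integer index into a fixed array of entries. The names, the 256-entry total, and the initial partition value are assumptions for illustration only.

```c
#include <stdint.h>

#define MTLB_ENTRIES 256u            /* assumed total number of entries */

/* Hypothetical partition register: physical indices [0, partition) form the
 * root TLB, indices [partition, MTLB_ENTRIES) form the guest TLB. */
static uint32_t tlb_partition = 192;

static uint32_t rtlb_size(void) { return tlb_partition; }
static uint32_t gtlb_size(void) { return MTLB_ENTRIES - tlb_partition; }

/* Run-time resize of the two logical TLBs. In a non-virtualized
 * configuration the partition may equal MTLB_ENTRIES, giving the root TLB
 * the entire structure (zero guest entries). Entries that change sections
 * would be invalidated or rewritten by software after the move. */
static void set_partition(uint32_t new_partition)
{
    if (new_partition <= MTLB_ENTRIES)
        tlb_partition = new_partition;
}
```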
Software, such as the hypervisor 105 and guests 104a-n (
To facilitate access to the GTLB 120 and RTLB 130, the MTLB 110 may also include a decoder 334, which allows software (e.g., guests 104a-n or hypervisor 105) to access the entries of the MTLB 110 with an index. Each entry in the MTLB 110 may be assigned such an index, and software may use the decoder 334 to read or write a particular entry. The decode logic may employ a physical index to identify a particular MTLB entry. Guests 104a-n may be limited to generating guest logical indices for writing to the GTLB 120. A hypervisor 105 (or other software with root access) may generate either guest logical or root logical indices for writing to the GTLB 120 or RTLB 130. An index converter 332 may be implemented to transform guest indices into a physical index corresponding to an entry of the GTLB 120.
In an example embodiment, a logical index to the GTLB 120 and RTLB 130 may be configured as follows. The total MTLB size may be a power of 2, and the root logical index may equal the physical index (i.e., the index of the physical cache) for all root logical indices less than the RTLB 130 size. The guest logical index may be transformed to the physical index for all guest logical indices less than the GTLB 120 size. Software-readable, read-only registers may indicate the sizes of the GTLB 120 and RTLB 130, and may be updated automatically after the partition between the GTLB 120 and RTLB 130 is configured.
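One plausible C model of this index mapping is sketched below. The direct root mapping follows the description above; the guest transform uses an index inversion, which is consistent with the earlier statement that the second index may be an inversion of the first, but the exact hardware mapping is an assumption here.

```c
#include <stdbool.h>
#include <stdint.h>

#define MTLB_ENTRIES 256u              /* assumed power of 2 */
static uint32_t root_tlb_size  = 192;  /* physical indices 0 .. 191   */
static uint32_t guest_tlb_size = 64;   /* physical indices 192 .. 255 */

/* Root logical index maps directly to the physical index. */
static bool root_index_to_physical(uint32_t root_idx, uint32_t *phys)
{
    if (root_idx >= root_tlb_size)
        return false;                  /* out of range for the root section */
    *phys = root_idx;
    return true;
}

/* Guest logical index: one possible transform places guest entry 0 at the
 * top of the structure by inverting the index (an assumption consistent
 * with the inverted-index description, not a definitive mapping). */
static bool guest_index_to_physical(uint32_t guest_idx, uint32_t *phys)
{
    if (guest_idx >= guest_tlb_size)
        return false;
    *phys = (MTLB_ENTRIES - 1u) - guest_idx;
    return true;
}
```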
The MTLB 110 may be a fully associative structure. If the MTLB 110 is configured as one physical TLB with associative match logic, a search in the MTLB 110 could result in matches to either a GTLB 120 entry or a RTLB 130 entry. In order for the associative logic to distinguish between GTLB 120 and RTLB 130 entries, each entry may be tagged with a bit, referred to as a Gt bit. If the Gt bit is 1, then an entry belongs to the GTLB 120; if the Gt bit is zero, an entry belongs to the RTLB 130. Thus, all RTLB 130 entries may have the Gt bit at zero, while all GTLB 120 entries may have the Gt bit at one. When an associative lookup on the RTLB 130 is required, the search value (key) sent to the MTLB 110 has Gt set to zero. Similarly, when an associative lookup on the GTLB 120 is required, the search value sent to the MTLB 110 has Gt set to one.
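As an illustration of how the Gt bit separates the two logical TLBs during an associative match, the C sketch below models an entry tag and a search key. The struct layouts, field widths, and names are assumptions rather than the actual hardware format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative tag for an MTLB entry and the key presented to the match
 * logic. Because the Gt bit is part of the tag, a root lookup (gt = 0) can
 * never hit a guest entry (gt = 1), and vice versa, even though both kinds
 * of entries share one physical structure. */
struct mtlb_tag {
    bool     valid;
    unsigned gt   : 1;   /* 1 = GTLB entry, 0 = RTLB entry        */
    unsigned g    : 1;   /* global: ignore ASID when set          */
    unsigned asid : 8;   /* address space identifier              */
    uint64_t vpn;        /* virtual page number (GVA or GPA page) */
};

struct mtlb_key {
    unsigned gt   : 1;   /* set by the requester: 0 for RTLB, 1 for GTLB lookups */
    unsigned asid : 8;
    uint64_t vpn;
};

static bool tag_matches(const struct mtlb_tag *t, const struct mtlb_key *k)
{
    return t->valid &&
           t->gt  == k->gt &&
           t->vpn == k->vpn &&
           (t->g || t->asid == k->asid);
}
```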
Random replacement may be utilized, as part of an algorithm handling an MTLB lookup “miss,” by using a “random register.” The random register may select a random entry within a portion of the selected GTLB 120 or RTLB 130. With reference to
Referring again to
Referring back to
The hypervisor 105, or other software or hardware, may be further configured to provide for bulk invalidation of entries in the GTLB 120. By using the Gt bit as an identifier, the hypervisor 105 can quickly invalidate all GTLB 120 entries. Alternatively, a circuit or other hardware may be configured to invalidate all entries with the Gt bit set to 1. As a result, a guest context can be erased quickly from the MTLB 110 without affecting the root TLB entries, such as during a switch between guests. In one embodiment, the current guest context is identified by a register that contains an address space identifier (also referred to as a virtual machine ID or VMID). The hypervisor 105 may change this value when switching the current guest context, and hardware (as described above) may automatically invalidate the guest context using the Gt bit, when software changes this value.
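As a software-level illustration of this bulk invalidation, the sketch below clears guest entries when the guest-context (VMID) register changes. The entry layout, table size, and register model are assumptions; real hardware could perform the invalidation as a single broadcast operation keyed on the Gt bit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define MTLB_ENTRIES 256             /* illustrative size */

/* Minimal entry model: only the fields needed for this sketch. */
struct mtlb_entry {
    bool     valid;
    unsigned gt : 1;                 /* 1 = guest (GTLB) entry, 0 = root (RTLB) entry */
};

static struct mtlb_entry mtlb[MTLB_ENTRIES];
static uint8_t current_vmid;         /* hypothetical guest-context (VMID) register */

/* Invalidate every guest entry without touching root entries. */
static void invalidate_guest_context(void)
{
    for (size_t i = 0; i < MTLB_ENTRIES; i++)
        if (mtlb[i].gt)
            mtlb[i].valid = false;
}

/* Switching guests: changing the VMID triggers the bulk invalidation. */
static void switch_guest(uint8_t new_vmid)
{
    if (new_vmid != current_vmid) {
        invalidate_guest_context();
        current_vmid = new_vmid;
    }
}
```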
In further embodiments, an MTLB may be partitioned into three or more logical TLBs to accommodate multiple guest TLBs. The multiple guest contexts may be managed statically or dynamically. In a static configuration, instead of one TLB partition (e.g., TLB partition 212 of
In the case of a failure to match to one or both of a GPA and RPA (i.e., an MTLB “miss”), the MTLB 110 is updated (420). To do so, one or both of the guest 104a and hypervisor 105 (or hardware on behalf of the guest 104a or the hypervisor 105) access the page tables 140 of the system memory 150. The guest 104a and/or hypervisor 105 walk the page tables 140 to determine the needed GVA-to-GPA and/or GPA-to-RPA translation (425). The guest 104a and/or hypervisor 105 may then write the translation(s) to the MTLB 110 (430) (e.g., as described above with reference to
ASID[7:0] (505): Address space identifier (ASID). If Gt=1, then this field holds the Guest ASID for this particular translation. If Gt=0, this field holds the Root ASID for this particular translation. This field may be ignored if G=1.
VPN (506): Virtual page number, indicating a GVA or GPA address.
G (507): Global bit. If Gt=1, then this bit represents the G bit of the Guest TLB entry corresponding to a GVA. If Gt=0, then this bit represents the G bit of a Root TLB entry corresponding to either a GPA or a RVA (Root Virtual Address).
The physical portion may include the following entries:
PFN (510): Physical Frame Number, indicating the physical page number of a GPA or RPA address. If Gt=1, then this field represents a GPA. If Gt=0, then this field represents an RPA.
XI (511): Execute Inhibit indicates that a mapping contains data and not instructions. If XI=1, then this page translation cannot be used for instructions and may only be used for data translations. If XI=0, then this translation can be used for either data or instructions.
RI (512): Read inhibit. The Read inhibit bit may be used to prevent a particular page from being read.
C[2:0] (513): Coherency attributes may be stored in the C field. These attributes can be used to determine the nature of the memory space (e.g. cacheable, uncached, coherent, non-coherent, I/O-space, etc).
D (514): Dirty bit. The dirty bit indicates whether a page has previously been written to.
V (515): Valid bit. The valid bit identifies whether the entry is valid or invalid.
The system 600 shown may be a portion of a host machine supporting a number of guest machines through virtualization. A plurality of guests (i.e., guest machines) 604a-n operate via respective guest operating platforms (not shown), which are virtual hardware platforms managed by the hypervisor 605. The guests 604a-n access the physical system memory 650 indirectly through a GVA space provided by the guest operating platforms. In order to access the system memory 650, which is addressed through a RPA space, a GVA may be first mapped to a GPA, which is in turn mapped to a RPA at a partition of the system memory 650 allocated for the given guest 604a-n. Thus, to enable memory access by a guest 604a-n, a GVA may be translated to a GPA, which is then translated to a RPA indicating an entry of the system memory 650.
The MTLB 610 may be configured as described above with reference to the MTLB 110 of
A memory controller 608, which may include hardware and a software portion of the guest operating platform, interfaces with the guests 604a-n to access the system memory 650. In order to access the system memory 650, the memory controller 608 first accesses the μTLB 615. The μTLB 615 may include a cache, buffer, segment register, system register, or other storage unit that stores GVA-to-RPA translations. During a guest memory access, the memory controller 608 may receive a GVA from a guest 604a-n, which it then matches against entries in the μTLB 615 to determine a corresponding RPA. If a match is found, then the memory controller 608 accesses the indicated entry of the system memory 650 for a read or write operation by the guest 604a-n. If a match is not found, then the memory controller 608 may access the MTLB 610 to determine corresponding GVA-to-GPA and GPA-to-RPA translations. The memory controller 608 may further collapse the two translations to a single GVA-to-RPA translation, and may populate the μTLB 615 with the collapsed translation.
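A simplified sketch of this lookup-then-collapse flow is shown below. The μTLB is modeled as a small array of collapsed GVA-to-RPA entries; gtlb_lookup and rtlb_lookup are hypothetical MTLB interfaces (declarations only), and the sizes and replacement policy are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical MTLB section lookups (declarations only; assumed interfaces). */
bool gtlb_lookup(uint64_t gva_page, uint64_t *gpa_page);   /* GVA -> GPA */
bool rtlb_lookup(uint64_t gpa_page, uint64_t *rpa_page);   /* GPA -> RPA */

/* Tiny model of the uTLB: collapsed GVA -> RPA translations. */
#define UTLB_ENTRIES 32
static struct { bool valid; uint64_t gva_page, rpa_page; } utlb[UTLB_ENTRIES];

/* Translate a guest virtual page: hit in the uTLB if possible; on a miss,
 * read both MTLB sections and collapse the pair into one uTLB entry. */
static bool utlb_translate(uint64_t gva_page, uint64_t *rpa_page)
{
    for (size_t i = 0; i < UTLB_ENTRIES; i++) {
        if (utlb[i].valid && utlb[i].gva_page == gva_page) {
            *rpa_page = utlb[i].rpa_page;      /* single-step translation */
            return true;
        }
    }

    uint64_t gpa_page;
    if (!gtlb_lookup(gva_page, &gpa_page) || !rtlb_lookup(gpa_page, rpa_page))
        return false;               /* MTLB miss: software walks the page tables */

    static size_t victim;           /* trivial replacement policy for the sketch */
    utlb[victim].valid    = true;
    utlb[victim].gva_page = gva_page;
    utlb[victim].rpa_page = *rpa_page;
    victim = (victim + 1) % UTLB_ENTRIES;
    return true;
}
```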
Configuration of the μTLB 615, as well as operation of the system 600 during memory access and populating the μTLB 615, is described in further detail below with reference to
V (705): Valid bit. The valid bit indicates whether the entry is valid or invalid.
Gt (706): The Gt bit indicates if an entry belongs to the guest (Gt=1) or root context (Gt=0). If Gt=0, the VPN is a GPA or a RVA. If Gt=1, the VPN is a GVA or a GPA (for unmapped guest addresses). Use of the Gt bit avoids the need to tag each μTLB entry with the corresponding virtual machine ID (VMID).
Mask (707): These bits indicate to the comparator whether a particular address bit should be considered in the comparison or ignored.
G (708): This is the global bit. If Gt=1, then this bit represents the G bit of the Guest TLB entry corresponding to a GVA. If Gt=0, then this bit represents the G bit of a Root TLB entry corresponding to either a GPA or a RVA.
ASID (709): This is the ASID field. If Gt=1, this field holds the Guest ASID for this particular translation; if Gt=0, it holds the Root ASID for this particular translation. This field is ignored if G=1.
VP (710): Virtual page number (for a GVA, GPA, or RVA).
ENTg[7:0] (711): ENTg is a guest entry number or another unique identifier that identifies the source of this translation. On MTLB writes, this number is used to selectively invalidate entries that might no longer represent valid translations. If Gt=0, this field is not used. A value of ENTg[7:0]==0 for an entry in the guest TLB indicates that there is no guest translation in the guest TLB (e.g., an unmapped guest address). ENTg is the absolute entry number (0-255) of the MTLB and not the “index” known to the guest.
ENTr[8:0] (712): ENTr is a root entry number or another unique identifier that identifies the source of this translation. Note that this field could be set for both Gt=0 and Gt=1. ENTr[8] is set to indicate that this μTLB entry does not have a MTLB root entry. This can occur if unmapped root addresses are inserted into the μTLB. ENTr is the absolute entry number (0-255) of the MTLB and not the “index” known to the root.
RP (713): Root page number. This field is either copied from the Root TLB EntryLo0 or EntryLo1 or may be a concatenation of GPA (from the guest TLB) and the RP (from the root TLB).
GRI (714): Guest read inhibit. The GRI bit reflects the value in the Guest TLB entry RI bit. This field is disregarded when Gt=0.
RI (715): Read Inhibit. The RI bit reflects the value of the Root TLB RI bit. This field is “don't care” if Gt=0.
GD (716): Guest Dirty bit. The GD bit reflects the value of the D bit in the Guest TLB. This field may be disregarded if Gt=0.
D (717): Dirty bit. The D bit reflects the value of the D bit in the Root TLB.
C (718): Coherency bits. When Gt=1, C is taken from the guest TLB entry. When Gt=0, it is taken from the root TLB entry.
The values above are described in further detail below. If Guest=1 (i.e. a guest lookup is being performed), then only entries with Gt=1 are considered on a lookup. If Guest=0 (i.e. a root lookup is being performed), then only entries with Gt=0 are considered on a lookup.
Referring back to
Turning again to
The G bit 708 represents the “Global” bit. Global addresses are ones where the ASID is ignored. If this is a mapping from a GVA to a RPA, the G bit is copied from the guest TLB (i.e., guest translation). If the mapping is for a GPA or RVA to a RPA, the G bit is copied from the Root TLB (i.e., root translation).
The ASID field 709 represents the address space identifier for a virtual address. The ASID 709 is used to distinguish between virtual addresses belonging to different contexts. If this is a mapping from guest virtual address to root physical address, the ASID field 709 is copied from the guest TLB (i.e., guest translation). If the mapping is for a GPA to a RPA, the ASID field is copied from the root TLB (i.e., root translation). The Virtual page number (VP) field 710 may be formed from the virtual address being translated, and the Root page number (RP) field 713 may be formed dependent on the relative page sizes of the guest and root translations (described in further detail below, with reference to
Read Inhibit (RI) 715 may be used to prevent a particular page from being read. Because read permission depends on both the guest and root translation's value for this bit, both the guest and root read inhibit attributes are captured in the μTLB 615. The GRI (Guest Read Inhibit) may be copied from the guest TLB. If the translation is not a guest virtual address translation, then the GRI bit may be disregarded, and the RI bit 715 is copied from the root TLB entry. Similarly, the D (dirty) bit 717 indicates whether a page has previously been written to. Because the D bit 717 depends on both the corresponding guest and root translations' value for this bit, both the guest and root D bit attributes are captured in the μTLB 615.
The C bits 718 may relate to the coherency policy for a particular page. If mapping from GVA to RPA, the C field 718 may be copied from the corresponding entry of the guest TLB. If the mapping is for a GPA to RPA translation, then the C field 718 may be copied from the corresponding entry of the root TLB.
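The way the guest and root permission bits combine during a lookup can be expressed compactly in code. The following C sketch models the load, store, and instruction checks summarized in Table 1 below; the enum, struct, and function names are illustrative assumptions, and the table remains the authoritative description.

```c
#include <stdbool.h>

typedef enum { ACCESS_OK, GUEST_EXCEPTION, ROOT_EXCEPTION } tlb_fault_t;

/* Permission-related bits of a uTLB entry (illustrative subset). */
struct utlb_perm {
    bool gri;  /* guest read inhibit            */
    bool ri;   /* root read inhibit             */
    bool gd;   /* guest dirty (write permitted) */
    bool d;    /* root dirty (write permitted)  */
    bool gxi;  /* guest execute inhibit         */
    bool xi;   /* root execute inhibit          */
};

/* Guest-mode accesses consult the guest bit first, then the root bit. */
static tlb_fault_t check_load(bool guest_mode, const struct utlb_perm *e)
{
    if (guest_mode && e->gri) return GUEST_EXCEPTION;
    if (e->ri)                return ROOT_EXCEPTION;
    return ACCESS_OK;
}

static tlb_fault_t check_store(bool guest_mode, const struct utlb_perm *e)
{
    if (guest_mode && !e->gd) return GUEST_EXCEPTION;  /* guest page not writable */
    if (!e->d)                return ROOT_EXCEPTION;   /* root page not writable  */
    return ACCESS_OK;
}

static tlb_fault_t check_instruction(bool guest_mode, const struct utlb_perm *e)
{
    if (guest_mode && e->gxi) return GUEST_EXCEPTION;
    if (e->xi)                return ROOT_EXCEPTION;
    return ACCESS_OK;
}
```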
TABLE 1
Comparison of valid and invalid values of a μTLB entry.
| Operation | Mode | Gt | GRI | RI | GD | D | GXI | XI | Action |
| Load | Guest | 1 | 0 | 0 | x | x | x | x | OK |
| Load | Guest | 1 | 1 | x | x | x | x | x | Guest Exception |
| Load | Guest | 1 | 0 | 1 | x | x | x | x | Root Exception |
| Load | Root | 0 | x | 0 | x | x | x | x | OK |
| Load | Root | 0 | x | 1 | x | x | x | x | Root Exception |
| Store | Guest | 1 | x | x | 0 | x | x | x | Guest Exception |
| Store | Guest | 1 | x | x | 1 | 0 | x | x | Root Exception |
| Store | Guest | 1 | x | x | 1 | 1 | x | x | OK |
| Store | Root | 0 | x | x | x | 0 | x | x | Root Exception |
| Store | Root | 0 | x | x | x | 1 | x | x | OK |
| Instruction | Guest | x | x | x | x | x | 0 | 0 | OK |
| Instruction | Guest | x | x | x | x | x | 1 | x | Guest Exception |
| Instruction | Guest | x | x | x | x | x | 0 | 1 | Root Exception |
| Instruction | Root | x | x | x | x | x | x | 0 | OK |
| Instruction | Root | x | x | x | x | x | x | 1 | Root Exception |
Table 1 presents the possible values of a μTLB entry, and indicates valid combinations of values, as well as combinations of values that can generate an “exception” during a μTLB lookup. Upon accessing the μTLB, an exception condition may occur in the guest (e.g., an instruction fetch, a load or store violation). After the guest software addresses the exception conditions, the instruction may be re-executed. Upon re-execution, the guest permission check may pass, but a root TLB protection violation may exist. The exception would then be signaled to the root context. A μTLB exception may be considered a “miss,” and is handled as described below with reference to
In the case of a failure to match to a GPA or RVA (i.e., a μTLB “miss”), the μTLB 615 is updated (820). To do so, the memory controller 608 may operate as a μTLB “miss controller.” (In alternative embodiments, a μTLB miss controller may be configured separately from the memory controller 608). The memory controller 608 may access the MTLB 610 to retrieve corresponding GVA-to-GPA and GPA-to-RPA translations (825). (Alternatively, the memory controller 608 may access only a GPA-to-RPA translation if the given μTLB entry is a GPA-to-RPA translation.) With the corresponding translations from the MTLB 610, the memory controller 608 generates a valid GVA-to-RPA translation, and writes the translation to the μTLB 615 (830). Once the μTLB 615 is updated, the memory controller 608 may again perform a GVA match against the μTLB 615 and provide a memory access (816) upon returning a corresponding RPA (815).
To create a μTLB entry including the fields described above, the fields may be set as follows (a code sketch of this composition follows the list):
V: 1 (hardware valid bit).
Gt: 1 (this is a guest mapping).
Mask: (set to minimum of RootMask size and GuestMask size).
G: copied from Guest.TLB.
ASID: copied from Guest context.
VP: GVA.
RP: RP.
GRI: copied from Guest.TLB.
RI: copied from the Root.TLB.
GD: copied from the Guest.TLB.
D: copied from the Root.TLB.
GXI: copied from the Guest.TLB.
XI: copied from Root.TLB.
C: copied from the Guest.TLB.
ENTg[7:0]: set to the index of the Guest.TLB entry.
ENTr[8:0]: set to the index of the Root.TLB entry.
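As a rough model of this composition, the following C sketch builds a collapsed entry from a guest TLB entry, the matching root TLB entry, and the current guest context. The structure layouts, field widths, and the simplified mask handling are assumptions for illustration, not the actual hardware format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative subsets of a guest TLB entry and a root TLB entry. */
struct gtlb_entry {
    uint64_t mask;               /* guest page mask (simplified encoding)       */
    bool g;                      /* global bit                                  */
    bool gri, gd, gxi;           /* read inhibit, dirty, execute inhibit        */
    uint8_t c;                   /* coherency attributes                        */
    uint8_t index;               /* physical MTLB index (becomes ENTg)          */
};

struct rtlb_entry {
    uint64_t mask;               /* root page mask                              */
    bool ri, d, xi;
    uint16_t index;              /* physical MTLB index (becomes ENTr)          */
};

struct utlb_entry {
    bool v, gt, g;
    uint64_t mask, vp, rp;
    uint8_t asid, c;
    bool gri, ri, gd, d, gxi, xi;
    uint8_t ent_g; uint16_t ent_r;
};

/* Combine a guest translation and the matching root translation into one
 * collapsed GVA -> RPA uTLB entry (guest mapping case, Gt = 1). */
static struct utlb_entry make_guest_utlb_entry(uint64_t gva_page, uint64_t rpa_page,
                                               uint8_t guest_asid,
                                               const struct gtlb_entry *g,
                                               const struct rtlb_entry *r)
{
    struct utlb_entry u = {0};

    u.v     = true;                                    /* hardware valid bit      */
    u.gt    = true;                                    /* guest mapping           */
    u.mask  = (g->mask < r->mask) ? g->mask : r->mask; /* smaller of the two page sizes */
    u.g     = g->g;                                    /* global bit from guest   */
    u.asid  = guest_asid;                              /* from the guest context  */
    u.vp    = gva_page;
    u.rp    = rpa_page;                                /* formed per relative page sizes */
    u.gri   = g->gri;   u.ri = r->ri;
    u.gd    = g->gd;    u.d  = r->d;
    u.gxi   = g->gxi;   u.xi = r->xi;
    u.c     = g->c;                                    /* coherency from guest    */
    u.ent_g = g->index;
    u.ent_r = r->index;
    return u;
}
```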
In further embodiments, the μTLB may cache multiple types of translations. For example, the μTLB can cache translations to a RPA from a GVA as described above, as well as from a GPA or a root virtual address (RVA), which may be an address of a virtual memory employed by the hypervisor. Because there can be aliases (similar addresses that represent possibly different root physical addresses) between GVAs and GPAs, the μTLB can include a Gt bit as described above. If the Gt bit of an entry is one, then the entry's VP represents a Guest Virtual Address. Similarly, if an entry's Gt bit is zero, then the entry's VP represents a GPA or RVA. The Gt bit may also enable hardware to quickly invalidate guest translations (i.e., GVA-to-GPA translations) when changing guests without disturbing mappings (i.e., GPA-to-RPA translations) owned by the hypervisor.
In still further embodiments, the μTLB may cache translations between any number of address domains. For example, some virtualization systems may implement an additional address domain between the GVA, GPA and RPA domains, such as “secure RPA” (SPA), which may be implemented between the GPA and RPA domains. To accommodate such an addressing system, a MTLB may include three or more logical portions to store translations. In a specific example including the SPA domain, the MTLB may include a first logical portion storing GVA-to-GPA translations, a second logical portion storing GPA-to-SPA translations, and a third logical portion storing SPA-to-RPA translations. Accordingly, the μTLB may be configured to cache translations between the GVA domain and the RPA domain, thereby collapsing translations between four address domains.
In still further embodiments, translations may be held in any level of a multi-level translation storage hierarchy. Each storage level can hold direct translations (i.e., translations between two successive address domains) or collapsed translations (i.e., translations between two address domains separated by one or more intermediate address domains). In a specific example, a system may be configured for four logical domains and three storage levels. The third storage level may hold collapsed translations from a first to a final address domain. A second level may hold direct translations from first to second logical domain and collapsed translations from second to final logical domain. Lastly, a first level may hold direct translations from first to second, second to third, and third to fourth logical domains.
When a mapping between a virtual address and a root physical address changes, any TLB entries that contained information from the previous mapping may be replaced or invalidated in the TLB. This replacement may be done by either hardware or software. When a guest virtual address to guest physical address mapping is changed by software, the guest TLB may be updated. Both the μTLB and data cache may have used information from the previous guest TLB mapping to cache either an address translation or data value, respectively. In order to maintain address and data consistency, any μTLB entry or data cache entry associated with the prior mapping may be invalidated. In one embodiment, this invalidation may be done by hardware, such as a memory controller. The ENTg and ENTr values uniquely identify the TLB entries used to form a μTLB entry or a data cache tag, and therefore may be searched when identifying matching entries.
The memory controller 1008 or other hardware may further monitor the MTLB 1010 for changes to entries therein, such as the replacement of an entry with another entry, a deletion of an entry, or an invalidation of an entry (1230). If such a change is detected, then the memory controller 1008 may search each of the μTLB 1015 and data cache 1035 to locate entries and tags corresponding to the changed entry (1235). Upon locating such entries and tags, they may be invalidated by modifying their respective “valid” bit (1240). Example embodiments of maintaining entries across the MTLB 1010, μTLB 1015 and data cache 1035 are described in further detail below.
When a GTLB 1020 entry is replaced, any μTLB 1015 or data cache 1035 entries that contain information derived from that entry may be required to be invalidated. Tag values such as the ENTg field (described above) may be used for such invalidation. In addition, a unique identifier associated with the GTLB entry may be used to find all the μTLB and data cache entries that contain information from that GTLB entry. In one embodiment, the unique identifier may be the physical index of a GTLB entry. First, a search of the μTLB and data cache entries that have Gt equal to one may be performed. All entries that match on their ENTg field with the physical index of the Guest TLB entry being replaced may be invalidated by the memory controller or other hardware. This search may be done sequentially, associatively, or some combination thereof. Using the ENTg field may simplify the invalidation task by eliminating the need to compare against virtual address tags, which might have to be adjusted to accommodate for differing address widths (e.g. cache line size versus page size), and finding the virtual address value to compare against. Using the ENTg field may also enable the use of a virtual cache by minimizing the die area required by the content-addressable memories (CAMs) for invalidation and narrowing the invalidations to just the subset of tags needed to maintain address and data consistency. In an alternative embodiment, a bulk invalidation may be performed whereby all the data cache and μTLB tags with Gt equal to one are invalidated. In a further alternative, a bulk invalidation may be performed whereby all the data cache and μTLB tags are invalidated regardless of their Gt bit value.
When a RTLB 1030 entry is replaced, any μTLB 1015 or data cache 1035 entries that contain information derived from that entry may be required to be invalidated. The ENTr field may be used for such invalidation. Further, a unique identifier associated with the RTLB entry may be used to find all the μTLB and data cache entries that contain information from that RTLB entry. In one embodiment, the unique identifier is the physical index of a Root TLB entry. A search of the μTLB and data cache entries may then be performed. All entries that match on their ENTr field with the physical index of the Root TLB entry being replaced are invalidated by hardware. This search may be done sequentially or associatively or some combination of the two. In contrast to a GTLB replacement case, the value of the Gt bit may be considered irrelevant because both Guest entries and Root entries may rely on a root mapping.
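A simplified software model of this ENTg- and ENTr-based invalidation is sketched below. The table size, field widths, and sequential scan are assumptions (hardware might search associatively), and the same matching would apply to data cache tags carrying the ENTg/ENTr fields.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define UTLB_ENTRIES 32              /* illustrative size */

/* Minimal uTLB tag model: only the fields used for invalidation. */
struct utlb_tag {
    bool     valid;
    bool     gt;                     /* derived from a guest translation?        */
    uint8_t  ent_g;                  /* physical index of the source GTLB entry  */
    uint16_t ent_r;                  /* physical index of the source RTLB entry  */
};

static struct utlb_tag utlb[UTLB_ENTRIES];

/* Guest TLB entry replaced: invalidate every uTLB entry built from it,
 * identified by the ENTg field rather than by virtual-address tags. */
static void invalidate_by_gtlb_index(uint8_t replaced_gtlb_index)
{
    for (size_t i = 0; i < UTLB_ENTRIES; i++)
        if (utlb[i].valid && utlb[i].gt && utlb[i].ent_g == replaced_gtlb_index)
            utlb[i].valid = false;
}

/* Root TLB entry replaced: Gt is irrelevant because both guest-derived and
 * root-derived entries rely on a root mapping. */
static void invalidate_by_rtlb_index(uint16_t replaced_rtlb_index)
{
    for (size_t i = 0; i < UTLB_ENTRIES; i++)
        if (utlb[i].valid && utlb[i].ent_r == replaced_rtlb_index)
            utlb[i].valid = false;
}
```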
If the ENTr field were not implemented for invalidating entries, removing a root TLB entry may require some other form of data cache and μTLB invalidation. As an alternative, the entire data cache and μTLB may be invalidated. In a further alternative, the virtual tag may be matched for invalidation. However, upon the removal of a root TLB entry, invalidating guest entries with GVA tags can present further challenges. Specifically, in some embodiments, there is no reverse mapping between the GPA of a root TLB entry and the possibly multiple GVAs that map to it. Thus, in such an embodiment, a further component may be required to deduce the GVAs that map to the GPA that matches the root TLB entry being replaced.
When virtualization via a multi-stage address translation system is introduced, the possibility of unmapped or bypass virtual addresses may occur at any stage. In addition, mapped virtual addresses may be assigned some special attributes in the guest translation process. In one embodiment with a two-stage translation scheme, an unmapped address can occur as a GVA presented to the GTLB or as a GPA presented to the RTLB.
A consequence of the multi-stage translation is that a GVA that is normally unmapped may result in a GPA that is mapped by the RTLB. Additionally, a GVA that has some particular attribute associated with it will be translated to a GPA. A guest operating system may specify that an address should be unmapped by the guest TLB but may have no control over the root TLB. Thus, an unmapped GVA may become a mapped GPA. Generally, this would be appropriate because the RTLB is controlled by a hypervisor, which has final control over the RPA associated with a GPA. However, for some types of transactions, if the GVA is unmapped or has some particular attribute, it may be beneficial to bypass subsequent address translations and allow the RPA to be equal to the GPA.
Accordingly, example embodiments may provide for selectively bypassing at least a portion of an address translation based on an indication in a received address.
The bypass control may provide for selectively bypassing translation at one or both of the GTLB 1320 and RTLB 1330 translations based on an attribute of the address or other indication. If a bypass is enabled, the address may be transformed (e.g., via address masking) by a fixed transform 1365 in place of the GTLB translation 1320 and/or a fixed transform 1366 in place of the RTLB translation 1330. In one embodiment, a set of bypass bits 1370 under control of privileged software are used to bypass the RTLB translation 1330. The bypass determination 1360 may be a function of the original GVA and/or attributes of that GVA and the state of the bypass bit. For example, in the MIPS architecture, a bit may be defined such that all GVAs in KSEG0 (an unmapped virtual address segment) are transformed to GPAs (via address masking at the fixed transformation 1365), and the GPAs are transformed to RPAs (via address masking at the fixed transformation 1366), thereby bypassing both the GTLB translation 1320 and the RTLB translation 1330. In such a configuration, a GPA may equal a GVA after an address mask is applied, and a RPA would be equal to a GPA after a further address mask is applied. In some embodiments, an address mask may not be required between GPAs and RPAs, meaning that the GPAs are identical to the RPAs. In this example, the guest address space contains both a memory space and an input/output (I/O) space. Each of these address spaces may be further divided into multiple different address spaces.
A bypass determination may cause the RTLB translation 1330 to be bypassed if the GVA is contained in the address space associated with a corresponding bypass bit 1370. Each of the bypass bits may relate to an associated address space, meaning the translation may be bypassed if the received address belongs to a given address space and the bypass bit is enabled for the given address space. For example, a bypass bit 1370 may cause the RTLB translation 1330 to be bypassed if the bypass bit is set to “1” and the original GVA resides in the corresponding address space.
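A simplified model of the bypass determination might look like the following C sketch. The segment decode, the mask constant, the bypass-bit array, and the rtlb_lookup interface are all assumptions for illustration; they stand in for the per-address-space bypass bits 1370 and fixed transform 1366 rather than reproducing the actual hardware.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-segment bypass-enable bits under privileged control. */
enum { SEG_MEM = 0, SEG_IO = 1, NUM_SEGMENTS = 2 };
static bool bypass_enable[NUM_SEGMENTS];

/* Assumed MTLB root-section lookup (declaration only). */
bool rtlb_lookup(uint64_t gpa, uint64_t *rpa);

static int segment_of(uint64_t gva)
{
    /* Placeholder segment decode; real decoding depends on the architecture
     * (e.g., whether the GVA falls in a memory or I/O address space). */
    return (gva >> 62) ? SEG_IO : SEG_MEM;
}

/* Translate a GPA derived from a GVA: either bypass the root TLB with a
 * fixed masking transform, or perform the normal GPA -> RPA lookup. */
static bool gpa_to_rpa(uint64_t original_gva, uint64_t gpa, uint64_t *rpa)
{
    if (bypass_enable[segment_of(original_gva)]) {
        *rpa = gpa & 0x0000FFFFFFFFFFFFull;   /* fixed transform; mask value is illustrative */
        return true;
    }
    return rtlb_lookup(gpa, rpa);             /* normal RTLB translation */
}
```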
In the case of a bypass of a GTLB only (i.e., continuing to access the RTLB), a guest operating system may determine to bypass translation to access physical memory directly. However, a hypervisor may prohibit such direct access, and therefore continue to cause the GPA (resulting from a GTLB bypass) to be translated to a RPA.
A μTLB, as described above, may cache translations from GVAs to RPAs. Thus, in the case of a bypass, the result of a bypass may or may not be cached in the μTLB. If the result is not cached in the μTLB, then a μTLB miss may occur on subsequent accesses, and the bypass determination and masking may be repeated.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
Snyder, II, Wilson P., Kessler, Richard E., Bertone, Michael, Chin, Bryan W., Mukherjee, Shubhendu S.