Methods and apparatus are disclosed to provision virtual machine resources. An example method includes labeling a copy of memory associated with an established virtual machine with an execution status based on an architecture type associated with the copy, and constraining a fetch operation in response to a page fault to a labeled portion of the copy that matches an architecture type of a received processor instruction.
1. A method comprising:
instantiating, by a processor that executes a virtual machine manager, a first virtual machine, wherein instantiating the first virtual machine comprises allocating memory and populating the memory with pages that indicate tasks assigned to the first virtual machine;
in response to a determination that the first virtual machine is instantiated, capturing, by the processor, a copy of the memory;
inspecting, by the processor, metadata associated with the copy of the memory for architecture information that indicates
a memory architecture type of the first virtual machine based upon a central processing unit architecture of the first virtual machine,
a first portion of the memory that is related to kernel code, and
a second portion of the memory that is related to user code;
inspecting, by the processor, the metadata associated with the copy of the memory for operating system information that indicates
an operating system of the first virtual machine,
a third portion of the memory that is related to file names,
a fourth portion of the memory that is related to file addresses, and
a fifth portion of the memory that is free;
tagging, by the processor, the copy of the memory with the architecture information and the operating system information;
in response to receiving an indication that a demand for the first virtual machine has increased, generating, by the processor, a clone of the first virtual machine; and
fetching, based on the tagging, by the processor, a relevant portion of the memory that is semantically related to an instruction executed by the clone of the first virtual machine.
15. A non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising:
instantiating a first virtual machine, wherein instantiating the first virtual machine comprises allocating physical memory and populating the physical memory with pages that indicate tasks assigned to the first virtual machine;
in response to a determination that the first virtual machine is instantiated, capturing a copy of the physical memory;
inspecting, by the processor, metadata associated with the copy of the physical memory for architecture information that indicates
a memory architecture type of the first virtual machine based upon a central processing unit architecture of the first virtual machine,
a first portion of the physical memory that is related to kernel code, and
a second portion of the physical memory that is related to user code;
inspecting, by the processor, the metadata associated with the copy of the physical memory for operating system information that indicates
an operating system of the first virtual machine,
a third portion of the physical memory that is related to file names,
a fourth portion of the physical memory that is related to file addresses, and
a fifth portion of the physical memory that is free;
tagging, by the processor, the copy of the physical memory with the architecture information and the operating system information;
in response to receiving an indication that a demand for the first virtual machine has increased, generating, by the processor, a clone of the first virtual machine; and
fetching, based on the tagging, by the processor, a relevant portion of the physical memory that is semantically related to an instruction executed by the clone of the first virtual machine.
8. A system comprising:
a processor; and
a memory that stores instructions that, when executed by the processor, cause the processor to perform operations comprising
instantiating a first virtual machine, wherein instantiating the first virtual machine comprises allocating physical memory and populating the physical memory with pages that indicate tasks assigned to the first virtual machine,
in response to a determination that the first virtual machine is instantiated, capturing a copy of the physical memory,
inspecting, by the processor, metadata associated with the copy of the physical memory for architecture information that indicates
a memory architecture type of the first virtual machine based upon a central processing unit architecture of the first virtual machine,
a first portion of the physical memory that is related to kernel code, and
a second portion of the physical memory that is related to user code,
inspecting, by the processor, the metadata associated with the copy of the physical memory for operating system information that indicates
an operating system of the first virtual machine,
a third portion of the physical memory that is related to file names,
a fourth portion of the physical memory that is related to file addresses, and
a fifth portion of the physical memory that is free,
tagging, by the processor, the copy of the physical memory with the architecture information and the operating system information,
in response to receiving an indication that a demand for the first virtual machine has increased, generating, by the processor, a clone of the first virtual machine, and
fetching, based on the tagging, by the processor, a relevant portion of the physical memory that is semantically related to an instruction executed by the clone of the first virtual machine.
2. The method of
3. The method of
in response to a determination that the central processing unit architecture comprises an x86 architecture, scanning page table information for executable code and non-executable code by identifying a state of an NX bit.
4. The method of
in response to a determination that the central processing unit architecture comprises an ARM architecture, scanning page table information for executable code and non-executable code by identifying a state of an XN bit.
5. The method of
scanning page table information for executable code and non-executable code by identifying a state of an XN bit or an NX bit.
6. The method of
in response to determining that the state is true,
determining that a portion of the copy of the memory associated with the XN bit or the NX bit corresponds to user space, and
tagging the portion of the copy of the memory as the user space.
7. The method of
obtaining a frame table maintained by the operating system, the frame table being obtained from the copy of the memory;
identifying, based on a review of the frame table, a file, an associated memory location, and an address space.
9. The system of
10. The system of
in response to a determination that the central processing unit architecture comprises an x86 architecture, scanning page table information for executable code and non-executable code by identifying a state of an NX bit.
11. The system of
in response to a determination that the central processing unit architecture comprises an ARM architecture, scanning page table information for executable code and non-executable code by identifying a state of an XN bit.
12. The system of
scanning page table information for executable code and non-executable code by identifying a state of an XN bit or an NX bit.
13. The system of
in response to determining that the state is true,
determining that a portion of the copy of the memory associated with the XN bit or the NX bit corresponds to user space, and
tagging the portion of the copy of the memory as the user space.
14. The system of
obtaining a frame table maintained by the operating system, the frame table being obtained from the copy of the physical memory;
identifying, based on a review of the frame table, a file, an associated memory location, and an address space.
16. The non-transitory computer readable medium of
17. The non-transitory computer readable medium of
in response to a determination that the central processing unit architecture comprises an x86 architecture, scanning page table information for executable code and non-executable code by identifying a state of an NX bit.
18. The non-transitory computer readable medium of
in response to a determination that the central processing unit architecture comprises an ARM architecture, scanning page table information for executable code and non-executable code by identifying a state of an XN bit.
19. The non-transitory computer readable medium of
scanning page table information for executable code and non-executable code by identifying a state of an XN bit or an NX bit; and
in response to determining that the state is true,
determining that a portion of the copy of the memory associated with the XN bit or the NX bit corresponds to user space, and
tagging the portion of the copy of the memory as the user space.
20. The non-transitory computer readable medium of
obtaining a frame table maintained by the operating system, the frame table being obtained from the copy of the physical memory;
identifying, based on a review of the frame table, a file, an associated memory location, and an address space.
This disclosure relates generally to cloud computing, and, more particularly, to methods and apparatus to provision virtual machine resources.
In recent years, cloud computing services have been developed and deployed that allow customers to utilize computing resources without making capital expenditures to acquire such resources. Typically, a cloud computing service provider configures one or more computers and/or computer systems having at least one processor, memory, and storage, and provides network access to the one or more computers and/or computer systems. These cloud computer systems may include any number of processors, memories and/or network access devices (e.g., network interface card(s) (NICs)) to allow any number of customers access to services provided by the computer systems. Services may include, but are not limited to, numerical processing, commercial transaction processing and/or web hosting.
In some examples, the cloud computing service provider configures the computer systems with one or more virtual machines (VMs) to service the one or more customers' computing needs. Generally speaking, VMs are virtual instances of an operating system that execute on underlying hardware resources in a time-sliced manner. A VM user is provided with computing services, such as an operating system user interface, storage space and/or applications (e.g., database query engines, numerical processing applications, graphical processing applications, web server applications, etc.) that are logically separated from any other instantiated VMs operating on the underlying hardware resources managed by the cloud computing service provider.
Methods, apparatus, and articles of manufacture are disclosed, which includes labeling a copy of memory associated with an established virtual machine with an execution status based on an architecture type associated with the copy, and constraining a fetch operation in response to a page fault to a portion of the labeled copy that matches an architecture type of a received processor instruction.
Cloud-based resource services allow customers to avoid capital expenditure in computer hardware while obtaining the benefit of having such computer resources available for computing operation(s). At least one example of a cloud-based resource service provider is Amazon® Elastic Compute Cloud (EC2), which manages network accessible computer resources for a fee. In some examples, fees charged by the cloud-based resource service provider are calculated based on a metric associated with processor utilization. In other examples, fees charged by the cloud-based resource service provider are calculated as a flat fee associated with an amount of time (e.g., minutes, hours, days, weeks, months).
Computing resources managed by the cloud-based resource service provider typically include high-end server machines having multiple processors and/or processors having multiple cores. The example computing resources managed by the cloud-based resource service provider are typically virtualized, in which a virtual machine manager (VMM) creates one or more virtual machines (VMs) that are logically separate from any other VMs instantiated by the VMM. In other words, although each VM shares underlying hardware resources on a time-slice basis, the allocated processes, memory and/or storage space of one VM are not accessible by any other VM executing on the underlying hardware resources.
Cloud computing services cater to customer workloads that may change abruptly from relatively minimal resource requirements (e.g., resource requirements measured in central processing unit (CPU) clock cycles) to more demanding resource requirements. For example, a website that hosts web-based e-mail accounts or news articles may experience peak demands around a lunch hour in response to employees checking their e-mail accounts and/or looking for news updates during their lunch break time. After the lunch break time, demand for such services may drop substantially until evening hours when such employees return to their homes. Additionally, demand may increase faster than a VMM and corresponding hardware can respond.
For example, while cloud servers grow (e.g., adding VMs when demand increases) and shrink (e.g., removing VMs when no longer needed) in response to user demands, unpredictable and/or lengthy latency may occur when attempting to add additional VMs. In some examples, bandwidth-intensive input/output activity, memory allocation and/or VM state replication result in VM instantiation latencies of approximately two minutes, as described by H. Andres Lagar-Cavilla et al. (“Kaleidoscope: Cloud Micro-Elasticity via VM State Coloring,” Eurosys, Apr. 10-13, 2011), which is incorporated herein by reference in its entirety. User demands are not satisfied until the one or more VMs are ready to execute. Cloud service providers fear, in part, lost revenue due to lethargic service performance and/or user experiences in which service requests consume excessive amounts of time to complete (e.g., web-page request timeout). For example, on-line advertisements that would otherwise accompany web-based services may never appear if the one or more VMs are not operational and/or are delayed due to instantiation latencies.
To ensure and/or otherwise promote a responsive user experience, cloud service providers facilitate the instantiation of VMs prior to detecting one or more peaks in user demand for services. In some instances, the cloud service providers enable and/or otherwise allow tools (e.g., Amazon EC2 Autoscale, Elastic Beanstalk, RightScale, etc.) to manage VM instantiation. For example, cloud service providers may instantiate (e.g., via user-tuned tools) a number of VMs fifteen minutes before a lunch break time in anticipation of an increase in user demand. In other examples, the cloud service providers may monitor (e.g., via the aforementioned tools) a performance demand threshold value (e.g., 30% of the highest demand observed to date) and allocate VMs to accommodate the demand threshold value plus a margin of safety above the current demand. In operation, if the currently observed demand is 30% of the highest demand ever observed, then the cloud service provider may allocate a number of VMs to accommodate 70% of the highest demand observed to date. In other words, a 40% increase in resources is allocated to additional VMs to accommodate the possibility that demand will increase. As long as the cloud service provider allocates a number of VMs that always exceeds the amount needed for current demand, users are more likely to experience satisfactory service.
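The headroom arithmetic above can be made concrete with a short sketch; the helper name, per-VM capacity figure, and demand units below are illustrative assumptions, not part of the disclosure.

```python
import math

def vms_to_allocate(current_demand: float, peak_demand: float,
                    margin: float = 0.40, vm_capacity: float = 100.0) -> int:
    """Number of VMs needed to cover current demand plus a safety margin.

    The margin is expressed as a fraction of the peak demand observed to
    date, matching the 30%-observed / 70%-provisioned example in the text.
    """
    target_load = current_demand + margin * peak_demand
    return max(1, math.ceil(target_load / vm_capacity))

# Example from the text: demand at 30% of a 1000 requests/s peak is
# provisioned for 70% of peak (300 + 400 = 700 -> 7 VMs of 100 req/s each).
print(vms_to_allocate(300.0, 1000.0))  # 7
```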
While any number of additional VMs and corresponding hardware for the additional VMs may be allocated to ensure a satisfactory user experience, such resources are consumed regardless of whether demand increases or not. As such, some of the underlying hardware resources for each of the additional VMs are wasted rather than allocated to one or more computing processes in response to actual user requests. Additionally, some VMs are only needed for relatively short periods of time (e.g., 10 seconds) and then relinquished so that the underlying hardware resources may be allocated for other VMs, as needed. In such example circumstances, approximately two minutes of instantiation time are required to prepare a VM that is to operate for only ten seconds, resulting in substantial waste of the underlying hardware resources and substantial delay before the VM can be used. Capital investment increases due to one or more attempts by the cloud service provider to keep customers satisfied with a responsive experience. Such capital investment costs may be ultimately passed on to users, thereby affecting the competitive edge for the cloud service provider.
Methods, systems, apparatus and/or articles of manufacture disclosed herein provision VMs in a focused manner that is responsive to user demand, rather than provisioning resource heavy VMs prior to actual user demands being detected. Such on-the-fly VM provisioning reduces or even minimizes wasted resources of the underlying hardware. Additionally, methods, systems, apparatus and/or articles of manufacture disclosed herein identify one or more memory regions of a parent VM (e.g., a previously established VM) to facilitate memory resource sharing between cloned child VMs so that each child VM consumes a smaller footprint of memory resources. As each child VM consumes a smaller portion of memory resources, a greater number of child VMs may be implemented for a given hardware resource platform.
Each instantiated VM is allocated a portion of underlying hardware resources of a computer system, such as random access memory (RAM) 112. While the illustrated example of
In response to user demand, the example VMM 102 instantiates the parent VM 104 (VM1), which consumes a first portion 114 of RAM 112. Although a first portion 114 of RAM 112 is allocated for exclusive use by VM1, the whole first portion 114 may not be in active use by VM1 104. In the illustrated example of
A benefit of cloning a VM rather than instantiating a new VM is a decrease in an amount of time required before the cloned VM is handling user task(s). Generally speaking, instantiating a VM includes allocating a portion of RAM for use by the VM, accessing disk storage to retrieve pages and populate the allocated RAM with the pages associated with one or more instructions of interest, and handling page faults. Page faults may include any number of iterations of retrieving a page from disk storage, checking the retrieved page to confirm it is associated with the CPU instruction, determining the retrieved page is not correct, and re-retrieving a different page from disk storage. Additionally, the instantiation process for the VM changes states when invoking a fault handler to process one or more page faults.
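As a rough illustration of the iterative fault handling described above, the following hedged sketch (all object names and method signatures are hypothetical) shows why disk-backed page fault handling is costly during instantiation: each incorrect guess incurs another disk read before a different page is retrieved.

```python
# Hypothetical sketch of the iterative page-fault loop described above.
# The disk and page interfaces are assumptions for illustration only.

def handle_fault_from_disk(instruction, disk, candidate_addresses):
    """Retrieve pages from disk until one satisfies the faulting instruction."""
    for addr in candidate_addresses:
        page = disk.read_page(addr)       # slow: one disk I/O per attempt
        if page.satisfies(instruction):   # confirm the page matches the CPU instruction
            return page                   # correct page found
        # wrong page: loop and re-retrieve a different page from disk storage
    raise LookupError("no page on disk satisfies the instruction")
```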
Once a parent VM (an established VM) is instantiated, its corresponding physical memory is typically populated in a robust manner. For example, if the parent VM is generally associated with one or more tasks related to online analytical processing (OLAP), then the corresponding memory of the parent VM (e.g., a portion of RAM 112) includes relevant kernel code instructions, user code instructions, kernel data, user data and/or files associated with OLAP task(s). As such, while a cloned VM also experiences page faults (e.g., information unavailable to a requesting instruction), one or more fetch operation(s) occur to physical memory (e.g., RAM 112) rather than disk storage, thereby saving substantial time. However, each cloned VM does not copy all the pages of its corresponding parent, which would consume a greater amount of physical memory (e.g., RAM 112). Instead, the cloned VM container includes a limited amount of metadata state information related to the parent VM, such as a current CPU state (e.g., CPU register contents).
Methods, systems, apparatus and/or articles of manufacture disclosed herein employ the cloning manager 152 to evaluate, identify and/or label parent memory 160 of VM1 104. A first sub-portion 162 of the parent memory 160 of VM1 104 is currently used by VM1 104, and a second sub-portion 164 is free memory. In operation, the example cloning manager 152 accesses the parent memory 160 of the parent VM (VM1) via the example VMM 102 and/or via direct access to the example physical memory 112. The evaluation of the parent memory 160 by the example cloning manager 152 identifies and/or otherwise classifies the memory 160 into sets of semantically-related regions to facilitate, in part, an optimized manner of fetching in response to a page fault. As used herein, memory coloring refers to labeling and/or otherwise tagging portions of memory (e.g., pages of memory) in a manner associated with their use. Semantically-related regions of memory 160 may include, but are not limited to, kernel data, user data, kernel code, user code, files and/or free memory space.
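For illustration, a minimal sketch of one way such a color map could be represented follows; the label set, data structures, and classifier callback are assumptions rather than details taken from the disclosure. Each page number of a captured parent memory copy maps to a semantic label.

```python
# A minimal sketch of the "memory coloring" idea described above.
# The Color labels and classify callback are illustrative assumptions.

from enum import Enum

class Color(Enum):
    KERNEL_CODE = "kernel code"
    KERNEL_DATA = "kernel data"
    USER_CODE = "user code"
    USER_DATA = "user data"
    FILE = "file"
    FREE = "free"

def color_memory(pages, classify):
    """Map each page number of the captured copy to a semantic label."""
    return {page_no: classify(page) for page_no, page in pages.items()}
```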
In operation, the example cloning manager 152 of
Additionally, because the parent VM1 104 has already performed a relatively time-consuming fetch to store one or more pages related to kernel code in the physical memory 112, memory 168 associated with VMCL1 154 does not need to consume space for such kernel code and/or time to repeat the retrieval. Instead, VMCL1 154 retrieves kernel code 166 from the first sub-portion 162, thereby reducing the amount of memory 168 associated with VMCL1 154 and the latency associated with facilitating duplicate copies of data and/or code. In another example, if VMCL2 156 attempts to execute an instruction associated with user code, a corresponding page fault fetch is limited to only such sections of the color mapped memory that are also associated with user code, thereby reducing and/or even minimizing a number of page fault iterations before finding the correct page(s).
In operation, the example cloning manager 152 of
More specifically, information related to the parent VM memory architecture and/or operating system (OS) is retrieved and inspected to identify regions of physical memory that may be related to each other. For example, the cloning manager 152 of
In response to an increase in user demand, the example cloning manager 152 of
Additionally, because some cloned VMs require memory to store and/or process data associated with particular task(s), each cloned VM may also be allotted its own physical memory. While a portion of the physical memory 112 may be associated with a cloned VM, the corresponding portion of the physical memory 112 is not allocated to the cloned VM if it is associated with static memory. Instead, pages of the physical memory 112 that are deemed static and/or common to the parent VM are pointed to by the example cloning manager 152 so that physical memory storage space is not duplicated among the one or more cloned VMs. A benefit associated with consolidating static memory locations includes reducing a resource footprint for the cloned VMs, thereby allowing a greater number of VMs per unit of underlying hardware than would otherwise be possible via a traditional VM instantiation approach.
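One way to picture this pointing scheme is the following hedged sketch, in which a clone's page table references the parent's static frames instead of copying them; the names and the dictionary-based page-table representation are assumptions for illustration.

```python
# Hedged sketch of the page-sharing scheme described above: static parent
# pages are shared by reference, so each clone consumes a smaller footprint.

def build_clone_page_table(parent_pages, static_page_numbers):
    """Point clone entries at static parent frames; leave the rest unmapped."""
    table = {}
    for page_no, frame in parent_pages.items():
        if page_no in static_page_numbers:
            table[page_no] = frame   # share: reference the parent's frame
        # non-static pages stay unmapped; the clone faults them in later
        # and receives a private copy of its own
    return table
```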
In response to a parent VM completing instantiation, the example memory copier 302 obtains and/or otherwise captures a copy of at least a portion of the physical memory associated with the parent VM, such as a copy of the first sub-portion 162 used by parent VM1 104 of
The example architecture identifier 304 of
In still other examples, if the example architecture identifier 304 determines that the processor is of type ARM, then the example architecture identifier 304 invokes the example memory type identifier 306 to search for an XN status bit. An XN status bit, which is sometimes referred to as the “execute never” bit, identifies portion(s) of memory that store code versus portion(s) of memory that store data. In operation, portion(s) of memory and/or pages in which the XN status bit is true are deemed to be associated with data, while portion(s) of memory and/or pages in which the XN status bit is false are deemed to be associated with code. While the example architecture identifier 304 is described as identifying x86 and/or ARM-based processors, the methods, apparatus, systems and/or articles of manufacture disclosed herein are not limited thereto. Additionally, while the example memory type identifier 306 is described as identifying an NX and/or an XN status bit, such example bit types are included for purposes of discussion and not limitation. For instance, processors and their associated pages also include a user bit, which identifies whether the pages correspond to user related data or kernel related data. Such a user bit may additionally or alternatively be used with the NX, XN, or any other status bit.
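As a concrete, hedged illustration of such a status-bit scan: on x86-64 the NX flag occupies bit 63 of a page table entry, while the position of the ARM XN flag depends on the translation table format, so the ARM mask below is an assumption. The sketch classifies pages as code or data from the bit state, consistent with the convention above (bit true means never execute).

```python
# Hedged sketch of the NX/XN status-bit scan described above. The raw
# integer page-table-entry layout is an assumption for illustration only.

X86_NX_BIT = 1 << 63   # x86-64: NX is bit 63 of the page table entry
ARM_XN_BIT = 1 << 54   # assumption: one common long-descriptor XN position

def is_executable(pte: int, arch: str) -> bool:
    """True when the NX/XN status bit indicates the page holds code."""
    mask = X86_NX_BIT if arch == "x86" else ARM_XN_BIT
    return (pte & mask) == 0   # bit set => never execute => data, not code

def scan_page_table(ptes, arch):
    """Split page numbers into executable (code) and non-executable (data)."""
    code, data = [], []
    for page_no, pte in enumerate(ptes):
        (code if is_executable(pte, arch) else data).append(page_no)
    return code, data
```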
If the example memory type identifier 306 determines that memory is non-executable (e.g., the NX or XN status bit is true), then the memory type identifier 306 determines an ownership status (e.g., a user indicator) of the non-executable pages, such as whether the pages are associated with user space or kernel space. The example memory labeler 308 of
The example OS identifier 312 of
The example load balancer 206 of
The one or more cloned VMs generated by the example demand manager 310 of
Prior to performing a fetch and/or one or more prefetches in response to the fault, the example instruction type identifier 320 of
In the event that the example instruction type identifier 320 of
In the event that the example instruction type identifier 320 of
While an example manner of implementing the example cloning manager 152 has been illustrated in
Flowcharts representative of example machine readable instructions, which may be executed to implement the system 300 of
As mentioned above, the example processes of
The program 400 of
The example program 400 continues to wait for the parent VM to complete instantiation and/or for a threshold length of time thereafter sufficient to ensure the memory has been loaded by the parent VM (block 404) before the example memory copier 302 captures a copy of the physical memory 112 associated with the parent VM (block 406). As described in further detail below, the example architecture identifier 304 inspects the captured VM memory for architecture related information (block 408), which may reveal portions of memory/pages related to kernel code, kernel data, user code, and/or user data. The example OS identifier 312 also inspects the captured VM memory for information unique to the OS (block 410), which may reveal portions of memory/pages related to file names, corresponding file addresses and/or portions of memory that are free. Results from inspecting the memory for architecture related information (block 408) and information unique to the OS (block 410) are noted and/or otherwise tagged to the captured copy of physical memory 112 associated with the parent VM so that one or more fetch operation(s) may occur with respect to semantically-related regions of the memory relevant to an instruction executed by a clone VM, as described in further detail below.
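The capture-inspect-tag sequence of blocks 406-410 might be summarized as in the following sketch; the helper objects mirror the memory copier, architecture identifier, OS identifier, and memory labeler described above, but their interfaces are assumptions.

```python
# A sketch of the capture-inspect-tag sequence (blocks 406-410), assuming
# hypothetical helper objects with the interfaces shown.

def color_parent_memory(vmm, parent_vm, arch_identifier, os_identifier, labeler):
    snapshot = vmm.capture_memory(parent_vm)       # block 406: capture copy
    arch_info = arch_identifier.inspect(snapshot)  # block 408: kernel/user code and data
    os_info = os_identifier.inspect(snapshot)      # block 410: file names, addresses, free space
    labeler.tag(snapshot, arch_info, os_info)      # tag the captured copy for later fetches
    return snapshot
```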
In response to the example demand manager 310 detecting and/or otherwise receiving an indication that user demands are increasing (block 412) (e.g., via information received and/or otherwise retrieved from the example load balancer 206), one or more clone VMs are generated (block 414). On the other hand, in the event that the example demand manager 310 does not receive a request for resources of the underlying hardware from customers and/or users (block 412), then the demand manager 310 determines whether a demand for resources has decreased (block 416). If so, one or more previously cloned VMs may be relinquished (block 418) so that underlying resource(s) are available to the parent VM and/or one or more cloned VMs.
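A hedged sketch of this demand loop (blocks 412-418) follows; the load balancer signal and the demand manager interface are assumptions made for illustration.

```python
# Hypothetical sketch of the demand loop described above: clone when demand
# rises, relinquish clones when demand falls.

def manage_demand(load_balancer, demand_manager, parent_snapshot):
    signal = load_balancer.poll()
    if signal.increased:                                # block 412
        demand_manager.spawn_clone(parent_snapshot)     # block 414
    elif signal.decreased:                              # block 416
        demand_manager.relinquish_clone()               # block 418
```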
During operation, the memory of the parent VM, such as the example first sub-portion 162 used by VM1 104, may change. For example, portions of memory that were earlier identified as user code, user data, kernel code and/or kernel data may change. In other examples, portions of memory that were earlier identified as free space become consumed by data during execution of the parent VM. On a periodic, aperiodic, scheduled and/or manual basis, the example cloning manager 152 of the illustrated example invokes a parent VM memory reassessment (block 420), which returns the example program to block 406. As such, future attempts to fetch and/or prefetch memory/pages from the parent VM memory may occur more accurately, thereby resulting in a fewer number of page faults.
Turning to
The memory type identifier 306 of the illustrated example scans the page table information for executable versus non-executable code by identifying a state of an NX bit or an XN bit (block 504). In the event the architecture is of type x86, then the example memory type identifier 306 scans and/or otherwise searches for the state of an NX bit. On the other hand, in the event the architecture is of type ARM, then the example memory type identifier 306 of
The example memory type identifier 306 of the illustrated example determines whether the NX or XN bit is true (block 508). If the NX or XN bit is true, the example memory type identifier 306 of
Turning to
The example frame table parser 314 of the illustrated example obtains a frame table from the captured memory (block 604) and reviews the frame table to identify one or more files and associated memory location(s) (block 606). Additionally, the example frame table parser 314 of the illustrated example reviews the frame table to identify address space(s) as used (e.g., storing user data, kernel data, user code, kernel code, file(s), etc.) or free (block 608).
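The frame-table review of blocks 604-608 could be sketched as follows, assuming a hypothetical row layout of (file_name, location, address_space, in_use); the actual frame table layout is operating-system specific.

```python
# Minimal sketch of the frame-table review (blocks 604-608). The row
# attributes are assumptions; real layouts vary by operating system.

def parse_frame_table(frame_table_rows):
    files, free_spaces = {}, []
    for row in frame_table_rows:
        if row.in_use:
            # identify the file, its memory location, and its address space
            files[row.file_name] = (row.location, row.address_space)
        else:
            free_spaces.append(row.address_space)  # mark the space as free
    return files, free_spaces
```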
Turning to
The program 800 of
Turning to
As described above, the example memory labeler 308 of the illustrated example previously labeled and/or otherwise tagged the physical memory associated with the parent VM (e.g., VM1 162) with an indication of kernel data, user data, kernel code, user code, free space and/or one or more particular file(s). In view of the labeled/tagged physical memory, the example prefetch manager 322 performs one or more fetch operation(s) on the portion(s) of the physical memory that are associated with the clone VM instruction type, as identified by the example instruction type identifier 320. In the event that the example instruction type identifier 320 determines that the instruction is indicative of kernel space (block 908), then the example prefetch manager 322 constrains the prefetch to one or more pages/memory location(s) identified as associated with kernel data (block 912).
In the event that the example instruction type identifier 320 determines that the instruction is not associated with a data fetch (block 906), then the instruction type identifier 320 of the illustrated example determines whether the instruction is associated with code execution (block 914). If so, then the instruction type identifier 320 determines whether the instruction is associated with user space or kernel space (block 916). If the instruction is associated with user space, then the example prefetch manager 322 constrains the prefetch to one or more pages/memory location(s) identified as associated with user code (block 918), otherwise the prefetch manager 322 constrains the prefetch to one or more pages/memory location(s) identified as associated with kernel code (block 920). However, if the instruction type identifier 320 determines that the instruction is associated with neither a data fetch (block 906) nor code execution (block 914), then the instruction is deemed to be associated with a request to locate memory free space. As such, the example prefetch manager 322 constrains the prefetch to one or more pages/memory location(s) identified as free memory (block 922), which may later be used by the clone VM.
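The dispatch of blocks 906-922 can be summarized in a short sketch; the instruction classification helpers and the color labels are assumptions layered on the description above.

```python
# Hedged sketch of the prefetch-constraint dispatch (blocks 906-922): the
# page fault prefetch is limited to parent pages whose color matches the
# faulting instruction's type. Classification helpers are assumptions.

def constrain_prefetch(instruction, colored_pages, prefetch):
    if instruction.is_data_fetch():                              # block 906
        color = ("kernel data" if instruction.in_kernel_space()  # block 908
                 else "user data")                               # blocks 910/912
    elif instruction.is_code_execution():                        # block 914
        color = ("user code" if instruction.in_user_space()      # block 916
                 else "kernel code")                             # blocks 918/920
    else:
        color = "free"                                           # block 922: locate free memory
    prefetch(page for page, c in colored_pages.items() if c == color)
```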
Turning to
The processor platform P100 of the instant example includes a processor P105. For example, the processor P105 can be implemented by one or more Intel® microprocessors. Of course, other processors from other families are also appropriate.
The processor P105 is in communication with a main memory including a volatile memory P115 and a non-volatile memory P120 via a bus P125. The volatile memory P115 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory P120 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory P115, P120 is typically controlled by a memory controller.
The processor platform P100 also includes an interface circuit P130. The interface circuit P130 may be implemented by any type of past, present or future interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
One or more input devices P135 are connected to the interface circuit P130. The input device(s) P135 permit a user to enter data and commands into the processor P105. The input device(s) can be implemented by, for example, a keyboard, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices P140 are also connected to the interface circuit P130. The output devices P140 can be implemented, for example, by display devices (e.g., a liquid crystal display (LCD) and/or a cathode ray tube (CRT) display). The interface circuit P130, thus, typically includes a graphics driver card.
The interface circuit P130 also includes a communication device, such as a modem or network interface card to facilitate exchange of data with external computers via a network (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform P100 also includes one or more mass storage devices P150 for storing software and data. Examples of such mass storage devices P150 include floppy disk drives, hard drive disks, compact disk drives and digital versatile disk (DVD) drives.
The coded instructions of
From the foregoing, it will be appreciated that disclosed example methods, apparatus, systems and/or articles of manufacture allow new cloned VMs to initialize substantially faster than traditionally invoked VMs, which in turn allows customers and/or users of cloud-based services to receive processing services in a more responsive manner. Additionally, because example methods, apparatus, systems and/or articles of manufacture disclosed herein prevent one or more fetching and/or prefetching operations from occurring on pages of memory unrelated to a requesting processor instruction, fewer iterative fetch operations and/or page faults occur.
Although certain example methods, apparatus, systems and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, systems and articles of manufacture fairly falling within the scope of the claims of this patent.
Joshi, Kaustubh, Hiltunen, Matti, Lagar-Cavilla, Horacio Andres, Irzak, Olga, Scannell, Adin Matthew, de Lara, Eyal, Bryant, Roy, Tumanov, Alexey