A host computer has a plurality of containers, including a first container, executing therein, and the host also includes a physical network interface controller (NIC). A packet handling interrupt is detected upon receipt of a first data packet associated with the first container. If the first container is latency sensitive, then the packet handling interrupt is processed. If the first container is not latency sensitive, then the first data packet is queued and processing of the packet handling interrupt is delayed.
1. In a host computer having a plurality of containers including a first container executing therein, the host including a physical network interface controller (physical NIC), a method of transmitting and receiving data packets to and from the first container, the method performed by a virtual network interface controller (virtual NIC) operating on a hypervisor of the host computer, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet that is associated with the first container;
determining, by inspecting a memory-based data structure that contains latency sensitivity data for each of the plurality of containers, whether the first container is latency sensitive;
if the first container is determined to be latency sensitive, then processing the packet handling interrupt by forwarding the first data packet to a virtual switch to which the virtual NIC is connected when the first data packet is received from the first container, or forwarding the first data packet to the first container when the first data packet is received from the virtual switch;
if the first container is determined to be not latency sensitive, then:
queuing the first data packet at the virtual NIC; and
delaying processing of the packet handling interrupt by the virtual NIC.
16. A computing system, comprising:
a host computer, the host computer having a plurality of containers including a first container executing therein; and
a physical network interface controller (physical NIC), wherein the system is configured to perform a method of transmitting and receiving data packets to and from the first container by a virtual network interface controller (virtual NIC) operating on a hypervisor of the host computer, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet that is associated with the first container;
determining, by inspecting a memory-based data structure that contains latency sensitivity data for each of the plurality of containers, whether the first container is latency sensitive;
if the first container is determined to be latency sensitive, then processing the packet handling interrupt by forwarding the first data packet to a virtual switch to which the virtual NIC is connected when the first data packet is received from the first container, or forwarding the first data packet to the first container when the first data packet is received from the virtual switch;
if the first container is determined to be not latency sensitive, then:
queuing the first data packet at the virtual NIC; and
delaying processing of the packet handling interrupt by the virtual NIC.
11. A non-transitory computer-readable medium comprising instructions executable by a host computer, the host computer having a plurality of containers including a first container executing therein, and the host including a physical network interface controller (physical NIC), wherein the instructions, when executed, cause the host computer to perform a method of transmitting and receiving data packets to and from the first container by a virtual network interface controller (virtual NIC) operating on a hypervisor of the host computer, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet that is associated with the first container;
determining, by inspecting a memory-based data structure that contains latency sensitivity data for each of the plurality of containers, whether the first container is latency sensitive;
if the first container is determined to be latency sensitive, then processing the packet handling interrupt by forwarding the first data packet to a virtual switch to which the virtual NIC is connected when the first data packet is received from the first container, or forwarding the first data packet to the first container when the first data packet is received from the virtual switch;
if the first container is determined to be not latency sensitive, then:
queuing the first data packet at the virtual NIC; and
delaying processing of the packet handling interrupt by the virtual NIC.
3. The method of
reading from the memory-based data structure a latency sensitivity indicator for the first container; and
determining whether the latency sensitivity indicator is a predetermined value.
4. The method of
5. The method of
determining whether a packet rate for the virtual NIC is less than a predetermined threshold rate;
if the packet rate for the virtual NIC is not less than the predetermined threshold rate, then queuing the first data packet and delaying processing of the packet handling interrupt; and
if the packet rate for the virtual NIC is less than the predetermined threshold rate, then processing the packet handling interrupt.
6. The method of
determining whether a utilization value for one or more virtual processors of the first container is greater than a predetermined utilization value;
if the utilization value for the one or more virtual processors of the first container is greater than the predetermined utilization value, then queuing the first data packet and delaying processing of the packet handling interrupt; and
if the utilization value for the one or more virtual processors of the first container is not greater than the predetermined utilization value, then processing the packet handling interrupt.
7. The method of
8. The method of
wherein the processing of the packet handling interrupt is delayed until the data structure stores greater than a predetermined number of data packets.
9. The method of
10. The method of
12. The computer-readable medium of
13. The computer-readable medium of
reading from the memory-based data structure a latency sensitivity indicator for the first container; and
determining whether the latency sensitivity indicator is a predetermined value.
14. The computer-readable medium of
15. The computer-readable medium of
determining whether a utilization value for one or more virtual processors of the first container is greater than a predetermined utilization value;
if the utilization value for the one or more virtual processors of the first container is greater than the predetermined utilization value, then queuing the first data packet and delaying processing of the packet handling interrupt; and
if the utilization value for the one or more virtual processors of the first container is not greater than the predetermined utilization value, then processing the packet handling interrupt.
This application claims priority to U.S. Provisional Patent Application No. 61/870,143, entitled “TECHNIQUES TO SUPPORT HIGHLY LATENCY SENSITIVE VMs,” filed Aug. 26, 2013, the contents of which is incorporated herein by reference. This application is related to: U.S. patent application Ser. No. 14/468,121, entitled “CPU Scheduler Configured to Support Latency Sensitive Virtual Machines”, filed Aug. 25, 2014; U.S. patent application Ser. No. 14/468,122, entitled “Virtual Machine Monitor Configured to Support Latency Sensitive Virtual Machines”, filed Aug. 25, 2014; and U.S. patent application Ser. No. 14/468,138, entitled “Pass-through Network Interface Controller Configured to Support Latency Sensitive Virtual Machines”, filed Aug. 25, 2014, the entire contents of which are incorporated herein by reference.
Applications characterized as “latency sensitive” are, typically, highly susceptible to execution delays and jitter (i.e., unpredictability) introduced by the computing environment in which these applications run. Examples of latency sensitive applications include financial trading systems, which usually require split-second response time when performing functions such as pricing securities or executing and settling trades.
Execution delay and jitter are often present in networked virtualized computing environments. Such computing environments frequently include a number of virtual machines (VMs) that execute one or more applications that rely on network communications. These virtualized applications communicate over the network by transmitting data packets to other nodes on the network using a virtual network interface controller (or VNIC) of the VM, which is a software emulation of a physical network interface controller (or PNIC). The use of a VNIC for network communication results in latency and jitter for a number of reasons.
First, VNIC-based communication requires transmitted and received packets to be processed by layers of networking software not required for packets that are directly transmitted and received over a PNIC. For example, data packets that are transmitted by a virtualized application are often transmitted first to a VNIC. Then, from the VNIC, the packets are passed to software modules executing in a hypervisor. Once the packets are processed by the hypervisor, they are then transmitted from the hypervisor to the PNIC of the host computer for subsequent delivery over the network. A similar, although reverse, flow is employed for data packets that are to be received by the virtualized application. Each step in the flow entails processing of the data packets and, therefore, introduces latency.
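As a toy illustration of this point (the per-hop costs below are invented and do not come from any measurement in this disclosure), the cost of the emulated path is simply the sum of its hops:

    # Each software hop on the VNIC path adds processing time that a packet
    # sent directly through a PNIC would not incur. Costs are invented.
    hop_cost_us = {
        "VNIC emulation": 5,
        "hypervisor networking stack": 10,
        "PNIC driver": 3,
        "PNIC hardware": 2,
    }
    print(sum(hop_cost_us.values()), "microseconds added per packet")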
Further, VNICs are often configured to queue (or coalesce interrupts corresponding to) data packets before passing the packets to the hypervisor. While packet queuing minimizes the number of kernel calls to the hypervisor to transmit the packets, latency sensitive virtualized applications that require almost instantaneous packet transmission (such as, for example, telecommunications applications) suffer from having packets queued at a VNIC.
VNICs are also configured to consolidate inbound data packets using a scheme known as large receive offload (or LRO). Using LRO, smaller Transmission Control Protocol (TCP) packets that are received at a VNIC are consolidated into larger TCP packets before being sent from the VNIC to the virtualized application. This results in fewer TCP acknowledgments being sent from the virtualized application to the transmitter of the TCP packets. Thus, TCP packets can experience transmission delay.
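The consolidation can be sketched as follows (hypothetical names; a simplification that ignores the flow-identity and sequence-number checks real LRO performs):

    # Simplified LRO-style coalescing: in-order payloads are merged into
    # larger segments before delivery, so the receiver generates fewer ACKs.
    def coalesce_segments(segments, max_size=65535):
        merged = []
        for seg in segments:
            if merged and len(merged[-1]) + len(seg) <= max_size:
                merged[-1] += seg          # fold into the pending large segment
            else:
                merged.append(bytearray(seg))
        return merged

    # Four 1,460-byte segments are delivered upward as a single segment,
    # prompting one acknowledgment instead of up to four.
    incoming = [b"x" * 1460] * 4
    print(len(coalesce_segments(incoming)))   # -> 1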
Finally, a PNIC for a host computer may be configured to queue data packets that it receives. As is the case with the queuing of data packets at a VNIC, queuing data packets at a PNIC often introduces unacceptable delays for latency sensitive virtualized applications.
A method of transmitting and receiving data packets to and from a container executing in a host computer is provided, the host computer having a plurality of containers executing therein, and where the host computer connects to a network through a physical NIC. The method comprises the steps of detecting a packet handling interrupt upon receiving a first data packet that is associated with the container, and determining whether the container is latency sensitive. The method further comprises the step of processing the packet handling interrupt if the container is latency sensitive. The method further comprises, if the container is not latency sensitive, then queuing the first data packet and delaying processing of the packet handling interrupt.
Further embodiments provide a non-transitory computer-readable medium that includes instructions that, when executed, enable a host computer to implement one or more aspects of the above method, as well as a computing system that includes a host computer, a physical NIC, and a virtual NIC that is configured to implement one or more aspects of the above method.
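A minimal sketch of this dispatch decision, under the assumption that a dictionary stands in for the latency sensitivity data and that forwarding is reduced to a list append, might look like this (all names are invented for illustration):

    # Minimal model of the method: process the interrupt at once for latency
    # sensitive containers, otherwise queue the packet and delay the interrupt.
    class SketchVNIC:
        def __init__(self, vm_id):
            self.vm_id = vm_id
            self.queue = []        # packets whose interrupt is being delayed
            self.delivered = []    # packets forwarded without delay

        def process_interrupt(self, packet):
            # Stands in for forwarding to the virtual switch or the container.
            self.delivered.append(packet)

    def on_packet(vnic, packet, latency_data):
        if latency_data.get(vnic.vm_id) == "Y":   # latency sensitive
            vnic.process_interrupt(packet)
        else:                                     # queue and delay
            vnic.queue.append(packet)

    nic = SketchVNIC("vm-2")
    on_packet(nic, b"payload", {"vm-1": "N", "vm-2": "Y"})
    assert nic.delivered == [b"payload"] and nic.queue == []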
Host computer 100 is, in embodiments, a general-purpose computer that supports the execution of an operating system and one or more application programs therein. In order to execute the various components that comprise a virtualized computing platform, host computer 100 is typically a server class computer. However, host computer 100 may also be a desktop or laptop computer.
As shown in
Virtual machines are software implementations of physical computing devices and execute programs much like a physical computer. In embodiments, a virtual machine implements, in software, a computing platform that supports the execution of software applications under the control of a guest operating system (OS). As such, virtual machines typically emulate a particular computing architecture. In
Hypervisor 130, as depicted in
As depicted in the embodiment of
Each VMM 131 in
In one or more embodiments, kernel 136 serves as a liaison between VMs 110 and the physical hardware of computer host 100. Kernel 136 is a central operating system component, and executes directly on host 100. In embodiments, kernel 136 allocates memory, schedules access to physical CPUs, and manages access to physical hardware devices connected to computer host 100.
As shown in
As shown in
Hardware platform 140 also includes a random access memory (RAM) 141, which, among other things, stores programs currently in execution, as well as data required for such programs. Moreover, RAM 141 stores the various data structures needed to support network data communication. For instance, the various data components that comprise virtual switch 135 (i.e., virtual ports, routing tables, and the like) are stored in RAM 141.
Further, as shown in
PNIC 142 is typically configured with one or more data queues. In some cases, the PNIC is configured with a single transmit queue (for transmitting outbound packets to the network) and a single receive queue (for receiving inbound packets from the network). Alternatively, PNIC 142 may be a multi-queue PNIC. A multi-queue PNIC has more than one transmit queue and more than one receive queue, where each transmit or receive queue can be allocated to a specific use. For example, a multi-queue PNIC 142 may be configured with two sets of transmit/receive queues. In this example, a first transmit and a first receive queue may be connected to (i.e., driven by) a PNIC driver 138 connected to a first virtual switch, while a second transmit and a second receive queue are connected to a PNIC driver 138 connected to a second virtual switch. Thus, data packets transmitted by an external source for delivery to a virtual machine connected to the first virtual switch are placed (by PNIC 142) in the first receive queue. By contrast, data packets received by PNIC 142 that are destined for a virtual machine connected to the second virtual switch are placed by PNIC 142 in the second receive queue.
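The two-queue-pair configuration described above can be sketched as follows (names assumed; an actual multi-queue PNIC performs this steering in hardware or in its driver):

    # Hypothetical steering model: each transmit/receive queue pair is bound
    # to one virtual switch, and inbound packets are placed in the receive
    # queue of the pair serving the destination VM's switch.
    queue_pairs = {
        "vswitch-1": {"tx": [], "rx": []},
        "vswitch-2": {"tx": [], "rx": []},
    }
    vm_to_switch = {"vm-1": "vswitch-1", "vm-2": "vswitch-2"}

    def steer_inbound(packet, dest_vm):
        queue_pairs[vm_to_switch[dest_vm]]["rx"].append(packet)

    steer_inbound(b"pkt", "vm-2")
    assert queue_pairs["vswitch-2"]["rx"] == [b"pkt"]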
In order to support the networking changes required for executing latency sensitive virtual machines, the embodiment depicted in
In addition, VM management server 150 provides for the configuration of virtual machines as highly latency sensitive virtual machines. According to one or more embodiments, VM management server 150 maintains a latency sensitivity table 155, which defines latency sensitivity characteristics of virtual machines. Latency sensitivity table 155 is described in further detail below.
As shown in
VM management agent 134 receives instructions from VM management server 150 and carries out tasks on behalf of VM management server 150. Among the tasks performed by VM management agent 134 are the configuration and instantiation of virtual machines. One aspect of the configuration of a virtual machine is whether that virtual machine is highly latency sensitive. Thus, VM management agent 134 receives a copy of latency sensitivity table 155 and saves the underlying data within RAM 141 as latency sensitivity data 143. As shown in
For each VM ID 210, latency sensitivity table 155 stores a latency sensitivity indicator. This indicator may take on two distinct values (such as Y or N), which indicate whether the corresponding virtual machine is highly latency sensitive. In other embodiments, the latency sensitivity indicator may take on more than two values (e.g., High, Medium, Low, or Normal) to provide for specifying different degrees of latency sensitivity for the corresponding virtual machine. In
According to embodiments, a VM that is defined with a latency sensitivity indicator of Y (or some other positive indicator) is treated by the networking software as highly latency sensitive. That is, the networking software in the VNIC and kernel is configured to determine which virtual machines are highly latency sensitive (based on the aforementioned criteria), and to transmit and receive data packets for those virtual machines so as to minimize any transmission delay for the packets. Thus, the data packets transmitted and received by VM 110-2 (a highly latency sensitive virtual machine) are subjected to a minimal amount of delay (i.e., latency). By contrast, the data packets transmitted and received by VM 110-1 (which is not latency sensitive) are not handled so as to minimize delivery delay. Rather, the data packets of VM 110-1 are handled so as to improve the overall efficiency of execution of all virtual machines on computer host 100, which may nonetheless result in delays in packet delivery for that VM.
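Sketched as a simple lookup (identifiers assumed for illustration; table 155 itself is maintained by VM management server 150 and pushed down as latency sensitivity data 143), the determination reduces to:

    # Hypothetical stand-in for latency sensitivity data 143: VM IDs mapped
    # to the two-valued indicator described above.
    latency_sensitivity_data = {
        "VM-110-1": "N",  # normal VM: its packets may be queued for efficiency
        "VM-110-2": "Y",  # highly latency sensitive: deliver with minimal delay
    }

    def is_highly_latency_sensitive(vm_id):
        # Unknown VMs default to not latency sensitive.
        return latency_sensitivity_data.get(vm_id, "N") == "Y"

    assert is_highly_latency_sensitive("VM-110-2")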
Conceptually, the packets may be viewed as being queued within the VNIC itself in either a transmit queue or a receive queue. The transmit queue for the VNIC queues packets that are transmitted by a process executing in the guest virtual machine that corresponds to the VNIC, and that are destined either for another virtual machine executing on the same host or for a network destination that is external to the host. The receive queue, on the other hand, queues packets that are transmitted by a process executing external to the virtual machine that corresponds to the VNIC, and that are destined for that virtual machine. It should be noted that packet queuing may occur, in embodiments, in the guest operating system (for packets to be transmitted from the virtual machine) and in the kernel (for packets to be received by the virtual machine).
Packet queuing reduces the interrupt rate at which the VNIC operates. That is, with packet queuing, the VNIC transmits fewer interrupts to the kernel for packets that are to be transmitted from the virtual machine. Such an interrupt comprises, in one or more embodiments, a kernel call that informs the kernel that the VNIC has a certain number of data packets that are ready to be transmitted. Similarly, with packet queuing, the VNIC transmits fewer interrupts to the guest virtual machine for packets that are to be received by the virtual machine. Such an interrupt comprises, in one or more embodiments, a software interrupt that the VNIC posts to an interrupt handler that executes in the guest virtual machine, where the software interrupt informs the interrupt handler that one or more packets have been received at the VNIC. Because the VNIC generates fewer interrupts when it queues data packets, the kernel and the guest operating system perform fewer context switches. However, packet queuing can add jitter and, in some cases, may have a noticeable impact on average latency, especially for input/output (I/O) bound applications.
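The coalescing behavior just described can be sketched as follows (the packet-count and wait-time thresholds are invented for illustration, and the "interrupt" is modeled as a counter rather than a kernel call or guest software interrupt):

    import time

    class CoalescingQueue:
        # Sketch of a VNIC queue that raises one interrupt per batch rather
        # than one per packet. Thresholds are illustrative assumptions.
        def __init__(self, max_packets=8, max_wait_s=0.001):
            self.packets, self.first_enqueue = [], None
            self.max_packets, self.max_wait_s = max_packets, max_wait_s
            self.interrupts_raised = 0

        def enqueue(self, packet):
            if not self.packets:
                self.first_enqueue = time.monotonic()
            self.packets.append(packet)
            if (len(self.packets) >= self.max_packets or
                    time.monotonic() - self.first_enqueue >= self.max_wait_s):
                self.raise_interrupt()

        def raise_interrupt(self):
            self.interrupts_raised += 1   # one context switch covers the batch
            self.packets.clear()

    q = CoalescingQueue(max_packets=4)
    for i in range(8):
        q.enqueue(i)
    assert q.interrupts_raised == 2   # 8 packets, but only 2 interrupts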
In
Similarly, queue 302 stores packets that are transmitted to VNIC 125-1 via kernel 136. In embodiments, a transmitter, such as another virtual machine or an application external to computer host 100, transmits data packets for delivery to VM 110-1. The packets are routed to computer host 100, after which they are forwarded, by software executing in kernel 136, to VNIC 125-1. VNIC 125-1 then queues the packets in queue 302 and generates a software interrupt that is received by an interrupt handler executing under control of the guest operating system in VM 110-1. VNIC 125-1 generates the interrupt when, for example, the number of packets in queue 302 exceeds a threshold value or when the amount of time that packets have been queued in queue 302 exceeds a threshold amount of time. It should be noted that, in the embodiment illustrated in
In contrast with VM 110-1, VM 110-2 is a highly latency sensitive virtual machine (based on the corresponding entry for VM 110-2 in latency sensitivity table 155, depicted in
As shown in
As shown in
Since TCP is a reliable data delivery service, a TCP sender relies upon acknowledgements to determine whether a given TCP packet should be retransmitted. Thus, as shown in
In contrast with VM 110-1, VM 110-2 is a highly latency sensitive virtual machine (based upon the entry corresponding to VM 110-2 in latency sensitivity table 155, depicted in
Multi-queue PNICs are conceptually similar to single queue PNICs. Multi-queue PNICs have more than one transmit queue and more than one receive queue. This is advantageous because it increases the throughput of the PNIC, especially on multiprocessor computer hosts. Further, each transmit or receive queue may be dedicated to a single processor, thus dividing packet processing among processors and freeing certain other processors from the task of processing packets. Further, each transmit or receive queue in a multi-queue PNIC may be assigned to one or more VNICs. That is, multi-queue PNICs are often equipped with a routing module to direct packets destined for certain virtual machines into receive queues that correspond to the VNICs of those virtual machines. In similar fashion, the kernel directs network packets transmitted by certain virtual machines to transmit queues of the PNIC that correspond to those virtual machines.
Further, the interrupt rate for a multi-queue PNIC is configurable on a per-queue level. That is, each transmit or receive queue may be configured with its own interrupt rate. This scenario is illustrated in
By contrast, kernel 136 determines that queue 501-2, which is allocated to VM 110-2, serves a highly latency sensitive virtual machine. Therefore, in the embodiment depicted, kernel 136 increases the interrupt rate for queue 501-2. This has the effect of suppressing the queuing of data packets in the queue. Thus, when a data packet is placed in the transmit queue of queue 501-2, an interrupt is immediately generated, which causes the packet to be transmitted without any further delay (i.e., without waiting for other packets to be placed in the transmit queue of queue 501-2). Further, if a packet arrives at PNIC 142 and is destined for VM 110-2, the packet is routed to the receive queue of queue 501-2, whereupon an interrupt is immediately generated, which causes kernel 136 to transmit the received packet to VM 110-2 without waiting for additional packets to be placed in the receive queue of queue 501-2. In this way, network latency for VM 110-2 is reduced as compared with the network latency experienced by VM 110-1.
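A sketch of this per-queue policy (names assumed; in the embodiment described, the configuration would be applied by kernel 136 through the PNIC driver):

    # Hypothetical per-queue interrupt-rate policy: queue pairs serving highly
    # latency sensitive VMs interrupt on every packet; the rest coalesce.
    def configure_queue_interrupts(queue_for_vm, latency_sensitivity_data):
        for vm_id, queue in queue_for_vm.items():
            queue["interrupt_per_packet"] = (
                latency_sensitivity_data.get(vm_id) == "Y")

    queues = {"VM-110-1": {}, "VM-110-2": {}}
    configure_queue_interrupts(queues, {"VM-110-1": "N", "VM-110-2": "Y"})
    assert queues["VM-110-2"]["interrupt_per_packet"] is True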
After receiving the data packet at step 610, method 600 proceeds to step 620. At step 620, software that executes as part of the VNIC determines whether the virtual machine to which the VNIC corresponds (which is either the source or the destination of the packet) is highly latency sensitive. In one or more embodiments, the VNIC determines the latency sensitivity of the virtual machine by inspecting a memory-based data structure, such as latency sensitivity data 143, which itself is based on latency sensitivity table 155. According to these embodiments, if the entry for the virtual machine in latency sensitivity data 143 stores a latency sensitivity indicator that is set to Y (or some other value that indicates that the virtual machine is latency sensitive), then the VNIC determines that the corresponding virtual machine is highly latency sensitive. If, however, the latency sensitivity indicator is not set to Y, then the VNIC determines that the virtual machine is not highly latency sensitive.
If, at step 620, it is determined that the virtual machine is not highly latency sensitive, then method 600 proceeds to step 650, where the received packet is queued with other packets received by the VNIC, as described below. If, however, it is determined that the virtual machine is highly latency sensitive, then method 600 proceeds to step 630.
At step 630, the VNIC determines the rate at which packets are currently being transmitted and/or received by the VNIC. According to embodiments, when the packet rate is high, queuing of data packets is allowed to take place, even for highly latency sensitive virtual machines. The reason is that virtual machines with high packet rates do not generally suffer when packets are delayed by queuing. For these virtual machines, the system-wide benefits of queuing (i.e., fewer context switches due to a decreased interrupt rate) outweigh the extra packet delay that queuing causes. If the VNIC packet rate is determined to be high (i.e., over a predetermined time period, a large number of packets are transmitted through the VNIC), then method 600 proceeds to step 650, where the received packet is queued with other packets received by the VNIC. If the VNIC packet rate is determined to be low (i.e., over a predetermined time period, a small number of packets are transmitted through the VNIC), then method 600 proceeds to step 640.
At step 640, the VNIC determines the CPU utilization of the corresponding virtual machine. According to embodiments, if the CPU utilization of a virtual machine (i.e., the utilization of the virtual CPUs of the virtual machine) is low, then such a virtual machine is often less likely to be compute-bound. That is, the virtual machine is less likely to be executing intensive computations (e.g., calculating prices of financial instruments in a high-speed trading system). Rather, the virtual machine is more likely to be I/O-bound. In other words, the virtual machine is most likely waiting for I/O operations to complete before engaging in computation. In such a scenario, it is important for the virtual machine to experience as little packet delay as possible. On the other hand, in the case of a compute-bound virtual machine, packet delay is relatively unimportant in comparison to any delays in CPU processing, even for virtual machines that are determined to be highly latency sensitive.
Therefore, at step 640, if the VNIC determines that the corresponding virtual machine has low CPU utilization (i.e., that the virtual machine is not compute-bound), then method 600 proceeds to step 660. Otherwise, if the VNIC determines that the virtual machine does not have low CPU utilization (i.e., that the virtual machine is in fact compute-bound), then method 600 proceeds to step 650, where the received data packet is queued with other received data packets.
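Taken together, steps 620 through 660 amount to the decision chain sketched below (the threshold constants are illustrative assumptions, not values taken from this disclosure):

    # Sketch of the step 620-650 branching; True means "process the interrupt
    # immediately" (step 660), False means "queue and delay" (step 650).
    PACKET_RATE_THRESHOLD = 10_000    # packets/sec (assumed)
    CPU_UTILIZATION_THRESHOLD = 0.80  # fraction of virtual CPU time (assumed)

    def should_process_immediately(is_latency_sensitive, packet_rate, cpu_util):
        if not is_latency_sensitive:               # step 620: not sensitive
            return False
        if packet_rate >= PACKET_RATE_THRESHOLD:   # step 630: rate is high
            return False
        if cpu_util > CPU_UTILIZATION_THRESHOLD:   # step 640: compute-bound
            return False
        return True                                # step 660: send right away

    # A latency sensitive, I/O-bound VM with a modest packet rate bypasses
    # the queue entirely.
    assert should_process_immediately(True, packet_rate=2_000, cpu_util=0.10)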
At step 660, the VNIC immediately transmits the received data packet, thus minimizing packet delay (and eliminating any delay caused by packet queuing). This scenario is illustrated in
As shown in
After the received data packet is queued with other data packets for later transmission, method 600 then proceeds to step 670. At step 670, the VNIC determines whether a queuing threshold has been exceeded. For example, the VNIC may determine that either or both of its transmit and receive queues are full, or that the number of packets stored in the queues exceeds a predetermined value. In other embodiments, the VNIC determines that the packets have been stored in the queues for longer than a predetermined amount of time.
If, at step 670, the VNIC determines that the queuing threshold has not been exceeded, then method 600 proceeds directly to step 690. However, if the VNIC determines that the queuing threshold has been exceeded, then method 600 proceeds to step 680. At step 680, the queued packets are transmitted by the VNIC. For example, if the queued data packets are to be received by an application executing in the virtual machine, then the VNIC posts a software interrupt to the virtual machine, indicating that the packets are ready to be received by the virtual machine. On the other hand, if the packets are to be transmitted from the virtual machine to another virtual machine (via a virtual switch) or to a target application executing outside of the host computer (via a PNIC of the host computer), then the VNIC posts a software interrupt to the hypervisor (or, in some embodiments, the VNIC makes a kernel call to the hypervisor), indicating that the data packets are ready to be transmitted.
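Steps 670 and 680 can be sketched as a threshold check followed by a flush (hypothetical names and limits; the flush stands in for the software interrupt or kernel call described above):

    import time

    def maybe_flush(queue, first_enqueue_time, max_depth=32, max_age_s=0.002):
        """Step 670: test the queuing thresholds; step 680: flush if exceeded."""
        too_deep = len(queue) > max_depth
        too_old = (first_enqueue_time is not None and
                   time.monotonic() - first_enqueue_time > max_age_s)
        if too_deep or too_old:
            queue.clear()   # stands in for posting the interrupt at step 680
            return True
        return False        # thresholds not exceeded; fall through to step 690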
After transmitting the data packets at step 680, method 600 proceeds to step 690. At step 690, the VNIC determines whether more data packets should be received. In one or more embodiments, the VNIC polls the virtual machine or the hypervisor at a predetermined interval to determine whether additional packets are available. In other embodiments, the VNIC is enabled to receive a software interrupt from the virtual machine or the hypervisor indicating that additional data packets are ready to be received by the VNIC. If the VNIC determines that more data packets are to be received, then method 600 returns to step 610 to receive the next data packet and cycles through the steps described above. If, however, the VNIC determines that there are no more data packets (or, alternatively, that the VNIC has been disabled for receiving data packets), then method 600 terminates.
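In its polling variant, step 690 can be sketched as a loop that checks for further packets at a fixed interval (every name and the interval below are assumptions for illustration):

    import collections
    import time

    pending = collections.deque([b"p1", b"p2"])   # stand-in packet source

    def poll_next_packet():
        return pending.popleft() if pending else None

    def receive_loop(poll_interval_s=0.0001, idle_limit=3):
        idle = 0
        while idle < idle_limit:          # stands in for "VNIC still enabled"
            packet = poll_next_packet()   # step 690: more packets available?
            if packet is None:
                idle += 1
                time.sleep(poll_interval_s)
            else:
                idle = 0                  # steps 610-680 would run here

    receive_loop()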
Certain embodiments as described above involve a hardware abstraction layer on top of a host computer. The hardware abstraction layer allows multiple containers to share the hardware resource. These containers, isolated from each other, have at least a user application running therein. The hardware abstraction layer thus provides benefits of resource isolation and allocation among the containers. In the foregoing embodiments, virtual machines are used as an example for the containers and hypervisors as an example for the hardware abstraction layer. As described above, each virtual machine includes a guest operating system in which at least one application runs. It should be noted that these embodiments may also apply to other examples of containers, such as containers not including a guest operating system, referred to herein as “OS-less containers” (see, e.g., www.docker.com). OS-less containers implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer. The abstraction layer supports multiple OS-less containers, each including an application and its dependencies. Each OS-less container runs as an isolated process in userspace on the host operating system and shares the kernel with other containers. The OS-less container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces and to completely isolate the application's view of the operating environments. By using OS-less containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
Although one or more embodiments have been described herein in some detail for clarity of understanding, it should be recognized that certain changes and modifications may be made without departing from the spirit of the disclosure. The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities—usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where they or representations of them are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms, such as producing, yielding, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments of the disclosure may be useful machine operations. In addition, one or more embodiments of the disclosure also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
One or more embodiments of the present disclosure may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system—computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Disc) such as a CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments of the present disclosure have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein, but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
Many variations, modifications, additions, and improvements are possible. Plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure(s). In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s).