Systems and methods for compacting a virtual machine file are presented. In one example, the system accesses a source virtual machine file associated with a guest file system. The system creates a destination virtual machine file based on the guest file system and initializes a block allocation table of the destination virtual machine file. The system accesses a block allocation table of the source virtual machine file and, for each block of the source virtual machine file, determines whether the block is in use. If so, the system copies the block to the destination virtual machine file and updates the block allocation table of the destination virtual machine file. If not, the system does not copy the block or update the block allocation table of the destination virtual machine file, thereby reducing the destination virtual machine file's size compared to the source virtual machine file's size.

Patent: 9311375
Priority: Feb 07 2012
Filed: Feb 07 2012
Issued: Apr 12 2016
Expiry: Sep 10 2033
Extension: 581 days
Entity: Large
Status: currently ok
16. A system for compacting a virtual machine file, the system comprising a physical computing system configured to:
access a source virtual machine file associated with a guest file system;
create a destination virtual machine file based on the guest file system;
initialize a block allocation table of the destination virtual machine file;
access a block allocation table of the source virtual machine file and for each block of the source virtual machine file identified in the block allocation table of the source virtual machine file:
determine whether the block includes data that is not marked for deletion from the source virtual machine file;
in response to determining that the block includes data that is not marked for deletion from the source virtual machine file, copy the block to the destination virtual machine file and update the block allocation table of the destination virtual machine file; and
in response to determining either that the block does not include data or that the block includes data which is marked for deletion from the source virtual machine file, refrain from copying the block and refrain from updating the block allocation table of the destination virtual machine file, thereby reducing a size of the destination virtual machine file compared to a size of the source virtual machine file.
30. Non-transitory computer storage configured to store executable instructions that, when executed by a processor, cause the processor to:
access a source virtual machine file comprising a dynamic virtual machine file associated with a guest file system;
determine the guest file system associated with the source virtual machine file;
create a destination virtual machine file based on the guest file system, the destination virtual machine file comprising a dynamic virtual machine file;
initialize a block allocation table of the destination virtual machine file;
access a block allocation table of the source virtual machine file and for each block of the source virtual machine file identified in the block allocation table of the source virtual machine file:
determine whether the block includes data that is not marked for deletion from the source virtual machine file;
in response to determining that the block includes data that is not marked for deletion from the source virtual machine file, copy the block to the destination virtual machine file and update the block allocation table of the destination virtual machine file; and
in response to determining either that the block does not include data or that the block includes data which is marked for deletion from the source virtual machine file, not copy the block and not update the block allocation table of the destination virtual machine file, thereby reducing a size of the destination virtual machine file compared to a size of the source virtual machine file.
1. A method performed by a physical computer system for compacting a virtual machine file, the method comprising:
under control of a physical computer system configured for use with virtual machines associated with virtual machine files:
accessing a source virtual machine file associated with a guest file system;
determining the guest file system associated with the source virtual machine file;
creating a destination virtual machine file based on the guest file system;
initializing a block allocation table of the destination virtual machine file;
accessing a block allocation table of the source virtual machine file and for each block of the source virtual machine file identified in the block allocation table of the source virtual machine file:
determining whether the block includes data that is not marked for deletion from the source virtual machine file;
in response to determining that the block includes data that is not marked for deletion from the source virtual machine file, copying the block to the destination virtual machine file and updating the block allocation table of the destination virtual machine file; and
in response to determining either that the block does not include data or that the block includes data which is marked for deletion from the source virtual machine file, not copying the block and not updating the block allocation table of the destination virtual machine file, thereby reducing a size of the destination virtual machine file compared to a size of the source virtual machine file.
25. A system for compacting a virtual machine file, the system comprising:
a physical host server comprising a virtualization layer configured to support a parent partition and one or more child partitions, the parent partition comprising a virtual machine management system and a compactor, the one or more child partitions each comprising a virtual machine associated with a guest operating system and one or more applications;
a data store comprising one or more virtual machine files configured to be accessed by the one or more child partitions; and
the compactor configured to:
access, from the data store, a source virtual machine file comprising a dynamic virtual machine file and a guest file system;
create a destination virtual machine file based, at least in part, on the guest file system, the destination virtual machine file comprising a dynamic virtual machine file;
initialize a block allocation table of the destination virtual machine file;
access a block allocation table of the source virtual machine file and for each block of the source virtual machine file identified in the block allocation table of the source virtual machine file:
determining whether the block includes data that is not marked for deletion from the source virtual machine file;
in response to determining that the block includes data that is not marked for deletion from the source virtual machine file, causing the block to be copied to the destination virtual machine file and updating the block allocation table of the destination virtual machine file; and
in response to determining either that the block does not include data or that the block includes data which is marked for deletion from the source virtual machine file, not causing the block to be copied and not updating the block allocation table of the destination virtual machine file, thereby reducing a size of the destination virtual machine file compared to a size of the source virtual machine file.
2. The method of claim 1, further comprising copying a file header of the source virtual machine file to the destination virtual machine file.
3. The method of claim 1, further comprising copying a file footer of the source virtual machine file to the destination virtual machine file.
4. The method of claim 1, wherein determining whether the block is in use comprises:
accessing a logical cluster number bitmap of the source virtual machine file;
identifying one or more clusters associated with the block; and
using the logical cluster number bitmap to determine whether at least one of the one or more clusters is in use.
5. The method of claim 1, further comprising copying metadata associated with the source virtual machine file to the destination virtual machine file.
6. The method of claim 5, further comprising updating the metadata associated with the destination virtual machine file based on the blocks copied from the source virtual machine file to the destination virtual machine file.
7. The method of claim 1, further comprising:
determining the size of the source virtual machine file prior to creating the destination virtual machine file; and
in response to determining that the size of the source virtual machine file satisfies a threshold, creating the destination virtual machine file.
8. The method of claim 7, wherein determining that the size of the source virtual machine file satisfies the threshold further comprises determining that the size of the source virtual machine file exceeds a threshold size.
9. The method of claim 1, further comprising:
determining a percentage of blocks not in use prior to creating the destination virtual machine file; and
in response to determining that the percentage of blocks not in use satisfies a threshold, creating the destination virtual machine file.
10. The method of claim 9, wherein determining that the percentage of blocks not in use satisfies the threshold further comprises determining that the percentage of blocks not in use exceeds a threshold percentage.
11. The method of claim 1, wherein an initial size of the destination virtual machine file is less than the size of the source virtual machine file.
12. The method of claim 1, wherein the source virtual machine file comprises a dynamic virtual machine file.
13. The method of claim 1, wherein the destination virtual machine file comprises a dynamic virtual machine file.
14. The method of claim 1, wherein determining whether the block is in use comprises determining whether the block is in use based, at least in part, on the block allocation table.
15. The method of claim 1, further comprising deactivating a source virtual machine associated with the source virtual machine file prior to accessing the source virtual machine file.
17. The system of claim 16, wherein the physical computing system is further configured to determine the guest file system associated with the source virtual machine file.
18. The system of claim 16, wherein the physical computing system is further configured to copy a file header of the source virtual machine file to the destination virtual machine file.
19. The system of claim 16, wherein the physical computing system is further configured to copy a file footer of the source virtual machine file to the destination virtual machine file.
20. The system of claim 16, wherein determining whether the block is in use comprises:
accessing a logical cluster number bitmap of the source virtual machine file;
identifying one or more clusters associated with the block; and
using the logical cluster number bitmap to determine whether at least one of the one or more clusters is in use.
21. The system of claim 16, wherein the physical computing system is further configured to copy metadata associated with the source virtual machine file to the destination virtual machine file.
22. The system of claim 21, wherein the physical computing system is further configured to update the metadata associated with the destination virtual machine file based on the blocks copied from the source virtual machine file to the destination virtual machine file.
23. The system of claim 16, wherein the physical computing system is further configured to:
determine the size of the source virtual machine file prior to creating the destination virtual machine file; and
in response to determining that the size of the source virtual machine file satisfies a threshold, create the destination virtual machine file.
24. The system of claim 16, wherein the physical computing system is further configured to:
determine a percentage of blocks not in use prior to creating the destination virtual machine file; and
in response to determining that the percentage of blocks not in use satisfies a threshold, create the destination virtual machine file.
26. The system of claim 25, wherein the compactor is further configured to determine the guest file system associated with the source virtual machine file.
27. The system of claim 25, further comprising a duplicator configured to copy a block to the destination virtual machine file in response to the compactor determining that the block is in use.
28. The system of claim 25, wherein each of the one or more applications accesses resources of the host server via a hypervisor.
29. The system of claim 25, wherein each of the one or more applications accesses resources of the host server via a virtual machine management system.

The present disclosure relates to virtual machines. More specifically, the present disclosure relates to systems and methods for compacting a virtual machine file.

Many companies take advantage of virtualization solutions to consolidate several specialized physical servers and workstations into fewer servers running virtual machines. Each virtual machine can be configured with its own set of virtual hardware (e.g., processor, memory, ports, and the like) so that the specialized services each of the previous physical machines performed can run in their native operating systems. For example, a virtualization layer, or hypervisor, can allocate the computing resources of one or more host servers into one or more virtual machines and can further provide for isolation between such virtual machines. In this manner, a virtual machine can be a software representation of a physical machine.

In many virtual machine implementations, each virtual machine is associated with at least one virtual machine disk, hard disk, or image located in one or more files in a data store. These virtual machine disks or images are commonly referred to as virtual machine storage files or virtual machine files. The virtual machine image can include files associated with a file system of a guest operating system.

This disclosure describes examples of systems and methods for compacting a virtual machine file. In one embodiment, a method for compacting a virtual machine file may be performed by a physical computer system. Typically, the method may be performed with a dynamic virtual machine file; however, in some implementations, the method may be performed with a static virtual machine file. A system performing the method may access a source virtual machine file associated with a guest file system. The system can create a destination virtual machine file based on the guest file system. Typically, the destination virtual machine file is a dynamic virtual machine file. However, in some implementations, the destination virtual machine file may be a static virtual machine file regardless of whether the source virtual machine file is dynamic or static. Alternatively, the destination virtual machine file can be the same type of virtual machine file as the source virtual machine file. After creating the destination virtual machine file, the system initializes a block allocation table of the destination virtual machine file. The system can access a block allocation table of the source virtual machine file. For each block of the source virtual machine file identified in the block allocation table of the source virtual machine file, the system can determine whether the block is in use. In response to determining that the block is in use, the system can copy the block to the destination virtual machine file and update the block allocation table of the destination virtual machine file. In response to determining that the block is not in use, the system does not copy the block and does not update the block allocation table of the destination virtual machine file, thereby reducing a size of the destination virtual machine file compared to a size of the source virtual machine file.
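
The flow above amounts to a copy loop driven by the source file's block allocation table. The following Python sketch illustrates the idea under simplifying assumptions: the SparseDisk class, its dictionary-based block allocation table, and the block_in_use helper are illustrative inventions, not part of the disclosed system or of any real virtual disk format.

    # Minimal sketch of the compaction loop, assuming an in-memory model.
    class SparseDisk:
        """Toy model of a dynamic virtual machine file."""
        def __init__(self, capacity_blocks):
            self.capacity_blocks = capacity_blocks
            self.bat = {}  # block allocation table: virtual block index -> data

    def block_in_use(source, index):
        # Stand-in for the guest-file-system check (e.g., consulting an
        # LCN bitmap); here a block is "in use" if it holds nonzero data.
        return any(b != 0 for b in source.bat[index])

    def compact(source):
        # Create the destination file and initialize an empty BAT.
        dest = SparseDisk(source.capacity_blocks)
        # Walk every block identified in the source BAT.
        for index in sorted(source.bat):
            if block_in_use(source, index):
                # Copy the block and update the destination BAT.
                dest.bat[index] = source.bat[index]
            # Unused blocks are skipped: no copy, no BAT update.
        return dest

    src = SparseDisk(capacity_blocks=8)
    src.bat = {0: b"\x01\x02", 1: b"\x00\x00", 3: b"\x07"}
    print(compact(src).bat)  # {0: b'\x01\x02', 3: b'\x07'}; block 1 was all zeros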

In certain embodiments, the system may determine the guest file system associated with the source virtual machine file. The system may copy a file header of the source virtual machine file to the destination virtual machine file. Further, in certain implementations, the system copies a file footer of the source virtual machine file to the destination virtual machine file.

To determine whether the block is in use, the system, in some embodiments, accesses a logical cluster number bitmap inside the guest file system of the source virtual machine file. The system identifies one or more clusters associated with the block. The system may use the logical cluster number bitmap to determine whether at least one of the one or more clusters is in use.
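
Concretely, in an NTFS-style LCN bitmap each bit represents one logical cluster, with a set bit meaning the cluster is allocated; bit 0 of byte 0 corresponds to cluster 0. The helper below is a minimal sketch of the check described above; the function names and the assumption that a block's clusters are contiguous are illustrative.

    def cluster_in_use(lcn_bitmap, cluster):
        # One bit per cluster, least-significant bit first within each
        # byte (as in the NTFS volume bitmap).
        return bool(lcn_bitmap[cluster // 8] & (1 << (cluster % 8)))

    def block_in_use(lcn_bitmap, first_cluster, clusters_per_block):
        # A block is in use if at least one of its clusters is in use.
        return any(cluster_in_use(lcn_bitmap, first_cluster + i)
                   for i in range(clusters_per_block))

    bitmap = bytes([0b00000101])          # clusters 0 and 2 allocated
    print(block_in_use(bitmap, 0, 4))     # True: cluster 0 is in use
    print(cluster_in_use(bitmap, 1))      # False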

In some embodiments, the system may copy metadata associated with the source virtual machine file to the destination virtual machine file. Further, in some embodiments, the system may update the metadata associated with the destination virtual machine file based on the blocks copied from the source virtual machine file to the destination virtual machine file.

In certain embodiments, the system determines the size of the source virtual machine file prior to creating the destination virtual machine file. In response to determining that the size of the source virtual machine file satisfies a threshold, the system may create the destination virtual machine file. In some embodiments, determining that the size of the source virtual machine file satisfies the threshold further comprises the system determining whether the size of the source virtual machine file exceeds a threshold size.

In some embodiments, the system determines a percentage of blocks not in use prior to creating the destination virtual machine file. In response to determining that the percentage of blocks not in use satisfies a threshold, the system may create the destination virtual machine file. For certain embodiments, determining that the percentage of blocks not in use satisfies the threshold further comprises the system determining that the percentage of blocks not in use exceeds a threshold percentage.
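
A trigger combining both kinds of threshold might look like the sketch below; the 5 GB and 50% defaults are arbitrary illustrative values, and "satisfies" is interpreted here as "strictly exceeds", matching the embodiments above.

    def should_compact(file_size_bytes, unused_block_pct,
                       size_threshold_bytes=5 * 2**30,  # illustrative 5 GB
                       unused_pct_threshold=50.0):      # illustrative 50%
        # Trigger compaction when either threshold is exceeded.
        return (file_size_bytes > size_threshold_bytes
                or unused_block_pct > unused_pct_threshold)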

In some implementations, the initial size of the destination virtual machine file is less than the size of the source virtual machine file.

Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the inventions described herein and not to limit the scope thereof.

FIG. 1 illustrates an embodiment of a virtual computing environment.

FIG. 2 illustrates an embodiment of a host server capable of compacting virtual machine files.

FIGS. 3A-3C illustrate an example of a virtual machine file.

FIG. 3D illustrates an example of a compacted virtual machine file.

FIG. 4 presents a flowchart for an embodiment of a compaction process.

FIG. 5 presents a flowchart for an example of a process for determining if a block is in use.

Computer systems access an increasing amount of data, whether that data is associated with media such as music or video or with virtual machines. Computer systems traditionally access this vast amount of data using organized units referred to as volumes. A volume can be a logical entity that may include other logical entities, e.g., files and directories. Traditionally, volumes were created on physical media such as hard disks.

Recent developments in virtualization have led to a logical volume being stored inside a file on a physical storage disk. For example, a virtual disk image can be a file on a physical disk that has a well-defined format, published or proprietary, and can be interpreted by a virtualization system as a hard disk. Examples of such developments include Microsoft® Windows® storing iSCSI (Internet Small Computer System Interface) volumes inside a Virtual Hard Disk (VHD) file, and Windows® 7 storing a volume inside a VHD file and being able to boot from the VHD file. Another example of a logical volume being stored in a file is a volume stored in the VMware® "VMDK" file format.

The VHD file format is associated with Microsoft® Hyper-V and Citrix® Xen virtualization. Both Microsoft and Citrix virtualization products use a VHD file format. Also, some backup applications back up data into VHD volumes. The Microsoft Windows Storage Server product presents block storage devices accessible via the iSCSI protocol, and these block storage devices are "backed" by VHD-file-based volumes.

Virtual machine files can be large, and it is not uncommon for a virtual machine file to reach tens or hundreds of gigabytes (GB) in size. With a static or fixed-size virtual machine file, the virtual machine file is created with its entire storage capacity allocated. For example, if a user (e.g., an administrator) configures a virtual machine file with a 100 GB capacity, the entire 100 GB will be allocated when the virtual machine file is created. Due to overhead, it is possible for the virtual machine file to be a little smaller or a little larger than 100 GB; however, the total size of the virtual machine file generally remains at substantially 100 GB during the lifetime of the virtual machine file.

Unfortunately, it can be difficult to determine the amount of space needed for a virtual machine file before the file is used. Thus, when first creating a particular virtual machine file, some users (e.g., administrators) may allocate much more space for the virtual machine file than, in hindsight, is necessary based on its use after creation. One solution to this problem is to use a dynamic virtual machine file, or dynamically expanding virtual machine file. With a dynamic virtual machine file, the virtual machine file can be created or initialized with a subset of its total capacity initially allocated. Then, as more space is required by the virtual machine, the virtual machine file may be allocated additional space up to its total capacity. Thus, in some such cases, space may not be allocated to the virtual machine file unless and until the space is actually required by the virtual machine file. For example, a user (or administrator) can request or configure a virtual machine file with a total capacity of 100 GB; however, the virtual machine file may initially be created with a smaller initial size (e.g., 50 megabytes). If a user writes several media files to the virtual machine file, causing the virtual machine file to require more than the initially allocated space, the dynamically expanding virtual machine file may be allocated additional space up to the 100 GB total capacity initially configured by the user (or administrator) as the maximum capacity for the virtual machine file. In some cases, an administrator may select the initial capacity or size of the virtual machine file. However, in some cases, the initial size is a characteristic of the virtual machine file that may be preconfigured or may be determined based on the system creating the virtual machine file.

Generally, dynamically expanding virtual machine files do not dynamically shrink. Therefore, continuing the previous example, if the user deletes the previously added media files, the virtual machine file remains the same size as it was before the media files were deleted. In effect, the space originally allocated to the deleted media files remains allocated to the virtual machine file (even though it is no longer used); therefore, this space cannot be allocated to other users of the system. Thus, like static virtual machine files, dynamic virtual machine files often waste storage space, sometimes as much space as a static virtual machine file with an underutilized storage allocation.

In some cases, in an attempt to reclaim unused space, an administrator or other authorized user will run a marking utility or a zero-filling utility inside the virtual machine file to mark unused space or fill it with zeros. A compacting tool can then be used to create a copy of the virtual machine file without any of the zero-filled space, resulting in a smaller version of the virtual machine file. Although this compaction reduces the size of the virtual machine file, it suffers from some drawbacks. For example, with a dynamically expanding virtual machine file, using a zero-filling utility can cause the file to grow to its maximum capacity. Thus, although a given dynamically expanding virtual machine file may rarely, if ever, grow to its maximum capacity when used for its intended purpose, it may still be necessary to reserve enough space for the file's maximum capacity, at least temporarily, to allow for the zero-filling based method of compaction.

Further, each time a compaction operation is performed using the zero-filling approach, a number of write operations are performed to zero-fill the virtual machine file. Typically, hard disks and other storage devices (e.g., flash memory) are designed to handle a specific number of write operations per logical or physical storage unit (e.g., bit, byte, sector, track, etc.). After that number of write operations is exceeded, there is an increased probability of disk failures and other storage-device-related errors. Thus, zero-filling based compaction of a virtual machine file may reduce the lifespan of the storage device that stores the virtual machine file. In addition, it is often the case that the number of virtual machines that can be hosted by a physical server is bound by the disk Input/Output (I/O) capacity of the storage device associated with the physical server, as opposed to, or in addition to, the Central Processing Unit (CPU) capacity of the physical server. Zero-filling a virtual machine file may use a large percentage of the available disk I/O capacity, thereby reducing the performance of other virtual machines hosted by the physical server, or virtualization host.

The present disclosure describes embodiments of a system and a method for compacting a virtual machine file without some or all of the aforementioned problems or drawbacks. In some embodiments of the present disclosure, the system can access a virtual machine file and copy one or more blocks that include undeleted data to a new or destination virtual machine file while ignoring blocks that, for example, include no data, zero-filled data, data marked for deletion, or data that is otherwise not in use by the file system of the virtual machine file. In some implementations, the system can determine whether a block is in use by accessing metadata associated with the virtual machine file. Advantageously, embodiments of the system can identify unused blocks or blocks with deleted data without zero-filling the virtual machine file, thereby reducing the number of write operations used to compact the virtual machine file compared to the aforementioned zero-filling compaction process.

In certain embodiments, the system compacts a source dynamic virtual machine file by copying the data blocks that are in use by the source dynamic virtual machine file to a destination dynamic virtual machine file and then deleting the source dynamic virtual machine file. Alternatively, the system may copy the data blocks that are in use by the source dynamic virtual machine file to a static virtual machine file. In another embodiment, the system may copy the data blocks from a static virtual machine file to a dynamic virtual machine file or a destination static virtual machine file. All possible combinations of compaction for dynamic and static virtual machine files are contemplated. For instance, it is possible for a static virtual machine file to be copied to a dynamic virtual machine file, for the dynamic virtual machine file to be compacted, and then for the compacted dynamic virtual machine file to be copied to another static virtual machine file, which may be allocated less storage space than the initial static virtual machine file.

Advantageously, certain embodiments of the system can analyze a virtual machine file over a period of time to determine the average or maximum amount of space used by the virtual machine file. The system can then compact the virtual machine file to a smaller static virtual machine file or to a dynamic virtual machine file based on the average and/or maximum space used by the virtual machine file. For example, if the system determines that a source static virtual machine file over a period of two years used less than 50% of the total space allocated to it, the system may compact the virtual machine file to a destination static virtual machine file with a smaller capacity (e.g., 50% or 75% of the total capacity of the source virtual machine file). Alternatively, the system may compact the virtual machine file to a dynamic virtual machine file having a reduced total capacity (e.g., 50% or 75% of the total capacity of the source dynamic virtual machine file).
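
One way to turn such historical observations into a destination capacity is sketched below; the headroom factor and the use of peak usage (rather than average) are illustrative assumptions.

    def destination_capacity(observed_usage_bytes, source_capacity,
                             headroom=1.5):  # illustrative 50% headroom
        # Size the destination from the historical peak, never exceeding
        # the source's original capacity.
        peak = max(observed_usage_bytes)
        return min(int(peak * headroom), source_capacity)

    # A file that peaked at 40 GB inside a 100 GB allocation could be
    # compacted to a 60 GB destination.
    print(destination_capacity([30 * 2**30, 40 * 2**30], 100 * 2**30) / 2**30)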

In certain embodiments, the system compacts the virtual machine file in response to a command from a user or administrator. Alternatively, the system may compact the virtual machine file in response to the amount or percentage of unused space or deleted blocks in the virtual machine file exceeding a threshold. In some implementations, the system compacts the virtual machine file in response to the threshold being exceeded for a period of time. For example, the system may compact a virtual machine file that exceeds 50% free space for a period of one week.

Although this disclosure generally describes various embodiments of compacting and copying a source virtual machine file to a destination virtual machine file, in some embodiments, the source virtual machine file can be a first virtual machine file and the destination virtual machine file can be a second virtual machine file.

Example of a Virtual Computing Environment

FIG. 1 illustrates an embodiment of a virtual computing environment 100 in accordance with the present disclosure. In general, the virtual computing environment 100 can include any number of computing systems for executing virtual machine files. Further, in certain embodiments, the virtual computing environment 100 includes one or more systems and/or services for compacting a virtual machine file.

As shown in FIG. 1, the virtual computing environment 100 can include a host server 102 in communication with a data store 112. Although only one host server 102 and one data store 112 are depicted, the virtual computing environment can include more than one host server 102 and/or data store 112. In certain embodiments, the host server 102 can be implemented with one or more physical computing devices. Further, the data store 112 may be implemented using one or more physical data storage devices.

In certain embodiments, the host server 102 is configured to host one or more virtual machines (not shown) executing on top of a virtualization layer 140. The virtualization layer 140 may include one or more partitions (e.g., the parent partition 104, the child partition 106, or the child partition 108) that are configured to include the one or more virtual machines. Further, the virtualization layer 140 may include, for example, a hypervisor 110 that decouples the physical hardware of the host server 102 from the operating systems of the virtual machines. Such abstraction allows, for example, for multiple virtual machines with different operating systems and applications to run in isolation or substantially in isolation on the same physical machine. The hypervisor 110 can also be referred to as a virtual machine monitor (VMM) in some implementations.

The virtualization layer 140 can include a thin piece of software that runs directly on top of the hardware platform of the host server 102 and that virtualizes resources of the machine (e.g., a native or “bare-metal” hypervisor). In such embodiments, the virtual machines can run, with their respective operating systems, on the virtualization layer 140 without the need for a host operating system. Examples of such bare-metal hypervisors can include, but are not limited to, ESX SERVER by VMware, Inc. (Palo Alto, Calif.), XEN and XENSERVER by Citrix Systems, Inc. (Fort Lauderdale, Fla.), ORACLE VM by Oracle Corporation (Redwood City, Calif.), HYPER-V by Microsoft Corporation (Redmond, Wash.), VIRTUOZZO by Parallels, Inc. (Switzerland), and the like.

In other embodiments, the host server 102 can include a hosted architecture in which the virtualization layer 140 runs within a host operating system environment. In such embodiments, the virtualization layer 140 can rely on the host operating system for device support and/or physical resource management. Examples of hosted virtualization layers can include, but are not limited to, VMWARE WORKSTATION and VMWARE SERVER by VMware, Inc., VIRTUAL SERVER by Microsoft Corporation, PARALLELS WORKSTATION by Parallels, Inc., Kernel-Based Virtual Machine (KVM) (open source), and the like.

Some or all of the virtual machines can include a guest operating system and associated applications. In such embodiments, a virtual machine accesses the resources (e.g., privileged resources) of the host server 102 through the virtualization layer 140. However, in some implementations, the virtual machines can access at least some of the resources of the host server 102 directly.

The host server 102 can communicate with the data store 112 to access data stored in one or more virtual machine files. For instance, the data store 112 can include one or more virtual machine file systems 114 that maintain virtual machine files 116, virtual disk files, or virtual machine images for some or all of the virtual machines on the host server 102. The virtual machine files 116 can include dynamic virtual machine files, static or fixed size virtual machine files, or a combination of the two. The virtual machine files 116 can store operating system files, program files, application files, and other data of the virtual machines. Example formats of virtual disk files can include VHD, VMDK, VDI, and so forth.

In certain embodiments, the virtual machine file system 114 includes a VMWARE VMFS cluster file system provided by VMware, Inc. In such embodiments, the VMFS cluster file system enables multiple host servers (e.g., with installations of ESX server) to have concurrent access to the same virtual machine storage and provides on-disk distributed locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. Other platforms may have different file systems (such as, e.g., an NTFS, HFS, FAT, or EXT file system). In other embodiments, the virtual machine file system 114 is stored on the host server 102 instead of in a separate data store.

The data store 112 can include any physical or logical storage for holding virtual machine files 116. For instance, the data store 112 can be implemented as local storage for the host server 102, accessible using a serial advanced technology attachment (SATA) protocol, a small computer system interface (SCSI) protocol, or the like. The data store 112 can also be implemented as part of a storage area network (SAN) or network attached storage (NAS). Accordingly, the data store 112 can be accessed over a network using a protocol such as a fibre channel protocol (FCP), an Internet SCSI (iSCSI) protocol, a network file system (NFS) protocol, a common Internet file system (CIFS) protocol, a file transfer protocol (FTP), a secure FTP (SFTP) protocol, combinations of the same, or the like. The data store 112 can also include one or more redundant arrays of independent disks (RAID) or the like.

The virtual computing environment 100 further includes a network 130 for communication between the host server 102 and a management server 120. The network 130 can provide wired or wireless communication between the host server 102, the management server 120, and/or the data store 112. The network 130 can be a local area network (LAN), a wide area network (WAN), the Internet, an intranet, combinations of the same, or the like. In certain embodiments, the network 130 can be configured to support secure shell (SSH) tunneling or other secure protocol connections for the transfer of data between the host server 102 and/or the data store 112. In certain embodiments, some or all of the management server 120, the host server 102, and the data store 112 may communicate with each other without the network 130.

The management server 120 can be implemented as one or more computing devices. In the embodiment illustrated in FIG. 1, the management server 120 includes a compactor 122, a duplicator 124, and a user interface 126. Although depicted as a separate system, in some implementations, the host server 102 may include some or all of the management server 120. For example, the host server 102 may include the management server 120 in its entirety, or it may include, e.g., the compactor 122, but not the duplicator 124, or vice versa.

In certain embodiments, the management server 120 can use the compactor 122 to compact one or more virtual machine files 116. Generally, the compactor 122 is configured to compact dynamic virtual machine files. However, in some embodiments, the compactor 122 may be configured to compact a static or fixed size virtual machine file. In certain embodiments, by compacting the virtual machine files 116, the compactor 122 can reduce the size of the virtual machine files 116 on the data store 112, thereby advantageously freeing storage space for use by other users, applications, or virtual machines.

In some implementations, the compactor 122 may use the duplicator 124 to duplicate part or all of a virtual machine file 116 as part of the compaction process. Further, the duplicator 124 may be configured to duplicate a virtual machine file 116 independently of a compaction process. For example, the duplicator 124 may be used to create backup copies of a virtual machine file 116. In some embodiments, the duplicator 124 may use any of the methods or processes disclosed in U.S. Patent Publication No. 2011/0035358, titled “Optimized Copy of Virtual Machine Storage Files,” which is hereby incorporated by reference herein in its entirety for all that it contains.

The user interface 126 can be configured to display to a user and/or receive from a user information relating to operation of the management server 120. In certain embodiments, the user interface 126 causes the display of one or more windows for obtaining user input and/or outputting status information with respect to the management server 120, the host server 102, and/or one or more of the virtual machine files 116. Using the user interface 126, a user can cause the compactor 122 to compact a virtual machine file 116.

Alternatively, or in addition, the user can set one or more thresholds associated with the number of blocks of the virtual machine file 116 that are not in use. In certain embodiments, the compactor 122 can compact the virtual machine file 116 in response to the management server 120 determining that the number of blocks not in use, and/or no longer in use, satisfies one or more of the thresholds. In some cases, some or all of the thresholds may be associated with the percentage of blocks of the virtual machine file 116 that are not in use. Further, in some implementations the thresholds may be associated with the number or percentage of blocks of the virtual machine file 116 that are in use. In various embodiments, satisfying a threshold can include meeting a threshold value, exceeding a threshold value, or not exceeding a threshold value. For example, the compactor 122 may compact a virtual machine file 116 when the number of free blocks is greater than or equal to the number of free blocks associated with the threshold. Further, the term "block" is used in its ordinary and customary sense and can represent an abstraction referring to a sequence of bits or bytes which may be read substantially in parallel. For example, each data block may be 32 or 64 bytes, and each time a block is accessed, the 32 or 64 bytes may be accessed substantially in parallel.

In some embodiments, the threshold may be associated with the size of the virtual machine file. For example, if the size of the virtual machine file exceeds 5 GB, or 50% of the space allocated to the virtual machine file, or some other measure of virtual machine file size, the compactor 122 may perform a compaction process (e.g., the process described below with reference to FIG. 4). In some embodiments, if the compaction process does not reduce the size of the virtual machine file below the threshold and/or by a goal amount (e.g., a size or percentage), the compactor 122 may modify the threshold used for initiating the compaction process. Alternatively, or in addition, if the virtual machine file was compacted within a threshold time period, the compactor 122 may delay performing the compaction process again at least until the threshold time period has elapsed. Advantageously, in certain embodiments, modifying the threshold based on the result of the compaction process reduces the probability of performing the compaction process more often than a user desires or than is desirable for, for example, the performance of the virtual machine file. Similarly, in some embodiments, delaying repeat performance of the compaction process within a threshold time period reduces the frequency of the compaction process to a rate selected by a user or to a rate that balances the availability of the virtual machine file against the opportunity to reclaim wasted storage space.
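
The adaptive behavior described in this paragraph could be sketched as follows; the backoff multiplier, the cooldown window, and the class itself are hypothetical, intended only to make the trigger-adjustment logic concrete.

    import time

    class CompactionScheduler:
        """Illustrative sketch of the adaptive trigger described above."""

        def __init__(self, size_threshold, cooldown_seconds, backoff=1.25):
            self.size_threshold = size_threshold
            self.cooldown_seconds = cooldown_seconds
            self.backoff = backoff          # threshold growth on a poor result
            self.last_run = float("-inf")

        def maybe_compact(self, file_size, compact_fn):
            now = time.monotonic()
            # Delay repeat compaction within the cooldown window.
            if now - self.last_run < self.cooldown_seconds:
                return None
            if file_size <= self.size_threshold:
                return None
            new_size = compact_fn()         # returns the compacted size
            self.last_run = now
            # If compaction did not bring the file below the threshold,
            # raise the threshold to avoid compacting too often.
            if new_size > self.size_threshold:
                self.size_threshold = int(self.size_threshold * self.backoff)
            return new_size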

Example Host Server

FIG. 2 illustrates an embodiment of a host server 102 capable of compacting virtual machine files (e.g., virtual machine file 251a or virtual machine file 251b) in accordance with the present disclosure. As described above in relation to FIG. 1, the host server 102 may include a virtualization layer 140, which can include a hypervisor 110 that allows multiple isolated operating systems to run on the host server 102 at the same time. In the illustrated implementation, the hypervisor 110 is a native or "bare-metal" hypervisor that runs directly on top of the hardware platform of the host server 102. The hypervisor 110 supports multiple partitions 104, 106, and 108 on the host server 102. Partitions are logical units of isolation in which operating systems can execute. The partition 104 is the parent (or root) partition that runs a host operating system 224 (e.g., Microsoft Windows Server). The parent partition 104 can create one or more child partitions 106 and 108, which run virtual machines 251a and 251b having guest operating systems 254a and 254b and applications 252a and 252b, respectively. In some virtualization implementations, there is one parent partition 104 and there can be zero, one, two, or more child partitions.

A virtual machine management system 228 can run in the parent partition 104 and may provide direct access to hardware devices (e.g., data store 112, processors, memory, graphics cards, etc.). The virtual machine management system 228 also can be responsible for managing the state of the virtual machines 251a and 251b running in the child partitions 106 and 108, respectively. In the illustrated embodiment, the child partitions 106 and 108 do not have direct access to hardware resources. The child partitions 106 and 108 may make requests (e.g., input/output (I/O) requests) to virtual devices, which can be redirected using inter-partition communication (e.g., a virtual bus) to the parent partition 104 (e.g., the virtual machine management system 228 in some embodiments), which directs the request (e.g., via the hypervisor 110) to an appropriate hardware device (e.g., a data store 112). In certain embodiments, the parent partition 104 may be a privileged virtual machine with direct access to the underlying I/O hardware.

In the example host server 102 illustrated in FIG. 2, the parent partition 104 includes a compactor 236 and a duplicator 240. As described above with reference to FIG. 1, a compactor and/or a duplicator can be included as part of a management server 120 and/or as part of a host server 102. In certain embodiments, the parent partition 104 may include a user interface (not shown) that includes some or all of the functionality of the user interface 126. Similarly, in certain embodiments, the compactor 236 and the duplicator 240 may include some or all of the embodiments described above with respect to the compactor 122 and the duplicator 124, respectively.

The compactor 236 may be configured to compact a virtual machine file associated with a virtual machine running in a child partition. For example, the compactor 236 may compact the virtual machine file associated with the virtual machine 251a running on the child partition 106. In some implementations, the compactor 236 may compact dynamic virtual machine files based on the size of the virtual machine file reaching or exceeding a threshold associated with the number or percentage of blocks that are not in use by the virtual machine, that are free, or that include data marked for deletion.

The duplicator 240 may be configured to duplicate a virtual machine file associated with a virtual machine running in a child partition. For example, the duplicator 240 may duplicate the virtual machine file associated with the virtual machine 251a running on the child partition 106.

In the example shown in FIG. 2, the virtual machines 251a and 251b in the child partitions 106 and 108 may be attached, mounted, or associated with one or more virtual disk files or virtual machine files 116 in the data store 112. In some implementations, the virtual machine file may include one or more virtual hard disks (VHDs). A virtual machine file, such as a VHD file, can include disk partitions and a file system, which can contain volumes, directories, folders, files, metadata, etc. In some implementations, a VHD attached to a virtual machine running in a child partition is simply a file to the parent partition. Thus, in some such implementations, what appears to be an entire file system volume when seen from within a running child virtual machine (e.g., running in the child partitions 106 or 108) is actually a large file when seen from the parent virtual machine (e.g., from the parent partition 104).

Example of a Virtual Machine File

FIGS. 3A-3C illustrate an example of a virtual machine file 310 in accordance with the present disclosure. The virtual machine file 310 can be associated with any type of virtual machine. Further, the virtual machine file 310 may be a dynamic, or dynamically expanding, virtual machine file. The example presented in FIGS. 3A-3C is intended to be illustrative of certain aspects of a compacting process and is not intended to be limiting.

FIG. 3A illustrates the example virtual machine file 310 when it is first created or initialized. The virtual machine file 310 includes a file header 312, metadata 314, a block allocation table (BAT) 316, a file footer 318, and one data block 320. In certain embodiments, the virtual machine file 310 may start with more or fewer data blocks when it is first created or initialized. Generally, the data block 320 includes the raw data that the virtual machine has written to the virtual hard disk associated with the virtual machine file 310.

The file header 312, virtual hard drive header, or dynamic virtual hard drive header, may include location information for the BAT 316 as well as additional parameters associated with the BAT 316 structure. In some embodiments, if the virtual machine file 310 is part of a differencing backup, the file header 312 may include pointers to data locations in the parent virtual hard disk.

Metadata 314 may include any information associated with the virtual machine file 310 that facilitates the maintenance, operation, and identification of the virtual machine file 310. For example, in some embodiments, the metadata 314 may include information identifying the file systems, the guest operating system, and/or the applications of the virtual machine file 310. In some embodiments, the metadata 314 may include a Logical Cluster Number (LCN) bitmap that may be used to identify clusters of the virtual machine file 310 that are free or in use. Each cluster may be equivalent in size to a data block of the virtual machine file 310. Alternatively, each cluster may be a fraction or multiple of a data block. For example, each data block may comprise four clusters. Typically, free blocks and/or free clusters include blocks and/or clusters without data or with data marked for deletion. However, in some embodiments, free blocks and/or clusters may refer only to blocks and/or clusters without data. Each block and/or cluster may be sized based on the parameters of the virtual machine when the virtual machine file 310 is created.

The BAT 316 may be used to map virtual disk offsets to memory locations on the physical hard disk of the host server 102 or the data store 112. When the virtual machine file 310 is first created or initialized, the BAT 316 may include no entries if the virtual machine is created or initialized with no data blocks. In some embodiments, the virtual machine file 310 may not include a BAT 316. For example, some virtual machine files may use one or more pointers to map the entire virtual machine file to a set of contiguous memory locations on the physical hard disk instead of, or in addition to, using a BAT 316.
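
Conceptually, a BAT can be viewed as an array indexed by virtual block number whose entries locate each block in the container file, with a sentinel marking unallocated blocks; in the VHD format, for instance, entries are absolute sector numbers and 0xFFFFFFFF marks an unallocated block. A minimal sketch of the lookup:

    UNALLOCATED = 0xFFFFFFFF  # sentinel used by the VHD format
    SECTOR = 512

    def block_file_offset(bat, block_index):
        """Translate a virtual block index into a byte offset within the
        container file, or None if the block was never allocated."""
        entry = bat[block_index]
        if entry == UNALLOCATED:
            return None
        return entry * SECTOR  # entries are absolute sector numbers

    bat = [3, UNALLOCATED, 7]
    print(block_file_offset(bat, 0))  # 1536
    print(block_file_offset(bat, 1))  # None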

The file footer 318 may include any information associated with the virtual machine file 310. For example, the file footer 318 may include the size of the virtual hard disk, the virtual hard disk geometry, an identity of the virtual hard disk creator, etc. In some embodiments, the file footer 318 includes the metadata 314. Although FIG. 3A depicts a single file footer 318 following the data block 320, in some implementations, an additional copy, or mirror, of the file footer 318 may exist at the beginning of the virtual machine file 310 before the file header 312.

FIG. 3B illustrates the virtual machine file 310 after it has increased in size due to allocations of additional data blocks 322, 324, 326, 328, and any number of additional data blocks between data blocks 326 and 328, as indicated by the ellipsis in FIG. 3B. With each additional data block that is added to the virtual machine file 310, the BAT 316 is modified to map the virtual disk offset associated with the additional data block to the location on the physical hard disk that includes the virtual machine file 310.

FIG. 3C illustrates the virtual machine file 310 after data blocks 320 and 324 have been deleted or marked for deletion, as indicated by the cross-hatch pattern of the blocks 320 and 324 in FIG. 3C. With each data block that is deleted or marked for deletion from the virtual machine file 310, the BAT 316 may be modified to remove the mapping between the virtual disk offsets of the deleted blocks and the locations on the physical hard disk that stores the actual data blocks.

After a number or percentage of data blocks have been deleted, or marked for deletion, the management server 120, compactor 122, compactor 236, or virtual machine management system 228 may initiate a compaction process. This compaction process is described in further detail below with respect to FIG. 4. In some embodiments, the compaction process is initiated based on whether a threshold related to the amount or percentage of free space or unused blocks in the virtual machine file is satisfied. This threshold may be based on the size of the virtual machine file, the amount or percentage of allocated blocks used, the amount or percentage of free blocks, the amount or percentage of allocated blocks marked for deletion, or a combination thereof. In some implementations, the number, percentage, size, threshold, or any other appropriate value may be fixed or assigned by a system administrator. In other implementations, such a value may be dynamically adjusted (e.g., by the management server 120) based on usage statistics for the data store 112.

FIG. 3D illustrates an example of a compacted virtual machine file 350 in accordance with the present disclosure. The compacted virtual machine file 350 can be considered the destination virtual machine file, and the virtual machine file 310 can be considered the source virtual machine file. The compacted virtual machine file 350 represents a copy of the virtual machine file 310 after the compaction process. The compacted virtual machine file may include a file header 352, metadata 354, a BAT 356, a file footer 358, and a number of data blocks 362, 366, 368, and any number of data blocks between 366 and 368, as indicated by the ellipsis in FIG. 3D.

In some embodiments, the file header 352, the metadata 354, and the file footer 358 may be copies or modified copies of the file header 312, the metadata 314, and the file footer 318, respectively. Alternatively, the file header 352, the metadata 354, and the file footer 358 may be created as part of the initialization process when the virtual machine file 350 is created.

The data blocks that were not deleted, or not marked for deletion, from the virtual machine file 310 may be copied or duplicated during the compaction process to the compacted virtual machine file 350. For example, the data blocks 362, 366, and 368 may correspond to copies of the data blocks 322, 326, and 328, respectively, from the virtual machine file 310. The data blocks deleted, or marked for deletion, from the virtual machine file 310 are not copied or duplicated to the virtual machine file 350 during the compaction process, which advantageously may increase the speed and efficiency of the compaction process. For example, the data blocks 320 and 324 are not copied to the virtual machine file 350 during the compaction process.

The BAT 356 can be initialized to indicate that there are zero entries in the compacted virtual machine file 350 when the compacted virtual machine file 350 is created. As blocks which are in use, or not free, from the virtual machine file 310 are copied to the compacted virtual machine file 350, the BAT 356 may be modified to map the virtual offsets of the added data blocks to the locations of the data blocks on the physical hard disk.
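
The copy-and-update step might look like the following sketch, which appends each in-use block at the end of the destination file and records the resulting sector number in the destination BAT; it assumes block data is a whole number of 512-byte sectors and that the BAT is a simple dictionary, both illustrative simplifications.

    def append_block(dest_file, dest_bat, block_index, data, sector=512):
        # Append the block's data at the current end of the destination
        # file and record its sector-granular location in the BAT.
        assert len(data) % sector == 0, "blocks assumed sector-aligned"
        dest_file.seek(0, 2)          # seek to end of file
        offset = dest_file.tell()
        dest_file.write(data)
        dest_bat[block_index] = offset // sector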

Once the virtual machine file 310 is compacted to the compacted virtual machine file 350, the virtual machine file 310 can, but need not, be deleted. As the compacted virtual machine file 350 does not include the data blocks that are not in use by the virtual machine associated with the virtual machine file 310, hard disk space is saved.

In other implementations, the compacting or duplicating process may (additionally or alternatively) access other data structures stored within the virtual machine file that contain information about the file system associated with the virtual machine file. For example, some virtual machine files may include a Master File Table (MFT), catalog file, inodes, or vnodes, or other metadata providing file system information. Such metadata may be stored as the metadata 314 in the example file 310.

Although FIGS. 3A-3D depict the process of expanding and then compacting a dynamically expanding virtual hard disk, it is possible to implement a similar process with a static or fixed size virtual hard disk. In such embodiments, if it is determined that the percentage of space used by a virtual machine with a static virtual hard disk remains below a threshold for a period of time, the static virtual hard disk can be compacted to a smaller static virtual hard disk. Further, in some embodiments, the compaction process, embodiments of which are described further with respect to FIG. 4, can be used to convert a dynamically expanding virtual hard disk to a fixed sized virtual hard disk and vice versa.

Example Compaction Process

FIG. 4 presents a flowchart for an embodiment of a compaction process 400 in accordance with the present disclosure. In some embodiments, some or all of the process 400 can be performed as part of a backup process to reduce the size of backups of virtual machine files. Alternatively, or in addition, some or all of the process 400 can be performed as part of a transmission process to reduce the size of a virtual machine file that may be transmitted over a network (e.g., the network 130). Further, some or all of the process 400 may be performed to reduce a size of a virtual hard disk file. The process 400 can be implemented by any system capable of compacting a virtual machine file. For example, the process 400 may be implemented, at least in part, by the management server 120, the compactor 122, the duplicator 124, the virtual machine management system 228, the compactor 236, and the duplicator 240. To simplify discussion, and not to limit the disclosure, the process 400 will be described as being performed by the compactor 236.

The process 400 begins at block 402 when, for example, the compactor 236 accesses a source virtual machine file (e.g., the virtual machine file 251a or the virtual machine file 310). In some cases, the source virtual machine associated with the source virtual machine file can be deactivated (e.g., powered down, turned off, or have its execution ceased) prior to the process 400 accessing the virtual machine file. Accordingly, in some such cases, the source virtual machine need not be executing while the compactor 236 performs the process 400. Thus, in certain such cases, the compactor 236 need not monitor a running virtual machine to watch for file deletions. The compactor 236 determines the file system of the source virtual machine at block 404. Blocks 402 and 404 can include mounting the source virtual machine file and accessing it as a volume by using the host operating system 224 Application Programming Interfaces (APIs). For example, if the host operating system 224 is Windows 7, the APIs may include the OpenVirtualDisk and AttachVirtualDisk APIs. In some implementations, once the compactor 236 has obtained a handle to the source virtual machine file, the compactor 236 can determine the file system using, for example, the FSCTL_FILESYSTEM_GET_STATISTICS or the FSCTL_QUERY_FILE_SYSTEM_RECOGNITION APIs. Alternatively, the compactor 236 may determine the file system of the source virtual machine by using a built-in API library associated with the file system of the source virtual machine.
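
As a concrete illustration of the mounting and identification steps, the following C sketch opens and attaches a VHD read-only with the Windows virtdisk API and then queries a volume on the attached disk for its file system. The file path is a placeholder, obtaining the volume handle is omitted, and error handling is reduced to early returns; this is a sketch of one possible host-side sequence, not a definitive implementation.

    #include <windows.h>
    #include <winioctl.h>
    #include <virtdisk.h>
    #include <stdio.h>

    #pragma comment(lib, "VirtDisk.lib")

    int wmain(void)
    {
        /* Block 402: obtain a handle to the source virtual machine file.
           The path below is a placeholder for this example. */
        VIRTUAL_STORAGE_TYPE type = {
            VIRTUAL_STORAGE_TYPE_DEVICE_VHD,
            VIRTUAL_STORAGE_TYPE_VENDOR_MICROSOFT
        };
        HANDLE disk = INVALID_HANDLE_VALUE;
        DWORD rc = OpenVirtualDisk(&type, L"C:\\vms\\source.vhd",
                                   VIRTUAL_DISK_ACCESS_ATTACH_RO |
                                   VIRTUAL_DISK_ACCESS_GET_INFO,
                                   OPEN_VIRTUAL_DISK_FLAG_NONE, NULL, &disk);
        if (rc != ERROR_SUCCESS)
            return 1;

        /* Attach (mount) the disk read-only so its volumes become
           accessible to the host. */
        rc = AttachVirtualDisk(disk, NULL,
                               ATTACH_VIRTUAL_DISK_FLAG_READ_ONLY |
                               ATTACH_VIRTUAL_DISK_FLAG_NO_DRIVE_LETTER,
                               0, NULL, NULL);
        if (rc != ERROR_SUCCESS)
            return 1;

        /* Block 404: ask the host to identify the guest file system.
           A real caller first opens a volume of the attached disk
           (e.g., via CreateFileW on its \\.\HarddiskVolumeN path);
           that step is omitted here. */
        HANDLE volume = INVALID_HANDLE_VALUE;  /* placeholder volume handle */
        FILE_SYSTEM_RECOGNITION_INFORMATION fs = {0};
        DWORD bytes = 0;
        if (DeviceIoControl(volume, FSCTL_QUERY_FILE_SYSTEM_RECOGNITION,
                            NULL, 0, &fs, sizeof fs, &bytes, NULL))
            printf("guest file system: %.9s\n", fs.FileSystem);

        CloseHandle(disk);
        return 0;
    }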

At block 406, the compactor 236 creates a destination virtual machine file (e.g., the virtual machine file 251b or the virtual machine file 350) based on the file system identified at block 404. Then, the compactor 236 copies a file header from the source virtual machine file to the destination virtual machine file at block 408. Copying the file header may further include modifying the file header based on metadata specific to the destination virtual machine file, such as the location of the BAT.

At block 410, the compactor 236 accesses metadata associated with the source virtual machine. In certain embodiments, the metadata can include a logical cluster number (LCN) bitmap that tracks clusters to determine whether the clusters are free or in use. The source file system may be divided into administrative units called logical clusters. Each logical cluster may be a fixed size based on the file system or determined when the source virtual machine file is formatted. Each cluster may be equivalent in size to a data block in the source virtual machine file or, alternatively, may be some multiple or fraction of the data block size. In some embodiments, the metadata accessed may be based on the file system type identified at the block 404. The compactor 236 may access the LCN bitmap as a whole or in portions. In some embodiments, the compactor 236 may use the FSCTL_GET_NTFS_VOLUME_DATA API to determine the volume cluster size and the number of logical clusters used within the guest operating system (e.g., the guest operating system 254a) of the source virtual machine file.
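
The cluster arithmetic can be made concrete with a short C sketch, assuming the guest volume is NTFS and that the data block size (2 MiB here) has been read from the source file's header; both are assumptions of this example.

    #include <windows.h>
    #include <winioctl.h>

    /* Assumed data block size of the source virtual machine file; some
       dynamic VHDs use 2 MiB blocks, but the real value comes from the
       file's header. */
    #define BLOCK_SIZE (2ull * 1024 * 1024)

    /* Block 410: query the guest NTFS volume for its cluster geometry.
       hVolume is a handle to the mounted guest volume (see the mounting
       sketch above).  Returns 0 on failure. */
    ULONGLONG clusters_per_block(HANDLE hVolume)
    {
        NTFS_VOLUME_DATA_BUFFER vol;
        DWORD bytes = 0;

        if (!DeviceIoControl(hVolume, FSCTL_GET_NTFS_VOLUME_DATA, NULL, 0,
                             &vol, sizeof vol, &bytes, NULL))
            return 0;

        /* Example: a 2 MiB data block on a volume with 4096-byte clusters
           spans 2097152 / 4096 = 512 logical clusters. */
        return BLOCK_SIZE / vol.BytesPerCluster;
    }

    /* The LCN bitmap itself can then be read, as a whole or in portions,
       with FSCTL_GET_VOLUME_BITMAP and a STARTING_LCN_INPUT_BUFFER. */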

At block 412, the compactor 236 copies some or all of the metadata identified at the block 410 from the source virtual machine file to the destination virtual machine file. At block 414, the compactor 236 initializes the BAT of the destination virtual machine file. Initializing the BAT of the destination virtual machine file may include configuring the BAT to indicate that no data blocks are associated with the destination virtual machine file. Alternatively, initializing the BAT may include configuring the BAT to reflect a default number of data blocks. As another alternative, initializing the BAT of the destination virtual machine file may include configuring the BAT to reflect the number of data blocks the compactor 236 has determined are to be copied from the source virtual machine file to the destination virtual machine file.

At block 416, the compactor 236 accesses the BAT of the source virtual machine file. The compactor 236 accesses an unprocessed block from the source virtual machine file at block 418. An unprocessed block can generally include any data block associated with the source virtual machine that the compactor 236 has not accessed to determine whether the data block is free, marked for deletion, deleted, or in use.

Using one or more of the BAT, the LCN bitmap, and metadata associated with the source virtual machine file, the compactor 236 determines at decision block 420 whether the block is in use. Determining whether the block is in use may include determining whether the block includes data that is not marked for deletion. A non-limiting example process for determining whether a block is in use is described below with respect to FIG. 5. If the compactor 236 determines that the block is in use, the compactor 236 copies the block to the destination virtual machine file at block 422. The compactor 236 may then update the BAT of the destination virtual machine file at block 424. Updating the BAT may include mapping the data blocks that are copied to the destination virtual machine file to a location on the physical hard drive where the data associated with the data blocks is stored.

After block 424, or if the compactor 236 determines at decision block 420 that the block is not in use, the compactor 236 determines at decision block 426 whether additional blocks exist in the source virtual machine file. If the compactor 236 determines that additional unprocessed blocks exist, the compactor 236 proceeds to perform the process associated with the block 418 and accesses another unprocessed block. If the compactor 236 determines that no more unprocessed blocks exist at decision block 426, the compactor 236 copies the file footer from the source virtual machine file to the destination virtual machine file at block 428. In some embodiments, the compactor 236 may also copy a mirror of the file footer from the beginning of the source virtual machine file to the beginning of the destination virtual machine file. After block 428 is performed, the source virtual machine file may be deleted. The storage space reclaimed by deletion of the source virtual machine file may be used by the destination virtual machine file or by a virtual machine associated with the destination virtual machine file. Further, in some cases, the reclaimed storage space may be used by other virtual machines or virtual machine files. In addition, in some cases the host server 102 may use the reclaimed space for any purpose, regardless of whether it is for a virtual machine or some other purpose.
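
Condensing blocks 416 through 428, the following C sketch shows the overall copy pass, assuming the entire LCN bitmap has been read into memory. The routines read_block, append_block, and copy_footer are hypothetical helpers named only for this example, and block_in_use is sketched below in connection with FIG. 5.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical helpers standing in for the virtual disk I/O performed
       by the compactor 236 / duplicator 240; they are named only for this
       sketch.  block_in_use is sketched with FIG. 5 below. */
    void     read_block(FILE *src, uint32_t location, uint8_t *buf, size_t len);
    uint32_t append_block(FILE *dst, const uint8_t *buf, size_t len);
    void     copy_footer(FILE *src, FILE *dst);
    int      block_in_use(uint32_t bat_entry, const uint8_t *lcn_bitmap,
                          uint64_t first_cluster, uint64_t clusters_per_block);

    /* Blocks 416-428 condensed into one pass over the source BAT. */
    void compact(FILE *src, FILE *dst,
                 const uint32_t *src_bat, uint32_t *dst_bat,
                 size_t block_count, size_t block_size,
                 const uint8_t *lcn_bitmap, uint64_t clusters_per_block,
                 uint8_t *buf)
    {
        for (size_t i = 0; i < block_count; i++) {           /* blocks 418/426 */
            if (!block_in_use(src_bat[i], lcn_bitmap,
                              i * clusters_per_block, clusters_per_block))
                continue;              /* block 420: free or deleted, skip it */
            read_block(src, src_bat[i], buf, block_size);    /* block 422 */
            dst_bat[i] = append_block(dst, buf, block_size); /* block 424 */
        }
        copy_footer(src, dst);                               /* block 428 */
    }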

In some embodiments, the compactor 236 may use the duplicator 240 to perform the copying of the data blocks, file header, file footer(s), or metadata. In some embodiments, the duplicator 240 can be used to copy the source virtual machine file to a destination virtual machine file without performing a compaction process. In such embodiments, the duplicator 240 may copy the blocks that are in use from the source virtual machine file to the destination virtual machine file. The duplicator 240 can update one or more of a block allocation table, an LCN bitmap, and one or more pointers associated with the destination virtual machine file to duplicate the free space, or the blocks of the source virtual machine file that are not in use, without physically copying those free or unused blocks. Advantageously, in certain embodiments, the ability to modify the BAT, LCN bitmap, and pointers of the destination virtual machine file without copying free blocks enables more efficient and faster copying of the source virtual machine file compared to a process that copies every block of the source virtual machine file regardless of whether the block is in use.

In some embodiments, the compactor 236 may update the metadata associated with the destination virtual machine file based on the blocks copied to the destination virtual machine file.

In some embodiments, the process 400 may be performed in response to a request from a user, such as an administrator. This request may be specified using, for example, the user interface 126.

Alternatively, or in addition, in some embodiments, some or all of the process 400 may be triggered in response to a determination that the number or percentage of free blocks, or blocks not in use, is above or equal to a threshold. In some cases, the process 400 may be triggered if the size of the virtual machine file exceeds a threshold. In other cases, the process 400 is triggered only if both the size of the virtual machine file and the percentage of free blocks are above their respective thresholds. In some embodiments, blocks marked for deletion may be included in the determination of free blocks. In some implementations, the threshold for determining whether to perform the process 400 may be set by a user or a system administrator. In some implementations, the threshold for determining whether to perform the process 400 may be dynamically adjusted, e.g., based on usage statistics for the system. Alternatively, or in addition, the threshold may be based on the type of virtual machine file, the size of the virtual machine file, how often the virtual machine file is accessed, or any other parameter suitable for setting a threshold for determining whether to perform the process 400. In some cases, the threshold may differ based on whether the virtual machine file is a dynamic or dynamically expanding virtual machine file, or a static or fixed-size virtual machine file.

For example, if the source virtual machine file is a static virtual machine file that utilizes on average 25% of the originally allocated space and for a period of time (e.g., one year) has never exceeded 30% of the originally allocated space, the compactor 236 may use process 400 to compact the source virtual machine file to a static virtual machine file that is half the size of the source virtual machine file. Alternatively, in the preceding example, the compactor 236 may compact the source virtual machine file to a dynamic virtual machine file thereby ensuring the space is available if the use profile for the source virtual machine file changes during a later time period.
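
One way to express the combined trigger described above is sketched below in C; the parameter names are illustrative, and whether blocks marked for deletion count as free is left to the caller, as discussed above.

    #include <stdint.h>

    /* Illustrative trigger policy: compact when the file is large enough
       to be worth the work AND enough of it is free.  Blocks marked for
       deletion may be counted as free by the caller.  The parameter
       names and any default values are placeholders an administrator
       might tune. */
    int should_compact(uint64_t file_size_bytes,
                       uint64_t free_blocks,
                       uint64_t total_blocks,
                       uint64_t min_file_size_bytes,
                       double   min_free_fraction)
    {
        if (total_blocks == 0)
            return 0;
        double free_fraction = (double)free_blocks / (double)total_blocks;
        return file_size_bytes >= min_file_size_bytes &&
               free_fraction >= min_free_fraction;
    }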

In some alternative embodiments, or in addition to the embodiments described above, the compactor 236 may use a source pointer associated with the source virtual machine file and a destination pointer associated with the destination virtual machine file to perform the process 400. For example, during an initial performance of the process of block 418, the compactor 236 may point a source pointer to a first unprocessed block in the set of blocks of the source virtual machine file. If the compactor 236 determines that the first unprocessed block is in use at decision block 420, the compactor 236 copies the block to the destination virtual machine file and advances a source and a destination file pointer by the size of the copied block. If the compactor 236 determines that the first unprocessed block is not in use at decision block 420, the block is not copied and the destination file pointer is not advanced. However, the source pointer is advanced by the size of the block. The process may then be repeated if it is determined at decision block 426 that more blocks exist in the source virtual machine file. Thus, a compacted version of the source virtual machine file may be created by copying blocks that are in use, but not copying blocks that are not in use. In some embodiments, blocks that are not in use are not copied, but both the source and destination file pointers are advanced. Advantageously, in certain embodiments, by advancing both the source and destination pointers, but not copying blocks that are not in use, a copying process can be performed that is faster and causes reduced wear on a physical storage device compared to a copy process that copies both the used and free blocks.
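
The pointer arithmetic of this variant can be sketched as follows, with src_off and dst_off standing in for the source and destination file pointers; is_in_use and copy_bytes are hypothetical helpers introduced for this example.

    #include <stdint.h>
    #include <stddef.h>

    int  is_in_use(size_t block_index);                              /* hypothetical */
    void copy_bytes(uint64_t src_off, uint64_t dst_off, size_t len); /* hypothetical */

    /* Compacting copy: the source pointer always advances; the
       destination pointer advances only when a block is copied. */
    void pointer_copy(size_t block_count, size_t block_size)
    {
        uint64_t src_off = 0, dst_off = 0;
        for (size_t i = 0; i < block_count; i++) {
            if (is_in_use(i)) {
                copy_bytes(src_off, dst_off, block_size);
                dst_off += block_size;
            }
            /* For the duplicating variant described above, dst_off would
               be advanced unconditionally, leaving the skipped region as
               free space in the destination file. */
            src_off += block_size;
        }
    }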

Although process 400 has been described in a specific order, process 400 is not limited as such and one or more of the operations of the process 400 may be performed in a different sequence. For example, in some embodiments, the block 424 may be performed after the compactor 236 determines that no more blocks exist at decision block 426.

Example of a Process for Determining if a Block is in Use

FIG. 5 presents a flowchart for an example of a process 500 for determining if a block is in use. In some embodiments, some or all of the process 500 can be performed as part of a backup process to reduce the size of backups of virtual machine files. Alternatively, or in addition, some or all of the process 500 can be performed as part of a transmission process to reduce the size of a virtual machine file that may be transmitted over a network (e.g., the network 130). Further, some or all of the process 500 may be performed as part of the operation associated with the block 420 of the example process 400 described with reference to FIG. 4. The process 500 can be implemented by any system capable of compacting (or copying) a virtual machine file. For example, the process 500 may be implemented, at least in part, by the management server 120, the compactor 122, the duplicator 124, the virtual machine management system 228, the compactor 236, and the duplicator 240. To simplify discussion, and not to limit the disclosure, the process 500 will be described as being performed by the compactor 236.

The process 500 begins at block 502 when, for example, the compactor 236, which may be included as part of a host file system, accesses an entry in a BAT associated with a block. In some cases, the block 502 may occur as part of the block 418 and/or 420 of the process 400. The BAT may be part of a virtual machine file (e.g., the source virtual machine file) with a guest operating system. However, it is often the case that the guest operating system is unaware of the BAT.

At decision block 504, the compactor 236 determines whether the entry indicates that the block is in use. If not, the compactor 236 excludes the block from a destination virtual machine file at block 506. As discussed above, in some cases, the destination or target virtual machine file includes a copy, which may be a compacted copy, of a source virtual machine file.

If the compactor 236 determines at the decision block 504 that the entry indicates that the block is in use, the compactor 236 accesses, at block 508, one or more entries in an LCN bitmap associated with one or more clusters of the block. The LCN bitmap may be included as part of the virtual machine file. In contrast to the BAT, the guest file system and/or guest operating system is generally aware of the LCN bitmap. In some cases, each block is associated with one cluster, and thus there may be a one-to-one correspondence between blocks in the BAT and clusters in the LCN bitmap. However, in some cases, multiple clusters may be associated with a block. For example, each block may be associated with four clusters. Generally, a cluster is associated with a single block. However, in certain cases, a cluster may be associated with multiple blocks.

At decision block 510, the compactor 236 determines whether one or more entries in the LCN bitmap indicate that at least one cluster associated with the block is in use. As previously indicated, a cluster, or block, that is in use can include any cluster, or block, that is not marked for deletion or that includes data. If the compactor 236 determines that no clusters associated with the block are in use, then the compactor 236 excludes the block from a destination virtual machine file at block 506.

If the compactor 236 determines that at least one cluster associated with the block is in use, then the compactor 236 copies the block to the destination virtual machine file at block 512. In some embodiments, the block 512 can include one or more of the embodiments described above with respect to the block 422. Further, the block 512 can include updating the BAT of the destination virtual machine file to map data blocks of the destination virtual machine file to locations on the physical storage device that stores the data blocks. Further, in some cases, updating the BAT can include mapping entries in the BAT to clusters in the LCN bitmap of the destination virtual machine file.
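
The two-level test of FIG. 5 can be sketched directly in C: consult the BAT entry first (decision block 504) and, only if the entry is allocated, scan the block's clusters in the LCN bitmap (blocks 508 and 510). The one-bit-per-cluster, least-significant-bit-first bitmap layout is an assumption of this sketch.

    #include <stdint.h>

    #define BAT_UNALLOCATED 0xFFFFFFFFu   /* sentinel: entry never allocated */

    /* One bit per cluster, least significant bit first: an assumption of
       this sketch about the in-memory LCN bitmap layout. */
    static int lcn_bit(const uint8_t *bitmap, uint64_t cluster)
    {
        return (bitmap[cluster >> 3] >> (cluster & 7)) & 1;
    }

    /* Returns nonzero when the block should be copied to the destination
       virtual machine file. */
    int block_in_use(uint32_t bat_entry, const uint8_t *lcn_bitmap,
                     uint64_t first_cluster, uint64_t clusters_per_block)
    {
        if (bat_entry == BAT_UNALLOCATED)
            return 0;                 /* decision block 504: not allocated */
        for (uint64_t c = 0; c < clusters_per_block; c++)
            if (lcn_bit(lcn_bitmap, first_cluster + c))
                return 1;             /* decision block 510: live cluster */
        return 0;                     /* no cluster in use: block 506 */
    }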

Examples of Possible Advantages of Certain Embodiments of Systems and Methods for Compacting a Virtual Machine File

As described herein, there may be a number of advantages to using the disclosed systems and processes for compacting and/or copying a virtual machine file. In some cases, a virtual machine file can be compacted without an associated virtual machine running or being powered on. Thus, it is possible for the virtual machine file to be compacted before a virtual machine is powered on, executed, or initiated using the virtual machine file. Further, it is possible for the virtual machine file to be compacted after a virtual machine using the virtual machine file is shut down or powered off. In certain situations, the virtual machine file may be compacted while the virtual machine using the virtual machine file is powered on, active, or running.

It is generally possible for the compactor 236 to compact any type of virtual machine file without modifying the virtual machine file or the operation of a virtual machine using the virtual machine file. Thus, it is possible for the compactor 236 to be used to compact existing configurations of virtual machine files designed to be used with existing types of virtual machines.

Further, the compactor 236 can compact a virtual machine file without monitoring the activity of the virtual machine associated with the virtual machine file. Thus, in some cases, the compactor 236 can compact the virtual machine file without monitoring the occurrence of deletions, writes, or other operations performed by the virtual machine. Advantageously, because the compactor 236 can compact the virtual machine file without monitoring the activity of the virtual machine, it is possible for the compaction to occur at any time and on any number of virtual machine files whether or not the compactor 236 has access to the virtual machine files at all times. For example, the compactor 236 need not be executing while the virtual machine is also executing. Thus, in some cases, compaction of virtual machine files may be performed as a batch process.

Moreover, the compactor 236 can compact the virtual machine file without the virtual machine, or any other system, being required to use special writes or deletions to indicate which blocks to copy or which blocks to exclude or not copy. For example, unlike some systems, the compactor 236 can identify blocks that have been deleted without the use of a special sequence (e.g., a zero-filled block) or a predefined pattern to mark deleted logical or physical blocks. Therefore, in some cases, the compactor 236 can determine whether a block is in use without comparing a block to a special data or bit sequence. Further, in some such cases, the compactor 236 need not link a logical block of a file to be deleted with a designated block that corresponds to a physical block (e.g., a block on a physical storage device).

Once storage space has been freed by use of the compactor 236 to compact a source virtual machine file, the freed storage space may be used by the destination virtual machine file, by another virtual machine file, or by the host server 102 regardless of whether the host server 102 uses the free storage space for a virtual machine file.

Terminology

For purposes of illustration, certain aspects, advantages and novel features of various embodiments of the inventions have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the inventions disclosed herein. Thus, the inventions disclosed herein can be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as can be taught or suggested herein. Further, no element, feature, block, or step, or group of elements, features, blocks, or steps, are necessary or indispensable to each embodiment. Additionally, all possible combinations, subcombinations, and rearrangements of systems, methods, features, elements, modules, blocks, and so forth are within the scope of this disclosure.

Depending on the embodiment, certain acts, events, or functions of any of the algorithms, methods, or processes described herein can be performed in a different sequence, can be added, merged, or left out all together (e.g., not all described acts or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.

The various illustrative logical blocks, modules, processes, methods, and algorithms described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, operations, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.

The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The blocks, operations, or steps of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, an optical disc (e.g., CD-ROM or DVD), or any other form of volatile or non-volatile computer-readable storage medium known in the art. A storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.

Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements, blocks, and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. The use of sequential, or time-ordered language, such as “then”, “next”, “subsequently” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to facilitate the flow of the text and is not intended to limit the sequence of operations performed. Thus, some embodiments may be performed using the sequence of operations described herein, while other embodiments may be performed following a different sequence of operations.

While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments of the inventions described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain inventions disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Naik, Dilip C.

May 18 2018QUEST SOFTWARE INC CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTFIRST LIEN PATENT SECURITY AGREEMENT0463270347 pdf
May 18 2018CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTQUEST SOFTWARE INC F K A DELL SOFTWARE INC RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS RECORDED AT R F 040581 08500462110735 pdf
May 18 2018QUEST SOFTWARE INC CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTSECOND LIEN PATENT SECURITY AGREEMENT0463270486 pdf
May 18 2018CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTAventail LLCRELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS RECORDED AT R F 040581 08500462110735 pdf
Feb 01 2022BINARYTREE COM LLCGoldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022ONELOGIN, INC Goldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022ERWIN, INC Goldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTQUEST SOFTWARE INC RELEASE OF SECOND LIEN SECURITY INTEREST IN PATENTS0590960683 pdf
Feb 01 2022CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENTQUEST SOFTWARE INC RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS0591050479 pdf
Feb 01 2022ANALYTIX DATA SERVICES INC Goldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022QUEST SOFTWARE INC Goldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022QUEST SOFTWARE INC MORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022ANALYTIX DATA SERVICES INC MORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022One Identity LLCGoldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022ERWIN, INC MORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022One Identity LLCMORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022ONELOGIN, INC MORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANYMORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Feb 01 2022ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANYGoldman Sachs Bank USAFIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589450778 pdf
Feb 01 2022BINARYTREE COM LLCMORGAN STANLEY SENIOR FUNDING, INC SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT0589520279 pdf
Date Maintenance Fee Events
Mar 18 2016 | ASPN: Payor Number Assigned.
Sep 23 2019 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Oct 12 2023 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.


Date Maintenance Schedule
Event | 4th Year | 8th Year | 12th Year
Fee payment window opens | Apr 12 2019 | Apr 12 2023 | Apr 12 2027
Grace period starts (with surcharge) | Oct 12 2019 | Oct 12 2023 | Oct 12 2027
Patent expiry | Apr 12 2020 | Apr 12 2024 | Apr 12 2028
End of 2-year period to revive if unintentionally abandoned | Apr 12 2022 | Apr 12 2026 | Apr 12 2030