Systems and methods are provided for scanning files and directories in a distributed file system on a network of nodes. The nodes include metadata with attribute information corresponding to files and directories distributed on the nodes. In one embodiment, the files and directories are scanned by commanding the nodes to search their respective metadata for a selected attribute. At least two of the nodes are capable of searching their respective metadata in parallel. In one embodiment, the distributed file system commands the nodes to search for metadata data structures having location information corresponding to a failed device on the network. The metadata data structures identified in the search may then be used to reconstruct lost data that was stored on the failed device.

Patent No.: 7,788,303
Priority: Oct. 21, 2005
Filed: Oct. 21, 2005
Issued: Aug. 31, 2010
Expiry: Jun. 27, 2026
Extension: 249 days
4. A system for identifying selected attributes in files stored in a distributed file system, the system comprising:
a plurality of nodes in a network, wherein each node comprises:
a processor and a memory device for locally storing data, and wherein files are distributed across the nodes such that one or more of the files are stored in the memory devices, in parts, among the plurality of nodes;
a plurality of metadata data blocks each associated with one of the files and comprising file attribute data related to the corresponding file, a file identifier, and location information for one or more content data blocks of the file, the attribute data including data indicating which nodes are used to store the file's content data blocks, the metadata data blocks distributed across the nodes and stored in the memory devices among the plurality of nodes such that, for at least one of the metadata data blocks, at least one of the content data blocks of the file associated with the metadata data block is stored on a different node than the at least one metadata data block; and
a metadata map data structure providing an indication of where metadata data blocks are stored on the respective node and comprising a plurality of entries, each of the entries corresponding to a memory location of the memory device and indicating whether a metadata data block is stored in that memory location, wherein each of the plurality of nodes is configured to:
instruct each of the nodes that locally stores metadata data blocks to determine each memory location where a metadata data block is locally stored using the respective node's metadata map data structure, read the respective metadata data blocks, and search the locally stored metadata data blocks for files which include data blocks stored on a particular node that is unavailable such that one or more of the nodes performs at least a portion of the search in parallel with at least a portion of the search of one or more of the other nodes;
receive from the nodes that store metadata data blocks, file identifiers related to files that include data blocks stored on the node that is unavailable;
access the metadata data blocks corresponding to one of the file identifiers to determine the location of at least one accessible content data block and at least one accessible parity data block corresponding to one of the files that include data blocks stored on an unavailable node;
read the at least one accessible content data block and the at least one accessible parity data block from their respective locations in the memory devices of available nodes; and
process the at least one accessible content data block and the at least one accessible parity data block to generate recovered data blocks corresponding to the one or more data blocks stored on the unavailable node by performing an exclusive-or (XOR) operation on the at least one accessible content data block and the at least one accessible parity data block.
1. A method for identifying selected attributes in files stored in a distributed file system, the method comprising:
providing a plurality of nodes in a network, wherein each node comprises:
a processor and a memory device for locally storing data, and wherein files are distributed across the nodes such that one or more of the files are stored in the memory devices, in parts, among the plurality of nodes;
a plurality of metadata data blocks each associated with one of the files and comprising file attribute data related to the corresponding file, a file identifier, and location information for one or more content data blocks of the file, the attribute data including data indicating which nodes are used to store the file's content data blocks, the metadata data blocks distributed across the nodes and stored in the memory devices among the plurality of nodes such that, for at least one of the metadata data blocks, at least one of the content data blocks of the file associated with the metadata data block is stored on a different node than the at least one metadata data block; and
a metadata map data structure providing an indication of where metadata data blocks are stored on the respective node and comprising a plurality of entries, each of the entries corresponding to a memory location of the memory device and indicating whether a metadata data block is stored in that memory location;
instructing, by the processor of the respective node, each of the nodes that locally stores metadata data blocks to determine each memory location where a metadata data block is locally stored using the respective node's metadata map data structure, read the respective metadata data blocks, and search the locally stored metadata data blocks for files which include data blocks stored on a particular node that is unavailable such that one or more of the nodes performs at least a portion of the search in parallel with at least a portion of the search of one or more of the other nodes;
receiving from the nodes that store metadata data blocks, file identifiers related to files that include data blocks stored on the node that is unavailable;
accessing the metadata data blocks corresponding to one of the file identifiers to determine the location of at least one accessible content data block and at least one accessible parity data block corresponding to one of the files that include data blocks stored on an unavailable node;
reading the at least one accessible content data block and the at least one accessible parity data block from their respective locations in the memory devices of available nodes; and
processing the at least one accessible content data block and the at least one accessible parity data block to generate recovered data blocks corresponding to the one or more data blocks stored on the unavailable node,
wherein processing the at least one accessible content data block and the at least one accessible parity data block comprises performing an exclusive or (XOR) operation on the at least one accessible content data block and the at least one accessible parity data block.
7. A computer readable medium storing program code that, in response to execution by a processor of one of a plurality of nodes in a network, causes the processor to perform operations for identifying selected attributes in files stored in a distributed file system, the operations comprising:
instructing, by a processor of one of a plurality of nodes in a network wherein each node comprises:
a processor and a memory device for locally storing data, and wherein files are distributed across the nodes such that one or more of the files are stored in the memory devices, in parts, among the plurality of nodes; and
a plurality of metadata data blocks each associated with one of the files and comprising file attribute data related to the corresponding file, a file identifier, and location information for one or more content data blocks of the file, the attribute data including data indicating which nodes are used to store the file's content data blocks, the metadata data blocks distributed across the nodes and stored in the memory devices among the plurality of nodes such that, for at least one of the metadata data blocks, at least one of the content data blocks of the file associated with the metadata data block is stored on a different node than the at least one metadata data block; and
a metadata map data structure providing an indication of where metadata data blocks are stored on the respective node and comprising a plurality of entries, each of the entries corresponding to a memory location of the memory device and indicating whether a metadata data block is stored in that memory location,
each of the nodes that locally stores metadata data blocks to determine each memory location where a metadata data block is locally stored using the respective node's metadata map data structure, read the respective metadata data blocks, and search the locally stored metadata data blocks for files which include data blocks stored on a particular node that is unavailable such that one or more of the nodes performs at least a portion of the search in parallel with at least a portion of the search of one or more of the other nodes;
receiving from the nodes that store metadata data blocks, file identifiers related to files that include data blocks stored on the node that is unavailable;
accessing the metadata data blocks corresponding to one of the file identifiers to determine the location of at least one accessible content data block and at least one accessible parity data block corresponding to one of the files that include data blocks stored on an unavailable node;
reading the at least one accessible content data block and the at least one accessible parity data block from their respective locations in the memory devices of available nodes; and
processing the at least one accessible content data block and the at least one accessible parity data block to generate recovered data blocks corresponding to the one or more data blocks stored on the unavailable node,
wherein processing the at least one accessible content data block and the at least one accessible parity data block comprises performing an exclusive-or (XOR) operation on the at least one accessible content data block and the at least one accessible parity data block.
2. The method of claim 1, further comprising:
restriping the recovered data blocks among available nodes.
3. The method of claim 1, wherein each of the respective nodes performs the search by sequentially traversing its memory device to read its respective metadata data blocks after determining which memory locations store metadata data blocks.
5. The system of claim 4, wherein each of the plurality of nodes is further configured to:
restripe the recovered data blocks among available nodes.
6. The system of claim 4, wherein each of the respective nodes performs the search by sequentially traversing its memory device to read its respective metadata data blocks after determining which memory locations store metadata data blocks.
8. The computer-readable medium of claim 7, wherein the program code is further configured to cause the processor to perform operations comprising:
restriping the recovered data blocks among available nodes.
9. The computer-readable medium of claim 7, wherein each of the respective nodes performs the search by sequentially traversing its memory device to read its respective metadata data blocks after determining which memory locations store metadata data blocks.

The present disclosure relates to U.S. patent application Ser. No. 11/256,410, titled “SYSTEMS AND METHODS FOR PROVIDING VARIABLE PROTECTION,” U.S. Pat. No. 7,346,720, titled “SYSTEMS AND METHODS FOR MANAGING CONCURRENT ACCESS REQUESTS TO A SHARED RESOURCE,” U.S. patent application Ser. No. 11/255,818, titled “SYSTEMS AND METHODS FOR MAINTAINING DISTRIBUTED DATA,” U.S. patent application Ser. No. 11/256,317, titled “SYSTEMS AND METHODS FOR USING EXCITEMENT VALUES TO PREDICT FUTURE ACCESS TO RESOURCES,” and U.S. patent application Ser. No. 11/255,337, titled “SYSTEMS AND METHODS FOR ACCESSING AND UPDATING DISTRIBUTED DATA,” each filed on even date herewith and each hereby incorporated by reference herein in its entirety.

This disclosure relates to systems and methods for scanning files in distributed file systems.

Operating systems generally manage and store information on one or more memory devices using a file system that organizes data in a file tree. File trees identify relationships between directories, subdirectories, and files.

In a distributed file system, data is stored among a plurality of network nodes. Files and directories are stored on individual nodes in the network and combined to create a file tree for the distributed file system to identify relationships and the location of information in directories, subdirectories and files distributed among the nodes in the network. Files in distributed file systems are typically accessed by traversing the overall file tree.

Occasionally, a file system may scan a portion or all of the files in the file system. For example, the file system or a user may want to search for files created or modified in a certain range of dates and/or times, files that have not been accessed for a certain period of time, files that are of a certain type, files that are a certain size, files with data stored on a particular memory device (e.g., a failed memory device), files that have other particular attributes, or combinations of the foregoing. Scanning for files by traversing multiple file tree paths in parallel is difficult because the tree may be very wide or very deep. Thus, file systems generally scan for files by sequentially traversing the file tree. However, file systems, and particularly distributed file systems, can be large enough to store hundreds of thousands of files, or more. Thus, it can take a considerable amount of time for the file system to sequentially traverse the entire file tree.

Further, sequentially traversing the file tree wastes valuable system resources, such as the availability of central processing units to execute commands or bandwidth to send messages between nodes in a network. System resources are wasted, for example, by accessing structures stored throughout a cluster from one location, which may require significant communication between the nodes and scattered access to memory devices. The performance characteristics of disk drives, for example, vary considerably based on the access pattern. Thus, scattered access to a disk drive based on sequentially traversing a file tree can significantly increase the amount of time used to scan the file system.

Thus, it would be advantageous to use techniques and systems for scanning file systems by searching metadata, in parallel, for selected attributes associated with a plurality of files. In one embodiment, content data, parity data and metadata for directories and files are distributed across a plurality of network nodes. When performing a scan of the distributed file system, two or more nodes in the network search their respective metadata in parallel for the selected attribute. When a node finds metadata corresponding to the selected attribute, the node provides a unique identifier for the metadata to the distributed file system.

According to the foregoing, in one embodiment, a method is provided for scanning files and directories in a distributed file system on a network. The distributed file system has a plurality of nodes. At least a portion of the nodes include metadata with attribute information for one or more files striped across the distributed file system. The method includes commanding at least a subset of the nodes to search their respective metadata for a selected attribute and to perform an action in response to identifying the selected attribute in their respective metadata. The subset of nodes is capable of searching their respective metadata in parallel.

In one embodiment, a distributed file system includes a plurality of nodes configured to store data blocks corresponding to files striped across the plurality of nodes. The distributed file system also includes metadata data structures stored on at least a portion of the plurality of nodes. The metadata data structures include attribute information for the files. At least two of the plurality of nodes are configured to search, at substantially the same time, their respective metadata data structures for a selected attribute.

In one embodiment, a method for recovering from a failure in a distributed file system includes storing metadata corresponding to one or more files on one or more nodes in a network. The metadata points to data blocks stored on the one or more nodes. The method also includes detecting a failed device in the distributed file system, commanding the nodes to search their respective metadata for location information corresponding to the failed device, receiving responses from the nodes, the responses identifying metadata data structures corresponding to information stored on the failed device, and accessing the identified metadata data structures to reconstruct the information stored on the failed device.

For purposes of summarizing the invention, certain aspects, advantages and novel features of the invention have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.

Systems and methods that embody the various features of the invention will now be described with reference to the following drawings.

FIG. 1 illustrates an exemplary block diagram of a network according to one embodiment.

FIG. 2A illustrates an exemplary file tree including metadata data structures according to one embodiment.

FIG. 2B illustrates an inode map and an inode storage on Device A in accordance with FIG. 2A, according to one embodiment.

FIGS. 3-5 illustrate exemplary metadata data structures for directories according to certain embodiments.

FIG. 6 illustrates an exemplary metadata data structure for a file according to one embodiment.

FIG. 7 is a flow chart of a process for scanning files and directories in a distributed file system according to one embodiment.

FIG. 8 is a flow chart of a process for recovering from a failure in a distributed file system according to one embodiment.

Systems and methods which represent one embodiment and example application of the invention will now be described with reference to the drawings. Variations to the systems and methods which represent other embodiments will also be described.

For purposes of illustration, some embodiments will be described in the context of a distributed file system. The inventors contemplate that the present invention is not limited by the type of environment in which the systems and methods are used, and that the systems and methods may be used in other environments, such as, for example, the Internet, the World Wide Web, a private network for a hospital, a broadcast network for a government agency, an internal network of a corporate enterprise, an intranet, a local area network, a wide area network, and so forth. The figures and descriptions, however, relate to an embodiment of the invention wherein the environment is that of distributed file systems. It is also recognized that in other embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth in order to illustrate, and not to limit, the invention. The scope of the invention is defined by the appended claims.

I. Overview

Rather than sequentially traversing a file tree searching for a particular attribute during a scan, a distributed file system, according to one embodiment, commands a plurality of network nodes to search their respective metadata for the particular attribute. The metadata includes, for example, attributes and locations of file content data blocks, metadata data blocks, and protection data blocks (e.g., parity data blocks and mirrored data blocks). Thus, two or more nodes in the network can search for files having the particular attribute at the same time.

In one embodiment, when a node finds metadata corresponding to the selected attribute, the node provides a unique identifier for a corresponding metadata data structure to the distributed file system. The metadata data structure includes, among other information, the location of or pointers to file content data blocks, metadata data blocks, and protection data blocks for corresponding files and directories. The distributed file system can then use the identified metadata data structure to perform one or more operations on the files or directories. For example, the distributed file system can read an identified file, write to an identified file, copy an identified file or directory, move an identified file to another directory, delete an identified file or directory, create a new directory, update the metadata corresponding to an identified file or directory, recover lost or missing data, and/or restripe files across the distributed file system. In other embodiments, these or other file system operations can be performed by the node or nodes that find metadata corresponding to the selected attribute.

In one embodiment, the distributed file system commands the nodes to search for metadata data structures having location information corresponding to a failed device on the network. The metadata data structures identified in the search may then be used to reconstruct lost data that was stored on the failed device.

In the following description, reference is made to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific embodiments or processes in which the invention may be practiced. Where possible, the same reference numbers are used throughout the drawings to refer to the same or like components. In some instances, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. The present disclosure, however, may be practiced without the specific details or with certain alternative equivalent components and methods to those described herein. In other instances, well-known components and methods have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.

II. Distributed File System

FIG. 1 is an exemplary block diagram of a network 100 according to one embodiment of the invention. The network 100 comprises a plurality of nodes 102, 104, 106, 108, 110, 112 configured to communicate with each other through a communication medium 114. The communication medium 114 comprises, for example, the Internet or other global network, an intranet, a wide area network (WAN), a local area network (LAN), a high-speed network medium such as Infiniband, dedicated communication lines, telephone networks, wireless data transmission systems, two-way cable systems or customized computer interconnections including computers and network devices such as servers, routers, switches, memory storage units, or the like.

In one embodiment, at least one of the nodes 102, 104, 106, 108, 110, 112 comprises a conventional computer or any device capable of communicating with the network 100, including, for example, a computer workstation, a LAN, a kiosk, a point-of-sale device, a personal digital assistant, an interactive wireless communication device, an interactive television, a transponder, or the like. The nodes 102, 104, 106, 108, 110, 112 are configured to communicate with each other by, for example, transmitting messages, receiving messages, redistributing messages, executing received messages, providing responses to messages, combinations of the foregoing, or the like. In one embodiment, the nodes 102, 104, 106, 108, 110, 112 are configured to communicate RPC messages between each other over the communication medium 114 using TCP. An artisan will recognize from the disclosure herein, however, that other message or transmission protocols can be used.

In one embodiment, the network 100 comprises a distributed file system as described in U.S. patent application Ser. No. 10/007,003, entitled “System and Method for Providing a Distributed File System Utilizing Metadata to Track Information About Data Stored Throughout the System,” filed Nov. 9, 2001, which claims priority to application Ser. No. 60/309,803, filed Aug. 3, 2001, and U.S. patent application Ser. No. 10/714,326, filed Nov. 14, 2003, which claims priority to application Ser. No. 60/426,464, filed Nov. 14, 2002, all of which are hereby incorporated by reference herein in their entirety. For example, the network 100 may comprise an intelligent distributed file system that enables the storing of file data among a set of smart storage units which are accessed as a single file system and utilizes a metadata data structure to track and manage detailed information about each file. In one embodiment, individual files in a file system are assigned a unique identification number that acts as a pointer to where the system can find information about the file. Directories (and subdirectories) are files that list the name and unique identification number of files and subdirectories within the directory. Thus, directories are also assigned unique identification numbers that reference where the system can find information about the directory.

In addition, the distributed file system may be configured to write data blocks or restripe files distributed among a set of smart storage units in the distributed file system wherein data is protected and recoverable if a system failure occurs.

In one embodiment, at least some of the nodes 102, 104, 106, 108, 110, 112 include one or more memory devices for storing file content data, metadata, parity data, directory and subdirectory data, and other system information. For example, as shown in FIG. 1, the node 102 includes device A, the node 106 includes device B, the node 108 includes devices C and D, the node 110 includes devices E, F, and G, and the node 112 includes device H. Advantageously, the file content data, metadata, parity data, and directory data (including, for example, subdirectory data) are distributed among at least a portion of the devices A-H such that information will not be permanently lost if one of the nodes 102, 104, 106, 108, 110, 112 and/or devices A-H fails. For example, the file content data, metadata, parity data, and/or directory data may be mirrored on two or more devices A-H or protected using a parity scheme (e.g., 2+1, 3+1, or the like).

A. Metadata

Metadata data structures include, for example, the device and block locations of the file's data blocks to permit different levels of replication and/or redundancy within a single file system, to facilitate the change of redundancy parameters, to provide high-level protection for metadata distributed throughout the network 100, and to replicate and move data in real-time. Metadata for a file may include, for example, an identifier for the file, the location of or pointer to the file's data blocks as well as the type of protection for each file, or each block of the file, the location of the file's protection blocks (e.g., parity data, or mirrored data). Metadata for a directory may include, for example, an identifier for the directory, a listing of the files and subdirectories of the directory as well as the identifier for each of the files and subdirectories, as well as the type of protection for each file and subdirectory. In other embodiments, the metadata may also include the location of the directory's protection blocks (e.g., parity data, or mirrored data). In one embodiment, the metadata data structures are stored in the distributed file system.

B. Attributes

In one embodiment, the metadata includes attribute information corresponding to files and directories stored on the network 100. The attribute information may include, for example, file size, file name, file type, file extension, file creation time (e.g., time and date), file access time (e.g., time and date), file modification date (e.g., time and date), file version, file permission, file parity scheme, file location, combinations of the foregoing, or the like. The file location may include, for example, information useful for accessing the physical location in the network of content data blocks, metadata data blocks, parity data blocks, mirrored data blocks, combinations of the foregoing, or the like. The location information may include, for example, a node ID, a device ID, and an address offset, though other location information may be used.
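As a minimal sketch only (these record and field names are illustrative assumptions, not taken from the patent), the attribute and location information described above could be represented roughly as follows:

from dataclasses import dataclass, field
from typing import List

@dataclass
class BlockLocation:
    # Location of a single block: node, device, and offset within the device.
    node_id: int
    device_id: int
    address_offset: int

@dataclass
class FileAttributes:
    # Attribute information that may be kept in a file's metadata.
    name: str
    size: int
    file_type: str
    extension: str
    creation_time: float
    access_time: float
    modification_time: float
    version: int
    permissions: int
    parity_scheme: str                                            # e.g., "2+1" or "3+1"
    content_locations: List[BlockLocation] = field(default_factory=list)
    parity_locations: List[BlockLocation] = field(default_factory=list)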

C. Exemplary Metadata File Tree

Since the metadata includes location information for files and directories stored on the network 100, the distributed file system according to one embodiment uses a file tree comprising metadata data structures. For example, FIG. 2A illustrates an exemplary file tree 200 including metadata data structures referred to herein as “inodes.” In this example, the inodes are protected against failures by mirroring the inodes such that the inodes are stored on two devices. Thus, if a device fails, an inode on the failed device can be recovered by reading a copy of the inode from a non-failing device. An artisan will recognize that the inodes can be mirrored on more than two devices, can be protected using a parity scheme, or can be protected using a combination of methods. In one embodiment, the inodes are protected using the same level of protection as the data to which the inodes point.

As illustrated in the example in FIG. 2A, the file tree 200 includes an inode 202 corresponding to a “/” directory (e.g., root directory). Referring to FIG. 1, the inode 202 for the root directory is mirrored on device D and device H. The inode 202 for the root directory points to an inode 204 (stored on devices A and G) for a directory named “dir1,” an inode 206 (stored on devices C and F) for a directory named “dir2,” and an inode 208 (stored on devices B and E) for a directory named “dir3.”

The inode 206 for directory dir2 points to an inode 210 (stored on devices D and G) for a directory named “dir4,” an inode 212 (stored on devices B and C) for a directory named “dir5,” an inode 214 (stored on devices A and E) for a directory named “dir6,” and an inode 216 (stored on devices A and B) for a file named “file1.zzz.” The inode 208 for directory dir3 points to an inode 218 (stored on devices A and F) for a file named “file2.xyz.” The inode 214 for the directory dir6 points to an inode 220 (stored on devices A and C) for a file named “file3.xxx,” an inode 222 (stored on devices B and C) for a file named “file4.xyz,” and an inode 224 (stored on devices D and G) for a file named “file5.xyz.” An artisan will recognize that the inodes shown in FIG. 2A are for illustrative purposes and that the file tree 200 can include any number of inodes corresponding to files and/or directories and a variety of protection methods may be used.

FIG. 2B illustrates one embodiment of an inode map 230 and an inode storage 240 on Device A. The exemplary inode map 230 has multiple entries where the first entry 0 corresponds to inode storage entry 0, the second entry 1 corresponds to inode storage entry 1, and so forth. In the exemplary inode map 230, a 1 indicates that an inode is stored in the corresponding entry of the inode storage 240, and a 0 indicates that no inode is stored in the corresponding entry of the inode storage 240. For example, inode map 230 entry 0 is 1, signifying that there is an inode stored in inode storage 240 entry 0. The inode 204 for dir1 is stored in inode storage 240 entry 0. Inode map 230 entry 1 is 0, signifying that there is no inode stored in inode storage 240 entry 1. The exemplary inode storage 240 of Device A stores inodes 204, 214, 216, 218, 220 in accordance with FIG. 2A.
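The following sketch is a hypothetical illustration of how such an inode map might be consulted. The bit pattern beyond the two entries discussed above, and all names, are invented for illustration:

inode_map = [1, 0, 1, 1, 1, 1, 0, 0]        # 1 = an inode is stored at this storage entry
inode_storage = {                            # entries on Device A that actually hold inodes
    0: "inode 204 (dir1)",
    2: "inode 214 (dir6)",
    3: "inode 216 (file1.zzz)",
    4: "inode 218 (file2.xyz)",
    5: "inode 220 (file3.xxx)",
}

def stored_inodes(inode_map, inode_storage):
    # Visit only the storage entries whose map bit is set, skipping empty slots.
    for entry, bit in enumerate(inode_map):
        if bit == 1:
            yield inode_storage[entry]

print(list(stored_inodes(inode_map, inode_storage)))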

D. Exemplary Metadata Data Structures

In one embodiment, the metadata data structure (e.g., inode) includes attributes and a list of devices having data to which the particular inode points. For example, FIG. 3 illustrates an exemplary metadata data structure of the inode 202 for the root directory. The inode 202 for the root directory includes attribute information 302 corresponding to the inode 202. As discussed above, the attribute information may include, for example, size, name, type (e.g., directory and/or root directory), creation time, access time, modification time, version, permission, parity scheme, location, combinations of the foregoing, and/or other information related to the root directory. In one embodiment, the nodes 102, 104, 106, 108, 110, 112 in the network 100 include predefined location information for accessing the inode 202 of the root directory on device D and/or device H. In other embodiments, the exemplary inode 202 includes location information (not shown) pointing to its location on devices D and H. The inode 202 for the root directory also includes a list of devices used 304. As shown in FIG. 2A, since the inodes 204, 206, 208 for directories dir1, dir2, and dir3 are stored on devices A, B, C, E, F and G, these devices are included in the list of devices used 304.

The inode 202 for the root directory also includes location information 306 corresponding to the directories dir1, dir2, and dir3. As shown in FIG. 3, in one embodiment, the location information 306 includes unique identification numbers (e.g., logical inode numbers) used to match the directories dir1, dir2, and dir3 to the physical storage locations of the inodes 204, 206, 208, respectively. In certain such embodiments, the distributed file system includes a data structure that tracks the unique identification numbers and physical addresses (e.g., identifying the node, device, and block offset) of the inodes 204, 206, 208 on the distributed file system. As illustrated, unique identifiers are provided for each of the directories dir1, dir2, dir3 such that a data structure may be configured, for example, to match the unique identifiers to physical addresses where the inodes 204, 206, 208, and their mirrored copies are stored. In other embodiments, the location information 306 includes the physical addresses of the inodes 204, 206, 208 or the location information includes two unique identifiers for each of the directories dir1, dir2, and dir3 because the directories are mirrored on two devices. Various data structures that may be used to track the identification numbers are further discussed in U.S. patent application Ser. No. 11/255,818, titled “SYSTEMS AND METHODS FOR MAINTAINING DISTRIBUTED DATA,” and U.S. patent application Ser. No. 11/255,337, titled “SYSTEMS AND METHODS FOR ACCESSING AND UPDATING DISTRIBUTED DATA,” each referenced and incorporated by reference above.
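As an illustrative sketch under the same assumptions (the LIN values, physical addresses, dictionary shapes, and helper name below are invented for illustration, not part of the patent), the root directory inode and the LIN-to-location data structure might look like this:

# Root directory inode: attributes, list of devices used, and directory
# entries mapping names to logical inode numbers (LINs).
root_inode = {
    "attributes": {"name": "/", "type": "directory"},
    "devices_used": ["A", "B", "C", "E", "F", "G"],
    "entries": {"dir1": 4001, "dir2": 4002, "dir3": 4003},
}

# Separate data structure mapping each LIN to the physical locations
# (node, device, block offset) of the inode and its mirrored copy.
lin_to_physical = {
    4001: [("node 102", "device A", 16), ("node 110", "device G", 36)],
    4002: [("node 108", "device C", 8),  ("node 110", "device F", 48)],
    4003: [("node 106", "device B", 28), ("node 110", "device E", 68)],
}

def resolve(name, directory_inode, lin_map):
    # Look up a directory entry's LIN and return the physical copies of its inode.
    lin = directory_inode["entries"][name]
    return lin_map[lin]

print(resolve("dir2", root_inode, lin_to_physical))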

FIG. 4 illustrates an exemplary metadata data structure of the inode 206 for the directory dir2. As discussed above in relation to the inode 202 for the root directory, the inode 206 for the directory dir2 includes attribute information 402 corresponding to the inode 206 and a list of devices used 404. As shown in FIG. 2A, the inode 216 for the file file1.zzz is stored on devices A and B. Further, the inodes 210, 212, 214 for the directories dir4, dir5, and dir6 are stored on devices A, B, C, D, E, and G. Thus, these devices are included in the list of devices used 404. The inode 206 for the directory dir2 also includes location information 406 corresponding to the directories dir4, dir5, dir6 and location information 408 corresponding to the file file1.zzz. As discussed above, the location information 406 points to the inodes 210, 212, 214 and the location information 408 points to the inode 216. Further, as discussed above, unique identifiers are provided for each of the directories dir4, dir5, dir6 and the file file1.zzz; however, other embodiments, as discussed above, may be used.

FIG. 5 illustrates an exemplary metadata data structure of the inode 214 for the directory dir6. As discussed above in relation to the inode 202 for the root directory, the inode 214 for the directory dir6 includes attribute information 502 corresponding to the inode 214 and a list of devices used 504. As shown in FIG. 2A, the inodes 220, 222, 224 for the files are stored on devices A, B, C, D, and G. Thus, these devices are included in the list of devices used 504. The inode 214 for the directory dir6 also includes location information 506 corresponding to the files file3.xxx, file4.xyz, and file5.xyz. As discussed above, the location information 506 points to the inodes 220, 222, 224. Further, as discussed above, unique identifiers are provided for each of the files file3.xxx, file4.xyz, and file5.xyz; however, other embodiments, as discussed above, may be used.

FIG. 6 illustrates an exemplary metadata data structure of the inode 220 for the file file3.xxx. The inode 220 for the file file3.xxx includes attribute information 602 corresponding to the inode 220. For example, the attribute information 602 may include the size of file3.xxx, the name of file3.xxx (e.g., file3), the file type, the file extension (e.g., .xxx), the file creation time, the file access time, the file modification time, the file version, the file permissions, the file parity scheme, the file location, combinations of the foregoing, or the like.

The inode 220 also includes a list of devices used 604. In this example, content data blocks and parity data blocks corresponding to the file file3.xxx are striped across devices B, C, D, and E using a 2+1 parity scheme. Thus, for two blocks of content data stored on the devices B, C, D, and E, a parity data block is also stored. The parity groups (e.g., two content data blocks and one parity data block) are distributed such that each block in the parity group is stored on a different device. As shown in FIG. 6, for example, a first parity group may include a first content data block (block0) stored on device B, a second content data block (block1) stored on device C, and a first parity data block (parity0) stored on device D. Similarly, a second parity group may include a third content data block (block2) stored on device E, a fourth content data block (block3) stored on device B and a second parity data block (parity1) stored on device C.

The inode 220 for the file file3.xxx also includes location information 606 corresponding to the content data blocks (e.g., block0, block1, block2, and block3) and parity data blocks (e.g., parity0 and parity1). As shown in FIG. 6, in one embodiment, the location information 606 includes unique identification numbers (e.g., logical block numbers) used to match the content data blocks and parity data blocks to the physical storage locations of the respective data. In certain such embodiments, the distributed file system includes a table that tracks the unique identification numbers and physical addresses (e.g., identifying the node, device, and block offset) on the distributed file system. In other embodiments, the location information 606 includes the physical addresses of the content data blocks and parity data blocks. In one embodiment, the inode 220 for the file file3.xxx also includes location information 608 corresponding to one or more metadata data blocks that point to other content data blocks and/or parity blocks for the file file3.xxx.
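A rough sketch of the inode 220's location information under the 2+1 layout described above follows; the dictionary shape is an assumption made for illustration, while the device assignments follow the example of FIG. 6:

file3_inode = {
    "attributes": {"name": "file3", "extension": ".xxx", "parity_scheme": "2+1"},
    "devices_used": ["B", "C", "D", "E"],
    # Each parity group holds two content data blocks and one parity data
    # block, with each block of the group stored on a different device.
    "parity_groups": [
        {"block0": "device B", "block1": "device C", "parity0": "device D"},
        {"block2": "device E", "block3": "device B", "parity1": "device C"},
    ],
}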

III. Scanning Distributed File Systems

In one embodiment, the distributed file system is configured to scan a portion or all of the files and/or directories in the distributed file system by commanding nodes to search their respective metadata for a selected attribute. As discussed in detail below, the nodes can then search their respective metadata in parallel and perform an appropriate action when metadata is found having the selected attribute.

Commanding the nodes to search their respective metadata in parallel with other nodes greatly reduces the amount of time necessary to scan the distributed file system. For example, to read a file in path /dir1/fileA.xxx of a file tree (where “/” is the top level or root directory and “xxx” is the file extension of the file named fileA in the directory named dir1), the file system reads the file identified by the root directory's predefined unique identification number, searches the root directory for the name dir1, reads the file identified by the unique identification number associated with the directory dir1, searches the dir1 directory for the name fileA.xxx, and reads the file identified by the unique identification number associated with fileA.xxx.
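The lookup sequence just described can be summarized by the following sketch; the function and field names are hypothetical, not taken from the patent:

def read_by_path(path, read_inode, root_lin):
    # Sequentially traverse the file tree: read the root directory inode,
    # search it for the next path component, read that inode, and repeat.
    inode = read_inode(root_lin)
    for name in path.strip("/").split("/"):
        lin = inode["entries"][name]     # unique identification number for the entry
        inode = read_inode(lin)
    return inode                         # inode of the final path component

# Example usage: read_by_path("/dir1/fileA.xxx", read_inode, root_lin)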

For example, referring to FIG. 2A, sequentially traversing the file tree 200 may include reading the inode 202 corresponding to the root directory to determine the names and locations of the inodes 204, 206, 208. Then, the inode 204 corresponding to the directory dir1 may be read to determine the names and locations of any subdirectories and files that the inode 204 may point to.

After sequentially stepping through the subdirectory and file paths of the directory dir1, the inode 206 corresponding to the directory dir2 may be read to determine the names and locations of the subdirectories (e.g., dir4, dir5, and dir6) and files (e.g., file1.zzz) that the inode 206 points to. This process may then be repeated for each directory and subdirectory in the distributed file system. Since content data, metadata, and parity data are spread throughout the nodes 102, 106, 108, 110, 112 in the network 100, sequentially traversing the file tree 200 requires a large number of messages to be sent between the nodes and uses valuable system resources. Thus, sequentially traversing the file tree 200 is time consuming and reduces the overall performance of the distributed file system.

However, commanding the nodes 102, 106, 108, 110, 112 to search their respective metadata in parallel, according to certain embodiments disclosed herein, reduces the number of messages sent across the network 100 and allows the nodes 102, 106, 108, 110, 112 to access their respective devices A-H sequentially.

In one embodiment, for example, one or more of the devices A-H are hard disk drives that are capable of operating faster when accessed sequentially. For example, a disk drive that yields approximately 100 kbytes/second when reading a series of data blocks from random locations on the disk drive may yield approximately 60 Mbytes/second when the data blocks are read from sequential locations on the disk drive. Thus, allowing the nodes 102, 106, 108, 110, 112 to access their respective drives sequentially, rather than traversing an overall file tree for the network 100 (which repeatedly accesses small amounts of data scattered across the devices A-H), greatly reduces the amount of time used to scan the distributed file system.

The nodes 102, 106, 108, 110, 112 may perform additional processing, but the additional work is spread across the nodes 102, 106, 108, 110, 112 and reduces overall network traffic and processing overhead. For example, in one embodiment, rather than reading all the metadata from the node 102 across the network, the node 102 searches its metadata and only the metadata satisfying the search criteria is read across the network. Thus, overall network traffic and processing overhead is reduced.

FIG. 7 is a flow chart of a process 700 for scanning files and directories in a distributed file system according to one embodiment. Beginning at a start state 708, the process 700 proceeds to block 710. In block 710, the process 700 includes distributing content data blocks, metadata blocks and protection data blocks (e.g., parity data and mirrored data) for files and directories across nodes in a network. For example, as discussed in detail above, content data blocks, metadata data blocks and protection data blocks are stored in nodes 102, 106, 108, 110, 112 in the network 100 shown in FIG. 1. The metadata data blocks include attribute information corresponding to files and directories stored on the network 100. The attribute information may include, for example, file size, file name, file type, file extension, file creation time (e.g., time and date), file access time (e.g., time and date), file modification date (e.g., time and date), file version, file permission, file parity scheme, file location, combinations of the foregoing, or the like.

From the block 710, the process 700 proceeds, in parallel, to blocks 712 and 714. In the block 714, file system operations are performed. The file system operations may include, for example, continuing to distribute data blocks for files and directories across the nodes in the network, writing files, reading files, restriping files, repairing files, updating metadata, waiting for user input, and the like. The distributed file system operations can be performed while the system waits for a command to scan and/or while the distributed file system performs a scan as discussed below.

In the block 712, the system queries whether to scan the distributed file system to identify the files and directories having a selected attribute. For example, the distributed file system or a user of the network 100 shown in FIG. 1 may want to search for files and/or directories created or modified in a certain range of dates and/or times, files that have not been accessed for a certain period of time, files that are of a certain type, files that are a certain size, files with data stored on a particular memory device (e.g., a failed memory device), files that have other particular attributes, or combinations of the foregoing. While the system performs the other file system operations in the block 714, the system continues to scan the distributed file system. In one embodiment, a scan will not be performed, for example, if a user has not instructed the distributed file system to scan, or the distributed file system has not determined that a scan is needed or desired (e.g., upon detecting that a node has failed).

If a scan is desired or needed, the process 700 proceeds to a block 716 where the distributed file system commands the nodes to search their respective metadata data blocks for a selected attribute. Advantageously, the nodes are capable of searching their metadata data blocks in parallel with one another. For example, the nodes 102, 106, 108, 110, 112 may each receive the command to search their respective metadata data blocks for the selected attribute. The nodes 102, 106, 108, 110, 112 can then execute the command as node resources become available. Thus, rather than waiting for each node 102, 106, 108, 110, 112 to scan its respective metadata data blocks one at a time, two or more of the nodes 102, 106, 108, 110, 112 that have sufficient node resources may search their respective metadata data blocks at the same time. It is recognized that the distributed file system may command a subset of the nodes to conduct the search.

In one embodiment, the metadata data blocks for a particular node are sequentially searched for the selected attribute. For example, a node may include a drive that is divided into a plurality of cylinder groups. The node may sequentially step through each cylinder group reading their respective metadata data blocks. In other embodiments, metadata data blocks within a particular node are also searched in parallel. For example, the node 108 includes devices C and D that can be searched for the selected attribute at the same time. The following exemplary pseudocode illustrates one embodiment of accessing metadata data blocks (e.g., stored in data structures referred to herein as inodes) in parallel:

for all devices (in parallel):
    for each cylinder group:
        for each inode with bit in map = 1:
            read inode.
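A minimal Python sketch of the same idea follows, assuming each device exposes its cylinder groups as dictionaries containing an inode map and an inode storage; the thread-based parallelism and all names are illustrative assumptions rather than the patent's implementation:

from concurrent.futures import ThreadPoolExecutor

def scan_device(device, has_attribute):
    # Step sequentially through each cylinder group on one device, reading
    # only the inodes whose bit is set in that group's inode map.
    matches = []
    for group in device["cylinder_groups"]:
        for entry, bit in enumerate(group["inode_map"]):
            if bit == 1:
                inode = group["inode_storage"][entry]
                if has_attribute(inode):
                    matches.append(inode["lin"])
    return matches

def scan_all_devices(devices, has_attribute):
    # Scan every device in parallel; each device is still read sequentially.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda device: scan_device(device, has_attribute), devices)
    return [lin for per_device in results for lin in per_device]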

In a block 718, the distributed file system commands the nodes to perform an action in response to identifying the selected attribute in their respective metadata and proceeds to an end state 720. An artisan will recognize that the command to search for the selected attribute and the command to perform an action in response to identifying the selected attribute can be sent to the nodes using a single message (e.g., sent to the nodes 102, 106, 108, 110, 112) or using two separate messages. The action may include, for example, writing data, reading data, copying data, backing up data, executing a set of instructions, and/or sending a message to one or more of the other nodes in the network. For example, the node 102 may find one or more of its inodes that point to files or directories created within a certain time range. In response, the node 102 may read the files or directories and write a backup copy of the files or directories.

In one embodiment, the action in response to identifying the attribute includes sending a list of unique identification numbers (e.g., logical inode number or “LIN”) for inodes identified as including the selected attribute to one or more other nodes. For example, the nodes 102, 106, 108, 110, 112 may send a list of LINs for their respective inodes with the selected attribute to one of the other nodes in the network 100 for processing. The node that receives the LINs may or may not have any devices. For example, the node 104 may be selected to receive the LINs from the other nodes 102, 106, 108, 110, 112 and to perform a function using the LINs.

After receiving the LINs from the other nodes 102, 106, 108, 110, 112, the node 104 reads the inodes identified by the LINs for the location of or pointers to content data blocks, metadata data blocks, and/or protection data blocks (e.g., parity data blocks and mirrored data blocks). In certain such embodiments, the node 104 also checks the identified inodes to verify that they still include the selected attribute. For example, the selected attribute searched for may be files and directories that have not been modified for more than 100 days and the node 104 may be configured to delete such files and directories. However, between the time that the node 104 receives the list of LINs and the time that the node 104 reads a particular identified inode, the particular identified inode may be updated to indicate that its corresponding file or directory has recently been modified. The node 104 then deletes only files and directories with identified inodes that still indicate that they have not been modified for more than 100 days.
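A short sketch of that re-check follows, assuming hypothetical helper callables read_inode and delete_file and a modification_time attribute, all invented for illustration:

import time

def delete_stale_files(lins, read_inode, delete_file, max_age_days=100):
    # Re-read each identified inode before acting, because the file may have
    # been modified between the search and this step.
    cutoff = time.time() - max_age_days * 24 * 60 * 60
    for lin in lins:
        inode = read_inode(lin)
        if inode["modification_time"] < cutoff:   # still unmodified for more than 100 days
            delete_file(lin)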

While process 700 illustrates an embodiment for scanning files and directories in a distributed file system such that all devices are scanned in parallel, it is recognized that the process 700 may be used on a subset of the devices. For example, one or more devices of the distributed file system may be offline. In addition, the distributed file system may determine that the action to be performed references only a subset of the devices such that only those devices are scanned, and so forth.

A. Example Scan Transactions

High-level exemplary transactions are provided below that illustrate scanning a distributed file system according to certain embodiments. The exemplary transactions include a data backup transaction and a failure recovery transaction. An artisan will recognize from the disclosure herein that many other transactions are possible.

1. Example Data Backup Transaction

The following example illustrates how backup copies of information stored on the network 100 can be created by scanning the distributed file system to find files and directories created or modified during a certain time period (e.g., since the last backup copy was made). In this example, the node 104 is selected to coordinate the backup transaction on the distributed file system. An artisan will recognize, however, that any of the nodes can be selected to coordinate the backup transaction.

The node 104 begins the backup transaction by sending a command to the nodes 102, 106, 108, 110, 112 to search their respective metadata so as to identify inodes that point to files and directories created or modified within a certain time range. As discussed above, the exemplary nodes 102, 106, 108, 110, 112 are capable of searching their metadata in parallel with one another. After searching, the nodes 102, 106, 108, 110, 112 each send a list of LINs to the node 104 to identify their respective inodes that point to files or directories created or modified within the time range. The node 104 then accesses the identified inodes and reads locations of or pointers to content data blocks, metadata blocks, and/or protection data blocks corresponding to the files or directories created or modified within the time range. The node 104 then writes the content data blocks, metadata blocks, and/or protection data blocks to a backup location.
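A condensed sketch of this transaction from the coordinating node's point of view is shown below; all helper callables (search_metadata, read_inode, read_blocks, write_backup) are assumptions made for illustration and are not defined by the patent:

def backup_transaction(nodes, coordinator, time_range, backup_location):
    # Ask every node to search its metadata; the nodes may search in parallel.
    lists_of_lins = [node.search_metadata(created_or_modified_in=time_range)
                     for node in nodes]
    # For each identified inode, read the referenced blocks and back them up.
    for lins in lists_of_lins:
        for lin in lins:
            inode = coordinator.read_inode(lin)
            blocks = coordinator.read_blocks(inode)
            coordinator.write_backup(backup_location, lin, blocks)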

2. Example Failure Recovery Transaction

FIG. 8 is a flow chart of a process 800 for recovering from a failure in a distributed file system according to one embodiment. Failures may include, for example, a loss of communication between two or more nodes in a network or the failure of one or more memory devices in a node. For illustrative purposes, device B in the node 106 shown in FIG. 1 is assumed to have failed. However, an artisan will recognize that the process 800 can be used with other types of failures or non-failures. For example, the process 800 can be modified slightly to replace or upgrade a node or memory device that has not failed.

Beginning at a start state 808, the process 800 proceeds to block 810. In block 810, the process 800 detects a failed device in a distributed file system. For example, in one embodiment, the nodes 102, 106, 108, 110, 112 include a list of their own devices and share this list with the other nodes. When a device on a node fails, the node notifies the other nodes of the failure. For example, when device B fails, the node 106 sends a message to the nodes 102, 104, 108, 110, 112 to notify them of the failure.

In a block 812, the process 800 includes commanding the nodes to search their respective metadata for location information corresponding to the failed device. In one embodiment, the message notifying the nodes 102, 104, 108, 110, 112 of the failure of the device B includes the command to search for metadata that identifies the location of content data blocks, metadata data blocks, and protection data blocks (e.g., parity data blocks and mirrored data blocks) that are stored on the failed device B.

After receiving the command to search metadata for location information corresponding to the failed device B, the nodes 102, 108, 110, 112 begin searching for inodes that include the failed device B in their list of devices used. For example, as discussed above, in one embodiment the inode 202 for the root directory is stored on devices D and H and includes the location of or pointers to the inodes 204, 206, 208 for the directories dir1, dir2 and dir3, respectively (see FIG. 2A). Since a copy of the inode 208 for the directory dir3 is stored on the failed device B, the inode 202 for the root directory includes device B in its list of devices used 304 (see FIG. 3). Thus, the nodes 108 (for device D) and 112 (for device H) will include the LIN for the inode 202 in their respective lists of LINs that meet the search criteria. The following exemplary pseudocode illustrates one embodiment of generating a list of LINs for inodes that meet the search criteria:

for each allocated inode:
    read allocated inode;
    if needs_restripe (e.g., a portion of a file, a directory or subdirectory, or a copy of the inode is located on the failed device B):
        return LIN.

Similarly, the nodes 108 (for device C) and 110 (for device F) will include the LIN for the inode 206 in their respective lists of LINs that meet the search criteria, the nodes 102 (for device A) and 110 (for device E) will include the LIN for the inode 214 in their respective lists of LINs that meet the search criteria, and the nodes 102 (for device A) and 108 (for device C) will include the LIN for the inode 220 in their respective lists of LINs that meet the search criteria. While this example returns the LIN of the inode, it is recognized that other information may be returned, such as, for example, the LIN for inode 208. In other embodiments, rather than returning any identifier, the process may initiate reconstruction of the data or other related actions.

In other embodiments, the list of devices used for a particular inode includes one or more devices on which copies of the particular inode are stored. For example, FIG. 2A shows that the inode 208 is stored on devices B and E. The copy of the inode 208 on device E will list device B as used. Also, the node 110 (for device E) will include the LIN for the inode 208 in its list of LINs that meet the search criteria. Similarly, the node 102 (for device A) will include the LIN for the inode 216 in its list of LINs that meet the search criteria, and the node 108 (for device C) will include the LINs for inodes 212 and 222.
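The per-node test for whether an inode meets this search criterion can be sketched as a membership check on the list of devices used; the function name and the sample LIN value below are illustrative assumptions:

def lins_referencing_device(inodes, failed_device):
    # An inode needs restripe if the failed device appears in its list of
    # devices used (content, parity, or a mirrored copy of the inode itself).
    return [inode["lin"] for inode in inodes
            if failed_device in inode["devices_used"]]

# Example: the root directory inode lists device B (a copy of dir3's inode is
# stored there), so its LIN is returned when device B fails.
root_inode = {"lin": 4000, "devices_used": ["A", "B", "C", "E", "F", "G"]}
print(lins_referencing_device([root_inode], "B"))   # -> [4000]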

As discussed above, the nodes 102, 108, 110, 112 are capable of searching their respective metadata in parallel with one another. In one embodiment, the nodes 102, 108, 110, 112 are also configured to execute the command to search their respective metadata so as to reduce or avoid interference with other processes being performed by the node. The node 102, for example, may search a portion of its metadata, stop searching for a period of time to allow other processes to be performed (e.g., a user initiated read or write operation), and search another portion of its metadata. The node 102 may continue searching as the node's resources become available.
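
As an illustrative sketch only, the following Python generator shows one way this interleaving might be expressed: the metadata is scanned in small batches, and control is returned between batches so that higher-priority work can run. The record format, batch size, and iteration order are assumptions of the sketch:

    def incremental_scan(inode_records, failed_device, batch_size=64):
        """Scan (lin, devices_used) records in batches, yielding between
        batches so the node can service other requests in the meantime."""
        matches = []
        for count, (lin, devices_used) in enumerate(inode_records, start=1):
            if failed_device in devices_used:
                matches.append(lin)
            if count % batch_size == 0:
                yield matches      # pause point: other node processes may run here
                matches = []
        if matches:
            yield matches

    # Example: drain the scan in pieces, doing other work between batches.
    records = [(202, {"B", "D", "H"}), (204, {"C", "G"}), (220, {"A", "B", "C"})]
    found = [lin for batch in incremental_scan(records, "B", batch_size=2)
             for lin in batch]
    print(found)   # -> [202, 220]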

In one embodiment, the command to search the metadata includes priority information and the nodes 102, 108, 110, 112 are configured to determine when to execute the command in relation to other processes that the nodes 102, 108, 110, 112 are executing. For example, the node 102 may receive the command to search its metadata for the location information as part of the overall failure recovery transaction and it may also receive a command initiated by a user to read certain content data blocks. The user initiated command may have a higher priority than the command to search the metadata. Thus, the node 102 will execute the user initiated command before searching for or completing the search of its metadata for the location information corresponding to the failed device B.
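
Purely as an illustrative sketch, the priority carried with the command could feed an ordinary priority queue on each node, so that a user-initiated read runs before the metadata scan. The numeric priorities and the task names below are assumptions:

    import heapq

    def run_prioritized(tasks):
        """Execute (priority, name, fn) tuples, lowest priority number first."""
        heapq.heapify(tasks)
        results = []
        while tasks:
            _, name, fn = heapq.heappop(tasks)
            results.append((name, fn()))
        return results

    # Example: the user read (priority 0) runs before the scan command (priority 1).
    tasks = [
        (1, "scan-metadata-for-device-B", lambda: "scan complete"),
        (0, "user-read-request", lambda: "read complete"),
    ]
    print(run_prioritized(tasks))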

In one embodiment, the nodes 102, 108, 110, 112 are configured to read their respective inodes found during the search and reconstruct the lost data (as discussed below) that the inodes point to on the failed device B. In the embodiment shown in FIG. 8, however, the nodes 102, 108, 110, 112 are configured to send their respective lists of LINs that meet the search criteria to one or more of the nodes 102, 104, 106, 108, 110, 112 that has the responsibility of reconstructing the data and restriping files across the distributed file system.

In a block 814, the process 800 includes receiving responses from the nodes that identify metadata data structures corresponding to information stored on the failed device. For example, the nodes 102, 108, 110 may send their lists of LINs to the node 112. In a block 816, the process 800 includes accessing the identified metadata data structures to reconstruct the lost information stored on the failed device and proceeds to an end state 818. For example, after receiving the lists of LINs from the nodes 102, 108, 110, the node 112 may use the received LINs and any LINs that it has identified to read the corresponding inodes to determine the locations of content data blocks, metadata blocks and protection data blocks corresponding to the lost information on the failed device B.
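
A hedged sketch of this aggregation step is shown below: the node merges the lists received from the other nodes with its own findings, dropping duplicates (an inode mirrored on two devices may be reported by two nodes), before reading the inodes back. The dictionary-of-lists shape is an assumption of the sketch:

    def merge_lin_lists(own_lins, received):
        """Combine this node's LINs with the lists received from other nodes."""
        merged = set(own_lins)
        for sender, lins in received.items():
            merged.update(lins)
        return sorted(merged)

    # Example: node 112 combines its own list with lists from nodes 102, 108, 110.
    received = {102: [214, 220], 108: [202, 206, 220], 110: [206, 214]}
    print(merge_lin_lists([202], received))   # -> [202, 206, 214, 220]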

For example, as discussed above, the node 112 in one embodiment may receive a list of LINs from the node 108 that includes the LIN for the inode 202, which the node 112 also identified in its own search. The node 112 then reads the inode 202 from either the device D or the device H to determine that it includes pointers to the inode 208 for the directory dir3 stored on the failed device B (see FIG. 3). From the inode 202, the node 112 also determines that a mirrored copy of the inode 208 is stored on device E. Thus, the node 112 can restore the protection scheme of the inode 208 (e.g., maintaining a mirrored copy on another device) by reading the inode 208 from the device E and writing a copy of the inode 208 to one of the other devices A, C, D, F, G, H.
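
For illustration, restoring the mirror protection of an inode can be sketched as reading the surviving copy and writing a new copy to a healthy device that does not already hold one. The dictionary used to stand in for the devices and the simple placement rule are assumptions of the sketch:

    def restore_mirror(inode_id, surviving_device, devices, failed_device):
        """Re-establish a two-way mirror by copying the inode from its
        surviving device to some other healthy device."""
        data = devices[surviving_device][inode_id]          # read the surviving copy
        candidates = [d for d in devices
                      if d not in (surviving_device, failed_device)
                      and inode_id not in devices[d]]
        target = candidates[0]                              # simple placement rule
        devices[target][inode_id] = data                    # write the new copy
        return target

    # Example: inode 208 lost its copy on failed device "B"; copy it from "E".
    devices = {d: {} for d in "ACDEFGH"}
    devices["E"][208] = b"inode-208-contents"
    print(restore_mirror(208, "E", devices, failed_device="B"))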

As another example, the node 112 also receives lists of LINs from the nodes 102 and 108 that include the LIN for the inode 220. The node 112 then reads the inode 220 from either the device A or the device C for the location of or pointers to content data blocks (block0 and block3) stored on the failed device B (see FIG. 6). In certain embodiments, the node 112 also verifies that the file3.xxx has not already been restriped such that block0 and block3 have already been recovered and stored on another device. For example, between the time that the node 112 receives the LIN for the inode 220 and the time that the node 112 reads the inode 220, the distributed file system may have received another command to restripe the file3.xxx.

As discussed above, the file3.xxx uses a 2+1 parity scheme in which a first parity group includes block0, block1, and parity0 and a second parity group includes block2, block3, and parity1. If needed or desired, the node 112 can recover the block0 information that was lost on the failed device B by using the pointers in the inode 220 to read the block1 content data block and the parity0 parity data block, and XORing block1 and parity0. Similarly, the node 112 can recover the block3 information that was lost on the failed device B by using the pointers in the inode 220 to read the block2 content data block and the parity1 parity data block, and XORing block2 and parity1. In one embodiment, the node 112 writes the recovered block0 and block3 to the remaining devices A, C, D, E, F, G, H. In another embodiment, the node 112 can then change the protection scheme, if needed or desired, and restripe the file file3.xxx across the remaining devices A, C, D, E, F, G, H.
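
Because the parity block in a 2+1 parity group is the bitwise XOR of the two content blocks in the group, the lost block can be recomputed by XORing the surviving content block with the parity block. The following minimal sketch uses made-up block contents for illustration:

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        """Bitwise XOR of two equal-length blocks."""
        return bytes(x ^ y for x, y in zip(a, b))

    # 2+1 parity group: parity0 = block0 XOR block1.
    block0 = b"\x01\x02\x03\x04"
    block1 = b"\x10\x20\x30\x40"
    parity0 = xor_blocks(block0, block1)

    # Device B fails and block0 is lost; recover it from block1 and parity0.
    recovered = xor_blocks(block1, parity0)
    assert recovered == block0
    print(recovered)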

Thus, the distributed file system can quickly find metadata for information that was stored on the failed device B. Rather than sequentially traversing the entire file tree 200, the distributed file system searches the metadata of the remaining nodes 102, 108, 110, 112 in parallel for location information corresponding to the failed device B. This allows the distributed file system to quickly recover the lost data and restripe any files, if needed or desired.

While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Anderson, Robert J., Godman, Peter J., Mikesell, Paul A., Schack, Darren P., Dire, Nathan E.
