A technique for managing file systems assigns groups of related files in a file system to respective version sets. Each version set includes all files of a file system that are related to one another by one or more snapshot operations. The technique provides a version set database, which stores, in connection with each version set, an identifier of each file that belongs to the respective version set. In an example, file system operations that require information about block sharing can perform lookup operations on the version set database to narrow the scope of files that are candidates for block sharing to those of a particular version set.
1. A computer-implemented method for managing file systems, the method comprising:
storing a file system on storage blocks backed by a set of storage devices, the file system including a set of files, each of the set of files belonging to one of multiple version sets, each version set including a distinct subset of the set of files that form one or more nodes of a respective snapshot tree of files, at least one snapshot tree of files including multiple files that share among them one or more of the storage blocks;
organizing the version sets in a version set database, the version set database storing, in connection with each of the version sets, an identifier of each file that belongs to the respective version set; and
in response to a processor encountering an instruction to execute a file system operation involving a specified version set, performing a lookup operation on the version set database to obtain a list of files of the file system that belong to the specified version set,
wherein the set of files of the file system include file data backed by a set of the storage blocks,
wherein the file system includes per-block metadata for each of the set of the storage blocks, and
wherein the method further comprises storing version-set-identifying information in the per-block metadata for each of the set of the storage blocks, the version-set-identifying information specifying a version set to which the respective block is allocated.
15. A computer-program product including a set of non-transitory computer-readable media having instructions which, when executed by a set of processors of a computerized apparatus, cause the set of processors to perform a method for managing file systems, the method comprising:
storing a file system on storage blocks backed by a set of storage devices, the file system including a set of files, each of the set of files belonging to one of multiple version sets, each version set including a distinct subset of the set of files that form one or more nodes of a respective snapshot tree of files, at least one snapshot tree of files including multiple files that share among them one or more of the storage blocks;
organizing the version sets in a version set database, the version set database storing, in connection with each of the version sets, an identifier of each file that belongs to the respective version set; and
in response to a processor encountering an instruction to execute a file system operation involving a specified version set, performing a lookup operation on the version set database to obtain a list of files of the file system that belong to the specified version set,
wherein organizing the version sets in the version set database includes realizing the version set database in a directory structure of the file system, the directory structure including a set of directories, each of the set of directories assigned to a respective version set and grouping together files of the respective version set.
14. A computerized apparatus, comprising:
a set of processors; and
memory, coupled to the set of processors, the memory storing executable instructions, which when executed by the set of processors cause the set of processors to:
store a file system on storage blocks backed by a set of storage devices, the file system including a set of files, each of the set of files belonging to one of multiple version sets, each version set including a distinct subset of the set of files that form one or more nodes of a respective snapshot tree of files, at least one snapshot tree of files including multiple files that share among them one or more of the storage blocks;
organize the version sets in a version set database, the version set database storing, in connection with each of the version sets, an identifier of each file that belongs to the respective version set; and
in response to a processor encountering an instruction to execute a file system operation involving a specified version set, perform a lookup operation on the version set database to obtain a list of files of the file system that belong to the specified version set,
wherein the set of files of the file system include file data backed by a set of the storage blocks,
wherein the file system includes per-block metadata for each of the set of the storage blocks, and
wherein the executable instructions further cause the set of processors to store version-set-identifying information in the per-block metadata for each of the set of the storage blocks, the version-set-identifying information specifying a version set to which the respective block is allocated.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
the processor encountering an instruction to delete a file from the file system;
interrogating at least one of (i) the per-file metadata of the file to be deleted and (ii) the version set database, to identify the version set of the file to be deleted; and
updating the version set database to reflect deletion of the file from the version set.
13. The method of
the processor encountering an instruction to move contents of a data block of the set of the storage blocks to a new block location;
interrogating the per-block metadata of the data block to obtain the version-set-identifying information stored therein;
performing a lookup operation on the version set database to identify a set of files of the version set identified by the version-set-identifying information obtained from the per-block metadata; and
updating a block pointer of any file in the identified version set that shares the data block to reflect the new block location.
16. The computer-program product of
17. The computer-program product of
18. The computer-program product of
19. The computer-program product of
wherein the set of files of the file system include file data backed by a set of the storage blocks,
wherein the file system includes per-block metadata for each of the set of the storage blocks, and
wherein the method further comprises storing version-set-identifying information in the per-block metadata for each of the set of the storage blocks, the version-set-identifying information specifying a version set to which the respective block is allocated.
Data storage systems are arrangements of hardware and software that include storage processors coupled to arrays of non-volatile storage devices. In typical operation, storage processors service storage requests that arrive from client machines. The storage requests specify files or other data elements to be written, read, created, or deleted, for example. The storage processors run software that manages incoming storage requests and performs various data processing tasks to organize and secure the data stored on the non-volatile storage devices.
Data storage systems often implement snapshot technology to protect the data they store. For example, a data storage system may serve a data object to various client machines. The client machines access the data object over a network and can make changes to its contents over time. To protect the data object and its state at various points in time, the data storage system may implement a snapshot policy and take snapshots, or “snaps,” of the data object at regular intervals or in response to user commands or particular events. Each snapshot provides a point-in-time version of the data object, which users of client machines can access to restore from a previous version of the data object, such as to resume use of an object from a previous, known-good state. Users may also restore from snaps to examine previous states of data objects, such as for forensic purposes.
To conserve physical storage space, data storage systems often employ block sharing relationships among data objects and their snaps. As is known, a “block” is an allocatable unit of storage, which may, for example, be 8 KB in size, although blocks can be of any size. Block sharing saves storage space by storing common data used by different files on the same blocks, rather than requiring each file to have its own blocks even when the data on the blocks are the same. Data storage systems can take snaps of many different types of data objects, including file systems, LUNs (Logical Unit Numbers), vVOLs (virtual volumes), VMDKs (virtual machine disks), VHDs (Virtual Hard Disks), and other types of objects.
Users of data storage systems expect data storage systems to serve ever-increasing numbers of data objects and their respective snaps. One proposed implementation requires a data storage system to serve thousands of VMDKs and their respective snaps from a single file system. The file system stores each VMDK and each snap as a respective file. Hosts can access the VMDK files or their snaps over a network, for example, to operate respective virtual machines.
Unfortunately, conventional file systems do not operate efficiently in the face of thousands of data objects, each having potentially thousands of snaps. What is needed is a way of simplifying the management of file systems, so that file systems can perform their operations efficiently even in the presence of very large numbers of data objects and snaps.
In contrast with conventional file system technologies, an improved technique for managing file systems assigns groups of related files in a file system to respective version sets. Each version set includes all files of the file system that are related to one another by one or more snapshot operations. Thus, a version set may include an initial, primary object, such as a VMDK, as well as all snaps of that object. Files within a version set are thus all related by snaps, while files across different version sets are not related by snaps. Version sets promote the efficiency of many file system operations by grouping together files that may have block-sharing relationships while separating files that may not have block-sharing relationships. An infrastructure for supporting version sets includes a version set database, which stores, in connection with each version set, an identifier of each file that belongs to the respective version set. File system operations that require information about block sharing can perform lookup operations on the version set database to narrow the scope of files that are candidates for block sharing to those of a particular version set, thus promoting efficiency in file system operations.
Certain embodiments are directed to a method for managing file systems. The method includes storing a file system on storage blocks backed by a set of storage devices, the file system including a set of files, each of the set of files belonging to one of multiple version sets, each version set including a distinct subset of the set of files that form one or more nodes of a respective snapshot tree of files. At least one snapshot tree of files includes multiple files that share among them one or more of the storage blocks. The method further includes organizing the version sets in a version set database, the version set database storing, in connection with each of the version sets, an identifier of each file that belongs to the respective version set. The method still further includes, in response to a processor encountering an instruction to execute a file system operation involving a specified version set, performing a lookup operation on the version set database to obtain a list of files of the file system that belong to the specified version set.
Other embodiments are directed to a data storage apparatus constructed and arranged to perform the method described above. Still other embodiments are directed to a computer program product. The computer program product includes one or more non-transient, computer-readable media that store instructions which, when executed on one or more processing units of a data storage apparatus, cause the data storage apparatus to perform the method described above. Some embodiments involve activity that is performed at a single location, while other embodiments involve activity that is distributed over a computerized environment (e.g., over a network).
The foregoing and other features and advantages will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings, in which like reference characters refer to the same or similar parts throughout the different views. In the accompanying drawings,
Embodiments of the invention will now be described. It is understood that such embodiments are provided by way of example to illustrate various features and principles of the invention, and that the invention hereof is broader than the specific example embodiments disclosed.
An improved technique for managing file systems assigns groups of related files in a file system to respective version sets. Each version set includes all files of the file system that are related to one another by one or more snapshot operations. A version set database stores, in connection with each version set, an identifier of each file that belongs to the respective version set. In an example, file system operations that require information about block sharing can thus perform lookup operations on the version set database to narrow the scope of files that are candidates for block sharing to those of a particular version set.
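As a rough sketch, the mapping that a version set database maintains can be modeled as follows. The class and method names here are illustrative assumptions, not identifiers from the patent:

```python
class VersionSetDatabase:
    """In-memory sketch of the version set database: maps a version set
    identifier (VSID) to the identifiers of the files that belong to it.
    A real implementation would persist this in file system metadata."""

    def __init__(self):
        self._sets = {}  # VSID -> set of file identifiers (e.g., inode numbers)

    def add_file(self, vsid, file_id):
        self._sets.setdefault(vsid, set()).add(file_id)

    def remove_file(self, vsid, file_id):
        self._sets.get(vsid, set()).discard(file_id)

    def lookup(self, vsid):
        # Only files of this version set are candidates for sharing
        # blocks with one another.
        return sorted(self._sets.get(vsid, set()))

    def file_count(self, vsid):
        # An example of operationally useful extra information, such as
        # a count of all files in a version set.
        return len(self._sets.get(vsid, set()))
```

A lookup thus narrows a block-sharing check from all files in the file system to the files of a single version set.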
The network 114 can be any type of network or combination of networks, such as a storage area network (SAN), local area network (LAN), wide area network (WAN), the Internet, and/or some other type of network, for example. In an example, the hosts 110(1-N) can connect to the SP 120 using various technologies, such as Fibre Channel (e.g., through a SAN), iSCSI, NFS, SMB 3.0, and CIFS. Any number of hosts 110(1-N) may be provided, using any of the above protocols, some subset thereof, or other protocols besides those shown. The SP 120 is configured to receive IO requests 112(1-N) and to respond to such IO requests 112(1-N) by reading and/or writing the storage 180. An administrative machine 102 may also communicate over the network 114 with the SP 120, e.g., via requests 104.
The SP 120 is seen to include one or more communication interfaces 122, a set of processing units 124, and memory 130. The communication interfaces 122 include, for example, adapters, such as SCSI target adapters and network interface adapters, for converting electronic and/or optical signals received from the network 114 to electronic form for use by the SP 120. The set of processing units 124 include one or more processing chips and/or assemblies. In a particular example, the set of processing units 124 includes numerous multi-core CPUs. The memory 130 includes both volatile memory (e.g., RAM), and non-volatile memory, such as one or more ROMs, disk drives, solid state drives, and the like. The set of processing units 124 and the memory 130 together form control circuitry, which is constructed and arranged to carry out various methods and functions as described herein. Also, the memory 130 includes a variety of software constructs realized in the form of executable instructions. When the executable instructions are run by the set of processing units 124, the set of processing units 124 are caused to carry out the operations of the software constructs. Although certain software constructs are specifically shown and described, it is understood that the memory 130 typically includes many other software constructs, which are not shown, such as an operating system, various applications, processes, and daemons, for example.
The memory 130 is seen to include (i.e., realize by operation of programming code) an IO stack 140. The IO stack 140 provides an execution path for host IOs (e.g., IO requests 112(1-N)) and includes an internal representation of a file system 150. It should be understood that the file system 150 is a logical construct within the IO stack 140 and that the underlying data and metadata that support the file system 150 typically reside in the storage 180. Although only a single file system 150 is shown, it should be understood that SP 120 may host any number of file systems, like the file system 150, limited only by available computing resources and storage.
The file system 150 is seen to include multiple version sets, VS-1 through VS-M, and a version set database 160 (VS-DB). Each of the version sets VS-1 through VS-M includes one or more files that belong to a respective snapshot tree of files, meaning that the files (if greater than one is provided) are related by snapshot operations. Files within a version set may thus have block-sharing relationships with other files within the version set. A typical version set may include a VMDK and all snaps of the VMDK. Files grouped in different version sets are not related by snaps and thus may not share blocks. Version sets are thus an effective mechanism for grouping files that may share blocks while separating files that may not.
The version set database 160 organizes the version sets of the file system 150. In an example, the version set database 160 includes an entry for each of the version sets VS-1 to VS-M and stores an identifier in connection with each version set of all files in the file system 150 that belong to the respective version set. The version set database 160 may store additional information about version sets, such as a count of all files in each version set, and/or other operationally useful information.
In example operation, the hosts 110(1-N) issue IO requests 112(1-N) to the data storage apparatus 116 directed to files of the file system 150, such as files supporting VMDKs or their snaps. The SP 120 receives the IO requests 112(1-N) at the communication interfaces 122 and passes the IO requests to the IO stack 140. The IO stack 140 processes the IO requests 112(1-N), such as to effect read and/or write operations on the file system 150.
As the data storage apparatus 116 operates, the data storage apparatus 116 may update the files of the file system 150 and may take new snaps of those files. Each time the data storage apparatus 116 takes a new snap, the data storage apparatus 116 assigns the new snap to the version set of the new snap's parent, i.e., the file being snapped, such that the version set tends to grow. The new snap may initially share all of its data blocks with its parent, although write splitting may cause the contents of the two files to diverge over time. According to a snap retention policy in place (if there is one), the data storage apparatus 116 may also delete older snaps from the version set, thus tending to limit the rate of growth of the version set.
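The snap-assignment and retention behavior described above might be sketched as follows; the `vs_db` dictionary and function name are hypothetical stand-ins for the version set database, and the retention policy shown (keep the newest N snaps) is only one possible policy:

```python
def take_snap(vs_db, parent_vsid, new_snap_id, retention_limit=None):
    """Assign a new snap to the version set of its parent (the file
    being snapped); optionally trim the oldest snaps per a retention
    policy. vs_db maps VSID -> list of file ids in creation order,
    with the first entry being the primary object."""
    files = vs_db.setdefault(parent_vsid, [])
    files.append(new_snap_id)  # the version set tends to grow
    if retention_limit is not None:
        # Delete the oldest snaps beyond the limit; the primary object
        # at index 0 is kept.
        while len(files) - 1 > retention_limit:
            files.pop(1)
    return files
```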
The use of version sets promotes efficiency in many file system operations. In one example, an administrative user of admin machine 102 may issue a request 104 to move a data block of storage used by the file system 150, e.g., as part of a file system reorganization process. Before moving the contents of the data block to a new location, the file system 150 may need to identify any block-sharing relationships in place on that data block, so as to ensure that moving the block does not disrupt any files that share the block. If the file system 150 has thousands of objects each having thousands of snaps, the process of checking all other files in the file system 150 for potential block sharing relationships would become exorbitant. Using the concept of version sets, however, only the files in one version set, as indicated by the version set database 160, need to be checked for block sharing. Files in other version sets cannot share the block and thus need not be checked. The scope of files that need to be checked for block-sharing relationships is thus reduced, potentially by orders of magnitude.
The lower-deck file system 340 is internal to the IO stack 140 and is generally hidden from users. In the example shown, the lower-deck file system 340 includes two files, a container file, “CF,” and a snap of the container file, “S1CF.” In general, container files like the container file CF store file-based realizations of data objects, such as LUNs, file systems, or vVOLs, for example. Container files thus tend to be quite large. A typical lower-deck file system 340 may contain only a single file (a container file), or may contain both a container file and any snaps of the container file, with the snaps of the container file representing snaps of the data object the container file realizes. In this example, CF is a file-based realization of the file system 150, and S1CF is a snap of the file system 150.
The lower-deck file system is further seen to include an inode table 342 having a first inode 344 for the container file and a second inode 346 for the snap. The inodes 344 and 346 each have a unique inode number in the lower-deck file system 340 and each include metadata describing the files to which they refer. Each inode also includes block pointers that point to data blocks of the sparse metavolume 320 where the file content of the respective files is stored.
The lower-deck file system 340 has a volume-based interface and expresses container files using volume-file abstractions. Here, volume-file 350 expresses container file CF with volume semantics (e.g., SCSI semantics). The volume file 350 also provides a local pool-like object for creating upper-deck slices 352, which the volume-file 350 may provision on demand to fulfill storage requests. Here, it is seen that three slices have been provisioned from the volume-file 350 to form an upper-deck sparse metavolume 360, which provides a contiguous address space for supporting the file system 150 (i.e., an upper-deck file system).
Providing the version set database 160 within the file system 150, rather than outside the file system 150, attaches the version set database 160 to the version sets themselves and underlying files that they organize. This arrangement enables management of version sets to be self-contained within the respective file systems housing the version sets and thus promotes an efficient and modular organization of version sets in the data storage apparatus 116.
The expanded view of /VS-1 shows contents of its directory file 610. Here, directory entries 610a through 610d are included for each of the respective files in directory /VS-1.
It may be observed that the same files appear in the directories /VS-1, /VS-2, and /VS-3.
The above-referenced figures and accompanying descriptions have presented a novel infrastructure for efficiently managing file systems using the concept of version sets. Example processes that employ this novel infrastructure will now be described.
At 710, a request is received to create a new file. For example, the new file to be created may be a newly requested snap of a VMDK or other object, which a user or administrator may request explicitly or which the SP 120 may request automatically in accordance with a snap policy in place for the object. The new file, once created, would belong to a particular version set. For example, it is assumed that a version set has already been established that includes the VMDK or other object and its snaps (if there are any).
At 712, the SP 120 (or some other operating entity) identifies the version set to which the new file would belong. For example, the SP 120 may check the inode of the file being snapped (e.g., 410) to obtain the version set identifier, and may obtain from the version set database 160 a count of the files in the identified version set.
At 714, the count is interrogated to determine whether creating the new file would cause the count to exceed a predetermined hard limit. If creating the new file in the identified version set would cause the count for that version set to exceed the hard limit, then the SP 120 may refuse the request to create the new file (716). In some examples, if the hard limit would be exceeded, the SP 120 generates an alert to report that the hard limit has been reached. The alert may take the form of an event log entry and/or a message to the host machine that requested creation of the new file (if there is one). Processing of the request to create the new file would then terminate at 716. If creating the new file would not cause the count to exceed the hard limit, the process 700 proceeds to 718.
At 718, the count is tested to determine whether creating the new file would cause the count to exceed a predetermined soft limit. The data storage apparatus 116 may specify a soft limit on the number of files allowed in any or each version set. In some examples, users and/or administrators establish soft limits explicitly. In other examples, soft limits are computed automatically, e.g., as predetermined percentages (e.g., 80%) of respective hard limits. If creating the new file would cause the count of files in its version set to exceed a soft limit, the SP 120 (or other operating entity) may generate an alert (720), e.g., in the manner described above.
At 722, the request to create the new file is processed and the new file is created. The new file then becomes part of the identified version set. In an example, the new file is created regardless of whether the soft limit is exceeded.
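A minimal sketch of the create-file flow of steps 710 through 722 follows. The limits are passed as parameters, and all names and data structures are illustrative assumptions:

```python
def create_file(vs_db, vsid, new_file_id, hard_limit, soft_limit, alerts):
    """Process a request to create a file in version set `vsid`,
    enforcing per-version-set hard and soft limits on file count."""
    count = len(vs_db.get(vsid, []))
    if count + 1 > hard_limit:
        # Refuse the request and report that the hard limit was reached.
        alerts.append(("hard", vsid))
        return False
    if count + 1 > soft_limit:
        # Generate an alert, but let creation proceed regardless of
        # whether the soft limit is exceeded.
        alerts.append(("soft", vsid))
    vs_db.setdefault(vsid, []).append(new_file_id)
    return True
```

A soft limit might be computed automatically, e.g., as 80% of the hard limit, mirroring the percentage example above.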
At 810, the SP 120 (or other operating entity) encounters an instruction to move the contents of a data block to a new block location. In an example, the data block is a block storing file data within the addressable space of the file system 150 (i.e., in the sparse metavolume 360).
At 812 (or at any suitable point), the SP 120 copies the contents of the block to be moved to the new block location.
At 814, the SP 120 interrogates the per-block metadata of the data block whose contents are to be moved to obtain the version-set-identifying information stored therein. For example, the SP 120 interrogates the per-block metadata 430, which stores the VSID of the version set to which the data block is allocated, along with the logical offset of the block within its file.
At 816, the SP 120 performs a lookup operation on the version set database 160 to identify a set of files of the version set identified at 814. For example, the SP 120 queries the database 160 on the retrieved VSID to obtain the corresponding file list.
At 818, the SP 120 updates the block pointer of any file in the identified set of files that shares the data block to be moved to reflect the new block location. For example, the SP 120 checks the inode of each file listed in the directory entries above at the logical offset obtained from the per-block metadata 430. If the inode points to the block to be moved at the obtained logical offset, the SP 120 updates the pointer in the inode at the obtained logical offset to point to the new block location. The block to be moved may then be marked as freed.
The process 800 demonstrates how version sets can be used to reduce the scope of files that need to be checked for potential block sharing relationships when moving block contents. Thus, rather than having to check all files in the entire file system 150 for potential block sharing, the process 800 limits the files to be checked to those of a single version set.
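The block-move flow of process 800 might be sketched as follows, with plain dictionaries standing in for the per-block metadata, the version set database 160, and per-file block pointers. All names are illustrative assumptions:

```python
def move_block(block_md, vs_db, inodes, old_blk, new_blk):
    """Relocate a block's contents: consult the block's per-block
    metadata for its version set and logical offset, then fix up block
    pointers only in that version set's files (steps 810-818)."""
    vsid, offset = block_md[old_blk]     # version-set id + logical offset
    for file_id in vs_db[vsid]:          # only one version set is checked
        ptrs = inodes[file_id]           # file's block-pointer array
        if ptrs.get(offset) == old_blk:  # this file shares the moved block
            ptrs[offset] = new_blk       # point to the new block location
    block_md[new_blk] = (vsid, offset)
    del block_md[old_blk]                # old block may now be freed
```

Files in other version sets are never examined, which is the efficiency gain the process describes.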
At 912, the SP 120 interrogates at least one of (i) the per-file metadata 410 of the file to be deleted and (ii) the version set database 160, to identify the version set (VSID) of the file to be deleted. The SP 120 may use either means to obtain the VSID of the file to be deleted.
At 914, the SP 120 updates the version set database 160 to reflect the deletion of the file from the version set. For example, the SP 120 removes the file to be deleted from the file list in the database 160.
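A minimal sketch of this deletion flow, with dictionaries standing in for the version set database and per-file metadata (names are illustrative):

```python
def delete_file(vs_db, file_metadata, file_id):
    """Remove a deleted file from its version set's database entry.
    The VSID comes from the file's own per-file metadata here, though
    a version set database lookup would serve equally well."""
    vsid = file_metadata[file_id]   # per-file metadata holds the VSID
    vs_db[vsid].remove(file_id)     # update the database to reflect deletion
    del file_metadata[file_id]
    return vsid
```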
At 1010, a file system, such as file system 150, is stored on storage blocks (e.g., those that compose slices 312 and 352) backed by a set of storage devices (e.g., in storage 180), the file system including a set of files (e.g., those belonging to version sets VS-1, VS-2, and VS-3), each of the set of files belonging to one of multiple version sets (e.g., VS-1, VS-2, or VS-3), each version set including a distinct subset of the set of files that form one or more nodes of a respective snapshot tree of files, at least one snapshot tree of files including multiple files that share among them one or more of the storage blocks.
At 1012, the version sets are organized in a version set database (e.g., 160). The version set database stores, in connection with each of the version sets, an identifier of each file that belongs to the respective version set.
At 1014, in response to a processor encountering an instruction (e.g., 104) to execute a file system operation involving a specified version set, a lookup operation is performed on the version set database (e.g., 160) to obtain a list of files of the file system (e.g., 150) that belong to the specified version set.
An improved technique has been described for managing file systems. The technique assigns groups of related files in a file system (e.g., 150) to respective version sets (e.g., VS-1, VS-2, and VS-3). Each version set includes all files of the file system 150 that are related to one another by one or more snapshot operations, e.g., those of a respective snapshot tree of files.
Having described certain embodiments, numerous alternative embodiments or variations can be made. For example, although per-file metadata 410 has been identified herein as a file's inode, this is merely an example. Alternatively, some other per-file metadata structure or structures may be used. Likewise, although per-block metadata 430 has been identified as BMD, another per-block metadata structure or structures may be used.
Also, although the use of version sets has been shown and described in connection with a particular IO stack 140 that represents the file system 150 in the form of a container file (CF), this is merely an example; version sets may be employed with other file system arrangements.
Also, the directory structure implementation of the version set database 160 need not include hard links. When implementing the version set database as a directory structure, other types of links may be used, including soft links, shortcuts, and the like.
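The directory-based realization could be sketched as follows, using either hard or soft links to group a version set's member files; the path layout, directory naming, and function name are assumptions for illustration:

```python
import os


def add_to_version_set_dir(fs_root, vsid, file_path, use_hard_link=True):
    """Realize the version set database as a directory structure: each
    version set gets a directory whose entries link to the set's member
    files, so listing the directory yields the version set's file list."""
    vs_dir = os.path.join(fs_root, "VS-%d" % vsid)
    os.makedirs(vs_dir, exist_ok=True)
    link_path = os.path.join(vs_dir, os.path.basename(file_path))
    if use_hard_link:
        os.link(file_path, link_path)    # hard link into the VS directory
    else:
        os.symlink(file_path, link_path)  # soft links work equally well
    return link_path
```

With hard links, the directory entry and the original file share an inode, so membership survives renames of the original path; soft links or other link types trade that property for simplicity.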
Also, a particular example has been described involving a large number of VMDKs and their respective snaps. However, the techniques hereof are not limited to VMDKs or to any type or types of data objects.
Further, although features are shown and described with reference to particular embodiments hereof, such features may be included and hereby are included in any of the disclosed embodiments and their variants. Thus, it is understood that features disclosed in connection with any embodiment are included as variants of any other embodiment.
Further still, the improvement or portions thereof may be embodied as a non-transient computer-readable storage medium, such as a magnetic disk, magnetic tape, compact disk, DVD, optical disk, flash memory, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), and the like (shown by way of example as medium 750).
As used throughout this document, the words “comprising,” “including,” and “having” are intended to set forth certain items, steps, elements, or aspects of something in an open-ended fashion. Also, as used herein and unless a specific statement is made to the contrary, the word “set” means one or more of something. This is the case regardless of whether the phrase “set of” is followed by a singular or plural object and regardless of whether it is conjugated with a singular or plural verb. Although certain embodiments are disclosed herein, it is understood that these are provided by way of example only and the invention is not limited to these particular embodiments. Those skilled in the art will therefore understand that various changes in form and detail may be made to the embodiments disclosed herein without departing from the scope of the invention.
Inventors: Bono, Jean-Pierre; Armangau, Philippe; Davenport, William C.; Mathews, Alexander
Cited By

| Patent | Priority | Assignee | Title |
|---|---|---|---|
| 10521398 | Jun 29 2016 | EMC IP HOLDING COMPANY LLC | Tracking version families in a file system |
| 10592469 | Jun 29 2016 | EMC IP HOLDING COMPANY LLC | Converting files between thinly and thickly provisioned states |
| 10838634 | Dec 30 2016 | EMC IP HOLDING COMPANY LLC | Managing storage capacity in version families having both writable and read-only data objects |
| 11061770 | Jun 30 2020 | EMC IP HOLDING COMPANY LLC | Reconstruction of logical pages in a storage system |
| 11093169 | Apr 29 2020 | EMC IP HOLDING COMPANY LLC | Lockless metadata binary tree access |
| 11099940 | Jun 30 2020 | EMC IP HOLDING COMPANY LLC | Reconstruction of links to orphaned logical pages in a storage system |
| 11210230 | Apr 30 2020 | EMC IP HOLDING COMPANY LLC | Cache retention for inline deduplication based on number of physical blocks with common fingerprints among multiple cache entries |
| 11232043 | Apr 30 2020 | EMC IP HOLDING COMPANY LLC | Mapping virtual block addresses to portions of a logical address space that point to the virtual block addresses |
| 11256577 | May 30 2020 | EMC IP HOLDING COMPANY LLC | Selective snapshot creation using source tagging of input-output operations |
| 11256678 | Jun 30 2020 | EMC IP HOLDING COMPANY LLC | Reconstruction of links between logical pages in a storage system |
| 11269547 | May 20 2020 | EMC IP HOLDING COMPANY LLC | Reusing overwritten portion of write buffer of a storage system |
| 11281374 | Apr 28 2020 | EMC IP HOLDING COMPANY LLC | Automated storage network reconfiguration for heterogeneous storage clusters |
| 11314580 | Apr 30 2020 | EMC IP HOLDING COMPANY LLC | Generating recommendations for initiating recovery of a fault domain representing logical address space of a storage system |
| 11334523 | Apr 30 2020 | EMC IP HOLDING COMPANY LLC | Finding storage objects of a snapshot group pointing to a logical page in a logical address space of a storage system |
| 11360691 | Jun 10 2020 | EMC IP HOLDING COMPANY LLC | Garbage collection in a storage system at sub-virtual block granularity level |
| 11366601 | Jun 22 2020 | EMC IP HOLDING COMPANY LLC | Regulating storage device rebuild rate in a storage system |
| 11436123 | Jun 30 2020 | EMC IP HOLDING COMPANY LLC | Application execution path tracing for inline performance analysis |
| 11449468 | Apr 27 2017 | EMC IP HOLDING COMPANY LLC | Enforcing minimum space guarantees in thinly-provisioned file systems |
| 11625169 | Jul 24 2020 | EMC IP HOLDING COMPANY LLC | Efficient token management in a storage system |
References Cited

| Patent | Priority | Assignee | Title |
|---|---|---|---|
| 7631155 | Jun 30 2007 | EMC IP HOLDING COMPANY LLC | Thin provisioning of a file system and an iSCSI LUN through a common mechanism |
| 7761456 | Apr 22 2005 | ACQUIOM AGENCY SERVICES LLC, AS ASSIGNEE | Secure restoration of data selected based on user-specified search criteria |
| 7818535 | Jun 30 2007 | EMC IP HOLDING COMPANY LLC | Implicit container per version set |
| 8612382 | Jun 29 2012 | EMC IP HOLDING COMPANY LLC | Recovering files in data storage systems |
| 20050065986 | | | |
| 20090144342 | | | |
| Date | Maintenance Fee Events |
|---|---|
| Mar 24 2021 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |

| Date | Maintenance Schedule |
|---|---|
| Oct 03 2020 | 4 years fee payment window open |
| Apr 03 2021 | 6 months grace period start (w surcharge) |
| Oct 03 2021 | patent expiry (for year 4) |
| Oct 03 2023 | 2 years to revive unintentionally abandoned end. (for year 4) |
| Oct 03 2024 | 8 years fee payment window open |
| Apr 03 2025 | 6 months grace period start (w surcharge) |
| Oct 03 2025 | patent expiry (for year 8) |
| Oct 03 2027 | 2 years to revive unintentionally abandoned end. (for year 8) |
| Oct 03 2028 | 12 years fee payment window open |
| Apr 03 2029 | 6 months grace period start (w surcharge) |
| Oct 03 2029 | patent expiry (for year 12) |
| Oct 03 2031 | 2 years to revive unintentionally abandoned end. (for year 12) |