A backup system enables unmodified data to be copied to a read-only backup container that is smaller than the read-write container. The system creates and maintains structures that map the unmodified copies of data in the backing store container to locations in the read-write container. The mapping structures contain addresses of locations in the backing store container where collections of blocks of data are stored based on the original data block address of the data in the read-write container. In order to obtain the address of the location in the backing store container where a block of data is stored, the system converts the physical block number of the read-write block of data into a physical block address in the backing store container which actually contains the data. By using these mapping structures, the system provides an efficient manner of utilizing a smaller backing store container or less storage space, since the data modified during the snapshot backup process is usually substantially less than read-write on-line data.

Patent
   6061770
Priority
Nov 04 1997
Filed
Nov 04 1997
Issued
May 09 2000
Expiry
Nov 04 2017
1. A computer system having a read/write container with locations for storage of blocks of read/write on-line data, said system further comprising a backup structure for backing up the on-line data at an instant in time, said backup structure comprising:
means for creating a backing store container that is smaller than the read/write container;
means for copying to the backing store container from the read/write container an unmodified copy of each block of on-line data that is to be modified subsequent to a predetermined snapshot time;
a mapping structure that maps the blocks of data in the backing store container with locations in the read/write container from which the blocks were copied;
means for converting a physical block number of a block of data in the read/write container to a virtual block number in the backing store container;
means for creating and storing a first level table in memory and in the backing store container;
means for creating and storing a second level table in the backing store container, said second level table storing a block of data that is copied from the read/write container to the backing store container;
means for accessing the first and second level tables in the backing store container using the virtual block number; and
means for storing a beginning block number of the second level table in a first level table entry.
2. The system of claim 1 further comprising:
means for accessing an entry in the first level table using the first index of the virtual block number;
means for obtaining a beginning block number of the second level table from the first level table entry;
means for accessing an entry in the second level table using the second index of the virtual block number;
means for obtaining an address from the second level table entry; and
means for adding the block offset of the virtual block number to the address from the second level table entry to determine a physical location in the backing store container where a block of data is stored.
3. The system of claim 1 further comprising means for returning an error in response to retrieving data from a full backing store container in order for a system operator to restart the backup process with a larger backing store container.
4. A method for backing up on-line data from a read/write container with locations for storage of blocks of read/write on-line data, said method comprising the steps of:
creating a backing store container that is smaller than the read/write container;
copying an unmodified copy of each block of on-line data for modification subsequent to a predetermined snapshot time from the read/write container to the backing store container;
mapping blocks of data in the backing store container with locations in the read/write container from which the blocks were copied;
converting a physical block number of a block of data in the read/write container into a virtual block number in the backing store container;
creating a first index, a second index and a block offset in the virtual block number;
creating and storing a first level table in memory and in the backing store container in response to the step of creating the backing store container;
creating and storing a second level table in the backing store container, said second level table storing a block of data that is copied from the read/write container to the backing store container;
using the virtual block number to access the first and second level tables in the backing store container; and
storing a beginning block number of the second level table in a first level table entry.
5. The method of claim 4 further comprising the steps of:
using the first index of the virtual block number to access an entry in the first level table;
obtaining a beginning block number of the second level table from the first level table entry;
using the second index of the virtual block number to access an entry in the second level table;
obtaining an address from the second level table entry; and
adding the block offset of the virtual block number to the address from the second level table entry to determine a physical location in the backing store container where a block of data is stored.
6. The method of claim 5 further comprising the step of returning an error in response to retrieving data from a full backing store container in order for a system operator to restart the backup process with a larger backing store container.
7. A method of mapping a block of data for modification subsequent to a predetermined snapshot time in a read/write container to a physical block in a backing store container, said method comprising the steps of:
converting a physical block number of the block of data in the read/write container to a virtual block number, the virtual block number having a first index, a second index and a block offset;
creating a first level table in the backing store container;
accessing an entry in the first level table with the value of the first index of the virtual block number;
creating a second level table if one does not exist or if the existing one is too small;
storing a beginning block number of the second level table in the first level table entry;
accessing an entry in the second level table using the second index in the virtual block number; and
adding the block offset in the virtual block number to the second level table entry to determine a physical location in the backing store container for storing the data.
8. The method of claim 7 further comprising the steps of, in response to the step of converting:
defining low order bits of the virtual block number as the block offset;
defining middle level bits of the virtual block number as the second index; and
defining high order bits of the virtual block number as the first index.
9. The method of claim 8 further comprising creating a backing store container that is smaller in storage size than the read/write container.
10. The method of claim 9 further comprising returning an error in response to retrieving data from a full backing store container in order for a system operator to restart the backup process with a larger backing store container.

The invention relates generally to the field of computer systems and, more particularly, provides a method for storing computer system data to be backed up in logical disk partitions that are smaller than the logical disk partitions containing the on-line data.

Computer systems often perform data backups on computer files to enable recovery of lost data. To maintain the integrity of the backed-up data, a backup process must back up either all files or all files modified since the most recent backup. A backup program copies each file that is identified as a candidate for backup from an on-line storage device to a secondary storage device. On-line storage devices are configured from one or more disks into logical units of storage space referred to herein as "containers". Containers are created and maintained by a software entity called the "container manager". Each type of container on the system has an associated driver which processes system requests on that type of container. After a complete backup operation, the backup program verifies the backed-up files to make sure that the files on the secondary storage device (usually a tape) were correctly backed up. One problem with the backup process is that files may change during the backup operation.

To avoid backing up files modified during the backup process and to enable applications to access files during the backup operation, the container manager periodically (e.g., once a day) performs a procedure that takes a "snapshot", or copy, of each read-write container, whereby the container manager creates a read-only container which looks like a copy of the data in the read-write container at a particular instant in time. Thereafter, the container manager performs a "copy-on-write" procedure in which an unmodified copy of data in the read-write container is copied to a read-only backup container every time there is a request to modify data in the read-write container. The container manager uses the copy-on-write method to maintain the snapshot and to enable backup processes to access and back up an unchanging, read-only copy of the on-line data at the instant the snapshot was created.

During the backup procedure, the container manager creates a "snapshot" container, a "snapshotted" container and a "backing store" container. After the container manager takes the snapshot, the snapshotted container driver processes all input/output (I/O) requests to store data in or retrieve data from the read-write container. The snapshotted container driver processes all I/O requests to retrieve data from the read-write container by forwarding them directly to the read-write container driver. However, for all I/O requests to modify data in the read-write container, the container manager first determines whether the requested block of data has been modified since the time of the snapshot. If the block has not been modified, the container manager copies the data to the backing store container and then sets an associated bit-map flag in a modified-bit-map table. The modified-bit-map table contains a bit map with each bit representing one block of data in the read-write container. After setting the modified-bit-map flag, the snapshotted container driver forwards the I/O storage request to the read-write container driver.
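The write path described above can be sketched in Python. This is a minimal illustrative model under stated assumptions, not the patented driver code: the class name `CopyOnWriteWriter`, the dictionary-based containers, and the boolean list standing in for the modified-bit-map table are all inventions for clarity.

```python
class CopyOnWriteWriter:
    """Illustrative model of the snapshotted write path: preserve the
    pre-snapshot copy of a block before its first post-snapshot write."""

    def __init__(self, num_blocks):
        self.read_write = {}                  # read-write container: block number -> data
        self.backing_store = {}               # backing store container: preserved copies
        self.modified = [False] * num_blocks  # modified-bit-map table, one flag per block

    def write_block(self, block_num, data):
        # First write since the snapshot: copy the unmodified block
        # to the backing store, then set the modified-bit-map flag.
        if not self.modified[block_num]:
            if block_num in self.read_write:
                self.backing_store[block_num] = self.read_write[block_num]
            self.modified[block_num] = True
        # Forward the storage request to the read-write container.
        self.read_write[block_num] = data
```

Note that only the first post-snapshot write to a block triggers a copy; subsequent writes find the modified flag already set and go straight to the read-write container.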

When the backup process begins execution, it issues I/O retrieval requests to the snapshot container. A file system, which is a component of the operating system, translates the file-oriented I/O request into a logical address and forwards the request to a snapshot container driver. The snapshot container driver checks the associated bit map in the modified-bit-map table for the requested block of data. If the bit map is set, the snapshot container driver forwards the request to the backing store container driver to retrieve the unmodified copy of that block from the backing store container. The backing store container driver then processes the backup process retrieval request. If the bit map is not set, the block has not been modified since the snapshot was created, and the snapshot container driver forwards the request to the read-write container driver to retrieve a copy of that block of data from the read-write container. Upon retrieving the file from the backing store container or the read-write container, the backup process backs it up. After a complete backup operation, the container manager deletes the snapshotted container, the snapshot container, the backing store container, and the modified-bit-map table, and thereafter forwards all I/O requests directly to the read-write container driver.
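The snapshot read path can be sketched the same way; again, a hedged Python illustration in which the containers are plain dictionaries and the modified-bit-map table is a list of booleans, with all names assumed for the example.

```python
class SnapshotReader:
    """Illustrative model of the snapshot read path: serve the data as it
    existed at the snapshot instant."""

    def __init__(self, read_write, backing_store, modified):
        self.read_write = read_write        # live read-write container (dict)
        self.backing_store = backing_store  # preserved pre-snapshot blocks (dict)
        self.modified = modified            # modified-bit-map table (list of bools)

    def read_block(self, block_num):
        if self.modified[block_num]:
            # Modified since the snapshot: the unmodified copy lives in
            # the backing store container.
            return self.backing_store[block_num]
        # Not modified: the read-write container still holds the
        # snapshot-time contents.
        return self.read_write[block_num]
```

Either way, the backup process sees only snapshot-time data, which is what lets applications keep modifying the read-write container during the backup.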

The problem with the current copy-on-write process is that the read-write container and the backing store container must be the same size to maintain a fixed mapping between the read-write container blocks and the copied backing store container blocks. Usually, however, only a small amount of the on-line data is modified between backup operations; the present copy-on-write process therefore utilizes storage space inefficiently. Therefore, it is an object of the present invention to provide a system that allows copy-on-write procedures to be performed on a backing store container that is smaller than the read-write container while ensuring that the read-write container blocks are accurately mapped to the copied backing store container blocks.

In the backup system described herein, the container manager creates and maintains structures that map the unmodified copies of data in the backing store container to locations in the read-write container. The mapping structures, which may be stored in memory and/or on disk, contain addresses of locations in the backing store container where collections of blocks of data are stored, based on the original block address in the read-write container. In order to obtain the address of the location in the backing store container where a block of data is stored, the container manager converts the physical block number of the read-write block of data into a physical block address in the backing store container which actually contains the data. By using these mapping structures, the system makes efficient use of a smaller backing store container, and hence less storage space, since the data modified during the snapshot backup process is usually substantially less than the read-write on-line data.

Specifically, in the preferred embodiment of the invention, when the container manager creates a backing store container, it creates a level 1 table and stores the level 1 table in memory and in the first space in the backing store container. When a block of data is copied from the read-write container to the backing store container, the container manager creates a level 2 table in a 2K-longword space in the backing store container and stores the block of data in the level 2 table. The container manager thereafter stores the beginning block number of the level 2 table in a level 1 table entry. The level 1 table is always resident in memory, but level 2 tables are cached in memory as the blocks they map are needed.

Before the container manager copies a block of data into the backing store container, it converts the physical block number in the read-write container into a virtual block number. The container manager uses bits zero to three of the virtual block number as the block offset, bits four to fifteen as the second index, and bits sixteen to thirty-one as the first index. The container manager utilizes the value of the first index to access an entry in the level 1 table. It then checks to see if a level 2 table is available. If a level 2 table is not available, the container manager creates one; otherwise, it utilizes the available level 2 table. If the location of the block is outside of the existing level 2 table's access range, the container manager creates a new level 2 table. It thereafter stores the beginning block number of the level 2 table in the level 1 table entry. Then it uses the value of the virtual block number's second index to access an entry in the level 2 table and adds the virtual block number's block offset to the level 2 table entry to determine the location where it will store the data in the backing store container.
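The bit-field layout just described (bits zero to three the block offset, four to fifteen the second index, sixteen to thirty-one the first index) can be expressed as a short sketch; the function name is an illustrative assumption.

```python
def split_virtual_block_number(vbn):
    """Split a 32-bit virtual block number into its three fields:
    bits 0-3 block offset, bits 4-15 second index, bits 16-31 first index."""
    block_offset = vbn & 0xF             # low 4 bits
    second_index = (vbn >> 4) & 0xFFF    # next 12 bits
    first_index = (vbn >> 16) & 0xFFFF   # high 16 bits
    return first_index, second_index, block_offset
```

With this layout, one level 2 entry covers a 16-block chunk (the 4-bit offset), and one level 1 entry covers 4096 such chunks (the 12-bit second index).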

During a backup operation, to retrieve data from the backing store container the container manager uses the first index in the virtual block number to access the level 1 table for the beginning block number of the level 2 table. It indexes the level 2 table with the second index in the virtual block number and adds the block offset in the virtual block number to the level 2 table entry. Then it reads the data from that location in the backing store container.
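The store and retrieve paths through the two tables can be modeled together in a small sketch. The dictionary-based tables and the method names are assumptions made for the example; in the described system the tables live in the backing store container and are cached in memory.

```python
class TwoLevelMap:
    """Illustrative two-level mapping: a level 1 entry names a level 2
    table; a level 2 entry plus the block offset gives the physical
    location in the backing store container."""

    def __init__(self):
        self.level1 = {}  # first index -> level 2 table (created lazily)

    @staticmethod
    def _split(vbn):
        # Same field layout as in the text: offset, second index, first index.
        return (vbn >> 16) & 0xFFFF, (vbn >> 4) & 0xFFF, vbn & 0xF

    def map_chunk(self, vbn, physical_base):
        # Record where the chunk of blocks containing vbn begins in
        # the backing store container.
        first, second, _ = self._split(vbn)
        self.level1.setdefault(first, {})[second] = physical_base

    def lookup(self, vbn):
        # Level 1 entry -> level 2 table; level 2 entry + offset -> location.
        first, second, offset = self._split(vbn)
        return self.level1[first][second] + offset
```

Because level 2 tables are created only for chunks that are actually copied, the map stays small when few blocks are modified, which is exactly what makes the smaller backing store container workable.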

The invention description below refers to the accompanying drawings, of which:

FIG. 1 is a schematic block diagram of a computer system in which the principles of the invention may be practiced;

FIG. 2 is a schematic block diagram illustrating the components of a copy-on-write procedure in a computer system;

FIG. 3 illustrates the read-write on-line container and the backing store container used in the inventive copy-on-write procedure;

FIG. 4 is a schematic block diagram of a virtual block number used to index the backing store container depicted in FIG. 3;

FIG. 5 is a flowchart illustrating the sequence of steps followed by a copy-on-write procedure;

FIG. 6 is a flowchart illustrating the sequence of steps followed by a backup procedure;

FIG. 7 illustrates a preferred embodiment of a data processing system configured to implement the backing store mapping table; and

FIG. 8 illustrates an alternative embodiment of a data processing system having a distributed file system architecture configured to implement the backing store mapping tables.

FIG. 1 is a schematic block diagram of a typical computer system that is configured to perform a copy-on-write procedure in accordance with the present invention. The computer system processor 100 comprises a memory 106 and an input/output (I/O) subsystem 112 interconnected with a central processing unit (CPU) 108. The memory 106 comprises storage locations addressable by the CPU 108 and the I/O subsystem 112 for storing software programs and data structures. An operating system 104, portions of which are typically resident in the memory 106 and executed by the CPU 108, functionally organizes the computer processor 100 by, inter alia, handling I/O operations invoked by software processes or application programs executing on the computer. The I/O subsystem 112 is, in turn, connected to a set of on-line storage devices 116. These on-line storage devices 116 are partitioned into units of physical space associated with the inventive copy-on-write procedure described herein.

User applications 102 and other internal processes in the computer system invoke I/O requests from the operating system 104 by file names. A file system 110, which is a component of the operating system 104, translates the file names into logical addresses. The file system 110 forwards the I/O requests to an I/O subsystem 112 which, in turn, converts the logical addresses into physical locations in the storage devices 116 and commands the latter devices to engage in the requested storage or retrieval operations. The I/O subsystem 112 configures the partitions of the physical storage devices 116 into containers and stores container configuration tables in the container layer 200 of the I/O subsystem 112. Container configuration enables the system administrator to partition a disk drive into one or more virtual disks.

Typically, backup operations are performed at the request of a computer operator. In an illustrative backup approach embodiment, the file system instructs the I/O subsystem 112 to perform a conventional copy-on-write operation in response to the operator's request. As depicted in FIG. 2, in performing the copy-on-write procedure, the I/O subsystem 112 creates a snapshotted container 206, a snapshot container 208 and a backing store container 212. Each container has an associated container driver that processes I/O requests for that container.

Before the copy-on-write procedure is performed, all I/O requests for data in the read-write container 210 go directly to the read-write container 210 driver. After the copy-on-write procedure, all I/O requests go to the snapshotted container 206 driver. If the request is a storage request, the system checks the modified-bit-map table 214 to determine if the read-write container 210 block of data was modified after the snapshot container 208 was created. If the block was modified, the modified bit is set; the snapshotted container 206 driver therefore forwards the I/O request to the read-write on-line container 210 driver. If, however, the block was not modified after the snapshot container 208 was created, the container manager copies the unmodified block from the read-write container 210 to the backing store container 212 through the backing store container 212 driver; the container manager then sets the modified bit in the modified-bit-map table 214 for that block and sends the I/O request to the read-write container 210 driver for storage in the read-write container 210.

During execution, backup processes 204 forward I/O requests for files to the snapshot container 208. The snapshot container 208 driver determines if the file has been modified by checking the modified-bit-map table 214 for the block where the file is stored. If the block has been modified, the snapshot container 208 driver obtains an unmodified copy of the block from the backing store container 212. If the block has not been modified, the snapshot container 208 driver obtains the unmodified block from the read-write container 210. This ensures that backup processes 204 access an unchanging copy of the data from the time the snapshot is taken. In order to map the blocks copied from the read-write container 210 to the blocks in the backing store container 212, the conventional system creates a backing store container 212 that is the same size as the read-write container 210. Yet, since most blocks in the read-write container 210 are usually not modified after the snapshot is taken, most of the backing store container's 212 space is not used. Such an event renders the incremental backup operation wasteful when using the conventional copy-on-write operation. The present invention is therefore directed to a mechanism for enabling reliable data mapping between copied blocks of data in the smaller backing store container 212 and original blocks of data in the larger read-write container 210.

FIG. 3 depicts the mapping tables in the backing store container 212 used by the system to map the backing store container 212 blocks of data to the associated read-write on-line container 210 blocks of data. According to the preferred embodiment of the invention, during the copy-on-write operation, the container manager creates a level 1 table 302 in the backing store container 212; the level 1 table 302 contains addresses of the locations in the backing store container 212 where level 2 tables 304 are stored. The level 1 table 302 is always resident in memory and is stored at the beginning of the backing store container 212. Initially, all entries in the level 1 table 302 in memory are set to zero. When a block of data is copied from the read-write container 210 to the backing store container 212, the container manager creates a 2K-longword level 2 table 304 in memory and in the backing store container 212 to store the block of data. The beginning block number of the level 2 table 304 is stored in an entry in the level 1 table 302. As more space is needed to store data, the container manager creates new level 2 tables 304 and swaps the level 2 tables 304 in and out of memory as needed.

In order to index the backing store container 212 tables, the container manager converts the physical read-write container 210 block number into a virtual block number 400. FIG. 4 is a schematic diagram showing the virtual block number 400 created by the system. Bits zero to three of the virtual block number are defined as the block offset 402, bits four to fifteen are defined as the second index 404, and bits sixteen to thirty-one are defined as the first index 406. The size of the first index 406 is variable and increases as the size of the read-write container 210 increases. Note that the bits defining each field in the virtual block number will change as the chunk size on the system changes. The system indexes the level 1 table 302 with the first index 406 in the virtual block number 400. Based on the block number in the level 1 table 302 entry, the system indexes the appropriate level 2 table 304 with the second index 404 in the virtual block number 400. The system then adds the block offset 402 in the virtual block number 400 to the level 2 table 304 entry to obtain an address for the location where it will store or retrieve the block of data.

If the backing store container fills up before a backup process completes, the container manager sends errors to any application attempting to read data through the snapshot container 208 driver. The system administrator may thereafter remove the snapshot and start a new backup procedure with a bigger backing store container 212. Note also that the snapshotted container 206 never returns errors, so no user application 202 accessing the read-write container 210 is affected by the full backing store container.
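The full-container condition can be sketched as a capacity check on the copy path. The text only specifies that reads through the snapshot container 208 driver return errors once the backing store fills, so the exception name, the chunk size, and the capacity parameter below are all illustrative assumptions.

```python
class BackingStoreFullError(Exception):
    """Signals that the operator should restart the backup with a
    larger backing store container."""


def allocate_chunk(next_free_block, capacity_blocks, chunk_blocks=16):
    """Return the beginning block of a new chunk in the backing store
    container, or raise if the container has no room left (an assumed
    allocation helper, not from the patent)."""
    if next_free_block + chunk_blocks > capacity_blocks:
        raise BackingStoreFullError(
            "backing store full; restart backup with a larger container")
    return next_free_block
```

Crucially, only the snapshot read path fails; writes through the snapshotted container still succeed, so on-line applications are unaffected.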

FIG. 5 is a flowchart illustrating the sequence of steps employed when performing a copy-on-write procedure with the inventive block mapping in the backing store container 212. The sequence starts at Step 500 and proceeds to Step 502, where a user application 102 or system process issues an I/O request to the file system 110. The file system 110 accepts the file-oriented I/O request and translates it into an I/O request bound for a read-write container 210 in the I/O subsystem 112 in Step 504. The container manager 201 in the I/O subsystem 112 accepts the I/O request from the file system 110 and forwards it to the snapshotted container 206 driver in Step 506. The container manager 201 checks to see whether this is a read request or a write request. If it is a write request, the container manager checks the modified-bit-map table 214 to determine if the read-write on-line block where the file is stored has been modified at Step 508. If the block has been modified, the snapshotted container 206 driver forwards the I/O request to the read-write on-line container 210 driver in Step 510.

If the block has not been modified (Step 508), the container manager 201 copies the unmodified block from the read-write container 210 to the backing store container 212 in Step 512. During the copy operation, the container manager 201 converts the physical read-write block number into a virtual block number 400 in Step 514. The container manager 201 uses the first index 406 in the virtual block number to index an entry in the level 1 table 302 in memory at Step 516. Then the container manager 201 creates a level 2 table 304 and stores the beginning block number of the level 2 table 304 in the level 1 table 302 entry in Step 518. Then the container manager 201 uses the virtual block number's second index 404 to access an entry in the level 2 table 304. The container manager 201 adds the virtual block number's block offset 402 to the level 2 table 304 entry and sets the associated bit in the modified-bit-map table 214 in Step 520.

FIG. 6 is a flowchart illustrating the sequence of steps employed when performing a backup operation with the inventive block mapping in the backing store container. The sequence starts at Step 600 and proceeds to Step 602, where a backup process 204 issues a retrieval I/O request to the file system 110. The file system 110 accepts the file-oriented I/O request and translates it into an I/O request bound for the snapshot container 208 in the I/O subsystem 112 in Step 604. The container manager 201 accepts the I/O request from the file system 110 and forwards it to the snapshot container 208 driver in Step 606. The system checks the modified-bit-map table 214 to determine if the requested block where the file is stored has been modified (Step 608). If the block has not been modified, the snapshot driver forwards the I/O request to the read-write container driver in Step 610.

If the block has been modified (Step 608), the snapshot driver forwards the retrieval request to the backing store container driver in Step 612. The backing store container driver retrieves the block from the backing store container by converting the physical read-write block number into a virtual block number at Step 614. The container manager uses the virtual block number's first index 406 to index an entry in the level 1 table 302 in memory. Based on the block number in the level 1 table 302 entry, the container manager indexes the level 2 table with the virtual block number's second index. Then it adds the block offset 402 to the level 2 table 304 entry and retrieves the data from that location in Step 616.

FIG. 7 depicts the preferred embodiment of a data processing platform configured to implement the smaller backup container mechanism. FIG. 7 shows a memory 706 and an input/output (I/O) subsystem 712 interconnected with a central processing unit (CPU) 704. An operating system 708 handles I/O operations invoked by software processes or application programs executing on the computer. The I/O subsystem 712 comprises a container layer 714 which contains container drivers that perform I/O requests on each type of container in the system. The I/O subsystem 712 is, in turn, connected to a set of on-line storage devices 718 through the appropriate disk drivers 716. An example of this file system 710 is the Windows NT File System (NTFS) configured to operate on the Windows NT operating system.

After a copy-on-write operation, when a user application 702 issues an I/O request to the CPU 704, the file system 710, which is a component of the operating system 708, initially attempts to resolve the request by searching the host computer memory 706; if it cannot, the file system 710 services the request by retrieving the file from disks 718 through the appropriate container driver 714 in the I/O Unit 712. The container driver 714 then forwards the I/O request to the appropriate disk driver 716 or container driver 714 with access to the physical disk drives 718.

While there has been shown and described an illustrative embodiment of a mechanism that enables container reconfiguration, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. For example, in an alternate embodiment of the invention, the file system and the I/O subsystem of the data processing platform need not be resident on the host computer but may, in fact, be distributed on multiple processors. FIG. 8 depicts such an alternative embodiment of the data processing platform configured to perform the container mapping mechanism. The data processing platform 800 comprises a host computer 826 coupled to a file array adapter 824 over a low latency interface. The distributed file array system architecture 800 includes a file array file system 807 which is preferably implemented in accordance with a modified client-server computing model. That is, the file system includes a client file system 808 located on the host computer 826 and a server file system 816 resident on the adapter 824. The client file system 808 comprises a file array client 810 software driver component that interfaces with a communications manager software component 812; this latter component exchanges and processes I/O requests/responses over the interface with a complementary communications manager 814 of the adapter 824. The server file system 816 comprises, inter alia, a file array server driver 818 component. In addition, the architecture 800 includes a file array I/O subsystem 820 that is located entirely on the adapter 824. An example of a platform suitable for use with the present invention is described in copending and commonly-assigned U.S. patent application Ser. No. 08/964,304 titled, File Array Storage Architecture, by Richard Napolitano et al., which application is hereby incorporated by reference as though fully set forth herein.

When performing I/O operation requests in support of a user application program executing on the platform, the client file system 808 initially attempts to resolve the request at the host computer 826; if it cannot, the client file system 808 sends commands to the server file system 816 for execution by server file system 816 or the I/O subsystem 820 of the file array adapter 824. In the case of a backup operation, the snapshot container, the snapshotted container, and the backing store container are all created and maintained in the I/O subsystem 820.

The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Franklin, Chris

References Cited
Patent | Priority | Assignee | Title
4654819 | Dec 09 1982 | Radisys Corporation | Memory back-up system
5535381 | Jul 22 1993 | Data General Corporation | Apparatus and method for copying and restoring disk files
5758067 | Apr 21 1995 | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | Automated tape backup system and method
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Nov 04 1997 | | Adaptec, Inc. | (assignment on the face of the patent) |
Nov 04 1997 | FRANKLIN, CHRIS | Adaptec, Inc | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 008810/0670
Nov 19 1997 | Adaptec, Inc | Adaptec, Inc | CERTIFICATE OF INCORPORATION | 009574/0600
Jun 08 2010 | Adaptec, Inc | PMC-SIERRA, INC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 030899/0567
Aug 02 2013 | PMC-SIERRA US, INC | BANK OF AMERICA, N A | SECURITY INTEREST IN PATENTS | 030947/0710
Aug 02 2013 | WINTEGRA, INC | BANK OF AMERICA, N A | SECURITY INTEREST IN PATENTS | 030947/0710
Aug 02 2013 | PMC-SIERRA, INC | BANK OF AMERICA, N A | SECURITY INTEREST IN PATENTS | 030947/0710
Jan 15 2016 | BANK OF AMERICA, N A | PMC-SIERRA, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 037675/0129
Jan 15 2016 | MICROSEMI STORAGE SOLUTIONS, INC (F/K/A PMC-SIERRA, INC) | MORGAN STANLEY SENIOR FUNDING, INC | PATENT SECURITY AGREEMENT | 037689/0719
Jan 15 2016 | BANK OF AMERICA, N A | PMC-SIERRA US, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 037675/0129
Jan 15 2016 | BANK OF AMERICA, N A | WINTEGRA, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 037675/0129
Jan 15 2016 | MICROSEMI STORAGE SOLUTIONS (U S), INC (F/K/A PMC-SIERRA US, INC) | MORGAN STANLEY SENIOR FUNDING, INC | PATENT SECURITY AGREEMENT | 037689/0719
May 29 2018 | MORGAN STANLEY SENIOR FUNDING, INC | MICROSEMI STORAGE SOLUTIONS, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 046251/0271
May 29 2018 | MORGAN STANLEY SENIOR FUNDING, INC | MICROSEMI STORAGE SOLUTIONS (U S), INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 046251/0271
Date Maintenance Fee Events
Nov 10 2003 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Nov 20 2003 | ASPN: Payor Number Assigned.
Nov 09 2007 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Oct 19 2011 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
May 09 2003 | 4 years fee payment window open
Nov 09 2003 | 6 months grace period start (w surcharge)
May 09 2004 | patent expiry (for year 4)
May 09 2006 | 2 years to revive unintentionally abandoned end (for year 4)
May 09 2007 | 8 years fee payment window open
Nov 09 2007 | 6 months grace period start (w surcharge)
May 09 2008 | patent expiry (for year 8)
May 09 2010 | 2 years to revive unintentionally abandoned end (for year 8)
May 09 2011 | 12 years fee payment window open
Nov 09 2011 | 6 months grace period start (w surcharge)
May 09 2012 | patent expiry (for year 12)
May 09 2014 | 2 years to revive unintentionally abandoned end (for year 12)