The disclosed computer-implemented method for agentless and accelerated backup of a database may include receiving, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server. The method additionally includes receiving, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server. The method also includes determining, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data. The method further includes providing, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup.
17. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to:
receive, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server;
receive, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server;
determine, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data;
provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup; and
retain in memory, by the data backup device, at least one of the one or more native logs as one or more transaction level restore points.
1. A computer-implemented method for agentless and accelerated backup of a database, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:
receiving, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server;
receiving, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server;
determining, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data;
providing, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup; and
retaining in memory, by the data backup device, at least one of the one or more native logs as one or more transaction level restore points.
9. A system for agentless and accelerated backup of a database, the system comprising:
a computing device comprising at least one physical processor; and
physical memory coupled to the at least one physical processor, wherein the at least one physical processor is configured to:
receive, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server;
receive, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server;
determine, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data;
provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup; and
retain in memory, by the data backup device, at least one of the one or more native logs as one or more transaction level restore points.
2. The method of
3. The method of
4. The method of
6. The method of
7. The method of
8. The method of
10. The system of
11. The system of
14. The system of
15. The system of
16. The system of
18. The non-transitory computer-readable medium of
19. The non-transitory computer-readable medium of
20. The non-transitory computer-readable medium of
Most database backups today are implemented as log backups. A full backup of the database files is taken in a stable state and then logs are copied so that point in time restore is possible. The logs record each transaction that occurs in the database, such as data being deleted, making it possible to go back transaction by transaction or select a point in time by providing a transaction identifier. With log backups, the backups are consistent because incremental updates are transaction log updates and a database knows how to implement these updates consistently. Enterprise databases also support log backups natively.
The log backup methodology yields a few problems, however. For example, an agent is required to run on the production host that runs the database (i.e., data server). Additionally, restoring the database requires application specific logic. Also, periodic full backups are necessary because database restoration from log backups requires replaying the logs at the production host, which leads to increased recovery time objective (RTO). The requirement to replay the logs at the production host also means that instant access cannot be provided, and it is not possible to use an acceleration feature, which can provide a full backup for the cost of an incremental backup.
As will be described in greater detail below, the present disclosure describes various systems and methods for agentless and accelerated backup of a database by determining, by a data backup device, changed blocks of data based on native logs received from a data server, and creating, by the data backup device, a synthetic full backup that overlays one or more blocks of data of a full backup with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup.
In one embodiment, a computer-implemented method for agentless and accelerated backup of a database may include receiving, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server. The method additionally includes receiving, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server. The method also includes determining, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data. The method further includes providing, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup.
In some embodiments of the computer-implemented method, receiving the blocks of data includes receiving the blocks of data over a file sharing mechanism that allows files copied over a file sharing protocol to be catalogued as backups. For example, the file sharing mechanism may include a network file sharing (NFS) data export mechanism. Additionally, the file sharing mechanism may include a writeable overlay. Also, the sharing mechanism may include a data deduplication engine. Additionally or alternatively, receiving the one or more native logs may include receiving the native logs from the data server configured in log replication mode.
In some embodiments of the computer-implemented method, determining the one or more changed blocks may include spinning up a database container image, by the data backup device, and using the database container image to generate a synthetic full copy by applying the native logs to the full backup. In such embodiments, creating the synthetic full backup may include performing a backup of the synthetic full copy.
In another embodiment, a system for agentless and accelerated backup of a database may include a computing device comprising at least one physical processor and physical memory coupled to the at least one physical processor. The at least one physical processor is configured to receive, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server. The at least one physical processor is additionally configured to receive, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server. The at least one physical processor is also configured to determine, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data. The at least one physical processor is further configured to provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup.
In some embodiments of the system, the at least one physical processor is configured to receive the blocks of data at least in part by receiving the blocks of data over a file sharing mechanism that allows files copied over a file sharing protocol to be catalogued as backups. For example, the file sharing mechanism may include a network file sharing (NFS) data export mechanism. Additionally, the file sharing mechanism may include a writeable overlay. Also, the sharing mechanism may include a data deduplication engine. Alternatively or additionally, the at least one physical processor is configured to receive the one or more native logs at least in part by receiving the native logs from the data server configured in log replication mode.
In some embodiments of the system, the at least one physical processor may be configured to determine the one or more changed blocks at least in part by spinning up a database container image, by the data backup device, and using the database container image to generate a synthetic full copy by applying the native logs to the full backup. In such embodiments, the at least one physical processor may be configured to create the synthetic full backup at least in part by performing a backup of the synthetic full copy.
In some examples, the above-described method may be encoded as computer-readable instructions on a non-transitory computer-readable medium. For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to receive, by a data backup device from a data server, blocks of data that provide a full backup of data of the data server. Additionally, the one or more computer-executable instructions may cause the computing device to receive, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server. Also, the one or more computer-executable instructions may cause the computing device to determine, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data. Further, the one or more computer-executable instructions may cause the computing device to provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the full backup.
In some embodiments of the non-transitory computer-readable medium, the one or more computer-executable instructions may cause the computing device to receive the blocks of data at least in part by receiving the blocks of data over a file sharing mechanism that allows files copied over a file sharing protocol to be catalogued as backups. Alternatively or additionally, the one or more computer-executable instructions may cause the computing device to determine the one or more changed blocks at least in part by spinning up a database container image, by the data backup device, and using the database container image to generate a synthetic full copy by applying the native logs to the full backup. In such embodiments, the one or more computer-executable instructions may cause the computing device to create the synthetic full backup at least in part by performing a backup of the synthetic full copy.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The accompanying drawings illustrate a number of example embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, reference characters and descriptions indicate similar, but not necessarily identical, elements. While the example embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the example embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The present disclosure is generally directed to systems and methods for agentless and accelerated backup of a database. Media servers (i.e., data servers) are dedicated computer appliances and/or specialized application software, ranging from an enterprise class machine providing video on demand, to a small personal computer or network attached storage (NAS) for the home, dedicated for storing various digital media (e.g., digital videos/movies, audio/music, and picture files). In information technology, a backup, or data backup, is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. Backup of data servers using data backup devices can be performed using various types of repository models, but providing features such as incremental backups, instant access, and acceleration typically requires sophisticated elements installed on the production host (e.g., agents, application specific restoration logic, etc.) which are specific to particular types of databases and/or applications. These requirements prevent data backup devices from being generic to many or all databases and can make it difficult or impossible to provide all of the aforementioned features in combination.
The data backup device disclosed herein eliminates the need for an agent on the data server while providing instant access with acceleration. The instant access feature causes a backup to look like a real data copy, but this feature is not available with current schemes in which logs have to be replayed at the data server during a restore procedure. Acceleration features determine which data blocks of a file have changed on the data server and generate and store only the new block(s) to the data backup device. The data backup device then overlays the new block on top of the existing block that was changed from the previous backup. In virtualization, an application container is a controlling element for an application instance that runs within a type of virtualization scheme called container-based virtualization. By periodically spinning up a database container image on the data backup device and playing the native log files received from the data server, the data backup device is able to eliminate the need for an agent on the data server and provide instant access with acceleration features. The database container image may be capable of playing and applying logs of a plurality of different types of databases, and/or there may be multiple types of database container images on the data backup device that can be used for a connected data server implementing a database of the appropriate type according to detected database type, a configuration, etc. As a result, the data backup device can serve various types of databases without requiring installation of an agent on the data server, and the need to play the logs at the data server during a database restore is reduced or eliminated.
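The acceleration overlay described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: the function and variable names are assumptions, and blocks are modeled as entries in a dictionary so that unchanged blocks are shared by reference rather than copied.

```python
# Sketch of the acceleration feature: a synthetic full backup stores only the
# changed blocks and shares (references) the unchanged blocks of the previous
# full backup. All names here are illustrative assumptions.

def create_synthetic_full(previous_full, changed_blocks):
    """Overlay changed blocks onto the previous full backup.

    previous_full  -- dict mapping block id -> block bytes (previous backup)
    changed_blocks -- dict mapping block id -> new block bytes
    """
    synthetic = dict(previous_full)   # shallow copy: unchanged blocks shared
    synthetic.update(changed_blocks)  # overlay only the blocks that changed
    return synthetic

full = {"A": b"a1", "B": b"b1", "C": b"c1"}
changed = {"B": b"b2", "C": b"c2"}
synthetic = create_synthetic_full(full, changed)
```

In this sketch, only the changed blocks consume new storage; the remaining blocks of the synthetic full backup are the very same objects held by the previous full backup, mirroring how the deduplicating datastore shares unchanged data.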
The following will provide, with reference to
In certain embodiments, one or more of modules 102 in
As illustrated in
As illustrated in
As illustrated in
It may be readily appreciated that the system 100 for agentless and accelerated backup of a database may include a computing device comprising at least one physical processor 130 and physical memory 140 coupled to the at least one physical processor 130. Utilizing data block receipt module 104, the at least one physical processor 130 is configured to receive, by a data backup device from a data server, blocks of data that provide a full backup 121 of data of the data server. Utilizing native log receipt module 106, the at least one physical processor 130 is additionally configured to receive, by the data backup device from the data server, one or more native logs 122 indicating one or more transactions performed by the data server. Utilizing changed block(s) determination module 108, the at least one physical processor is also configured to determine, by the data backup device and based on the native logs, one or more changed blocks 124 of the blocks of data. Utilizing instant access restoration module 110, the at least one physical processor 130 is further configured to provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup 125 that overlays one or more of the blocks of data with the one or more changed blocks 124, and that shares remaining blocks of the blocks of data with the full backup 121.
In some embodiments of the system, the at least one physical processor may be configured to determine the one or more changed blocks 124 at least in part by spinning up a database container image 123, by the data backup device, and using the database container image 123 to generate a synthetic full copy by applying the native logs 122 to a previous full backup (e.g., the original full backup or a previous synthetic full backup 125). In such embodiments, the at least one physical processor 130 may be configured to create the synthetic full backup 125 at least in part by performing a backup of the synthetic full copy in which an acceleration feature overlays blocks of data of the previous full backup with the changed blocks of data, and shares unchanged blocks of data with the previous full backup.
Many other devices or subsystems may be connected to system 100 in
The term “computer-readable medium,” as used herein, generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
As illustrated in
At step 204, one or more of the systems described herein may receive, by the data backup device from the data server, one or more native logs indicating one or more transactions performed by the data server. For example, native log receipt module 106 may receive the native logs from the data server and store the logs in memory. In some embodiments, receiving the native logs at step 204 includes receiving the native logs over a file sharing mechanism from the data server configured in log replication mode. Such a file sharing mechanism is described in greater detail below with reference to
At step 206, one or more of the systems described herein may determine, by the data backup device and based on the native logs, one or more changed blocks of the blocks of data. For example, changed block(s) determination module 108 may spin up a database container image, by the data backup device, and use the database container image to generate a synthetic full copy by applying the native logs to a previous backup (e.g., the full backup or a previous synthetic backup). Processing may proceed from step 206 to step 208.
At step 208, one or more of the systems described herein may provide, by the data backup device, a point in time restore of the data server by creating a synthetic full backup that overlays one or more of the blocks of data with the one or more changed blocks, and that shares remaining blocks of the blocks of data with the previous backup (e.g., the full backup or a previous synthetic backup). Creating the synthetic full backup at step 208 may include performing a backup of the synthetic full copy by cataloguing the data blocks so that the data blocks that changed overlay corresponding blocks of the previous full backup.
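The determination of changed blocks at step 206 can be sketched as follows. This is a simplified illustration under stated assumptions, not the disclosed implementation: a plain function stands in for the database container image replaying logs, each native log entry is modeled as a (block id, new value) pair, and all names are hypothetical.

```python
# Sketch of step 206: apply native log transactions to a copy of the previous
# backup (standing in for the container replaying the logs), then diff the
# result against the previous backup to find the changed blocks.

def determine_changed_blocks(previous_backup, native_logs):
    """previous_backup: dict block id -> value; native_logs: (block, value) pairs."""
    replayed = dict(previous_backup)
    for block_id, new_value in native_logs:  # replay each logged transaction
        replayed[block_id] = new_value
    # Only blocks whose replayed value differs from the backup are "changed".
    return {bid: val for bid, val in replayed.items()
            if previous_backup.get(bid) != val}

backup = {"A": 1, "B": 2}
logs = [("B", 3), ("C", 4)]       # one update and one newly created block
changed = determine_changed_blocks(backup, logs)
```

The changed blocks returned here are exactly what step 208 would overlay onto the previous backup when cataloguing the synthetic full backup.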
Turning now to
In the example of
Together, NFS data mount, NFS data export, VpFS writable overlay, and MSDP deduplication engine combine as a technology known as UniversalShares™. This technology provides a way to export data of a media server as an NFS network file share. The NFS mount provides an ability for the data server 300 to see all of the backups and data stored on the data backup device 302. A UniversalShares™ folder on the data server 300 can be copied to backup the data and deduplication is performed in the background by MSDP. Since data is already copied to the data backup device 302, a backup can be performed by pointing to the data and cataloguing it, without the need to transfer any data from the server 300 to the backup device 302.
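The catalog-only backup described above might be sketched as follows; this is an illustrative assumption about the bookkeeping involved, with hypothetical names, rather than the actual cataloguing logic of the product.

```python
# Sketch of a catalog-only backup over a share: because the files already
# reside on the backup device's share, "backing up" amounts to recording
# catalog entries that point at the existing data, with no data transfer.

def catalog_backup(share_files, backup_id):
    """share_files: dict path -> size of files already present on the share."""
    return [{"backup": backup_id, "path": path, "size": size}
            for path, size in sorted(share_files.items())]

entries = catalog_backup({"/ushare/db/base.tar": 1024}, "B1")
```

Since no file contents are read or copied, such a backup completes in time proportional to the number of catalogued files rather than the volume of data.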
As shown in
In a particular example using UniversalShares™ and a PostgreSQL database on data server 300, backup of a full dataset may be performed through pg_basebackup by running "USHARE" backup. Additionally, incremental dataset backup may be accomplished through archive_command. Then, the accelerator feature may be implemented by spinning up a PostgreSQL container on Flex data backup device 302, applying the incremental dataset through pg_standby, and performing "USHARE" backup. Performing "USHARE" backup thus backs up a synthetic full copy and backs up an incremental dataset to provide point-in-time restore capabilities.
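The PostgreSQL side of this example might be configured along the following lines. This is an illustrative sketch only; the paths, host name, and archive location are assumptions rather than values from this disclosure, though wal_level, archive_mode, and archive_command are standard PostgreSQL settings.

```conf
# postgresql.conf -- ship native logs (WAL) to the share (paths are assumed)
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/ushare/wal_archive/%f'
```

The initial full dataset could then be taken with, for example, `pg_basebackup -h db-host -D /mnt/ushare/base -Ft -X fetch`, after which the files already present on the share are catalogued as the full backup.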
In the particular example using UniversalShares™ and a PostgreSQL database on data server 300 and the Flex data backup device 302, a regular restore may be performed from any backup through instant access. Accordingly, a point in time recovery between times T1 and T2 may be performed in part by restoring backup of T1 and restoring incremental logs from T2. Then, the point in time restore may be completed through recovery_target_time, recovery_target_xid, recovery_target_inclusive, and recovery_target_timeline.
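For illustration, a point in time restore using the recovery targets named above might use settings such as the following; the target values shown are assumptions, while the parameter names are standard PostgreSQL recovery settings.

```conf
# Recovery settings for point in time restore (target values are illustrative)
recovery_target_time = '2021-06-01 12:00:00'
# recovery_target_xid = '12345'     # alternatively, stop at a transaction id
recovery_target_inclusive = true
recovery_target_timeline = 'latest'
```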
Turning to
In contrast, a data server 450A that does not use an agent, but instead provides data and logs natively to the data backup device 452 having a DB container image 459, may perform, at 454, an initial full backup 456, and thereafter transmit native logs 458 to the data backup device 452. Subsequently, the device 452 may spin up the database container image 459, which plays the native logs 458 and applies them to the most recent full backup, which is the initial full backup 456 in this instance, resulting in a synthetic full copy. This synthetic full copy may then be catalogued as a synthetic full backup of synthetic full backups 460. Thereafter, additional native logs 462 may be received and stored on the device 452. Again, the device 452 may spin up the database container image 459, which plays the additional native logs 462 and applies them to the most recent full backup, which is a most recent synthetic full backup 460 in this instance, resulting in an additional synthetic full copy. This additional synthetic full copy may then be catalogued as an additional one of the synthetic full backups 460. Thereafter, in order to perform a restore 464 to a most recent restore point, it is only necessary for the data server 450B to copy the latest full backup from among the synthetic full backups 460, resulting in a copied full backup 466 without having to copy or play any logs.
Additionally, it is also possible to perform a transaction level restore by selecting a particular one of the native logs by, for example, providing a transaction identifier corresponding to a particular one of the native logs. Then, the most recent full backup (e.g., initial full backup or most recent synthetic backup) that precedes the selected particular one of the native logs may be copied from the device 452 to the data server 450B along with the identified native log and any other native logs that occurred in between the most recent full backup that precedes the selected particular one of the native logs and the identified native log. The resulting subset of native logs may then be played at the data server 450B and applied to the most recent full backup that precedes the selected particular one of the native logs.
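The selection of what must be copied for such a transaction level restore can be sketched as follows. This is an illustrative assumption about the bookkeeping: sequence numbers stand in for transaction identifiers, and the function and variable names are hypothetical.

```python
# Sketch of transaction level restore selection: pick the most recent full
# backup that precedes the selected native log, plus every native log between
# that backup and the selected log (inclusive).

def restore_set(full_backups, native_logs, target_seq):
    """full_backups: list of (seq, backup_id); native_logs: list of seq numbers."""
    # Most recent full backup at or before the selected log.
    base_seq, base_id = max((b for b in full_backups if b[0] <= target_seq),
                            key=lambda b: b[0])
    # Only the logs after that backup, up to and including the selected one.
    logs = [s for s in native_logs if base_seq < s <= target_seq]
    return base_id, logs

backups = [(0, "F0"), (10, "S1")]   # initial full backup, then a synthetic full
logs = [5, 12, 15]                  # retained transaction level restore points
base, subset = restore_set(backups, logs, 12)
```

Only the log subset between the chosen full backup and the target transaction needs to be played at the data server, which is what yields the greatly reduced restore time described above.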
From the foregoing, it should be apparent that the system that uses an agent on the data server 400A suffers from the need to perform periodic full backups, cannot take advantage of an accelerator feature, requires substantial recovery time to copy, play, and apply all of the logs during a restore, is not instant access ready, and is susceptible to breaks in an image chain. In contrast, the system that does not use an agent on the data server 450A, and that uses a database container image 459 on the data backup device 452, enjoys numerous advantages. For example, the agentless implementation with native log shipping allows the data backup device to be employed with various types of databases without installing an agent on the data server. Also, the implementation is forever incremental, resulting in no need for the data server to periodically perform full backups. Also, the agentless implementation is able to provide instant access with acceleration features, thus avoiding the need for any recovery processing by the data server 450B. Further, the agentless implementation can perform a restore at transaction level granularity with greatly reduced time required to play only a subset of logs. Finally, the agentless implementation is more resilient to disruptions in the image chain.
Turning now to
Subsequently, at second time instance T2, the data server 500 performs transactions that result in changes to Block B and Block C, resulting in Block B′ and Block C′. Data server 500 transmits native logs, at 514, to data backup device 502, which stores the native logs NL1 and NL2 in memory 508. At second time instance T2, the Block B′ and Block C′ have not yet been copied to the data backup device 502.
At third time instance T3, following second time instance T2, data backup device 502 spins up a database container image, as described herein, and plays the logs NL1 and NL2 to determine a first set of changes C1, corresponding to Block B′ and Block C′. Determining this first set of changes C1 at third time instance T3 generates Block B′ and Block C′ and stores them in memory 508, avoiding the need to copy any changed blocks from the data server 500. The native log NL1 may be retained in memory as a transaction level restore point. In contrast, native log NL2, which is subsequent to native log NL1 and is the log last received before spinning up the database container image, may be deleted or dereferenced for garbage collection. In this example, NL2 will not be needed for a transaction level restore because a synthetic full backup developed from NL1 and NL2 will provide the same result as such a transaction level restore, but without the need to play any logs at the database server.
At fourth time instance T4, following third time instance T3, data backup device 502 creates a first synthetic full backup S1 by cataloguing blocks of the previous full backup, as recorded in full backup data structure 510, and overlaying Block B′ over Block B, and Block C′ over Block C. Thereafter, the first set of changes C1 may be deleted or dereferenced for garbage collection, thus reducing consumption of memory 508.
At fifth time instance T5, following fourth time instance T4, the data server 500 performs transactions that result in changes to Block B′ and the objects data structure 518 (i.e., by deleting File C and its member Block D), resulting in Block B″ and Objects' data structure 518. Data server 500 transmits native logs, at 520, to data backup device 502, which stores the native logs NL3 and NL4 in memory 508. At fifth time instance T5, the Block B″ and Objects' data structure 518 have not yet been copied to the data backup device 502.
At sixth time instance T6, following fifth time instance T5, data backup device 502 spins up the database container image, as described herein, and plays the logs NL3 and NL4 to determine a second set of changes C2, corresponding to Block B″ and Objects'.
Determining this second set of changes C2 at sixth time instance T6 obtains Block B″ and Objects' and stores them in memory 508, avoiding the need to copy any changed blocks from the data server 500. The native log NL3 may be retained in memory as a transaction level restore point. In contrast, native log NL4, which is subsequent to native log NL3 and is the log last received before spinning up the database container image, may be deleted or dereferenced for garbage collection. In this example, NL4 will not be needed for a transaction level restore because a synthetic full backup developed from NL3 and NL4 will provide the same result as such a transaction level restore, but without the need to play any logs at the database server.
At seventh time instance T7, following sixth time instance T6, data backup device 502 creates a second synthetic full backup S2 by cataloguing blocks of the previous full backup, as recorded in first synthetic full backup data structure S1, and overlaying Block B″ over Block B′, and Objects' over Objects. Thereafter, the second set of changes C2 may be deleted or dereferenced for garbage collection, thus reducing consumption of memory 508.
As shown at seventh time instance T7, the data server 500 can see three point in time restore points corresponding to full backup data structure 510, first synthetic full backup S1, and second synthetic full backup S2. Data blocks of any of the full backups may be copied to the data server 500 to perform a restore operation without the need to play any logs at the data server 500. The data server 500 can also see two transaction level restore points corresponding to native logs NL1 and NL3. Transaction level restores may also be performed by copying a selected one of native logs NL1 and NL3 to the data server 500 along with data blocks of the immediately preceding full backup thereof (i.e., full backup 510 preceding NL1 or synthetic backup S1 preceding NL3). The copied native log may then be played at the data server 500 and applied to the copy of its preceding full backup in order to complete the transaction level restore. Additionally, any native logs received after a most recent synthetic full backup that have not yet been used to create a synthetic full backup may be used as transaction level restore points. The data server 500 may simply copy the data blocks referenced by the latest synthetic full backup, a selected native log, and any other native logs that precede the selected native log and are subsequent to the latest synthetic full backup, play the resulting subset of native logs, and apply them to the most recent synthetic full backup. Accordingly, the need for the data server 500 to play logs during a restore operation may be eliminated or reduced.
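The log retention rule illustrated by NL1 through NL4 above can be sketched as follows. This is an illustrative assumption about the bookkeeping, with hypothetical names: within each batch of native logs folded into a synthetic full backup, earlier logs are kept as transaction level restore points, while the final log of the batch is released for garbage collection because the resulting synthetic full backup already reflects it.

```python
# Sketch of the retention rule: per batch of native logs consumed by one
# synthetic full backup, keep all but the last log as transaction level
# restore points and release the last log for garbage collection.

def retain_restore_points(log_batches):
    """log_batches: list of lists of log names, one inner list per synthetic full."""
    retained, released = [], []
    for batch in log_batches:
        retained.extend(batch[:-1])   # kept as transaction level restore points
        released.append(batch[-1])    # covered by the synthetic full backup
    return retained, released

# Two batches, matching the example above: (NL1, NL2) -> S1 and (NL3, NL4) -> S2.
retained, released = retain_restore_points([["NL1", "NL2"], ["NL3", "NL4"]])
```

Applied to the example above, this keeps NL1 and NL3 as transaction level restore points and releases NL2 and NL4, matching the garbage collection behavior described for time instances T3 and T6.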
As set forth above, the data backup device described herein uses native log shipping mechanisms provided by most databases to transfer logs to the data backup device (e.g., NetBackup Media Server™) and may also use a file sharing mechanism, such as a file sharing interface of MSDP (UniversalShares™, VpFS), to keep the full backup. Then, by periodically bringing up a container that runs the database, the data backup device applies the logs to a previous database copy stored in MSDP to obtain a synthetic full accelerated copy. This solution is described herein with particular reference to UniversalShares™ as a sharing mechanism and NetBackup Media Server™ as a data backup device, but it is also applicable to any data backup device having a writable overlay, such as VpFS, and a data deduplication engine, such as MSDP.
A full backup of a database may be dumped to UniversalShares™ or to a file system interface of the data server, which may dump data files of the database via UniversalShares™ or by copying a TAR file. The data files of the database present in UniversalShares™ are backed up by a special policy, which may only catalogue the files, since the data is already present, thus completing the backup very quickly. The database may be configured to archive or replicate logs, and subsequent logs may be copied to a configured location, which may be a UniversalShares™ folder serving as an MSDP interface to VpFS/MSDP. Once the logs are copied to the archive location, the database may automatically recycle them. Periodically, as per an incremental schedule, a container may be brought up on the data backup device that has a database software version compatible with the database software on the production host data server. The container image uses database native mechanisms to get logs from the archive location, apply them to its base copy, and update the base copy. The backup of the updated copy may then be taken and, as per the UniversalShares™ policy, only updated data blocks of the data files are backed up, since MSDP, being a deduplication datastore, can determine changed blocks and back them up. This procedure ensures that a synthetic full and consistent copy of the database files is always available at the data backup device, and that it is instantly accessible since no replay of logs is required in a restore operation. This solution is advantageous because it is a Flex native solution that can be used for quick proliferation of a large number of database workloads.
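As a minimal sketch of the changed-block determination mentioned above, a deduplicating datastore can fingerprint fixed-size blocks by content hash and store anew only blocks whose fingerprints are not already present. The block size and function names here are illustrative assumptions, not MSDP internals:

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real datastores use KB-sized blocks

def fingerprints(data: bytes) -> list:
    """Content hash of each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks in `new` whose content differs from `old`."""
    old_fp = fingerprints(old)
    return [i for i, fp in enumerate(fingerprints(new))
            if i >= len(old_fp) or fp != old_fp[i]]

base = b"AAAABBBBCCCC"
updated = b"AAAAXXXXCCCC"
# Only the middle block differs, so only it is backed up anew.
```

This is why the incremental backup of the updated copy is cheap: unchanged blocks deduplicate against the blocks already stored, and only the delta consumes new storage.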
While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.
In some examples, all or a portion of example system 100 in …
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
While various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these example embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using modules that perform certain tasks. These modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these modules may configure a computing system to perform one or more of the example embodiments disclosed herein.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the example embodiments disclosed herein. This example description is not intended to be exhaustive or limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
Inventors: Chirag Dalal; Vaijayanti Bharadwaj
Assignee: Veritas Technologies LLC (assignments from inventors Bharadwaj and Dalal recorded Feb. 25, 2020)