Methods and systems for restoring a tenant database of a multitenant database system with multiple tenant databases are provided. In response to a request to restore a first tenant database, a system verifies that the first tenant database is present in the multitenant database system with the same identifier both when a backup of the first tenant database was taken during a backup operation and when the restore request was received. A restore method is selected based on a storage type used for storing the first tenant database. The selected restore method is executed for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases.
|
1. A method, comprising:
verifying, by a processor, in response to a restore request to restore a first tenant database from a plurality of tenant databases of a multitenant database instance, that the first tenant database is present in the multitenant database instance with a same identifier when a backup of the first tenant database was taken during a backup operation, prior to receiving the restore request, and when the restore request was received, wherein the multitenant database instance is configured as a logical container to present the plurality of tenant databases;
selecting, by the processor, a restore method for restoring the first tenant database, the restore method based on whether the first tenant database is stored using a storage area network (SAN) or a non-SAN based storage system; and
executing, by the processor, the selected restore method for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases of the plurality of tenant databases, wherein the internal file is generated by a database application to capture a state of the multitenant database instance and a state of the first tenant database when the backup was taken, and a location of the internal file is stored with metadata of the backup by a storage system and provided to the database application by a plugin for restoring the first tenant database.
8. A non-transitory, machine readable storage medium having stored thereon instructions for performing a method, comprising machine executable code which, when executed by at least one machine, causes the machine to:
verify, in response to a restore request to restore a first tenant database from a plurality of tenant databases of a multitenant database instance, that the first tenant database is present in the multitenant database instance with a same identifier when a backup of the first tenant database was taken during a backup operation, prior to receiving the restore request, and when the restore request was received, wherein the multitenant database instance is configured as a logical container to present the plurality of tenant databases;
select a restore method for restoring the first tenant database, the restore method based on whether the first tenant database is stored using a storage area network (SAN) or a non-SAN based storage system; and
execute the selected restore method for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases of the plurality of tenant databases, wherein the internal file is generated by a database application to capture a state of the multitenant database instance and a state of the first tenant database when the backup was taken, and a location of the internal file is stored with metadata of the backup by a storage system and provided to the database application by a plugin for restoring the first tenant database.
15. A system, comprising:
a memory containing machine readable medium comprising machine executable code having stored thereon instructions; and a processor coupled to the memory, the processor configured to execute the machine executable code to:
verify, in response to a restore request to restore a first tenant database from a plurality of tenant databases of a multitenant database instance, that the first tenant database is present in the multitenant database instance with a same identifier when a backup of the first tenant database was taken during a backup operation, prior to receiving the restore request, and when the restore request was received, wherein the multitenant database instance is configured as a logical container to present the plurality of tenant databases;
select a restore method for restoring the first tenant database, the restore method based on whether the first tenant database is stored using a storage area network (SAN) or a non-SAN based storage system; and
execute the selected restore method for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases of the plurality of tenant databases, wherein the internal file is generated by a database application to capture a state of the multitenant database instance and a state of the first tenant database when the backup was taken, and a location of the internal file is stored with metadata of the backup by a storage system and provided to the database application by a plugin for restoring the first tenant database.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
9. The non-transitory machine readable storage medium of
10. The non-transitory machine readable storage medium of
11. The non-transitory machine readable storage medium of
12. The non-transitory machine readable storage medium of
13. The non-transitory machine readable storage medium of
14. The non-transitory machine readable storage medium of
16. The system of
17. The system of
18. The system of
19. The system of
20. The system of
|
The present disclosure relates to networked storage systems and, more particularly, to protecting and restoring a tenant database of a multitenant database system.
Various forms of storage systems are used today. These forms include direct attached storage (DAS) systems, network attached storage (NAS) systems, storage area networks (SANs), and others. Network storage systems are commonly used for a variety of purposes, such as providing multiple users with access to shared data, backing up data and others.
A storage system typically includes at least one computing system executing a storage operating system for storing and retrieving data on behalf of one or more client computing systems (“clients”). The storage operating system stores and manages shared data containers in a set of mass storage devices.
Storage systems are used by different applications, for example, multitenant database systems, electronic mail (email) servers, virtual machines executed within virtual machine environments (for example, a hypervisor operating environment) and others to store and protect data using backups.
A multitenant database system enables a plurality of independent tenant databases to store information using a single database instance. An example of a multitenant database system is the SAP HANA Multitenant Database Container (MDC) system provided by SAP Inc. (without derogation of any trademark rights of SAP Inc.).
Conventional storage systems typically do not enable restoring a specific tenant database without impacting other tenant databases of the multitenant database system. To restore a specific tenant database stored using a storage area network (SAN) logical unit number (LUN), conventional storage systems first restore the entire multitenant database system with all the tenant databases and then restore the specific tenant database. During the restore process, all the tenant databases become unavailable for a certain duration, which is disruptive for tenants that do not need to be restored. Continuous efforts are being made to develop computing technology to efficiently back up and restore a specific tenant database in a multitenant database system while reducing disruption to other tenant databases.
The foregoing features and other features will now be described with reference to the drawings of the various aspects of the present disclosure. In the drawings, the same components have the same reference numerals. The illustrated aspects are intended to illustrate, but not to limit the present disclosure. The drawings include the following Figures:
As a preliminary note, the terms “component”, “module”, “system,” and the like as used herein are intended to refer to a computer-related entity, either a software-executing general-purpose processor, hardware, firmware, or a combination thereof. For example, a component may be, but is not limited to being, a process running on a processor, a hardware-based processor, an object, an executable, a thread of execution, a program, and/or a computer.
By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
Computer executable components can be stored, for example, at non-transitory, computer readable media including, but not limited to, an ASIC (application specific integrated circuit), CD (compact disc), DVD (digital video disk), ROM (read only memory), solid state drive, hard disk, EEPROM (electrically erasable programmable read only memory), non-volatile memory or any other storage device, in accordance with the claimed subject matter.
System 100:
System 100 also includes a management system 132 that executes instructions for a management module (or application) 134 for coordinating storage services related operations (for example, backups, restore, cloning and other operations) for a multitenant database environment, described below in detail. Although the management system 132 with the management module 134 is shown as a stand-alone module, it may be implemented with other applications, for example, within a virtual machine environment described below. Furthermore, the management system 132 and the management module 134 may be referred to interchangeably throughout this specification.
In one aspect, host systems 102 may execute a plurality of applications 126A-126N (may also be referred to as “application 126” or “applications 126”), for example, an email server (Exchange server), a database application (for example, a multitenant database application and others) and others. These applications may be executed in different operating environments, for example, a virtual machine environment, Windows, Solaris, Unix and others. The applications may use storage system 120 to store information at storage devices, as described below.
To protect information associated with each application, plugin modules (shown as application plugins 128A-128N) are executed at host systems 102. The term plugin as used in this application means a processor executable layer that is customized to interface with a specific application, e.g. a multitenant database application, a file system, an operating system and others, described below in detail. The term protect means to back up an application and/or back up associated information (including configuration information and data (e.g. files, directories, structured or unstructured data, which may jointly be referred to as data containers) and others).
Each host system 102 executes an agent 130A-130N that interfaces with the management module 134 and the various application plugins for managing backups, restore, cloning and other operations, as described below in detail.
In one aspect, the storage system 120 has access to a set of mass storage devices 114A-114N (may be referred to as “storage devices 114” or “storage device 114”) within at least one storage subsystem 116. The storage devices 114 may include writable storage device media such as solid state drives, storage class memory, magnetic disks, video tape, optical, DVD, magnetic tape, non-volatile memory devices for example, self-encrypting drives, or any other storage media adapted to store structured or non-structured data. The storage devices 114 may be organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID). The various aspects disclosed are not limited to any specific storage device or storage device configuration.
The storage system 120 provides a set of storage volumes to host systems 102 via the connection system 118. The storage operating system 124 can present or export data stored at storage devices 114 as a volume (or a logical unit number (LUN) for storage area network (“SAN”) based storage). Each volume may be configured to store data files (or data containers or data objects), scripts, word processing documents, executable programs, and any other type of structured or unstructured data. From the perspective of one of the client systems, each volume can appear to be a single storage drive. However, each volume can represent the storage space in one storage device, an aggregate of some or all the storage space in multiple storage devices, a RAID group, or any other suitable set of storage space. An aggregate is typically managed by the storage operating system 124 and identified by a unique identifier (not shown).
The storage system 120 may be used to store and manage information at storage devices 114 based on a request generated by application 126 executed by host system 102 or any other entity. The request may be based on file-based access protocols, for example, the Common Internet File System (CIFS) protocol or Network File System (NFS) protocol, over the Transmission Control Protocol/Internet Protocol (TCP/IP). Alternatively, the request may use block-based access protocols for SAN storage, for example, the Small Computer Systems Interface (SCSI) protocol encapsulated over TCP (iSCSI) and SCSI encapsulated over Fibre Channel (FC), object-based protocol or any other protocol.
In a typical mode of operation, one or more input/output (I/O) requests are sent over connection system 118 to the storage system 120. Storage system 120 receives the I/O requests, issues one or more I/O commands to storage devices 114 to read or write the data on behalf of the host system 102, and issues a response containing the requested data over the network 118 to the respective host system 102.
Although storage system 120 is shown as a stand-alone system, i.e. a non-cluster based system, in another aspect, storage system 120 may have a distributed architecture; for example, a cluster based system that may include a separate network module and storage module. Briefly, the network module is used to communicate with host systems 102, while the storage module is used to communicate with the storage devices 114, as described below with respect to
Alternatively, storage system 120 may have an integrated architecture, where the network and data components are included within a single chassis. The storage system 120 further may be coupled through a switching fabric to other similar storage systems (not shown) which have their own local storage subsystems. In this way, all the storage subsystems can form a single storage pool, to which any client of any of the storage servers has access.
In one aspect, the storage services module 122 at the storage system 120 coordinates storage volume backups, cloning, restore and replication for different hosts and different applications. Although the storage services module 122 is shown as a single block, it may include various modules for taking backups, executing restore operations, replicating backups from one location to another and so forth. As described below, backups and other operations may be performed using the management system 132 and the management module 134. As an example, taking backups may include taking snapshots, i.e. a point-in-time copy of a storage volume. The point-in-time copy captures all the information in a storage volume. The snapshot may be used to restore a storage volume at any given time, as described below.
One or more of the host systems (for example, 102A) may execute a virtual machine environment where a physical resource is time-shared among a plurality of independently operating processor executable virtual machines (also referred to as VMs). Each VM may function as a self-contained platform, running its own operating system (OS) and computer executable, application software. The computer executable instructions running in a VM may be collectively referred to herein as “guest software.” In addition, resources available within the VM may be referred to herein as “guest resources.”
The guest software expects to operate as if it were running on a dedicated computer rather than in a VM. That is, the guest software expects to control various events and have access to hardware resources on a physical computing system (may also be referred to as a host system) which may be referred to herein as “host hardware resources”. The host hardware resource may include one or more processors, resources resident on the processors (e.g., control registers, caches and others), memory (instructions residing in memory, e.g., descriptor tables), and other resources (e.g., input/output devices, host attached storage, network attached storage or other like storage) that reside in a physical machine or are coupled to the host system.
Referring to
In one aspect, host system 102A may also include a virtual machine monitor (VMM) 106, for example, a processor executed hypervisor layer provided by VMWare Inc., Hyper-V layer provided by Microsoft Corporation of Redmond, Wash. or any other layer type. VMM 106 presents and manages the plurality of guest OS 104A-104N executed by the host system 102A.
In one aspect, VMM 106 is executed by host system 102A with VMs 112A-112N. In another aspect, VMM 106 may be executed by an independent stand-alone computing system, often referred to as a hypervisor server or VMM server and VMs 112A-112N are presented on another computing system.
It is noteworthy that various vendors provide different virtualization environments, for example, VMware Corporation, Microsoft Corporation and others (without derogation of any third party trademark rights). The generic virtualization environment described above with respect to
VMM 106 may include or interface with a virtualization layer (VIL) 108 that provides one or more virtualized hardware resource 110 to each OS 104A-104N. VMM 106 also includes or interfaces with a hypervisor plugin (shown as application plugin 128A) and the agent 130A.
In one aspect, to protect tenant databases 123, agent 130 (e.g. at host 102) interfaces with the management module 134 (at the management system 132) via a network connection. The agent 130 receives requests for storage services (e.g. backup and restore) either from the management system 132 or any other entity. The agent 130 provides the requests to a database application plugin (e.g. a HANA plugin) 111 (may also be referred to as plugin 111). The plugin 111 is customized for application 107 and interfaces with application 107 to perform database related discovery, backup and restore operations, as described below in detail. The plugin 111 also interfaces with an operating system plugin 113 for retrieving any operating system related information for executing storage services for host 102. A storage system interface (may also be referred to as a “file system plugin”) 117 interfaces with the storage system 120 to send and receive storage related information, as described below in detail. Details of executing a backup and restore operation are provided below with respect to
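As an illustrative, non-limiting example, the following Python sketch shows how an agent might route a storage-service request to a database plugin that, in turn, consults an operating system plugin and a storage system interface. The class and method names are hypothetical assumptions made only for illustration; they are not the actual interfaces of agent 130, plugin 111, plugin 113 or storage system interface 117.

```python
# Hypothetical sketch of the agent-to-plugin routing described above.
# All class and method names are illustrative, not the disclosed interfaces.

class OSPlugin:
    def host_info(self):
        # Operating system details needed to execute storage services for the host.
        return {"os": "linux", "mounts": ["/hana/data", "/hana/log"]}

class StorageSystemInterface:
    def storage_layout(self, file_paths):
        # Ask the storage system how the given tenant files are stored.
        return {path: {"volume": "vol_hana", "type": "SAN"} for path in file_paths}

class DatabasePlugin:
    """Customized for the multitenant database application."""
    def __init__(self, os_plugin, storage_interface):
        self.os_plugin = os_plugin
        self.storage_interface = storage_interface

    def backup(self, tenant):
        host = self.os_plugin.host_info()
        layout = self.storage_interface.storage_layout([f"/hana/data/{tenant}"])
        return {"tenant": tenant, "host": host, "layout": layout}

class Agent:
    """Receives storage-service requests from the management module and routes them."""
    def __init__(self, db_plugin):
        self.db_plugin = db_plugin

    def handle(self, request):
        if request["op"] == "backup":
            return self.db_plugin.backup(request["tenant"])
        raise ValueError(f"unsupported operation: {request['op']}")

agent = Agent(DatabasePlugin(OSPlugin(), StorageSystemInterface()))
print(agent.handle({"op": "backup", "tenant": "TENANT_A"}))
```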
Backup Process Flow 129:
The second phase of the discovery operation involves authenticating a user key provided by the database application 107 to access the system database 121. The user key may be authenticated by the agent 130 and/or the plugin 111. Once the user key is authenticated, the plugin 111 generates a query for the system database 121 with the user key. The query requests file paths for each tenant database 123 indicating where each tenant database 123 is stored. The system database 121 maintains the file paths for each tenant database 123. The file paths are used to provide access to tenant databases 123A-123N. The plugin 111 then stores a mapping of the individual files of each tenant database 123 with their respective file paths. This information may be stored in a mapping data structure (not shown) in host 102 memory (not shown). Thereafter, the plugin 111 uses the file paths and the mapping information to request the storage layout of each tenant database 123 from the storage system interface 117 that communicates with the storage system 120.
The storage layout information defines how each tenant database 123 is stored by the storage system 120. The storage system interface 117 obtains a mount point for a storage volume associated with the tenant databases 123. A mount point is a drive or volume in an operating system that is mounted to a folder that uses a file system. When SAN is used for storing the tenant databases 123, the storage layout provides a LUN name that identifies a LUN that stores the tenant databases, an identity of a storage volume associated with the LUN, a LUN size, an identity of a storage server that manages the LUN, permissions associated with the LUN, or any other LUN configuration details. For non-SAN based storage (e.g. Network Attached Storage (NAS) accessed using NFS or CIFS protocols), the storage layout information includes an identity of each data volume storing tenant database 123A-123N files, a size of each data volume, or any other configuration details. The discovered information is then persistently stored (e.g. at database 176 shown in
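As an illustrative, non-limiting example, the discovered storage layout could be represented roughly as shown in the following Python sketch; the field names and values are assumptions used only for illustration, not the actual layout format produced by the storage system.

```python
# Illustrative storage-layout records for SAN and non-SAN (NAS) storage.
# Field names and values are hypothetical examples.

san_layout = {
    "type": "SAN",
    "lun_name": "lun_hana_mdc",        # LUN that stores the tenant databases
    "volume": "vol_hana",              # storage volume associated with the LUN
    "lun_size_gb": 512,
    "storage_server": "svm_hana",      # storage server that manages the LUN
    "permissions": ["read", "write"],
    "mount_point": "/hana/data",
}

nas_layout = {
    "type": "NAS",                     # e.g. accessed using NFS or CIFS
    "volumes": [
        {"name": "vol_tenant_a", "size_gb": 128},
        {"name": "vol_tenant_b", "size_gb": 256},
    ],
    "mount_point": "/hana/data",
}

def is_san(layout):
    """Restore-method selection later hinges on this distinction."""
    return layout["type"] == "SAN"

assert is_san(san_layout) and not is_san(nas_layout)
```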
A backup of database 121 and the tenant databases 123 is taken after a quiesce request is initiated by the plugin 111 for the database application 107. During a quiesce operation, the database application 107 generates an internal file to capture the state of the system database 121 and the tenant databases 123A-123N. The internal file may be proprietary to the database application 107. The internal file is used by the database application 107 during recovery as described below. Thereafter, a snapshot of a storage volume is taken by the storage system 120 using the storage services module 122 (of storage system 120). A location of the internal file for each tenant database 123 is mapped to the tenant database 123 and saved with the metadata of the snapshot taken by the storage system 120. The internal file location (i.e. an internal file path used by the host system 102 to access the internal file) with the snapshot metadata is also registered with the management module 134 and persistently stored, e.g. at database 176. Thereafter, an unquiesce operation is executed by the database application 107 to make the system database 121/tenant databases 123A-123N available for access, and the backup operation is completed.
Referring now to
In block B133, during the first phase of the discovery operation, the plugin 111 sends a query to the database application 107 requesting configuration information. The configuration information includes the name of system database 121, a unique identifier for system database 121, a database instance number that identifies an instance of the database 121, the names of the tenant databases 123A-123N, the size of the system database 121, the size of each tenant database 123A-123N, and ownership information for each tenant database indicating which entity owns a specific tenant database. The configuration information also includes any configuration files that indicate how database 121 is configured on host 102 e.g. whether the database 121 is a single instance database or supports multi-tenancy.
In block B135, for the first phase of the discovery operation, the plugin 111 first authenticates a unique user key associated with the system database 121. The unique user key may be obtained directly from a user of the system database 121. The plugin 111 generates a request for the database application 107 with the unique user key to obtain information regarding tenant database file paths. The database application 107 maintains this information in the system database 121. The plugin 111 then maps the individual files of each tenant database 123 in a mapping file. The mapping is maintained by the plugin 111 in a memory at the host 102, and the mapping information includes the identifier for each tenant database 123, the files associated with each tenant database 123 and the file paths used to access each tenant database 123.
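As an illustrative, non-limiting example, the mapping maintained by the plugin 111 in block B135 may be pictured as in the following Python sketch; the identifiers, file names and file paths shown are hypothetical.

```python
# Illustrative in-memory mapping built during discovery (block B135).
# Tenant identifiers, file names and file paths are hypothetical examples.

tenant_mapping = {
    "TENANT_A": {
        "identifier": "tdb-0001",
        "files": ["datavolume_0000.dat", "logsegment_0001.log"],
        "file_paths": ["/hana/data/H01/mnt00001/hdb00003"],
    },
    "TENANT_B": {
        "identifier": "tdb-0002",
        "files": ["datavolume_0000.dat"],
        "file_paths": ["/hana/data/H01/mnt00001/hdb00004"],
    },
}

def files_for(tenant):
    """Resolve the identifier, files and paths needed to back up or restore one tenant."""
    entry = tenant_mapping[tenant]
    return entry["identifier"], entry["files"], entry["file_paths"]

print(files_for("TENANT_A"))
```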
In block B137, during a second phase of the discovery operation, the plugin 111 requests the storage system interface 117 to provide details regarding the storage layout used by the storage system 120 to store data for each tenant database 123. The storage layout includes a mount point, and the identity and size of logical objects, such as storage volumes/LUNs, that may be used to store tenant database files.
Once the storage layout information is obtained, in block B139, the plugin 111 instructs the database application 107 to quiesce. This is intended to prevent modifications to the system database 121/tenant databases 123A-123N during the backup operation. The database application 107 then generates an internal file for the system database 121 and each tenant database 123A-123N. The internal file may be proprietary to the database application 107 and captures the state of each tenant database 123 when the database application 107 is quiesced for the backup.
The location of the internal files for each tenant database 123 and the system database 121 is obtained in block B141, and the mapping information maintained by the plugin 111 is updated to map each tenant database file and the system database 121 to their corresponding internal file. This mapping is used to restore a specific tenant database, as described below.
Thereafter, in block B143, a snapshot is generated of one or more volumes (or a LUN) where the tenant databases 123 are stored. This is executed by the storage services module 122. The snapshot and the snapshot metadata are stored by the storage system 120. The snapshot metadata includes a snapshot identifier, a size of the snapshot, a location where the snapshot is stored, and the location of the internal file for the system database 121 and each tenant database 123A-123N that is used to restore a single tenant database.
In block B145, the database application 107 is unquiesced. In block B147, the snapshot metadata with the internal file location is registered with the management module 134 so that the internal file location information can be used during a restore operation, described below.
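As an illustrative, non-limiting example, the following Python sketch condenses the sequence of blocks B133-B147 into a single routine. The stub classes, method names and paths are assumptions introduced only to make the sketch self-contained; they are not the interfaces of the database application 107, the storage system interface 117 or the management module 134.

```python
# Condensed, runnable sketch of backup blocks B133-B147. The stub classes,
# method names and paths below are illustrative assumptions only.

class StubDatabaseApp:
    def map_tenant_files(self):            # B133/B135: discovery and tenant file mapping
        return {"TENANT_A": ["/hana/data/H01/mnt00001/hdb00003"]}
    def quiesce(self):                     # B139: prevent modifications during the backup
        pass
    def internal_file_locations(self):     # B141: per-tenant internal files capturing state
        return {"TENANT_A": "/hana/backup/TENANT_A.internal"}
    def unquiesce(self):                   # B145: resume normal access
        pass

class StubStorage:
    def storage_layout(self, mapping):     # B137: how the tenant files are stored
        return {"type": "SAN", "volume": "vol_hana"}
    def take_snapshot(self, layout):       # B143: point-in-time copy of the volume/LUN
        return {"metadata": {"snapshot_id": "snap-001", "layout": layout}}

def backup_multitenant_instance(db_app, storage, register_backup):
    mapping = db_app.map_tenant_files()
    layout = storage.storage_layout(mapping)
    db_app.quiesce()
    try:
        internal_files = db_app.internal_file_locations()
        snapshot = storage.take_snapshot(layout)
        # The internal file locations are kept with the snapshot metadata so a later
        # single-tenant restore can hand the correct internal file back to the
        # database application (B143/B147).
        snapshot["metadata"]["internal_files"] = internal_files
    finally:
        db_app.unquiesce()
    register_backup(snapshot["metadata"])  # B147: register with the management module
    return snapshot

registry = []
print(backup_multitenant_instance(StubDatabaseApp(), StubStorage(), registry.append))
```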
Restore Process Flow 159:
When the tenant database files reside on a SAN mount point (i.e. are stored at SAN based storage), all tenant database files reside in a single storage LUN. If a file-level storage restore operation is executed, it will adversely impact other tenant databases, since they are all included in the same LUN. Process 159 instead utilizes a “connect-and-copy” approach to restore a tenant database. In this approach, a clone of the storage volume is created, a LUN from the cloned volume is mapped to the host 102 that hosts the system database 121, a file system is built on the mapped LUN, and the required tenant database files are selectively copied from the mounted LUN to the original tenant database's file path. This approach enables selective recovery of a tenant database while other tenant databases continue to process requests to read and write information.
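As an illustrative, non-limiting example, the connect-and-copy sequence may be sketched in Python as follows. The storage-interface operations (clone_from_snapshot, map_lun, build_filesystem, unmount) are hypothetical stand-ins for operations of the storage system interface 117; a caller would supply an object implementing them.

```python
# Sketch of the "connect-and-copy" restore of a SAN-hosted tenant database.
# The storage-interface operations are hypothetical stand-ins, not a real API.

import os
import shutil

def connect_and_copy(storage, tenant_files, original_path, snapshot_id, host):
    clone_volume = storage.clone_from_snapshot(snapshot_id)   # clone of the snapshot
    lun = storage.map_lun(clone_volume, host)                 # map a LUN from the clone to the host
    mount_point = storage.build_filesystem(lun)               # build a file system on the mapped LUN
    try:
        # Copy only the selected tenant's files back to their original path;
        # other tenants stored in the same LUN are never touched.
        for name in tenant_files:
            shutil.copy2(os.path.join(mount_point, name),
                         os.path.join(original_path, name))
    finally:
        storage.unmount(lun)   # disable the file system on the mapped LUN and dismount the clone
```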
Referring now to
In block B163, a tenant database 123 is selected for a restore operation. The selection may be received by a GUI module (e.g. 142,
In block B165, the plugin 111 performs a plurality of checks. For example, the plugin 111 first checks with the system database 121 to determine whether the selected tenant database 123 exists within an active database file system of the database application 107. If the selected tenant database 123 does not exist, then the restore operation is aborted. If the tenant database 123 exists within the system database 121, then the plugin 111 verifies that a current (i.e. at the time of the restore operation) tenant database identifier matches the tenant database identifier that was used during the backup operation. The plugin 111 performs this verification by retrieving the current tenant database identifier from the system database 121 and comparing the retrieved identifier with the snapshot metadata that stores the tenant database identifiers and the internal file location of each tenant database 123. If the tenant database identifiers are different, the restore operation is aborted.
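As an illustrative, non-limiting example, the checks of block B165 can be sketched in Python as follows; the dictionary layout and field names are assumptions made for illustration, not the disclosed data format.

```python
# Sketch of the pre-restore checks of block B165. The dictionary layout and
# field names are illustrative assumptions, not the disclosed data format.

def verify_tenant_for_restore(system_db, snapshot_metadata, tenant_name):
    # 1. The selected tenant must still exist in the active database file system.
    current = system_db.get(tenant_name)
    if current is None:
        raise RuntimeError(f"{tenant_name} not found; restore aborted")

    # 2. Its current identifier must match the identifier recorded at backup time.
    recorded = snapshot_metadata["tenants"][tenant_name]
    if current["identifier"] != recorded["identifier"]:
        raise RuntimeError(f"identifier mismatch for {tenant_name}; restore aborted")

    # On success, return the internal file location stored with the snapshot metadata.
    return recorded["internal_file"]

system_db = {"TENANT_A": {"identifier": "tdb-0001"}}
snapshot_metadata = {"tenants": {"TENANT_A": {"identifier": "tdb-0001",
                                              "internal_file": "/hana/backup/TENANT_A.internal"}}}
print(verify_tenant_for_restore(system_db, snapshot_metadata, "TENANT_A"))
```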
When the tenant database identifier matches the current identifier, access to the tenant database 123 is temporarily disabled in block B167. It is noteworthy that the other tenant databases 123 continue to operate during the restore operation. A restore method is then identified based on the type of storage used to store the tenant database files. For example, if the tenant database 123 uses SAN based storage, i.e. a single LUN to store the system database 121 and tenant databases 123A-123N, then process blocks B169-B173 are executed. If the tenant database uses non-SAN based storage (e.g. NFS), then a file based restore operation is selected and executed in block B175.
For SAN based storage, in block B169, a clone volume is created of the storage snapshot that includes the tenant database files. The storage services module 122 is provided information regarding the storage snapshot by the storage system interface 117 to enable generating the snapshot clone volume.
In block B171, a LUN from the cloned volume is mapped to the host 102 that hosts the system database 121. A file system is then built at the mapped LUN using the storage system interface 117.
In block B173, the selected tenant database files from the mapped LUN are selectively copied by the storage system interface 117 to the original path of the tenant database 123 that was recorded in the snapshot metadata with the internal file location for each tenant database 123. The filesystem on the mapped LUN is disabled using the storage system interface 117 and the clone is dismounted.
From block B173, the process 159 proceeds to block B177. In block B177, the tenant database files are recovered from the copied files and connected to the system database 121. In order to perform a full recovery, the system database 121 uses the internal file it created for the tenant database that is being restored. Since the internal file location was stored with the snapshot metadata for the selected tenant database, the internal file location is easily retrieved by the storage system interface 117 and provided to the plugin 111. The plugin 111 provides the internal file location to the database application 107, which retrieves the internal file from that location and uses the retrieved internal file and the recovered tenant database files to restore the tenant database 123 to the same state logged in the internal file.
For non-SAN based storage, in block B175, the tenant database files are restored using a file based restore, i.e. individual files are restored from the backup operation of
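As an illustrative, non-limiting example, the following Python sketch ties blocks B167-B177 together: the restore method is selected from the storage type, and the database application then uses the internal file to bring the tenant back to its captured state. The function and method names are assumptions; the san_restore and file_restore callables stand in for the SAN connect-and-copy path (B169-B173) and the file based restore (B175), respectively.

```python
# Sketch of restore-method selection and recovery (blocks B167-B177).
# All names are illustrative; san_restore/file_restore are supplied by the caller.

def restore_tenant(db_app, layout, tenant, internal_file, san_restore, file_restore):
    db_app.disable_access(tenant)             # B167: only the selected tenant goes offline
    try:
        if layout["type"] == "SAN":
            san_restore(tenant)               # B169-B173: connect-and-copy from a clone
        else:
            file_restore(tenant)              # B175: file based restore (e.g. NFS)
        # B177: the database application replays the internal file it generated at
        # backup time so the tenant returns to the state logged in that file.
        db_app.recover(tenant, internal_file)
    finally:
        db_app.enable_access(tenant)          # other tenants served requests throughout
```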
In one aspect, a method for restoring a tenant database from a plurality of tenant databases is provided. The method includes verifying (Block B165,
In another aspect, a non-transitory, machine readable storage medium having stored thereon instructions for performing a method is provided. The machine executable code, when executed by at least one machine, causes the machine to: verify, in response to a request to restore a first tenant database from a plurality of tenant databases of a multitenant database system, that the first tenant database is present in the multitenant database system with a same identifier when a backup of the first tenant database was taken during a backup operation and when the restore request was received; select a restore method for restoring the first tenant database, the restore method based on a storage type used for storing the first tenant database by a storage system; and execute the selected restore method for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases of the plurality of tenant databases. A location of the internal file is stored with metadata of the backup by the storage system and provided to a database application for restoring the first tenant database.
In yet another aspect, a system having a memory containing a machine readable medium comprising machine executable code having stored thereon instructions, and a processor coupled to the memory, is provided. The processor is configured to execute the machine executable code to: verify, in response to a request to restore a first tenant database from a plurality of tenant databases of a multitenant database system, that the first tenant database is present in the multitenant database system; select a restore method for restoring the first tenant database, the restore method based on a storage type used for storing the first tenant database by a storage system; and execute the selected restore method for restoring the first tenant database utilizing an internal file associated with the first tenant database, while processing requests for other tenant databases of the plurality of tenant databases. A location of the internal file is stored with metadata of the backup by the storage system and provided to a database application for restoring the first tenant database.
In one aspect, the innovative processes and systems described herein are rooted in computing technology that enables a multitenant database system to restore a specific tenant database without impacting other tenant databases. The disclosed computing technology improves the functionality of a host system by providing access to tenant databases that are not related to the specific tenant database that is being restored at a given time. Because other tenant databases continue to function while the specific tenant database is being restored, disruption is reduced for client computing devices using a multitenant database system to store and retrieve data.
Management System 132:
The management module 134 includes a graphical user interface (GUI) module 142 that presents a GUI at a display device, for example, a monitor, a smart phone, tablet or any other display device type. The GUIs may be used by different users to interface with the management system 132 and its components, for example to backup and restore a tenant database 123A-123N, as described above. It is noteworthy that the various aspects described herein are not limited to any specific GUI type because a command line interface (CLI) may also be used to implement the adaptive aspects described herein.
The management module 134 may also include a web server 144 for enabling Internet based communication. As an example, web server 144 may be implemented as an Internet Information Services (IIS) web server for a Windows® Server environment (without derogation of any third party trademark rights). The web server 144 interfaces with a workflow engine 158 that coordinates and manages various tasks that are performed by the different components of the management module 134.
In one aspect, the workflow engine 158 coordinates dataset 164 creation and policy allocation, and manages a database 176 that is different from the databases 121/123A-123N, described above. The workflow engine 158 communicates with the various agents 130A-130N for host system related operations, including backup and restore of databases 121/123A-123N.
In one aspect, the management module 134 also includes a protection module 146, a provisioning module 148, a discovery module 150, a recovery module 152, a cloning module 154, a role based access control (RBAC) module 156, a storage abstraction layer (may also be referred to as “SAL”) 173, a hypervisor abstraction layer (may also be referred to as “HAL”) 175, a scheduler 160, a job manager 182, a remote installation module 166, a log manager 168, a policy data structure 170, a reports module 172 and a database access layer 174 that can access the database (or any other data structure type) 176. The database 176 stores various data structures (or objects) in a format that allows the management module 134 to handle storage services for different applications/host systems.
The log manager 168 collects logs from agents 130 and the various plugins. The logs can then be presented to a user via a GUI. The logs may be for event management and audit for various management system 132 operations. The logs may be collected for a job based on a job identifier.
The protection module 146 is used to enforce a policy for a storage services related job (e.g. backup and/or restore of a tenant database). The protection module 146 maintains a protection policy for a plurality of objects (or protection group) that are to be backed up and provides protection service for backing up the protection group objects. Based on an application object, a call is made to an appropriate plugin for providing the appropriate protection service.
In one aspect, protection module 146 maintains protection group objects for abstract representation of a container of application objects where protection of application objects is defined by policy objects. The protection group objects map to dataset objects 164 (shown as dataset 164).
The provisioning module 148 allows a user to configure and provision a LUN/volume (used interchangeably) that may be used to store information (e.g. for a multitenant database system). The provisioning module 148 allows a user to set a LUN size and appropriate permissions for using the LUN, for example, reading and writing data, permission for changing a LUN size, deleting a LUN and other operations. Storage volume information is saved in a standard format at database 176 and includes the name of the storage volume, a storage connection identifier (described below), a size, a junction path, the date the volume was created and an aggregate.
The discovery module 150 interfaces with the agents 130A-130N executed at different host systems to obtain information regarding the host systems, storage resources used by various applications and information regarding data containers that are protected (i.e. backed up) and unprotected. The discovery module 150 also facilitates discovery and management of application specific objects, for example, VMs, databases, hypervisor and others. Based on the application type, an appropriate plugin is used to discover different objects.
In one aspect, the discovery module 150 initiates a discovery operation with the agents 130. An application programming interface (API) presented by the management system 132 determines if an agent 130 is installed at a host 102. If the agent 130 is installed, then the agent 130 discovers the various plugins at that host 102. If the agent 130 is not installed, then it is installed by the remote installation module 166 and the appropriate plugins are installed as well.
The cloning module 154 is used to clone storage volumes that are maintained by the storage system 120. The cloning module 154 is also used for managing the life cycle of a clone. The term clone as used herein is a duplicate copy of a snapshot. The term clone life cycle management means generating a clone, refreshing a clone and deleting a clone based on user defined policies and requirements. Refreshing a clone means deleting an existing clone, generating a new snapshot and then creating the clone again.
The RBAC module 156 stores information regarding different clients/entities that are given access to storage. For example, a business unit may be allowed to read certain storage volumes and may not be allowed to backup, clone, replicate or delete any storage volume. The RBAC module 156 manages the various roles and access type for different applications that may be executed in different host systems/computing environments including a multitenant database environment described above.
In one aspect, RBAC module 156 includes an authentication and authorization module. User authentication may happen at multiple end points, for example, via a GUI login, a login API for clients or plugins and others. The authentication module authenticates users against different domain/subsystem requirements, for example, an Active Directory, a local Windows machine host system, open LDAP (Lightweight Directory Access Protocol) and others. Once a user is authenticated, an encrypted token is generated based on user information. In another aspect, a hash token is generated based on the generated token. The hashed token is saved at database 176. The hashed token may be based on MD5 (Message Digest Algorithm 5), SHA-1 (Secure Hash Algorithm 1) or any other technique.
When the authenticated user logs back in, the user passes the token and the management system 132 decrypts the token, validates the token and regenerates the hash value. The hash value is compared with the stored hash value in database 176.
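As an illustrative, non-limiting example, the hash-token comparison may be sketched as follows using SHA-1 from the Python standard library; the encryption of the token itself and the token contents are simplified for brevity.

```python
# Sketch of the hash-token storage and comparison described above.
# The token value and user names are hypothetical; only the hash is persisted.

import hashlib
import hmac

def hash_token(token: str) -> str:
    return hashlib.sha1(token.encode("utf-8")).hexdigest()

def store_token(db: dict, user: str, token: str) -> None:
    db[user] = hash_token(token)          # only the hashed token is saved (e.g. at database 176)

def validate_token(db: dict, user: str, token: str) -> bool:
    # Regenerate the hash from the presented token and compare with the stored value.
    return hmac.compare_digest(db.get(user, ""), hash_token(token))

db = {}
store_token(db, "backup_admin", "encrypted-token-123")
assert validate_token(db, "backup_admin", "encrypted-token-123")
assert not validate_token(db, "backup_admin", "forged-token")
```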
In one aspect, the authorization module of the RBAC module 156 creates custom roles (for example, an administrator, backup administrator, backup operator, backup viewer, restore administrator and others), modifies existing roles, assigns and unassigns permissions to and from a role (for example, a dataset, policy, host, storage connection, a dashboard, a report, discovery, remote installation and others), assigns and unassigns users to roles and assigns and unassigns resources (for example, hosts, datasets, policy and others).
In one aspect, roles determine a set of capabilities that are available to members of the role. For example, a backup administrator may be allowed to add a host system, add a tenant database, install plugins, create a dataset, create a backup dataset, delete a dataset, create policies, delete backups, restore applications and others. A backup operator may be allowed to start and stop existing dataset jobs, monitor backups, view backup reports and perform application level restore operations. A backup viewer may be given read only access to backups, view existing backups and review job session details. A restore administrator may be allowed to perform restore operations using existing backups of multitenant databases. The adaptive aspects described herein are not limited to these roles.
All the tasks conducted by the management module 134 are organized and monitored by the job manager 182. The job schedules are managed by the scheduler 160. When a new job arrives (e.g. to take a backup of database 121/databases 123A-123N), the job manager 182 stores the job information in a database (for example, 176) with other existing jobs. The job manager 182 creates sub-tasks for executing the appropriate workflows. The sub-tasks depend on the nature of the job (for example, backup, restore, cloning or others). The job manager 182 updates the status of each task to provide real-time updates via a GUI.
In one aspect, the policy data structure 170 is used to store policies for different stored objects (for example, databases 121/123A-123N, data structures, VMs, storage volumes and others). The policy information is configurable and may be changed by a user. In one aspect, the policy data structure 170 format is the same across different applications. This enables the management module 134 to manage storage services across different platforms with different requirements and operating parameters. The policy data structure 170 includes various policy objects to store various policies, each with a set of attributes that can be applied to any dataset. A policy object stores a policy identifier, a policy name, a description, a backup schedule policy, a retention count as to how long a backup is to be retained, a replication policy to replicate a backup from one location to another, types of backups, application consistent backup settings, and verification policies to verify a backup, among others. It is noteworthy that a policy object may be shared across multiple datasets for multiple applications/plugins.
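As an illustrative, non-limiting example, a policy object carrying the attributes listed above could be sketched as the following Python dataclass; the field names and defaults are assumptions rather than the actual format of the policy data structure 170.

```python
# Illustrative policy object; the dataclass shape, field names and defaults are
# assumptions, not the format of policy data structure 170.

from dataclasses import dataclass, field

@dataclass
class BackupPolicy:
    policy_id: str
    name: str
    description: str = ""
    backup_schedule: str = "daily"        # when backups are taken
    retention_count: int = 7              # how many backups to retain
    replication_policy: str = "none"      # replicate backups to another location
    backup_types: list = field(default_factory=lambda: ["full"])
    verify_backup: bool = True            # verification policy

# A single policy object may be shared across datasets for multiple applications.
tenant_policy = BackupPolicy(policy_id="pol-001", name="tenant-db-protection",
                             backup_schedule="hourly", retention_count=24)
print(tenant_policy)
```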
The reports module 172 is used to provide reports to users. The reports may be for different applications and in different formats.
In one aspect, the management module 134 maintains the dataset 164 for different applications and application objects, including databases 121/123A-123N. Each dataset is uniquely identified and named. The dataset format for managing replication for different applications is the same, regardless of how the applications handle information. A dataset may be associated with a policy data structure that defines how an object is to be protected. The dataset format used by the various aspects of the present disclosure allows a user to add or remove stored objects that need to be protected.
In one aspect, dataset 164 represents a container of application objects where protection attributes may be defined in terms of backup policy, replication profiles and retention policies. Dataset 164 is a basic unit that is used to manage backup and restore operations described above. A user can add any permissible resource to the dataset from multiple host systems/applications.
The database access layer 174 saves information in the database 176. The database 176 may be used to store information that is discovered by the discovery module 150, policy information, host information, datasets and other information.
In one aspect, the database 176 may store various data structures for managing the storage services and providing status to users. As an example, the database schema for the database 176 is application format independent and may include various data structures to map different host systems to specific login information, a backup metadata structure for storing information regarding backups including the internal file location of an internal file generated by the database application 107 (
In one aspect, SAL 173 stores information regarding the various storage resources that are used and available for different hosts including host 102 of
In one aspect, HAL 175 is used to communicate with another plugin that is used to collect information related to storage used by different virtual machines.
Agent 130:
SAL 173A maintains the storage footprint/layout for each application. SAL 173A interfaces with each plugin (e.g. plugin 111) to obtain storage resources that are managed by storage system 120 and made available to different applications. In one aspect, SAL 173A uses ZAPIs (Zephyr Application Programming Interface) to send and receive data from storage system 120.
In another aspect, the agent 130 interfaces with storage system 120 via the storage system interface 117. An example of the storage system interface is SnapDrive provided by NetApp Inc. (without derogation of any trademark rights of NetApp Inc.).
The agent 130 interfaces with different types of plugins, for example, plugins 111, 113 and 155. The plugin 111 is aware of database 121 format and configuration and provides database specific information to the agent 130.
The backup verification module 197 verifies a backup or replication operation. Since different applications may have different requirements, the backup verification module 197 facilitates the verification for different applications.
The cloning module 199 assists in cloning a snapshot and a log backup module 198 assists in backing up logs. As an example, a database application 107 (
Clustered Storage System:
The clustered storage system 202 includes a plurality of nodes 208.1-208.3, a cluster switching fabric 210, and a plurality of mass storage devices 212.1-212.3 (similar to 118,
The network modules 214.1-214.3 include functionality that enables the respective nodes 208.1-208.3 to connect to one or more of the client systems 204.1-204.2 over the computer network 206, while the storage modules 216.1-216.3 connect to one or more of the storage devices 212.1-212.3.
The management modules 218.1-218.3 provide management functions for the clustered storage system 202. Accordingly, each of the plurality of server nodes 208.1-208.3 in the clustered storage server arrangement provides the functionality of a storage server.
A switched virtualization layer including a plurality of virtual interfaces (VIFs) 220 is provided below the interface between the respective network modules 214.1-214.3 and the client systems 204.1-204.2, allowing storage 212.1-212.3 associated with the nodes 208.1-208.3 to be presented to the client systems 204.1-204.2 as a single shared storage pool. For example, the switched virtualization layer may implement a virtual interface architecture.
The clustered storage system 202 can be organized into any suitable number of virtual servers (VServer or storage virtual machines (SVM)) 222A-222N, in which each virtual storage system represents a single storage system namespace with separate network access. Each virtual storage system has a user domain and a security domain that are separate from the user and security domains of other virtual storage systems. Server systems 204 can access storage space via a VServer from any node of the clustered system 202.
Each of the nodes 208.1-208.3 may be defined as a computer adapted to provide application services to one or more of the client systems 204.1-204.2. In this context, a SVM is an instance of an application service provided to a client system. The nodes 208.1-208.3 are interconnected by the switching fabric 210, which, for example, may be embodied as a Gigabit Ethernet switch or any other switch type.
Although
The server systems 204.1-204.2 (similar to host systems 102A-102N of
In a typical mode of operation, one of the server systems 204.1-204.2 transmits an NFS or CIFS request for data to one of the nodes 208.1-208.3 within the clustered storage system 202, and the VIF 220 associated with the respective node receives the client request. It is noted that each VIF 220 within the clustered system 202 is a network endpoint having an associated IP address. The server request typically includes a file handle for a data file stored in a specified volume on storage 212.1-212.3.
Storage System Node:
System 224 may include a plurality of processors 226A and 226B, a memory 228, a network adapter 234, a cluster access adapter 238 (used for a cluster environment), a storage adapter 240 and local storage 236 interconnected by a system bus 232. The local storage 236 comprises one or more storage devices utilized by the processors to locally store configuration and other information.
The cluster access adapter 238 comprises a plurality of ports adapted to couple system 224 to other nodes of a cluster as described above with respect to
System 224 is illustratively embodied as a dual processor storage system executing a storage operating system 230 (similar to 124,
The memory 228 illustratively comprises storage locations that are addressable by the processors and adapters for storing programmable instructions and data structures. The processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the programmable instructions and manipulate the data structures. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions described herein.
The storage operating system 230, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the system 224 by, inter alia, invoking storage operations in support of the storage service provided by storage system 120. An example of operating system 230 is the DATA ONTAP® (registered trademark of NetApp, Inc.) operating system available from NetApp, Inc. that implements a Write Anywhere File Layout (WAFL®, a registered trademark of NetApp, Inc.) file system. However, it is expressly contemplated that any appropriate storage operating system may be enhanced for use in accordance with the inventive principles described herein. As such, where the term “ONTAP” is employed, it should be taken broadly to refer to any storage operating system that is otherwise adaptable to the teachings of this invention.
The network adapter 234 comprises a plurality of ports adapted to couple the system 224 to one or more server systems over point-to-point links, wide area networks, virtual private networks implemented over a public network (Internet) or a shared local area network. The network adapter 234 thus may comprise the mechanical, electrical and signaling circuitry needed to connect storage system 120 to the network. Illustratively, the computer network may be embodied as an Ethernet network or a FC network.
The storage adapter 240 cooperates with the storage operating system 230 executing on the system 224 to access information requested by the management system 132 and/or host 102. The information may be stored on any type of attached array of writable storage device media such as optical devices, DVD, magnetic tape, bubble memory, electronic random-access memory, flash memory devices, micro-electro mechanical and any other similar media adapted to store information.
The storage adapter 240 comprises a plurality of ports having input/output (I/O) interface circuitry that couples to storage devices over an I/O interconnect arrangement, such as a conventional high-performance, FC link topology.
In another aspect, instead of using a separate network and storage adapter, a converged adapter is used to process both network and storage traffic.
Storage Operating System:
As an example, operating system 230 may include several modules, or “layers”. These layers include a file system manager 303 that keeps track of a directory structure (hierarchy) of the data stored in storage devices and manages read/write operations, i.e. executes read/write operations on storage devices in response to server system 102 requests.
Operating system 230 may also include a protocol layer 303 and an associated network access layer 305, to allow system 200 to communicate over a network with other systems, such as server system 102 and management system 132. Protocol layer 303 may implement one or more of various higher-level network protocols, such as NFS, CIFS, Hypertext Transfer Protocol (HTTP), TCP/IP and others, as described below.
Network access layer 305 may include one or more drivers, which implement one or more lower-level protocols to communicate over the network, such as Ethernet. Interactions between server systems 102 and mass storage devices 114/212 are illustrated schematically as a path, which illustrates the flow of data through operating system 230.
The operating system 230 may also include a storage access layer 307 and an associated storage driver layer 309 to communicate with a storage device. The storage access layer 307 may implement a higher-level storage protocol, such as RAID (redundant array of inexpensive disks), while the storage driver layer 309 may implement a lower-level storage device access protocol, such as FC, SCSI or any other protocol.
It should be noted that the software “path” through the operating system layers described above needed to perform data storage access for a client request may alternatively be implemented in hardware. That is, in an alternate aspect of the disclosure, the storage access request data path may be implemented as logic circuitry embodied within a field programmable gate array (FPGA) or an ASIC. This type of hardware implementation increases the performance of the file service provided by storage system 120.
As used herein, the term “storage operating system” generally refers to the computer-executable code operable on a computer to perform a storage function that manages data access and may implement data access semantics of a general-purpose operating system. The storage operating system can also be implemented as a microkernel, an application program operating over a general-purpose operating system, such as UNIX® or Windows®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
In addition, it will be understood to those skilled in the art that the invention described herein may apply to any type of special-purpose (e.g., file server, filer or storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings of this disclosure can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The term “storage system” should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems.
Processing System:
The processing system 400 includes one or more processors 402 and memory 404, coupled to a bus system 405. The bus system 405 is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers.
The processors 402 are the central processing units (CPUs) of the processing system 400 and, thus, control its overall operation. In certain aspects, the processors 402 accomplish this by executing programmable instructions stored in memory 404. A processor 402 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
Memory 404 represents any form of random-access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory 404 includes the main memory of the processing system 400. Instructions 406, which implement the techniques introduced above, may reside in and may be executed (by processors 402) from memory 404. For example, instructions 406 may include code used for executing the process blocks of the processes described above.
Also connected to the processors 402 through the bus system 405 are one or more internal mass storage devices 410, and a network adapter 412. Internal mass storage devices 410 may be or may include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks. The network adapter 412 provides the processing system 400 with the ability to communicate with remote devices (e.g., storage servers) over a network and may be, for example, an Ethernet adapter, an FC adapter, or the like. The processing system 400 also includes one or more input/output (I/O) devices 408 coupled to the bus system 405. The I/O devices 408 may include, for example, a display device, a keyboard, a mouse, etc.
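The composition of processing system 400 can be summarized with the following brief Python sketch; the field names, default values and the execute loop are assumptions made here purely for illustration and are not part of the described system.

    # Assumed, illustrative model of the components of processing system 400.

    from dataclasses import dataclass, field

    @dataclass
    class ProcessingSystem:
        processors: int = 2                    # CPUs 402 on bus system 405
        memory_bytes: int = 8 * 2**30          # memory 404 (RAM, ROM, flash)
        mass_storage: list = field(default_factory=lambda: ["disk0"])   # devices 410
        network_adapter: str = "ethernet"      # adapter 412 (Ethernet or FC)
        io_devices: list = field(default_factory=lambda: ["display", "keyboard"])

        def execute(self, instructions: list) -> None:
            """Processors execute programmable instructions resident in memory."""
            for instruction in instructions:
                print("CPU executing from memory:", instruction)

    ProcessingSystem().execute(["verify_tenant_id", "restore_tenant_db"])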
Cloud Computing:
The system and techniques described above are applicable and especially useful in the cloud computing environment where storage is presented and shared across different platforms. Cloud computing means computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. The term “cloud” is intended to refer to a network, for example, the Internet, and cloud computing allows shared resources, for example, software and information, to be available on-demand, like a public utility.
Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored remotely on servers. The cloud computing architecture uses a layered approach for providing application services. A first layer is an application layer that is executed at client computers. In this example, the application allows a client to access storage via a cloud.
After the application layer are a cloud platform and cloud infrastructure, followed by a “server” layer that includes hardware and computer software designed for cloud-specific services. The storage systems described above can be a part of the server layer for providing storage services. Details regarding these layers are not germane to the inventive aspects.
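The following is a minimal Python sketch of this layered access path, under the assumption of a simple read interface at each layer; the class and method names are invented here for illustration and do not describe an actual cloud provider interface.

    # Assumed sketch of the layered cloud access path: an application layer at
    # the client, a cloud platform/infrastructure layer, and a server layer
    # that hosts the storage systems providing storage services.

    class ServerLayer:
        """Hardware and software providing cloud-specific storage services."""
        def read(self, object_name: str) -> str:
            return "data for " + object_name

    class CloudPlatform:
        """Cloud platform and infrastructure between clients and the server layer."""
        def __init__(self, server: ServerLayer):
            self.server = server
        def request(self, object_name: str) -> str:
            return self.server.read(object_name)

    class ClientApplication:
        """Application layer executed at a client computer to access storage."""
        def __init__(self, cloud: CloudPlatform):
            self.cloud = cloud
        def fetch(self, object_name: str) -> str:
            return self.cloud.request(object_name)

    print(ClientApplication(CloudPlatform(ServerLayer())).fetch("tenant1_backup"))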
Thus, a method and apparatus for protecting multitenant databases have been described. Note that references throughout this specification to “one aspect” or “an aspect” mean that a particular feature, structure or characteristic described in connection with the aspect is included in at least one aspect of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an aspect” or “one aspect” or “an alternative aspect” in various portions of this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures or characteristics being referred to may be combined as suitable in one or more aspects of the present disclosure, as will be recognized by those of ordinary skill in the art.
While the present disclosure is described above with respect to what is currently considered its preferred aspects, it is to be understood that the disclosure is not limited to that described above. To the contrary, the disclosure is intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims.