Centralized role-based access control (RBAC) for storage servers can include operating multiple storage servers, each configured to provide a set of clients with access to stored data, and using a separate network server to provide centralized RBAC. The network server may include an API proxy to proxy requests, by an application which is external to the network server and the storage server, to access individual APIs of a storage server, and may control access to the individual APIs of the storage servers on a per-API, per-user and per-object basis. The API proxy may filter responses to API calls based on the access privileges of the user of the application which sent the API call. In some embodiments, the network server may implement a Windows domain server, an LDAP server or the like to evaluate security credentials of administrative users on behalf of multiple storage servers.

Patent: 7913300
Priority: Apr 08 2005
Filed: Apr 08 2005
Issued: Mar 22 2011
Expiry: Jan 01 2029
Extension: 1364 days
Entity: Large
1. A method comprising:
operating a network server which communicates with a storage server through a network, wherein the network server includes a management application for controlling management tasks associated with the storage server;
using the network server to proxy requests to access a plurality of APIs of the storage server, the requests being from a client application which is external to the storage server and the network server, wherein using the network server to proxy a request to access a given API of the plurality of APIs includes:
in the network server, storing a set of access privileges for a plurality of users, including a user of the client application;
receiving from the client application a first API call for accessing the given API of the storage server, the first API call having associated therewith a first set of security credentials not associated with the storage server;
based on the first set of security credentials, determining whether the user of the client application is authorized to access the management application of the network server;
if the user is an authorized user of the management application, then using the network server to look up a second set of security credentials associated with the storage server, and sending a second API call, for accessing the given API, to the storage server with the second set of security credentials;
receiving a result of executing the given API from the storage server;
looking up a set of access privileges associated with the user of the client application;
filtering the result of executing the given API based on the set of access privileges associated with the user of the client application; and
providing the filtered result to the client application as a response to the first API call; and
using the network server to provide access to a selected subset of the plurality of APIs of the storage server based on a role associated with the client application.
8. A processing system comprising:
a processor;
a network interface through which to communicate with a plurality of storage servers over a network; and
a storage facility storing:
a storage management application, for execution by the processor, to enable remote management of the storage servers by a user, and an API proxy, for execution by the processor, to proxy requests to access a plurality of APIs of the storage servers on a per-API basis, the requests originating from a client application which is external to the processing system and the storage servers, including providing access control to a selected subset of the plurality of APIs of the storage servers based on a role associated with the client application, wherein the API proxy further is to:
in the API proxy, store a set of access privileges for a plurality of users, including a user of the client application;
receive from the client application a first API call for accessing an API of a given storage server of the plurality of storage servers, the first API call having associated therewith a first set of security credentials not associated with the given storage server;
based on the first set of security credentials, determine whether the user of the client application is authorized to access the storage management application of the storage facility;
if the user is an authorized user of the storage management application, then look up a second set of security credentials associated with the given storage server, and send a second API call, for accessing the API, to the given storage server with the second set of security credentials;
receive a result of executing the API from the given storage server;
look up a set of access privileges associated with the user of the client application;
filter the result of executing the API based on the set of access privileges associated with the user of the client application; and
provide the filtered result to the client application as a response to the first API call.
2. A method as recited in claim 1, wherein using the network server to provide access to a selected subset of the plurality of APIs of the storage server based on a role associated with the client application comprises:
providing access control on a per-user basis for a plurality of users.
3. A method as recited in claim 2, wherein using the network server to provide access to a selected subset of the plurality of APIs of the storage server based on a role associated with the client application comprises:
using the network server to control access to individual objects maintained by the storage server.
4. A method as recited in claim 3, wherein using the network server to proxy requests to access a plurality of APIs of the storage server comprises implementing a tunneling API in the proxy to proxy API calls to the storage server transparently to the client application.
5. A method as recited in claim 1, further comprising:
receiving a query of capabilities of a user from the client application; and sending to the client application a response indicating capabilities of the user.
6. A method as recited in claim 1, wherein the result comprises a set of objects, and wherein at least one object of the set of objects is not included in the filtered result.
7. A method as recited in claim 1, further comprising, in the network server:
receiving an API call from the client application;
in response to the API call, dynamically selecting the storage server to receive the API call, from among a plurality of storage servers; and
proxying the API call to the storage server.
9. A processing system as recited in claim 8, wherein the proxy implements a tunneling API to proxy the requests to access the individual APIs to the storage servers, transparently to the client application.
10. A processing system as recited in claim 8, wherein the processing system provides centralized control of access to individual APIs of the storage servers by a plurality of applications which are external to the processing system and the storage servers.
11. A processing system as recited in claim 8, wherein the processing system maintains, and is operable to evaluate, a set of security credentials for use in accessing at least one of the storage servers.
12. A processing system as recited in claim 8, wherein the processing system comprises both an access policy decision point (PDP) and an access policy enforcement point (PEP) for purposes of controlling access to the storage servers.
13. A processing system as recited in claim 8, wherein the result comprises a set of objects, and wherein at least one object of the set of objects is omitted from the filtered result.

At least one embodiment of the present invention pertains to storage systems, and more particularly, to centralized role-based access control for storage servers.

Various forms of network storage systems are known today. These forms include network attached storage (NAS), storage area networks (SANs), and others. Network storage systems are commonly used for a variety of purposes, such as providing multiple users with access to shared data, backing up critical data (e.g., by data mirroring), etc.

A network storage system includes at least one storage server, which is a processing system configured to store and retrieve data on behalf of one or more client processing systems (“clients”). In the context of NAS, a storage server may be a file server, which is sometimes called a “filer”. A filer operates on behalf of one or more clients to store and manage shared files in a set of mass storage devices, such as magnetic or optical disks or tapes. The mass storage devices may be organized into one or more volumes of a Redundant Array of Inexpensive Disks (RAID). Filers are made by Network Appliance, Inc. of Sunnyvale, Calif.

In a SAN context, the storage server provides clients with block-level access to stored data, rather than file-level access. Some storage servers are capable of providing clients with both file-level access and block-level access, such as certain Filers made by Network Appliance, Inc.

A business enterprise or other organization that manages large volumes of data may operate multiple storage servers concurrently. These storage servers may be connected to each other through one or more networks. The storage servers and other network components may be managed by one or more network administrators (also called "administrative users" or simply "administrators"), who are responsible for configuring, provisioning and monitoring the storage servers, scheduling backups, troubleshooting problems with the storage servers, performing software upgrades, etc. These management tasks can be accomplished by the administrator using a separate management console on the network, which is a computer system that runs a storage management software application specifically designed to manage a distributed storage infrastructure. An example of such storage management software is DataFabric® Manager (DFM), made by Network Appliance, Inc. of Sunnyvale, Calif.

To prevent unauthorized users from accessing and controlling functions of the storage servers, there is a need for some form of access control. There are two forms of access control: authentication and authorization. Authentication is the process of determining whether a particular user is who he claims to be, such as by verifying a username and a password. Authorization is the process of determining whether a particular user is allowed to do or access a particular function, feature, etc. Access control can be provided, at least in part, by the use of usernames and passwords, such as by assigning a username and password to each storage server, where only an authorized administrator knows the correct username and password.

Organizations which use more than one storage server often would like to have the same administrative user manage all of the storage servers with only a single username and password. Preferably, this user (and possibly other administrative users) should have easy access to all of the storage servers within his network. Furthermore, connecting a new storage server to the network should be seamless, and the administrative user should be able to access it easily.

One way of accomplishing this is by assigning the same username and password to each storage server. The user names and passwords can be uploaded to the storage servers upon an initial boot, letting administrators access the appliances with only a script change. However, this solution is unwieldy for very large organizations with many administrators. With this approach, it is cumbersome to create new administrators, change passwords, and delete old administrators.

Furthermore, it may be desirable to apply different access privileges to different administrators with respect to the storage servers. For example, one network administrator may have full access privileges to control any function of any storage server, whereas another network administrator may only be authorized to control data backup operations. As a result, at least some of the different users need to have different user names and passwords. The above-mentioned approach, therefore, becomes particularly cumbersome in such situations.

Some prior art storage management software can provide centralized control of access by one or more administrators to one or more storage servers on a network. An administrator initially gains access to a management console equipped with such software by providing a username and password. Once authenticated, the administrator has access to all of the applications included in the management console and can invoke these applications with respect to any of the managed storage servers. In addition, such software can enforce different access privileges for different authorized users; for example, one authorized user may have both read and write privileges while another authorized user has only read privileges.

It may be desirable, however, to allow certain functions of a storage server to be controlled or invoked by one or more software applications that reside on computer systems separate from the storage servers or the usual management console; such applications are referred to herein as “third party” applications. For example, the storage servers and the storage management software may be made by a particular manufacturer, such as Network Appliance, Inc.; however, from a storage system user's perspective, it may be desirable for a storage management software application of another vendor to be able to access the storage servers as a third party application. As another example, a third party application might be a data backup application that resides on a computer other than the management console or the storage servers. In such a scenario, there is also a need for a convenient and centralized mechanism for controlling access by the third party application to the storage servers.

The present invention includes methods and related apparatus for centralized control of administrative access to storage servers. In one embodiment, such a method includes operating a network server which communicates with a storage server through a network, and using the network server to proxy requests to access individual APIs of the storage server by an application which is external to the network server and the storage server.

In another embodiment, the method includes receiving at a storage server a request by a user for administrative access to the storage server. The storage server is configured to provide a set of clients with access to data stored in a set of mass storage devices. The request includes a first set of security credentials. In response to detecting a predetermined indicator in the request, the first set of security credentials is forwarded to a network server for evaluation. If the network server determines that the user is an authorized user, based on the first set of credentials, the storage server receives a second set of security credentials from the network server to allow the user to access the storage server.

The invention further includes a method of executing a client application for data storage related operations. In certain embodiments, the method includes identifying a user of the client application and sending to a storage management application, via a network, a query of access privileges associated with the user. The access privileges are for controlling access to functions of a storage server managed by the storage management application. The method further includes receiving a response from the storage management application indicating access privileges associated with the user, and configuring a user interface of the client application based on the access privileges indicated in the response.

Other aspects of the invention will be apparent from the accompanying figures and from the detailed description which follows.

One or more embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

FIG. 1 illustrates a network environment in which the invention can be implemented;

FIG. 2 shows the use of a centralized network server to provide access control to multiple storage servers;

FIG. 3 shows a high-level example of a storage management application which includes an API proxy through which a third party application can access a storage server;

FIG. 4 is a flow diagram showing a process of API proxying to access a storage server;

FIG. 5 schematically shows the elements involved in the process of FIG. 4; and

FIG. 6 is a high-level block diagram of a processing system.

Various techniques for providing centralized role-based access control (RBAC) of storage servers are described below. In certain embodiments of the invention, multiple storage servers are operated on a network, each configured to provide a set of clients with access to data stored in a set of mass storage devices, and a centralized network server is used to provide centralized control of administrative access to the storage servers. The centralized server may be a storage management console which comprises storage management software to perform the centralized RBAC and other storage management related functions.

In certain embodiments, the storage management software stores access privileges for multiple users and includes an API proxy to provide transparent, centralized, API-level RBAC for the storage servers. In particular, the storage management software may transparently proxy requests by one or more “third party” software applications (software applications that are external to the main management console and the storage servers) to access individual APIs of the storage servers. That is, the management software may control access to APIs of the storage servers, by third party applications, on a per-API, per-user and per-object basis. The proxying may include implementing a tunneling API to proxy API calls to the storage servers in a manner which is transparent to the third party applications. The API proxy may provide in-line filtering of responses to API calls, based on user access privileges.
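The per-API, per-user and per-object access check described above can be pictured as a lookup in a privileges table. The following is a minimal illustrative sketch, assuming a simple in-memory ACL; the class and method names are invented here and are not taken from the patent:

```python
# Illustrative sketch of a per-API, per-user, per-object RBAC lookup.
# The data structure and names are assumptions for illustration only.

class AccessControlList:
    def __init__(self):
        # Maps username -> set of (api_name, object_name) grants.
        # A grant of (api, "*") allows that API on any object.
        self._grants = {}

    def grant(self, user, api, obj="*"):
        self._grants.setdefault(user, set()).add((api, obj))

    def is_allowed(self, user, api, obj="*"):
        grants = self._grants.get(user, set())
        return (api, obj) in grants or (api, "*") in grants


acl = AccessControlList()
acl.grant("admin1", "volume-list-info")            # any volume
acl.grant("backup_op", "snapshot-create", "vol0")  # one volume only
```

Under this model, a request is admitted only if the triple (user, API, object) matches a stored grant, which is exactly the per-API, per-user, per-object granularity described in the text.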

Hence, in certain embodiments the management software acts as both a centralized policy decision point (PDP) and a centralized policy enforcement point (PEP) for RBAC purposes. In other embodiments, the management software acts as a centralized PDP only, and the storage servers act as their own PEPs. Other scenarios are also possible, as will be apparent from the description which follows. In addition, user interfaces of the third party applications can be tailored in appearance and behavior according to a set of dynamic RBAC rules.

Network Environment

FIG. 1 shows a network environment in which the invention can be implemented. In FIG. 1, a number of storage servers 2 are each coupled locally to a separate storage subsystem 4, each of which includes multiple mass storage devices. The storage servers 2 are also coupled through a network 3 to a number of clients 1. Each storage subsystem 4 is managed by its corresponding storage server 2. Each storage server 2 receives and responds to various read and write requests from the clients 1, directed to data stored in or to be stored in the corresponding storage subsystem 4.

Each of the clients 1 may be, for example, a conventional personal computer (PC), workstation, or the like. Each storage server 2 may be, for example, a file server used in a NAS mode (a “filer”), a block-based storage server such as used in a storage area network (SAN), or other type of storage server. The network 3 may be, for example, a local area network (LAN), a wide area network (WAN), or other type of network or a combination of networks. The mass storage devices in each storage subsystem 4 may be, for example, conventional magnetic disks, optical disks such as CD-ROM or DVD based storage, magneto-optical (MO) storage, or any other type of non-volatile storage devices suitable for storing large quantities of data. The storage devices in each storage subsystem 4 can be organized as a Redundant Array of Inexpensive Disks (RAID), in which case the corresponding storage server 2 accesses the storage subsystem 4 using an appropriate RAID protocol.

Also connected to the network 3 are one or more management consoles 5, each of which includes a storage management (software) application 6. One or more third party storage-related software applications 7 may also be operatively coupled to the network 3 in one or more other computer systems 8. The third party applications 7 may include, for example, one or more data backup applications, snapshot management applications, etc. Note, however, that for purposes of this description, a "third party application" generally means any software application other than the primary management application(s) and can be essentially any type of software application. In addition, a third party application can be implemented on a management console, on a storage client 1, or on any other computer system which has access to the network 3.

Centralized Access Control Using a Directory Server

In certain embodiments of the invention, centralized control of access to the storage servers 2 by network administrators is provided by a separate authentication server 21 on the network 3, which authenticates users based on their security credentials (i.e., username and password) on behalf of the storage servers 2. This functionality is illustrated in FIG. 2. The authentication server 21 contains (or has access to) a centralized database 22 which contains all of the usernames and passwords associated with all of the storage servers 2. A centralized location of usernames, passwords and access controls keeps administration simple and expandable. The authentication server 21 may be implemented in one of the management consoles 5 or as a separate computer system on the network 3. With a single database, modifying user credentials can be done simply.

Examples of implementations which can be used to store usernames and passwords in the centralized database 22 are Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or Microsoft Windows NT LAN Manager (NTLM)/Common Internet File System (CIFS). In a Microsoft Windows environment, users who have administrative power on the Windows domain will automatically have administrative power on a storage server 2 without any setup cost on the storage server 2. The setup can be done as part of the "cifs setup" command. This command tells the storage server 2 which server to authenticate against. All operations can be managed by the Microsoft Windows domain server (i.e., access control server), and the storage server 2 will authenticate any users against the domain server's database.

In certain embodiments of the invention, a network domain 28 is defined to include a group of storage servers 2 on behalf of which the authentication server 21 is to provide authentication. An administrator desiring to access one of the storage servers 2 can initiate the authentication process by using a conventional technique to log on to that storage server, such as Telnet, for example. The user then inputs a set of security credentials, i.e. a username and password, which are not necessarily recognizable by the storage server 2.

However, the username is defined to include a predetermined indicator of the network domain 28. For example, the username may be chosen as “Domain/admin1”, where “Domain” is the name of the network domain 28 and “admin1” identifies the particular administrator. The target storage server 2 detects the presence of the domain identifier in the username, and in response, forwards the security credentials provided by the user to the authentication server 21 for evaluation.

The authentication server 21 evaluates the credentials by determining if the credentials match entries in its local access control list. If the user is thereby determined to be an authorized user, the authentication server 21 returns a second set of credentials to the storage server, which are the credentials that the storage server recognizes.
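The domain-indicator check and credential exchange described above can be sketched as follows. This is an illustrative assumption of how a storage server might route a login; the domain name, credential format, and the `evaluate_remotely` callback are invented for the example:

```python
# Sketch of the "Domain/admin1" routing described in the text. The names
# and the tuple credential format are illustrative assumptions.

DOMAIN = "Domain"

def route_login(username, password, evaluate_remotely, local_table):
    """Return the second set of credentials on success, else None.

    A username of the form "Domain/admin1" carries the predetermined
    domain indicator, so its credentials are forwarded to the central
    authentication server (evaluate_remotely). Any other username is
    checked against the storage server's own local table.
    """
    if username.startswith(DOMAIN + "/"):
        # Forward the first set of credentials; on success the
        # authentication server returns credentials the storage
        # server itself recognizes.
        return evaluate_remotely(username, password)
    # Plain username: evaluate locally, bypassing the authentication server.
    if local_table.get(username) == password:
        return (username, password)
    return None
```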

As an alternative to the foregoing approach, if the administrator initially has the second set of credentials, he could instead provide the second set of credentials directly to the target storage server 2, bypassing the authentication server 21. Of course, many variations upon the foregoing processes are possible.

In a non-Windows environment, an administrator can set up the authentication server 21 as an LDAP server, for example, to contain usernames and passwords. On first boot for a storage server 2, the administrator sets an option in the storage server 2 to point to the authentication server 21. The option indicates where the storage server 2 should look for user information and in what order. This can be set to cause the storage server 2 to contact the authentication server 21 for administrative authentication information instead of obtaining such information from its internal files, for example. From then on, any authorization request (including authenticating via username and password) 24 sent to a storage server 2 will be forwarded to the authentication server 21 for evaluation, with the result 26 returned by the authentication server 21 to the storage server 2.
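The "where to look, and in what order" option described above amounts to trying a list of authentication sources in a configured order. A minimal sketch, assuming each source is a callable that accepts or rejects the credentials (the source labels and function shape are invented for illustration, not the actual filer configuration syntax):

```python
# Sketch of ordered authentication-source lookup. The order tuple stands
# in for the storage server's configured option; names are illustrative.

def authenticate(username, password, sources, order=("ldap", "internal")):
    """Try each configured authentication source in order; return True on
    the first source that accepts the credentials, False if none does."""
    for name in order:
        backend = sources.get(name)
        if backend is not None and backend(username, password):
            return True
    return False
```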

In certain embodiments, if the user is attempting to perform a specific operation on the storage server 2 (e.g., accessing a particular volume), the storage server 2 also sends to server 21 an indication of the operation the user is attempting to perform and an indication of the specific object(s) (e.g., volume, file, etc.) to which the operation relates. In such an embodiment the server 21 can be an authorization server (not merely an authentication server), which acts as a PEP, deciding whether the user has access to the operation he wants to perform and/or the targeted object. Although the command is initially sent to the storage server 2, it is rerouted for authorization purposes.

Centralized Access Control Using API-Level Proxying

Referring again to FIG. 1, it may be desirable to enable one or more third party software applications 7, which may be storage management applications, to invoke or control functions of the storage servers 2, while maintaining a convenient, centralized mechanism for controlling such access. Further, it may be desirable to control such access on an API-by-API basis. In other words, a particular administrator or application may be authorized to access certain APIs of a storage server (e.g., specific commands or functions of an application) but not others.

According to certain embodiments of the invention, therefore, at least one of the storage management applications 6 can perform transparent, centralized proxying of access by the third party applications to individual APIs of the storage servers 2, while providing centralized, per-API control of such access. This is done in certain embodiments by implementing a tunneling API in the storage management application 6 which provides this functionality. This allows the storage management application 6 to operate as a centralized PDP and, in at least some embodiments, as a centralized PEP, for RBAC of the storage servers 2.

There are many types of storage-related APIs that can be proxied, and for which centralized RBAC can be provided, by using the technique introduced here. For example, the APIs may relate to functions such as monitoring, auditing, storage provisioning, backup, data recovery, data deletion, and quota management, to name just a few. As a more specific example, monitoring APIs might include an API for listing the volumes on a storage server 2, an API for getting general information about a storage server 2, an API for reporting the status of all active quotas, or an API for getting the status of a Fibre Channel service. Auditing APIs might include an API to read an arbitrary file on a storage server, bypassing file access control, or an API to list the software license codes on a storage server 2. Provisioning APIs might include, for example, an API to create a new volume, an API to make a logical unit number (LUN) available to one or more hosts, or an API to export some existing file system as a CIFS share. Of course, there are many other types of APIs that can be proxied. The details of these APIs are not necessary for an understanding of the present invention.

FIG. 3 shows a high-level example of a storage management application. The storage management application 30, which can be an example of one of the storage management applications 6 in FIG. 1, includes a graphical user interface (GUI) engine 31, various functional modules 32, an API proxy 33, an access control list (ACL) 34, and a database of storage server credentials 35. The GUI engine 31 generates a GUI to allow an administrator to access these functions. The various functional modules 32 enable various management related tasks to be performed on storage servers, such as configuring, provisioning and monitoring, scheduling backups, troubleshooting problems, performing software upgrades, etc. The particular functionality and design of these modules 32 are not germane to the present invention and therefore need not be described herein.

In general, the API proxy 33 allows a software module, such as a third party application 7 (see FIG. 1), to invoke actions in another software module running on a target device, such as a storage server 2. The actions are specified in a data structure which marshals an API name and arguments (input parameters). Input parameters are typed and can be integers, Booleans, strings, structures, and arrays of structures. The same kind of data structure is used to return the output resulting from the invocation of the action on the target device. In certain embodiments, the API proxy 33 accomplishes this in a way that allows third party applications to be written in different programming languages, without having to change the management application 30. This can be accomplished by, for example, using XML as the marshalling format for the data structure. Examples of the formats of these data structures are provided below. Transmission of the marshaled XML from one machine to another can be done in any of various ways, such as hypertext transfer protocol (HTTP) or secure HTTP (HTTPS).
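The XML marshalling of an API name and its typed arguments, as described above, can be sketched as follows. The element names here (the `netapp` envelope, the argument elements) are invented for illustration; the patent does not specify the exact schema:

```python
# Minimal sketch of marshalling an API invocation as XML. The envelope
# element name and argument layout are assumptions for illustration.
import xml.etree.ElementTree as ET

def marshal_api_call(api_name, args):
    """Serialize an API name and its input parameters to an XML string."""
    root = ET.Element("netapp")          # hypothetical envelope element
    call = ET.SubElement(root, api_name)
    for name, value in args.items():
        arg = ET.SubElement(call, name)  # one child element per argument
        arg.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_request = marshal_api_call("volume-list-info", {"volume": "vol0"})
```

Because the request and response are plain XML sent over HTTP or HTTPS, a third party application can produce and consume them from any programming language, which is the language-independence property the text describes.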

Thus, the API proxy 33 is responsible for the transparent proxying of API-level access requests from third party applications 7 to the storage servers 2. In certain embodiments of the invention, the API proxy 33 accomplishes this by implementing a tunneling API. The ACL 34 contains usernames, passwords and privileges information, which are used by the API proxy 33 to determine whether to grant or deny the request by the third party applications 7.

This technique is described further now with reference to FIGS. 4 and 5. FIG. 4 shows an example of the overall process which may be performed by API proxy 33. As shown in FIG. 5, the API proxy 33 is an intermediary between a third party application 7 and a storage server 2. The storage server 2 includes one or more applications 50, each of which implements a number of APIs 51 to provide various different functions.

Initially, at block 401 in FIG. 4, a user of the third party application 7 gains access to the management software application 30 (which operates in a management console 5) by providing a correct username and password over a network connection. Thereafter, a user input directed to the third party application 7 or an automated action (e.g., at a predetermined event or time) triggers a process to call a particular API of the storage server 2 (“the target storage server”) from the third party application 7 at block 402. The particular API to be called may be specified by (or according to) the user input or by predetermined programming. The third party application 7 then responds to this action at block 403 by embedding the specified API call within a tunneling API call, which is described further below, attaching a Hypertext Transfer Protocol (HTTP) header containing the security credentials of the user (i.e., username and password) to the tunneling API call, and then sending the tunneling API call 55 with header to the API proxy 33. An example of the format of the tunneling API call is provided below. The security credentials provided by the user are to gain access to the management application 30 only and are not recognized by the target storage server or any other storage server.
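Block 403 above, in which the third party application wraps the real API call inside a tunneling API call and attaches the user's console credentials in an HTTP header, might look like the following sketch. The tunnel element name, the `target` attribute, and the use of HTTP Basic authentication are assumptions made for illustration:

```python
# Sketch of building a tunneling API call (block 403). Element names and
# the Basic-auth header scheme are illustrative assumptions.
import base64

def build_tunneled_request(inner_api_xml, target_filer, username, password):
    # Wrap the real API call in a hypothetical tunnel element that also
    # identifies the target storage server.
    body = f'<api-proxy target="{target_filer}">{inner_api_xml}</api-proxy>'
    # These credentials grant access to the management application only;
    # the storage server itself would not recognize them.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "text/xml",
    }
    return headers, body
```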

At the management console 5, the API proxy 33 receives the tunneling API call 55 transmitted from the third party application 7 and, at block 404, checks the security credentials in the header against the ACL 34. In certain embodiments of the invention, the API proxy 33 only determines whether the user is authorized to access the management software application 30, and any authorized user is granted full access privileges. In other embodiments, users can have either full access privileges or privileges to access only specified APIs and/or objects (e.g., volumes, files, etc.), as specified in the ACL 34.
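The per-API authorization check at block 404 can be sketched as follows. This is an illustrative model only: the ACL layout, the `FULL_ACCESS` marker, and the `is_authorized` helper are assumptions, not the patent's actual data structures.

```python
# Hypothetical sketch of the block-404 check: credentials from the HTTP
# header are compared against stored usernames, passwords, and per-API
# privileges held in a stand-in for the ACL 34.

FULL_ACCESS = "*"

# Example ACL contents: username -> (password, permitted APIs)
ACL = {
    "admin": ("s3cret", FULL_ACCESS),                # full access privileges
    "backup_op": ("pw123", {"snapshot-list-info"}),  # restricted to one API
}

def is_authorized(username, password, api_name):
    """Return True if the user may invoke api_name, per the ACL."""
    entry = ACL.get(username)
    if entry is None or entry[0] != password:
        return False  # unknown user or bad password
    privileges = entry[1]
    return privileges == FULL_ACCESS or api_name in privileges
```

Under this model, an unrestricted user passes the check for any API, while a restricted user passes only for the APIs the ACL names.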

Further, the API proxy 33 may perform in-line filtering of a storage server's responses, based on the user's capabilities. For example, a third party application 7 may submit an API call to the management application 30 to view all volumes that meet specified criteria. If the result returned by the API is, for example, volumes A, B, and C, but the user of the third party application 7 does not have access to volume B, then the API proxy 33 will return only volumes A and C to the third party application 7, not volume B.
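The in-line filtering step can be sketched as below. The capability table and function name are hypothetical stand-ins for lookups against the ACL 34.

```python
# Illustrative sketch of in-line filtering: the proxy strips objects the
# user cannot read from the storage server's response before forwarding it.

READ_CAPS = {"user1": {"volA", "volC"}}  # e.g., user1 cannot read volB

def filter_response(username, volumes):
    """Return only the volumes the user has read capability for."""
    readable = READ_CAPS.get(username, set())
    return [v for v in volumes if v in readable]
```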

Referring again to FIG. 4, block 404, if the user is determined not to be authorized (i.e., not authorized to access the management software application, the specific API and/or the specific object(s) specified in the call), the API proxy 33 causes an error message to be returned to the third party application 7 at block 405, where an appropriate message is output to the user at block 406. If, on the other hand, the user is determined to be authorized, the API proxy 33 then looks up, in the storage server credentials database 35 at block 407, the appropriate credentials (username and password) for accessing the target storage server 2. At block 408 the API proxy 33 then generates a new API call 56 with an HTTP header containing the credentials for accessing the target storage server 2 and then forwards the new API call 56 with the new header to the target storage server 2. An example of the format of the new API call is provided below.

At block 409 the target storage server 2 receives this API call from the API proxy 33 and validates the credentials (the credentials should always be valid in this step, since they were sent from the management console 5). The target storage server 2 then executes the specified API at block 410, and returns the results of executing the API to the API proxy 33 at block 411. The API proxy 33 then receives the results from the storage server 2 and forwards them to the third party application 7 at block 412. At block 413, the third party application 7 receives the results of executing the API and processes them as appropriate. Examples of the formats of these responses are discussed below.
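The flow of blocks 404 through 412 can be condensed into one sketch. Everything here is illustrative: the credentials table stands in for database 35, and `authorize` and `send` are hypothetical callables, not APIs from the patent.

```python
# Condensed sketch of blocks 404-412: validate the caller, swap in the
# storage server's own credentials, forward the call, and relay the result.

STORAGE_CREDENTIALS = {"filer1": ("root", "filerpw")}  # database 35 stand-in

def proxy_api_call(user, password, target, api_call, authorize, send):
    if not authorize(user, password):                   # block 404
        return {"status": "failed", "errno": "EACCESSDENIED"}  # block 405
    fs_user, fs_pw = STORAGE_CREDENTIALS[target]        # block 407
    # block 408: re-issue the call with a header holding the server creds
    headers = {"Authorization": (fs_user, fs_pw)}
    result = send(target, headers, api_call)            # blocks 409-411
    return {"status": "passed", "results": result}      # block 412
```

Note that the application's own credentials never reach the storage server; only the proxy-held credentials from database 35 appear in the forwarded header, which is what allows the storage servers to trust the management console.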

In certain embodiments, a non-transparent API proxy 33 is implemented in accordance with the following interface description:

/************************************************************************
 *
 * @api api-proxy
 * @desc Proxy an API request to a third party and return the
 *   API response.
 * @errno   EINVALIDINPUTERROR
 * @errno   EINTERNALERROR
 * @errno   EACCESSDENIED
 *
 * @input  target
 * @type  string
 * @desc   The target host. May be a hostname (qualified or unqualified) or
 *   a vfiler name.
 *
 * @input  request
 * @type  api-request-info
 * @desc   The request to be forwarded to another server.
 *
 * @input  username
 * @type  string, optional
 * @desc   User account to use for executing the API. If none is
 *   specified, the highest privilege available will be
 *   attempted. The proxy server may have a security policy
 *   that restricts the accepted values for this field. Invalid
 *   values will cause EACCESSDENIED.
 *
 * @input  timeout
 * @type  integer, optional
 * @desc  Number of seconds that the proxy server should wait for a
 *   response before giving up.
 *
 * @output  response
 * @type api-response-info
 * @desc  The response from the other server.
 *
 *
 * @typedef   api-request-info
 * @desc   One API request.
 *
 *  @element   name
 *  @type   string
 *  @desc   API name. The proxy server may have a security policy
 *   that restricts the accepted values for this field. Invalid
 *   values will cause EACCESSDENIED.
 *
 *  @element   args
 *  @type   api-args-info, optional
 *  @desc   The API arguments.
 *
 * @typedef   api-args-info
 * @desc   Arguments to an API request (contents variable).
 *
 * @typedef   api-response-info
 * @desc   One API response.
 *
 *  @element   status
 *  @type   string
 *  @desc   Status of the response. May be “passed” or “failed”.
 *
 *  @element   errno
 *  @type   integer, optional
 *  @desc   Error code. Only present if status is “failed”.
 *
 *  @element   reason
 *  @type   string, optional
 *  @desc   Reason string. Only present if status is “failed”.
 *
 *  @element   results
 *  @type   api-results-info, optional
 *  @desc   The API results. Only present if status is “passed”.
 *
 * @typedef   api-results-info
 * @desc   Results of a successful API (contents variable).
 *
 ************************************************************************/

In certain embodiments of the invention, the security policy is to always ignore the contents of the username field. All values will be accepted for the name field of api-request-info. Other embodiments may have static or dynamic policies that restrict what APIs a user can invoke and what user identities the API proxy 33 will allow the user to invoke them as.

The API proxy 33 protocol will now be further described, according to certain embodiments of the invention. Assume there are three machines involved in a transaction:

THIRDPARTY is running a third party management application 7.

MANAGER is a management console 5.

STORAGE is a storage server 2.

The following is an example of how the raw data might look in a request (API call) if no API proxy was involved:

THIRDPARTY → STORAGE:

<?xml version='1.0' encoding='utf-8' ?>
<!DOCTYPE netapp SYSTEM 'file:/etc/netapp_filer.dtd'>
<netapp xmlns="http://www.netapp.com/filer/admin" version="1.0">
 <volume-options-list-info>
  <volume>vol0</volume>
 </volume-options-list-info>
</netapp>

STORAGE → THIRDPARTY:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE netapp SYSTEM '/na_admin/netapp_filer.dtd'>
<netapp version='1.1' xmlns='http://www.netapp.com/filer/admin'>
 <results status="passed">
  <options>
   <volume-option-info>
    <name>raidsize</name>
    <value>8</value>
   </volume-option-info>
   <volume-option-info>
    <name>maxdirsize</name>
    <value>10485</value>
   </volume-option-info>
   <volume-option-info>
    <name>raidtype</name>
    <value>raid4</value>
   </volume-option-info>
  </options>
 </results>
</netapp>

In contrast, the following is an example of the raw data in a request when the API proxy 33 is involved:

THIRDPARTY → MANAGER:

<?xml version='1.0' encoding='utf-8' ?>
<!DOCTYPE netapp SYSTEM 'file:/etc/netapp_filer.dtd'>
<netapp xmlns="http://www.netapp.com/filer/admin" version="1.0">
 <api-proxy>
  <target>STORAGE</target>
  <request>
   <name>volume-options-list-info</name>
   <args>
    <volume>vol0</volume>
   </args>
  </request>
 </api-proxy>
</netapp>

MANAGER → STORAGE:

<?xml version='1.0' encoding='utf-8' ?>
<!DOCTYPE netapp SYSTEM 'file:/etc/netapp_filer.dtd'>
<netapp xmlns="http://www.netapp.com/filer/admin" version="1.0">
 <volume-options-list-info>
  <volume>vol0</volume>
 </volume-options-list-info>
</netapp>

STORAGE → MANAGER:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE netapp SYSTEM '/na_admin/netapp_filer.dtd'>
<netapp version='1.1' xmlns='http://www.netapp.com/filer/admin'>
 <results status="passed">
  <options>
   <volume-option-info>
    <name>raidsize</name>
    <value>8</value>
   </volume-option-info>
   <volume-option-info>
    <name>maxdirsize</name>
    <value>10485</value>
   </volume-option-info>
   <volume-option-info>
    <name>raidtype</name>
    <value>raid4</value>
   </volume-option-info>
  </options>
 </results>
</netapp>

MANAGER → THIRDPARTY:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE netapp SYSTEM 'http://10.34.25.41:8088/netapp_server.dtd'>
<netapp version='1.1' xmlns='http://www.netapp.com/filer/admin'>
 <results status="passed">
  <response>
   <status>passed</status>
   <results>
    <options>
     <volume-option-info>
      <name>raidsize</name>
      <value>8</value>
     </volume-option-info>
     <volume-option-info>
      <name>maxdirsize</name>
      <value>10485</value>
     </volume-option-info>
     <volume-option-info>
      <name>raidtype</name>
      <value>raid4</value>
     </volume-option-info>
    </options>
   </results>
  </response>
 </results>
</netapp>
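A client could construct the tunneling request shown above with the standard library. This is a sketch only: the element names follow the example exchange, but the `build_tunneling_call` helper is hypothetical and not part of any SDK described in this document.

```python
# Sketch: build the <api-proxy> tunneling envelope from the example above.
import xml.etree.ElementTree as ET

def build_tunneling_call(target, api_name, args):
    """Wrap an API name and its arguments in an api-proxy request document."""
    root = ET.Element("netapp", {
        "xmlns": "http://www.netapp.com/filer/admin", "version": "1.0"})
    proxy = ET.SubElement(root, "api-proxy")
    ET.SubElement(proxy, "target").text = target
    request = ET.SubElement(proxy, "request")
    ET.SubElement(request, "name").text = api_name
    args_el = ET.SubElement(request, "args")
    for key, value in args.items():
        ET.SubElement(args_el, key).text = value
    return ET.tostring(root, encoding="unicode")
```

For the example exchange, `build_tunneling_call("STORAGE", "volume-options-list-info", {"volume": "vol0"})` produces the THIRDPARTY → MANAGER document (minus the XML declaration and DOCTYPE, which would be prepended at transmission time).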

In the API proxying technique described above, the API proxy 33 can direct a request to a specific storage server 2 as determined by, for example, the API request itself or some configuration information used by the API proxy 33. In other embodiments, however, the API proxy 33 can dynamically determine to which of multiple storage servers 2 to send an API request. This dynamic determination can be made based on any one or more of various criteria, such as data in the request, the source of the request, the current configuration of the storage servers 2, the current load on the storage servers 2, other environmental conditions, etc.
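One of the dynamic criteria mentioned, current load, can be sketched as a simple selection rule. The function and the load table are illustrative assumptions, not the patent's mechanism.

```python
# Hypothetical sketch of dynamic target selection: honor an explicitly
# requested storage server, otherwise pick the least-loaded one.

def choose_target(requested, server_loads):
    """Return the requested server, or the least-loaded one if unspecified."""
    if requested:
        return requested
    return min(server_loads, key=server_loads.get)
```

Equivalent rules could key on data in the request, its source, or current server configuration, as the paragraph above notes.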

In the embodiments described above, the API call from the third party application is embedded within a tunneling API call. In certain embodiments, this embedding can be made invisible to the third party application by performing this function inside a software development kit (SDK) library. In this way, the third party application can be freed from any knowledge of whether the API call is being proxied or not.
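The SDK-level embedding can be sketched as a single entry point whose tunneling behavior is an internal detail. All names here are hypothetical; the point is only that the caller's code is identical whether or not the call is proxied.

```python
# Sketch: the application always calls call_api(); whether the request is
# wrapped in an api-proxy envelope is decided inside the library.

def call_api(api_name, args, target=None, proxied=True):
    request = {"name": api_name, "args": args}
    if proxied:
        # transparently wrap the request in a tunneling api-proxy call
        return {"api-proxy": {"target": target, "request": request}}
    return request  # direct call to the storage server
```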

RBAC Based Tailoring of Client User Interface

In accordance with certain embodiments of the invention, the user interface of a third party application 7 can be tailored in appearance and/or behavior according to RBAC rules maintained by the centralized management application 30. This functionality addresses one problem with existing solutions, i.e., that they generally expose users of an application to the full functionality of the application, sometimes allowing tasks and operations to fail when permissions to complete them do not exist.

One possible way of tailoring is to add or remove features (e.g., tasks and capabilities) to or from the application's user interface based on a user's capabilities. For example, a user may or may not have access to the "delete" operation for a particular object (e.g., a volume) stored on the storage server 2. In that case, the delete option may be hidden from view when that object is selected. As another example, the third party application may choose to display or hide an icon representing a feature based on the user's capabilities. As yet another example, the application may maintain a list of tasks (add, modify, delete) for each object and choose to display as "inactive" any items for which the user does not have permissions.
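The last example, greying out unpermitted tasks, can be sketched as below. The task list and function are illustrative; in practice the capability set would come from querying the management application 30.

```python
# Illustrative sketch: mark tasks "inactive" when the user lacks the
# matching capability, so the UI can grey them out.

ALL_TASKS = ["add", "modify", "delete"]

def tailor_tasks(user_capabilities):
    """Return (task, state) pairs for rendering the task list."""
    return [(t, "active" if t in user_capabilities else "inactive")
            for t in ALL_TASKS]
```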

Another type of tailoring is for the third party application to automatically launch entirely different sets of code, based on a determination of the user's capabilities. For example, the client application may determine whether the current user is a backup administrator and, if so, automatically launch a backup application, and if not, automatically launch a different application.

To implement these types of tailoring, the third party application 7, as the client, can formulate and execute API calls as, for example, XML queries over HTTP or HTTPS, to the management application 30. These include API calls for specifying user capabilities, for managing user capabilities, and for querying user capabilities. The third party application 7 maintains a concept of "the current user" and can transmit capability queries to the management application 30 regarding that user automatically in a manner which is transparent to the user. Based on responses to these queries, the third party application 7 maintains a list of features and tasks, which are parameterized according to the user's capabilities.

On the server side, the management application 30 includes a corresponding suite of APIs for administering user capabilities. More particularly, the management server 30 includes one or more APIs for specifying user capabilities, one or more APIs for managing user capabilities, and one or more APIs for responding to queries of user capabilities from third party applications. For example, one API may allow a third party application 7 to specify to the management application 30, “User XYZ has the ‘delete’ capability for object DEF”. Another API might allow the management application 30 to respond to a query from a third party application 7 such as, “Does user XYZ have the ‘delete’ capability for object DEF?”. Still other APIs may be used for query purposes that perform filtering operations. For example, as described above, when asked to return a list of objects, a particular object may be filtered out by the management application 30 (e.g., by the API proxy 33) if the user does not have read capabilities for that object (i.e., in-line filtering).
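The specify-and-query API pair described above ("User XYZ has the 'delete' capability for object DEF" / "Does user XYZ have the 'delete' capability for object DEF?") can be modeled minimally. The storage layout and function names are assumptions made for illustration.

```python
# Minimal sketch of the server-side capability store and query API.

capabilities = {}  # (user, object) -> set of capability names

def grant_capability(user, obj, cap):
    """Specify a capability: e.g., user XYZ gets 'delete' on object DEF."""
    capabilities.setdefault((user, obj), set()).add(cap)

def has_capability(user, obj, cap):
    """Answer a capability query from a third party application."""
    return cap in capabilities.get((user, obj), set())
```

The same store could back the in-line filtering described earlier, by dropping from a result list any object for which `has_capability(user, obj, "read")` is false.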

Thus, as described above, in certain embodiments the management application 30 acts as both a centralized PDP and a centralized PEP for RBAC purposes, where the storage servers 2 trust the management application 30. In other embodiments, the management application 30 acts as a centralized PDP only, and the storage servers act as their own PEPs, which trust the management application 30's PDP. In other embodiments, a storage server 2 could act as a limited PDP and PEP, such as where the RBAC is on a per-API and per-user basis. Alternatively, a directory server could act as a PDP and limited PEP (e.g., enforcing only authentication), where the storage servers 2 act as a PEP, as described above.

As indicated above, the techniques introduced herein can be implemented in software, either in whole or in part. FIG. 6 is a high-level block diagram showing an example of the architecture of a processing system, at a high level, in which such software can be embodied. In certain embodiments, the processing system 60 is a management console 5. In other embodiments, the processing system 60 is a storage server 2. In still other embodiments, the processing system 60 is a separate network server or other form of processing system. Note that certain standard and well-known components which are not germane to the present invention are not shown.

The processing system 60 includes one or more processors 61 and memory 62, coupled to a bus system 63. The bus system 63 shown in FIG. 6 is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers. The bus system 63, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as “Firewire”).

The processors 61 are the central processing units (CPUs) of the processing system 60 and, thus, control its overall operation. In certain embodiments, the processors 61 accomplish this by executing software stored in memory 62. A processor 61 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

Memory 62 represents any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory 62 includes the main memory of the processing system 60. Memory 62 may store software which implements the techniques introduced above.

Also connected to the processors 61 through the bus system 63 are one or more internal mass storage devices 65, and a network adapter 66. Internal mass storage devices 65 may be or include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks. The network adapter 66 provides the processing system 60 with the ability to communicate with remote devices (e.g., clients 1) over a network and may be, for example, an Ethernet adapter, a Fibre Channel adapter, or the like. The processing system 60 may also include one or more input/output (I/O) devices 67 coupled to the bus system 63. The I/O devices 67 may include, for example, a display device, a keyboard, a mouse, etc. If the processing system 60 is a storage server 2, it may include a storage adapter (not shown), such as a Fibre Channel adapter or a SCSI adapter, to allow the storage server 2 to access a set of mass storage devices.

Thus, a method and apparatus for centralized control of administrative access to storage servers have been described. Note that references throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics being referred to may be combined as suitable in one or more embodiments of the invention, as will be recognized by those of ordinary skill in the art.

Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.

Thompson, Timothy J., Yoder, Alan G., Swartzlander, Benjamin B., Klinkner, Steven R., Flank, Joshua H.

11561720, Jun 25 2014 Pure Storage, Inc. Enabling access to a partially migrated dataset
11561949, Dec 12 2014 Pure Storage, Inc. Reconstructing deduplicated data
11570180, Dec 23 2021 EQUE CORPORATION Systems configured for validation with a dynamic cryptographic code and methods thereof
11570249, Aug 18 2011 Amazon Technologies, Inc. Redundant storage gateways
11573727, Jan 10 2013 Pure Storage, Inc. Virtual machine backup and restoration
11579974, Sep 28 2010 Pure Storage, Inc. Data protection using intra-device parity and inter-device parity
11588633, Mar 15 2019 Pure Storage, Inc.; Pure Storage, Inc Decommissioning keys in a decryption storage system
11614893, Sep 15 2010 Pure Storage, Inc.; Pure Storage, Inc Optimizing storage device access based on latency
11615185, Nov 22 2019 Pure Storage, Inc. Multi-layer security threat detection for a storage system
11625481, Nov 22 2019 Pure Storage, Inc. Selective throttling of operations potentially related to a security threat to a storage system
11636031, Aug 11 2011 Pure Storage, Inc Optimized inline deduplication
11640244, Oct 28 2016 Pure Storage, Inc. Intelligent block deallocation verification
11645162, Nov 22 2019 Pure Storage, Inc. Recovery point determination for data restoration in a storage system
11651075, Nov 22 2019 Pure Storage, Inc.; PURE STORAGE, INC , A DELAWARE CORPORATION Extensible attack monitoring by a storage system
11657146, Nov 22 2019 Pure Storage, Inc. Compressibility metric-based detection of a ransomware threat to a storage system
11657155, Nov 22 2019 Pure Storage, Inc Snapshot delta metric based determination of a possible ransomware attack against data maintained by a storage system
11662909, Nov 24 2014 Pure Storage, Inc Metadata management in a storage system
11662936, Jan 10 2013 Pure Storage, Inc. Writing data using references to previously stored data
11675898, Nov 22 2019 Pure Storage, Inc. Recovery dataset management for security threat monitoring
11681568, Aug 02 2017 STYRA, INC Method and apparatus to reduce the window for policy violations with minimal consistency assumptions
11687418, Nov 22 2019 Pure Storage, Inc. Automatic generation of recovery plans specific to individual storage elements
11704036, May 02 2016 Pure Storage, Inc. Deduplication decision based on metrics
11706024, Nov 06 2013 Pure Storage, Inc. Secret distribution among storage devices
11720691, Nov 22 2019 Pure Storage, Inc. Encryption indicator-based retention of recovery datasets for a storage system
11720692, Nov 22 2019 Pure Storage, Inc. Hardware token based management of recovery datasets for a storage system
11720714, Nov 22 2019 Pure Storage, Inc. Inter-I/O relationship based detection of a security threat to a storage system
11733908, Jan 10 2013 Pure Storage, Inc. Delaying deletion of a dataset
11734097, Jan 18 2018 Pure Storage, Inc. Machine learning-based hardware component monitoring
11741244, Aug 24 2018 STYRA, INC. Partial policy evaluation
11755751, Nov 22 2019 Pure Storage, Inc. Modify access restrictions in response to a possible attack against data stored by a storage system
11762712, Aug 23 2018 STYRA, INC. Validating policies and data in API authorization system
11768623, Jan 10 2013 Pure Storage, Inc Optimizing generalized transfers between storage systems
11775189, Apr 03 2019 Pure Storage, Inc. Segment level heterogeneity
11775392, Dec 11 2014 Pure Storage, Inc. Indirect replication of a dataset
11797386, Sep 28 2010 Pure Storage, Inc. Flexible RAID layouts in a storage system
11803567, Dec 19 2014 Pure Storage, Inc. Restoration of a dataset from a cloud
11811619, Oct 02 2014 Pure Storage, Inc. Emulating a local interface to a remotely managed storage system
11841984, Jun 03 2014 Pure Storage, Inc. Encrypting data with a unique key
11847336, Mar 20 2014 Pure Storage, Inc. Efficient replication using metadata
11853463, Aug 23 2018 STYRA, INC. Leveraging standard protocols to interface unmodified applications and services
11853584, Jan 10 2013 Pure Storage, Inc. Generating volume snapshots
11853733, Aug 14 2020 STYRA, INC. Graphical user interface and system for defining and maintaining code-based policies
11869586, Jul 11 2018 Pure Storage, Inc.; Pure Storage, Inc Increased data protection by recovering data from partially-failed solid-state devices
11875349, Jun 22 2018 MasterCard International Incorporated Systems and methods for authenticating online users with an access control server
11881989, Jun 30 2011 Amazon Technologies, Inc. Remote storage gateway management using gateway-initiated connections
11886707, Feb 18 2015 Pure Storage, Inc. Dataset space reclamation
11899986, Nov 06 2013 Pure Storage, Inc. Expanding an address space supported by a storage system
11914861, Sep 08 2014 Pure Storage, Inc. Projecting capacity in a storage system based on data reduction levels
8112812, Aug 04 2005 Konica Minolta Business Technologies, Inc. Recording medium and device administration apparatus
8145818, Oct 10 2006 Hitachi, Ltd. Access right managing method for accessing multiple programs
8156516, Mar 29 2007 EMC IP HOLDING COMPANY LLC Virtualized federated role provisioning
8453166, Apr 14 2010 Bank of America Corporation; Bank of American Corporation Data services framework visibility component
8572023, Apr 14 2010 Bank of America Corporation Data services framework workflow processing
8601263, May 18 2010 GOOGLE LLC Storing encrypted objects
8601600, May 18 2010 GOOGLE LLC Storing encrypted objects
8607358, May 18 2010 GOOGLE LLC Storing encrypted objects
8650657, May 18 2010 GOOGLE LLC Storing encrypted objects
8756338, Apr 29 2010 NetApp, Inc. Storage server with embedded communication agent
8813256, Sep 30 2008 GEMALTO SA Regulator of commands which are destined for a sensitive application
8856881, Feb 26 2009 GENPACT LUXEMBOURG S À R L II, A LUXEMBOURG PRIVATE LIMITED LIABILITY COMPANY SOCIÉTÉ À RESPONSABILITÉ LIMITÉE Method and system for access control by using an advanced command interface server
8886907, May 18 2010 Google Inc Accessing objects in hosted storage
9081950, May 29 2012 International Business Machines Corporation Enabling host based RBAC roles for LDAP users
9104507, Jun 25 2010 Developer platform
9106634, Jan 02 2013 Microsoft Technology Licensing, LLC Resource protection on un-trusted devices
9148283, May 18 2010 GOOGLE LLC Storing encrypted objects
9218200, Aug 21 2008 VMware, Inc.; VMWARE, INC Selective class hiding in open API component architecture system
9245149, Mar 31 2015 Kaspersky Lab AO System and method for controlling privileges of consumers of personal data
9258274, Jul 09 2014 SHAPE SECURITY, INC Using individualized APIs to block automated attacks on native apps and/or purposely exposed APIs
9396342, Jan 15 2013 International Business Machines Corporation Role based authorization based on product content space
9471807, Nov 05 2014 EMC IP HOLDING COMPANY LLC System and method for creating security slices with storage system resources and related operations relevant in software defined/as-a-service models, on a purpose built backup appliance (PBBA)/protection storage appliance natively
9584501, Jan 02 2013 Microsoft Technology Licensing, LLC Resource protection on un-trusted devices
9588842, Dec 11 2014 Pure Storage, Inc. Drive rebuild
9589008, Jan 10 2013 Pure Storage, Inc. Deduplication of volume regions
9646039, Jan 10 2013 Pure Storage, Inc. Snapshots in a storage system
9684460, Sep 15 2010 Pure Storage, Inc. Proactively correcting behavior that may affect I/O performance in a non-volatile semiconductor storage device
9710165, Feb 18 2015 Pure Storage, Inc.; Pure Storage, Inc Identifying volume candidates for space reclamation
9727485, Nov 24 2014 Pure Storage, Inc.; Pure Storage, Inc Metadata rewrite and flatten optimization
9729506, Aug 22 2014 SHAPE SECURITY, INC Application programming interface wall
9749333, May 05 2014 OLIVER LLOYD PTY LTD ACN 108 899 323 AS TRUSTEE OF THE WWITE UNIT TRUST Shared access appliance, device and process
9773007, Dec 01 2014 Pure Storage, Inc.; Pure Storage, Inc Performance improvements in a storage system
9779268, Jun 03 2014 Pure Storage, Inc.; Pure Storage, Inc Utilizing a non-repeating identifier to encrypt data
9792045, Mar 15 2012 Pure Storage, Inc. Distributing data blocks across a plurality of storage devices
9800602, Sep 30 2014 SHAPE SECURITY, INC Automated hardening of web page content
9804973, Jan 09 2014 Pure Storage, Inc. Using frequency domain to prioritize storage of metadata in a cache
9811551, Oct 14 2011 Pure Storage, Inc. Utilizing multiple fingerprint tables in a deduplicating storage system
9817608, Jun 25 2014 Pure Storage, Inc. Replication and intermediate read-write state for mediums
9864761, Aug 08 2014 Pure Storage, Inc. Read optimization operations in a storage system
9864769, Dec 12 2014 Pure Storage, Inc.; Pure Storage, Inc Storing data utilizing repeating pattern detection
9880779, Jan 10 2013 Pure Storage, Inc. Processing copy offload requests in a storage system
9891858, Jan 10 2013 Pure Storage, Inc. Deduplication of regions with a storage system
9977600, Nov 24 2014 Pure Storage, Inc. Optimizing flattening in a multi-level data structure
Patent Priority Assignee Title
7092942, May 31 2002 Oracle International Corporation Managing secure resources in web resources that are accessed by multiple portals
7185359, Dec 21 2001 Microsoft Technology Licensing, LLC Authentication and authorization across autonomous network systems
7234032, Nov 20 2003 KYNDRYL, INC Computerized system, method and program product for managing an enterprise storage system
20030088786
20030208378
20030225889
20040083367
20050172151
20050229236
20050251522
20060230281
Executed on    Assignor    Assignee    Conveyance    Frame/Reel/Doc
Apr 07 2005    FLANK, JOSHUA H    Network Appliance, Inc    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    0164680018 pdf
Apr 07 2005    KLINKNER, STEVEN R    Network Appliance, Inc    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    0164680018 pdf
Apr 07 2005    SWARTZLANDER, BENJAMIN B    Network Appliance, Inc    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    0164680018 pdf
Apr 07 2005    THOMPSON, TIMOTHY J    Network Appliance, Inc    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    0164680018 pdf
Apr 07 2005    YODER, ALAN G    Network Appliance, Inc    ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)    0164680018 pdf
Apr 08 2005    NetApp, Inc.    (assignment on the face of the patent)
Mar 10 2008    Network Appliance, Inc    NetApp, Inc    CHANGE OF NAME (SEE DOCUMENT FOR DETAILS)    0253640739 pdf
Date Maintenance Fee Events
Feb 16 2011    ASPN: Payor Number Assigned.
Sep 22 2014    M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Sep 24 2018    M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Sep 22 2022    M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Mar 22 2014    4 years fee payment window open
Sep 22 2014    6 months grace period start (w surcharge)
Mar 22 2015    patent expiry (for year 4)
Mar 22 2017    2 years to revive unintentionally abandoned end. (for year 4)
Mar 22 2018    8 years fee payment window open
Sep 22 2018    6 months grace period start (w surcharge)
Mar 22 2019    patent expiry (for year 8)
Mar 22 2021    2 years to revive unintentionally abandoned end. (for year 8)
Mar 22 2022    12 years fee payment window open
Sep 22 2022    6 months grace period start (w surcharge)
Mar 22 2023    patent expiry (for year 12)
Mar 22 2025    2 years to revive unintentionally abandoned end. (for year 12)