A database system is provided with user-directed application-side storage tiering control functionality. A database node of a database system comprises a daemon to communicate with a plurality of storage devices on a plurality of storage tiers of the database system; the daemon further configured to implement storage tiering control functionality based on a user-specified data selection and tier imperative for at least first and second storage tiers comprising respective disjoint subsets of the plurality of storage tiers. An application executing on the database node provides the user-specified data selection and tier imperative to the daemon for processing in response to a predefined keyword. The daemon performs the storage tiering control functionality in response to a programmatic call comprising one or more predefined keywords, such as predefined base command verbs, tier imperative keywords and/or environmental qualifiers. A shunt robot optionally implements the storage tiering control functionality on the at least first and second storage tiers.

Patent: 9594780
Priority: Jun 28 2013
Filed: Jun 28 2013
Issued: Mar 14 2017
Expiry: Dec 12 2033
Extension: 167 days
Entity: Large
Status: currently ok
17. A method comprising:
configuring a daemon on a database node to communicate with a plurality of storage devices on a plurality of storage tiers of a database system; and
further configuring the daemon to implement storage tiering control functionality in response to a command-based user-specified dynamic data selection of at least one database object subset and a tier imperative for at least first and second storage tiers comprising respective disjoint subsets of the plurality of storage tiers based on said user-specified data selection and tier imperative, wherein said command-based user-specified dynamic data selection comprises one or more verb phrases to dynamically generate said at least one database object subset from data in an active memory of said database node, wherein said daemon executes in said active memory;
the daemon thereby being configured to dynamically control movement of said at least one dynamically generated database object subset between the at least first and second storage tiers in response to said command-based user-specified dynamic data selection and tier imperative.
1. A database node of a database system, comprising:
a daemon to communicate with a plurality of storage devices on a plurality of storage tiers of the database system;
the daemon further configured to implement storage tiering control functionality in response to a command-based user-specified dynamic data selection of at least one database object subset and a tier imperative for at least first and second storage tiers comprising respective disjoint subsets of the plurality of storage tiers, wherein said command-based user-specified dynamic data selection comprises one or more verb phrases to dynamically generate said at least one database object subset from data in an active memory of said database node, wherein said daemon executes in said active memory;
the daemon thereby being configured to dynamically control movement of said at least one dynamically generated database object subset between the at least first and second storage tiers in response to said command-based user-specified dynamic data selection and tier imperative;
the daemon being implemented utilizing at least one processing device coupled to a memory.
25. A database system comprising:
a database node;
a plurality of storage tiers having at least first and second storage tiers comprising respective disjoint subsets of the plurality of storage tiers; and
a daemon on said database node to communicate with a plurality of storage devices on said plurality of storage tiers;
the daemon further configured to implement storage tiering control functionality in response to a command-based user-specified dynamic data selection of at least one database object subset and a tier imperative for said at least first and second storage tiers, wherein said command-based user-specified dynamic data selection comprises one or more verb phrases to dynamically generate said at least one database object subset from data in an active memory of said database node, wherein said daemon executes in said active memory; and
the daemon thereby being configured to dynamically control movement of said at least one dynamically generated database object subset between the at least first and second storage tiers in response to said command-based user-specified dynamic data selection and tier imperative.
2. The database node of claim 1, wherein an application executing on said database node provides said user-specified data selection and tier imperative to the daemon for processing in response to a predefined keyword.
3. The database node of claim 1, wherein said daemon performs said storage tiering control functionality in response to a programmatic call comprising one or more predefined keywords.
4. The database node of claim 3, wherein said one or more predefined keywords comprise one or more predefined base command verbs to define said user-specified data selection.
5. The database node of claim 3, wherein said one or more predefined keywords comprise one or more tier imperative keywords to define said tier imperative.
6. The database node of claim 3, wherein said one or more predefined keywords comprise one or more environmental qualifiers to define an environment of said database system.
7. The database node of claim 1, wherein the daemon is further configured to communicate with one or more clients over a network and to process requests from said clients.
8. The database node of claim 1, wherein the daemon further comprises a shunt robot to implement said storage tiering control functionality on said at least first and second storage tiers.
9. The database node of claim 8, wherein at least a portion of the shunt robot is implemented using data migration software at least a portion of which is stored in said memory and executed by said at least one processing device.
10. The database node of claim 9, wherein the shunt robot spawns one or more storage realignment activity threads.
11. The database node of claim 10, wherein a number of said storage realignment activity threads is determined based on an assessment of one or more of resources in said database system and a complexity of a re-tiering task.
12. The database node of claim 1, wherein the daemon further comprises a lexical parser to perform lexical parsing in context using one or more predefined keywords.
13. The database node of claim 1, wherein the daemon further comprises a control scrubber to evaluate one or more of a syntax validity and security.
14. The database node of claim 1, wherein the daemon further comprises a storage option scrubber to evaluate one or more of a profile and risks of one or more of the database host and storage resources.
15. The database node of claim 1, wherein said user-specified tier imperative allows said user to selectively promote and demote said selected at least one database object subset.
16. A processing platform comprising the database node of claim 1.
18. The method of claim 17, wherein an application executing on said database node provides said user-specified data selection and tier imperative to the daemon for processing in response to a predefined keyword.
19. The method of claim 17, further comprising the step of configuring the daemon to perform said storage tiering control functionality in response to a programmatic call comprising one or more predefined keywords.
20. The method of claim 19, wherein said one or more predefined keywords comprise one or more of predefined base command verbs to define said user-specified data selection, tier imperative keywords to define said tier imperative and environmental qualifiers to define an environment of said database system.
21. The method of claim 17, wherein the daemon further comprises a shunt robot to implement said storage tiering control functionality on said at least first and second storage tiers.
22. The method of claim 21, further comprising the step of configuring the shunt robot to spawn one or more storage realignment activity threads.
23. The method of claim 17, wherein said user-specified tier imperative allows said user to selectively promote and demote said selected at least one database object subset.
24. A computer program product comprising a non-transitory machine-readable storage medium having encoded therein executable code of one or more software programs, wherein the one or more software programs when executed cause the daemon to perform the steps of the method of claim 17.
26. The database system of claim 25, wherein an application executing on said database system provides said user-specified data selection and tier imperative to the daemon for processing in response to a predefined keyword.
27. The database system of claim 25, wherein said daemon performs said storage tiering control functionality in response to a programmatic call comprising one or more predefined keywords.
28. The database system of claim 27, wherein said one or more predefined keywords comprise one or more of predefined base command verbs to define said user-specified data selection, tier imperative keywords to define said tier imperative and environmental qualifiers to define an environment of said database system.
29. The database system of claim 25, wherein the daemon further comprises a shunt robot to implement said storage tiering control functionality on said at least first and second storage tiers.
30. The database system of claim 29, wherein the shunt robot spawns one or more storage realignment activity threads.

The field relates generally to data storage, and more particularly, to storage re-tiering in database systems.

A database system allows multiple client devices to share access to databases over a network. In conventional database implementations, it can be difficult to balance the conflicting requirements of storage capacity and IO throughput. IO operations on object storage servers are generally performed directly with back-end storage arrays associated with those servers, and the corresponding storage devices may not be well matched to the current needs of the system. This can lead to situations in which either performance is less than optimal or the costs of implementing the system become excessive.

SQL (Structured Query Language) is a programming language that can be used to manage data stored in a relational database management system (RDBMS). The most common operation in SQL is the query, which is performed with a declarative select statement. A SQL query retrieves data from one or more tables or expressions. Tiered storage is a data storage environment consisting of two or more kinds of storage delineated by differences in price, performance, capacity and/or functionality. The performance of a SQL query may be impaired when the target data is stored on multiple storage tiers or when the target data is stored on a storage tier offering a lower performance level.
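For readers less familiar with SQL, the following minimal sketch (using Python's built-in sqlite3 module; the table and column names are hypothetical) illustrates a declarative select statement, which names what data is wanted without specifying where that data physically resides:

```python
import sqlite3

# Build a small in-memory database (hypothetical "orders" table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (c_id INTEGER, order_state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "backlog"), (2, "complete"), (3, "backlog")])

# A declarative SQL query: it states *what* data is wanted, not *where*
# (i.e., on which storage tier) that data physically resides.
rows = conn.execute(
    "SELECT c_id FROM orders WHERE order_state = 'backlog'").fetchall()
print(rows)  # -> [(1,), (3,)]
```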

Accordingly, despite the many advantages of database systems, a need remains for additional improvements, particularly with regard to IO operations. A need exists for user-directed storage tiering control that allows selected data, independent of database or application environment, to be promoted and/or demoted among storage tiers in context of dynamic business data priorities from either inside or outside of application code. For example, further acceleration of IO operations, leading to enhanced system performance relative to conventional arrangements, would be desirable. Additionally or alternatively, an ability to achieve particular levels of performance at lower cost would be advantageous.

Illustrative embodiments of the present invention provide database systems that implement user-directed application-side storage tiering control functionality, so as to provide significant improvements relative to conventional arrangements. In one embodiment, a database system comprises at least one database node, comprising: a daemon to communicate with a plurality of storage devices on a plurality of storage tiers of the database system; the daemon further configured to implement storage tiering control functionality based on a user-specified data selection and tier imperative for at least first and second storage tiers comprising respective disjoint subsets of the plurality of storage tiers, the daemon thereby being configured to control movement of data between the at least first and second storage tiers.

According to one aspect of the invention, an application executing on the database node provides the user-specified data selection and tier imperative to the daemon for processing in response to a predefined keyword. According to another aspect of the invention, the daemon performs the storage tiering control functionality in response to a programmatic call comprising one or more predefined keywords. The predefined keywords include, for example, predefined base command verbs to define the user-specified data selection; tier imperative keywords to define the tier imperative; and/or environmental qualifiers to define an environment of the database system.

According to yet another aspect of the invention, the daemon comprises a shunt robot to implement the storage tiering control functionality on the at least first and second storage tiers. The exemplary shunt robot optionally spawns one or more storage realignment activity threads. In various embodiments, the daemon further comprises a lexical parser to perform lexical parsing in context using one or more predefined keywords; a control scrubber to evaluate one or more of a syntax validity and security; and/or a storage option scrubber to evaluate one or more of a profile and risks of one or more of the database host and storage resources.

As noted above, illustrative embodiments described herein provide significant improvements relative to conventional arrangements. In some of these embodiments, use of a user-directed application-side storage tiering control function allows dynamic balancing of storage capacity and IO throughput requirements in a database system, thereby allowing particular levels of performance to be achieved at a significantly lower cost than would otherwise be possible.

FIG. 1 is a block diagram of a database system having multiple storage tiers and a SQL Intercept Tier Alignment Robot (SITAR) daemon in an illustrative embodiment of the invention;

FIG. 2 is a block diagram of an exemplary embodiment of a SITAR daemon executing on a database node of FIG. 1;

FIG. 3 is a table illustrating an exemplary set of command verbs, tier imperatives and environmental qualifiers and the resulting contextual action;

FIGS. 4 and 5 illustrate exemplary programmatic calls to implement storage re-tiering in accordance with aspects of the invention; and

FIGS. 6 through 11 comprise exemplary pseudo code for various components of the SITAR daemon of FIG. 2.

Illustrative embodiments of the present invention will be described herein with reference to exemplary database systems and associated clients, servers, storage arrays and other processing devices. It is to be appreciated, however, that the invention is not restricted to use with the particular illustrative database system and device configurations shown. Accordingly, the term “database system” as used herein is intended to be broadly construed, so as to encompass, for example, dedicated standalone and/or distributed or parallel database systems, and other types of database systems implemented using one or more clusters of processing devices.

Aspects of the present invention provide user-directed storage tiering control that allows selected data to be promoted and/or demoted among storage tiers. In one exemplary embodiment, a SQL Intercept Tier Alignment Robot (SITAR) daemon is provided on one or more nodes of a database system. The SITAR daemon allows a user to specify a subset of a database (e.g., a selected data range and/or segments of data within data file blocks) to be promoted and/or demoted among a plurality of storage tiers. The data selection can be performed across a plurality of tiers, which may be, for example, partitioned by date, and the selected data can be directed to a designated tier.

According to a further feature of the invention, storage tiering control can be provided from within or outside of a database application to re-tier SQL objects at a selected level. Storage tiering is initiated from within a database application by employing a recognized verb keyword that signals the database application to pass the syntax that follows to the SITAR daemon 250 for further processing. In this manner, the disclosed exemplary SITAR daemon provides intelligent and real-time movement of tables, table spaces and user-specified data selections among storage tiers in applications interacting with SQL compliant databases. While the exemplary embodiments employ SQL queries, other query languages could be used, as would be apparent to a person of ordinary skill in the art.

As discussed hereinafter, in one exemplary embodiment, the disclosed SITAR daemon is a rule-based mechanism to move user-specified data selections across storage tiers in real-time. The exemplary SITAR daemon processes user actions initiated from within or outside of a database application to validate, weight, prioritize, and aggregate action task elements in context of an active resource burden and encapsulated best practices for storage tiering. Once validated, parsed, and indexed, the SITAR daemon spawns, monitors and journals storage realignment activity threads based on those and other operational criteria as detailed in the lexical parse engine schematic.
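The validate-weight-prioritize-aggregate pipeline described above might be sketched as follows (an illustrative Python sketch only; the class and function names, the three-token call format, and the weighting heuristic are assumptions for illustration, not the patented implementation):

```python
from dataclasses import dataclass

@dataclass
class WorkEntry:
    """One validated, parsed re-tiering task element."""
    verb: str        # e.g. "scan", "mod", "clear", "recover"
    imperative: str  # e.g. "priority", "depriority", "turbo", "shift"
    qualifier: str   # e.g. "ora12(cls)", "pgres", "dbtwo"
    weight: int = 0  # impact weight assigned during prioritization

def process_call(raw: str) -> list:
    """Validate a brace-delimited SITAR string and aggregate work entries."""
    tokens = raw.strip("{}").split()
    if len(tokens) != 3:
        raise ValueError("expected: <verb> <imperative> <qualifier>")
    verb, imperative, qualifier = (t.lower() for t in tokens)
    entry = WorkEntry(verb, imperative, qualifier)
    # Weight "turbo" requests higher so they are acted upon first
    # (illustrative prioritization heuristic).
    entry.weight = 10 if imperative == "turbo" else 1
    return [entry]
```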

FIG. 1 shows a database system 100 configured in accordance with an illustrative embodiment of the present invention. The database system 100 comprises a plurality of clients 102, one or more database nodes 200-1 through 200-N, as discussed further below in conjunction with FIG. 2, and a plurality of storage tiers 112-1 through 112-N. More particularly, the exemplary database system 100 comprises N clients denoted 102-1 through 102-N, and N storage tiers denoted 112-1 through 112-N, each of which may comprise a storage array or other type of storage device. The clients 102 communicate with the database nodes 200 over a network 106.

Storage arrays utilized in the database system 100 may comprise, for example, storage products such as VNX and Symmetrix VMAX, both commercially available from EMC Corporation of Hopkinton, Mass. A variety of other storage products may be utilized to implement at least a portion of the object storage targets of the database system 100.

The network 106 may comprise, for example, a global computer network such as the Internet, a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as WiFi or WiMAX, or various portions or combinations of these and other types of networks. The term “network” as used herein is therefore intended to be broadly construed, so as to encompass a wide variety of different network arrangements, including combinations of multiple networks possibly of different types.

The storage tiers 112 in the present embodiment are arranged into first, second and third storage tiers 112-1, 112-2 and 112-N, also denoted as Storage Tiers 1, 2 and 3, although it is to be appreciated that fewer or more than three storage tiers may be used in other embodiments. The first, second and third storage tiers 112 comprise respective disjoint subsets of storage. More particularly, the first storage tier 112-1 comprises flash storage, the second storage tier 112-2 comprises fibre channel storage and the third storage tier 112-N comprises SATA storage. As shown in FIG. 1, each storage tier maintains database persistence.

As discussed further below in conjunction with FIGS. 4 and 5, the exemplary clients 102 comprise a SITAR application database client 102-1 and an ANSI SQL client 102-2. The client 102 may also be referred to herein as simply a “user.” The term “user” should be understood to encompass, by way of example and without limitation, a user device, a person utilizing or otherwise associated with the device, a software client executing on a user device or a combination thereof. An operation described herein as being performed by a user may therefore, for example, be performed by a user device, a person utilizing or otherwise associated with the device, a software client or by a combination thereof.

The different storage tiers 112-1 through 112-N in this embodiment comprise different types of storage devices having different performance characteristics. As indicated above, the first storage tier 112-1 provides storage of a first type, such as flash storage, the second storage tier 112-2 provides storage of a second type, such as fibre channel storage, and the third storage tier 112-N provides storage of a third type, such as SATA (Serial ATA) storage.

The flash storage devices of the first storage tier 112-1 are generally significantly faster in terms of read and write access times than the fibre channel storage devices of the second storage tier 112-2. The flash storage devices are therefore considered “fast” devices in this embodiment relative to the “slower” fibre channel storage devices. Likewise, the fibre channel storage devices of the second storage tier 112-2 are generally significantly faster in terms of read and write access times than the SATA disk storage devices of the third storage tier 112-N. The fibre channel storage devices are therefore considered “faster” devices in this embodiment relative to the “slower” SATA disk storage devices.

Accordingly, the database system 100 may be characterized in the present embodiment as having a “fast” storage tier 112-1, a “medium” storage tier 112-2, and a “slow” storage tier 112-N, where “fast,” “medium” and “slow” in this context are relative terms and not intended to denote any particular absolute performance level. These storage tiers 112 comprise respective disjoint subsets of storage devices. However, numerous alternative tiering arrangements may be used, including arrangements with more or fewer tiers, each providing a different level of performance. The particular storage devices used in a given storage tier may be varied in other embodiments, and multiple distinct storage device types may be used within a single storage tier.

The flash storage devices in the first storage tier 112-1 may be implemented, by way of example, using respective flash Peripheral Component Interconnect Express (PCIe) cards or other types of memory cards installed in a computer or other processing device that implements the corresponding storage tier 112-1. Numerous alternative arrangements are possible. Also, a variety of other types of non-volatile or volatile memory in any combination may be used to implement at least a portion of the storage devices. Examples of alternatives to flash storage devices that may be used in other embodiments of the invention include non-volatile memories such as magnetic random access memory (MRAM) and phase change random access memory (PC-RAM).

The flash storage devices of the first storage tier 112-1 generally provide higher performance than the disk storage devices of the third tier 112-N but the disk storage devices of the third storage tier 112-N generally provide higher capacity at lower cost than the flash storage devices. The exemplary tiering arrangement of FIG. 1 therefore makes it possible to dynamically balance the conflicting requirements of storage capacity and IO throughput, thereby avoiding situations in which either performance is less than optimal or the costs of implementing the system become excessive. Arrangements of this type can provide further acceleration of IO operations in the database system 100, leading to enhanced system performance relative to conventional arrangements, while additionally or alternatively providing an ability to achieve particular levels of performance at lower cost.

In the FIG. 1 embodiment, user-directed application-side storage tiering control functionality is implemented in each database node 200 using a SITAR daemon 250, as discussed further below in conjunction with FIG. 2. More particularly, in the embodiment of FIG. 1, the database nodes 200 comprise a SITAR daemon 250 configured to implement storage tiering control functionality for the storage tiers 112 which as noted above comprise respective disjoint subsets of storage devices. The SITAR daemon 250 is thereby configured to control movement of data between the storage devices of the exemplary first, second and third storage tiers 112-1 through 112-N. Examples of such movement will be described below.

Each database node 200 further comprises a processor 156 coupled to a memory 158. The processor 156 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory 158 may comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination.

The memory 158 and other memories disclosed herein may be viewed as examples of what are more generally referred to as “computer program products” storing executable computer program code.

Also included in the database node 200 is network interface circuitry 154. The network interface circuitry 154 allows the database nodes 200 to communicate over the network 106 with the clients 102 and storage tiers 112. The network interface circuitry 154 may comprise, for example, one or more conventional transceivers.

The SITAR daemon 250 of the database nodes 200 may be implemented at least in part in the form of software that is stored in memory 158 and executed by processor 156.

The database node 200 comprising processor, memory and network interface components as described above is an example of what is more generally referred to herein as a “processing device.” Each of the clients 102, storage tiers 112 and database nodes 200 may similarly be implemented as a processing device comprising processor, memory and network interface components.

Although multiple SITAR daemons 250 are shown in the FIG. 1 embodiment, a given database system 100 in other embodiments may comprise only a single SITAR daemon 250.

The database system 100 may be implemented, by way of example, in the form of an ORACLE™ network file system (NFS), although use of any particular database system is not a requirement of the present invention. Accordingly, database nodes 200 need not be configured with ORACLE™ functionality, but may instead represent elements of another type of database system.

As indicated previously, it is difficult in conventional database system implementations to balance the conflicting requirements of storage capacity and IO throughput. This can lead to situations in which either performance is less than optimal or the costs of implementing the system become excessive.

In the present embodiment, these and other drawbacks of conventional arrangements are addressed by providing a SITAR daemon 250 to implement storage tiering control functionality. As will be described, such arrangements advantageously allow for transparent storage re-tiering in a database system in a manner that avoids the need for any significant changes to clients and storage devices of storage tiers 112. Again, other types and configurations of multiple storage tiers and associated storage devices may be used. Exemplary features of the data migration software and other functionality associated with a SITAR daemon 250 will be described below.

It should be noted with regard to the illustrative embodiments of FIG. 1 that relatively minor modifications may be made to one or more applications or other system elements or components in order to achieve additional improvements. For example, a job scheduler or other similar component within the system 100 can also be modified so as to take full advantage of the available storage tiering functionality.

FIG. 2 is a block diagram of an exemplary embodiment of SITAR daemon 250 executing on a database node 200 of FIG. 1 that incorporates aspects of the present invention. The exemplary SITAR daemon 250 executes in the active memory 220 of a database node 200 to perform storage re-tiering using storage alignment operations based on recognized verb phrases, as discussed further below in conjunction with FIGS. 3-5, database vendor context, and optionally a factored parse complexity rating (PCR).

As shown in FIG. 2, an exemplary SITAR daemon 250 comprises a lexical parse layer 260, a control scrubber 600, a storage option scrubber 800, a shunt robot 900 and a housekeeper monitor 1000. Generally, as discussed further below in conjunction with FIGS. 3-5, the lexical parse layer 260 comprises a verb phrase parser function 700 (not shown in FIG. 2), as discussed further below in conjunction with FIG. 7, to perform lexical parsing in context using predefined keywords. The exemplary verb phrase parser 700 correlates task elements against discrete storage actions, optionally aggregating into a work entry.

As discussed further below in conjunction with FIG. 6, the exemplary control scrubber 600 prevents and/or corrects improper syntax, performs security/validity handling and optionally sets a parse complexity rating that can optionally be used to perform work thread metering. As discussed further below in conjunction with FIG. 8, the exemplary storage option scrubber 800 performs a profile/risk check of the host and storage resources, optionally resolving to an indexed task state assignment (Enabled, Deferred, Denied).

As discussed further below in conjunction with FIG. 9, the exemplary shunt robot 900 implements storage re-tiering on the storage tiers 112 by spawning storage realignment activity threads based on the assigned State, Impact Weight (IW) and Discrete Storage Action (DSA). As discussed further below in conjunction with FIG. 10, the exemplary housekeeper monitor 1000 performs monitoring and metering of shunt tasks, SITAR daemons, and SITAR performance statistics and metrics.
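The thread metering performed by the shunt robot might be sketched as follows (an illustrative sketch only; the function names and the sizing heuristic relating host load and parse complexity to thread count are assumptions, not the patented implementation):

```python
import queue
import threading

def spawn_realignment_threads(tasks, free_cpu_fraction, parse_complexity):
    """Spawn a metered number of storage realignment worker threads.

    The thread-count heuristic (illustrative only) scales down when the
    host is busy or the re-tiering task carries a high parse complexity
    rating, so re-tiering does not starve higher-priority work.
    """
    max_threads = max(1, int(8 * free_cpu_fraction) - parse_complexity)
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    done = []

    def worker():
        # Drain the shared work queue; exit when no tasks remain.
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return
            done.append(task)  # placeholder for the actual block movement
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(max_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return max_threads, done
```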

FIG. 3 is a table illustrating an exemplary set of command verbs 310, tier imperatives 320 and environmental qualifiers 330 and the resulting contextual action 340. As shown in column 310 of FIG. 3, an exemplary embodiment of SITAR provides a scan base command verb to select content (e.g., in the Top N rows), a modify (mod) base command verb to perform an update, a clear base command verb to release the defined data subset and a recover base command verb to restore to original tiering.

As shown in column 320 of FIG. 3, an exemplary embodiment of SITAR provides a priority tier imperative to indicate that selected data should be moved to the highest performing storage tier, a depriority tier imperative to indicate that selected data should be moved to the lowest performing (e.g., least expensive) storage tier, a turbo tier imperative to indicate that selected data should be acted upon immediately and a shift tier imperative to indicate that selected data should be shifted up or down one level in the storage tiers 112.

As shown in column 330 of FIG. 3, an exemplary embodiment of SITAR provides a number of exemplary environmental qualifiers to indicate database-vendor specific information so that SITAR takes correct action, where ora12(cls) indicates an ORACLE™ Real Application Cluster (RAC) cluster, pgres indicates postgres (database vendor), mss(cls) indicates a MICROSOFT™ SQL Server cluster, and dbtwo indicates an IBM™ DB2™ relational database management system.

It is noted that other combinations of command verbs 310, tier imperatives 320 and environmental qualifiers 330 can be employed than those combinations showed in FIG. 3, as would be apparent to a person of ordinary skill in the art.
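The keyword tables of FIG. 3 can be modeled as simple lookup structures. The following is a minimal Python sketch; the dictionary names and descriptive strings are illustrative assumptions, not part of the patent:

```python
# Hypothetical lookup tables mirroring columns 310, 320 and 330 of FIG. 3.
BASE_COMMAND_VERBS = {
    "scan": "select content (e.g., the Top N rows)",
    "mod": "perform an update",
    "clear": "release the defined data subset",
    "recover": "restore to the original tiering",
}

TIER_IMPERATIVES = {
    "priority": "move selected data to the highest performing tier",
    "depriority": "move selected data to the lowest performing tier",
    "turbo": "act on the selected data immediately",
    "shift": "shift selected data up or down one tier level",
}

ENVIRONMENTAL_QUALIFIERS = {
    "ora12(cls)": "ORACLE RAC cluster",
    "pgres": "postgres",
    "mss(cls)": "MICROSOFT SQL Server cluster",
    "dbtwo": "IBM DB2",
}

def is_valid_phrase(verb, imperative, qualifier):
    """Check that each keyword appears in its respective table."""
    return (verb in BASE_COMMAND_VERBS
            and imperative in TIER_IMPERATIVES
            and qualifier in ENVIRONMENTAL_QUALIFIERS)
```

A table-driven representation of this kind makes it straightforward to add vendor qualifiers or verbs without changing the parsing logic.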

FIG. 4 illustrates an exemplary programmatic call 400 to SITAR via an exemplary ANSI SQL pass through “hint”. The exemplary programmatic call 400 comprises a predefined SQL verb keyword 410 providing an indication to the database application that the following SITAR string 420 should be provided to (pass through to) the SITAR daemon 250 for processing. The SITAR string 420 is comprised of one command verb 310, one tier imperative 320 and one environmental qualifier 330, where v indicates version and c|n indicates a cluster, node.

FIG. 5 illustrates an exemplary programmatic call 500 to SITAR from outside of a database application with an exemplary ANSI SQL query issued as a client connection. The exemplary programmatic call 500 comprises a SITAR string 520 to be processed by the SITAR daemon 250. For example, the programmatic call 500 may comprise a select statement 530 to select identified data from a given location.

For example, a programmatic call 400 within an application may comprise:

Select /* {Scan(top(500)) Priority Ora11(cls)} */

cus.c_name,cus.c_ord,cus.c_due,ord.order_summary from customers cus, orders ord where cus.c_id=ord.c_id;

Based on the scan command verb 310 and priority tier imperative 320, this exemplary programmatic call 400 shifts the data selection of the top 500 records from the current storage tier into a tablespace based in the highest performing storage tier of an ORACLE™ database system.

In another example, a programmatic call 500 from outside an application may comprise:

{Mod Depriority dbtwo(node)}; Update orders, set order_state=‘complete’ where order_type=‘backlog’

Based on the modify command verb 310 and depriority tier imperative 320, this exemplary programmatic call 500 performs the update command for backlog orders, changes their state to complete and then moves the selected content to the lowest performing/least expensive storage tier in an IBM™ DB2™ node environment. The application here is a scheduled batch job firing outside of the database, as a client of the database environment; thus, the batch job should not consume resources needed by higher priority tasks.

Examples of operations that may be performed in the system 100 utilizing the SITAR daemon 250 for all calls to SITAR in active memory will now be described in more detail with reference to the pseudo code of FIGS. 6-11.

FIG. 6 illustrates exemplary pseudo code 600 for the control scrubber 600 of FIG. 2. As indicated above, the exemplary control scrubber 600 prevents and/or corrects improper syntax, performs security/validity handling and optionally sets a parse complexity rating that can optionally be used to perform work thread metering.

As shown in FIG. 6, the exemplary pseudo code 600 performs a hash check for invalid/unsupported syntax against an inbound request object 610 during step 1.0. During step 1.1, the control scrubber 600 confirms the database user identity associated with the inbound request 610 for Sitar Grants. A security rejection can be logged if the user identity is unauthorized.
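Steps 1.0 and 1.1 of the control scrubber can be sketched as a pair of checks against the inbound request. In this minimal Python sketch, the blocklisted tokens and the user grants are illustrative assumptions; the patent does not specify their contents:

```python
# Hypothetical blocklist for the step-1.0 invalid/unsupported-syntax check
# and a hypothetical Sitar Grants set for the step-1.1 identity check.
INVALID_TOKENS = {"drop", "truncate", ";--"}
SITAR_GRANTS = {"batch_ops", "dba_user"}

def scrub_request(sitar_string, user):
    """Return (accepted, reason); a security rejection would be logged."""
    tokens = {tok.lower() for tok in sitar_string.split()}
    if tokens & INVALID_TOKENS:
        return False, "invalid/unsupported syntax"
    if user not in SITAR_GRANTS:
        return False, "security rejection: user not in Sitar Grants"
    return True, "accepted"
```

A real implementation would compare request hashes rather than raw tokens, as the pseudo code's "hash check" suggests, but the control flow is the same.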

During step 1.2, the control scrubber 600 sets a Parse Complexity Rating (PCR), for example, based on a Vendor context, Sitar verb phrase and limit delta as follows:
PCR=A+B+C, where,

A=(database Vendor Context, e.g., ORACLE™ database system, SQL Server, DB2);

B=(Length of Sitar verb phrase 310 (command concatenation)); and

C=Limit Delta (Actual Length−Limit specified in request 610).
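The PCR=A+B+C calculation can be sketched directly. In the following Python sketch, the numeric weights assigned to each vendor context (the A term) are assumptions for illustration; the patent leaves them unspecified:

```python
# Hypothetical vendor-context weights for the A term of PCR = A + B + C.
VENDOR_CONTEXT_WEIGHT = {"oracle": 3, "sqlserver": 2, "db2": 2, "postgres": 1}

def parse_complexity_rating(vendor, verb_phrase, actual_length, limit):
    """Compute the step-1.2 Parse Complexity Rating."""
    a = VENDOR_CONTEXT_WEIGHT.get(vendor, 0)  # A: database vendor context
    b = len(verb_phrase)                      # B: length of Sitar verb phrase
    c = actual_length - limit                 # C: limit delta
    return a + b + c
```

The resulting rating can then feed the work thread metering mentioned above, with higher-PCR requests metered more conservatively.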

FIG. 7 illustrates exemplary pseudo code 700 for a verb phrase parser. As indicated above, the verb phrase parser 700 performs lexical parsing in context using predefined keywords. As shown in FIG. 7, the exemplary pseudo code 700 iteratively extracts and chunks syntax verb phrases 310 from programmatic calls into task elements from the Syntax element, based on the B parameter (above), during step 1.3. The verb phrase parser 700 confirms the Task Elements and Vendor Context against the Storage Environment and PCR range to obtain a Discrete Storage Action (DSA) during step 1.3.1. The Task Elements are aggregated into a Shunt Task during step 1.3.2, and a job number token (JNT) is assigned during step 1.3.3, for example, based on a PCR and GUID (globally unique identifier) composite.
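Steps 1.3.2 and 1.3.3 reduce to aggregating the task elements and stamping the result with a PCR/GUID composite token. A minimal Python sketch, assuming a simple `"PCR-GUID"` string format for the JNT (the patent does not fix the composite's layout):

```python
import uuid

def make_shunt_task(task_elements, pcr):
    """Aggregate task elements into a shunt task (step 1.3.2) and
    assign a job number token as a PCR/GUID composite (step 1.3.3)."""
    jnt = f"{pcr}-{uuid.uuid4()}"  # assumed composite format
    return {"elements": list(task_elements), "pcr": pcr, "jnt": jnt}
```

Embedding the PCR in the token lets downstream components (e.g., the journaler) recover the complexity rating from the JNT alone.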

FIG. 8 illustrates exemplary pseudo code 800 for the storage option scrubber 800 of FIG. 2. As indicated above, the exemplary storage option scrubber 800 performs a profile/risk check of the host and storage resources, optionally resolving to an indexed task state assignment (Enabled, Deferred, Denied).

As shown in FIG. 8, the exemplary pseudo code 800 calculates and catalogs an impact weighting (IW) risk during step 1.4. The exemplary IW values for sampled, active resources are calculated during step 1.4.1 as follows:
IW=(DNU+HCP+DP+ASC+PCR)*State,
where the data network utilization (DNU) may comprise, for example, v$IOStat_Network; the Host CPU profile (HCP) may comprise, for example, v$SGAINFO (size, granule size, free memory) and v$SGA_Dynamic; the Disk Profile (DP) based on Storage Tier Activity may comprise, for example, v$sysmetric; and ASC is an Active Shunt Count.
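The IW formula itself is a weighted sum gated by the task state. In the following Python sketch, the State multipliers are assumptions (e.g., Enabled=1 while Deferred and Denied contribute no active impact); the patent gives only the product form:

```python
# Sketch of step 1.4.1: IW = (DNU + HCP + DP + ASC + PCR) * State.
# The state multipliers below are illustrative assumptions.
STATE_FACTOR = {"Enabled": 1, "Deferred": 0, "Denied": 0}

def impact_weight(dnu, hcp, dp, asc, pcr, state):
    """Compute the impact weighting risk for a sampled, active resource."""
    return (dnu + hcp + dp + asc + pcr) * STATE_FACTOR[state]
```

Gating on state means a Denied or Deferred task contributes nothing to the active risk total, which matches its exclusion from spawning in FIG. 9.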

Examples of operations that may be performed in the system 100 utilizing the SITAR daemon 250 for all validated, parsed and indexed shunt tasks will now be described in more detail with reference to the pseudo code of FIGS. 9-11.

FIG. 9 illustrates exemplary pseudo code 900 for the shunt robot 900 of FIG. 2. As indicated above, the exemplary shunt robot 900 implements storage re-tiering on the storage tiers 112 by spawning storage realignment activity threads based on the assigned State, impact weight (IW) and Discrete Storage Action (DSA).

As shown in FIG. 9, the exemplary pseudo code 900 sets the shunt spawn limit (SSL) pessimistically during step 1.5 based on the active IW. The Shunt Spawn Limit (SSL) X is set during step 1.5.1 where
F(x)=IW+10
. . . (x>75{spawn=3}) . . .

The process threads for enabled shunt tasks within the SSL are spawned during step 1.5.1.1; the denied shunt tasks are logged during step 1.5.1.2, for example using a call to a journaler 1100 (FIG. 11); and the storage option scrubber 800 is called during step 1.5.1.3 to re-evaluate the deferred shunt tasks.
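The pessimistic SSL of step 1.5 can be sketched as a threshold function. The text gives only the fragment F(x)=IW+10 with x>75 yielding a spawn limit of 3, so the more permissive limit used below the threshold is an assumption for illustration:

```python
# Sketch of the step-1.5 pessimistic Shunt Spawn Limit (SSL).
def shunt_spawn_limit(iw):
    """Derive the spawn limit X from the active impact weight."""
    x = iw + 10      # F(x) = IW + 10
    if x > 75:
        return 3     # the one limit stated in the pseudo code fragment
    return 5         # assumed more permissive limit at lower active IW
```

Setting the limit pessimistically (fewer threads at higher IW) keeps re-tiering work from competing with the database's own I/O under load.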

FIG. 10 illustrates exemplary pseudo code 1000 for the housekeeper monitor 1000 of FIG. 2. As indicated above, the exemplary housekeeper monitor 1000 performs monitoring and metering of shunt tasks, SITAR daemons, and SITAR performance statistics and metrics.

As shown in FIG. 10, the exemplary pseudo code 1000 logs or destructs SSLs during step 1.6 based on SSL ID and JNT. The housekeeper monitor 1000 expands on statistics and metrics collected based on Verb Phrase (keyword) during step 1.7.

FIG. 11 illustrates exemplary pseudo code 1100 for a SITAR journaler 1100 incorporating aspects of the present invention. As shown in FIG. 11, the exemplary pseudo code 1100 posts event logs and action results by JNT during step 1.8.

Among other benefits, the disclosed SITAR daemon 250 can adjust application data performance priorities in context of shifting business data priorities; optimize a configuration for storage resource alignment to those priorities; adapt to performance requirements over time and attain an improved financial return on data storage investment and use. The disclosed SITAR daemon 250 can dynamically realign selected database content at the object or data subset level to a selected storage tiering assignment, independent of database or application environment, and from inside or outside of database application code.

The exemplary SITAR daemon 250 can factor for and address a variety of data cost/performance challenges and needs.

It is to be appreciated that the particular operations and associated messaging illustrated in FIGS. 6-11 are exemplary only and numerous other types of operations and messaging may be used in other embodiments.

It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform or each such element may be implemented on a separate processing platform.

Also, numerous other arrangements of computers, servers, storage devices or other components are possible in the database system 100. Such components can communicate with other elements of the database system 100 over any type of network or other communication media.

As indicated previously, components of a SITAR daemon 250 as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. A memory having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.”

The database system 100 or portions thereof may be implemented using one or more processing platforms each comprising a plurality of processing devices. Each such processing device may comprise processor, memory and network interface components of the type illustrated for database node 200-1 in FIG. 1.

As indicated above, database system tiering functionality such as that described in conjunction with FIGS. 2 through 11 can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer or server. A memory or other storage device having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.” Certain system components are implemented using a combination of software and hardware.

It should again be emphasized that the above-described embodiments of the invention are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types and arrangements of database systems and associated clients, servers and other processing devices that can benefit from user-directed application-side storage tiering control functionality as described herein. Also, the particular configurations of system and device elements shown in FIG. 1 can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the invention. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.

Esposito, Jeffrey D., McClure, Anne
