A method for optimizing storage configuration for future demand, and a system applying the method, are disclosed. The system includes a monitoring module, a storage recording module, a traffic modeling module, a rule-based decision module, and a storage management module. With the performance values and utilization values provided by the monitoring module, a traffic status of data access in a particular time in the future can be generated. Then, a storage configuration matching the workload requirement can be determined according to a set of rules. The storage configuration is implemented to fulfill the requirement of the traffic status of data access.
19. A method for optimizing storage configuration, comprising the steps of:
A. collecting storage feature information from a storage system having a plurality of storage nodes of at least two different specifications to provide different data services, the plurality of storage nodes being arranged under a current storage configuration, wherein the storage feature information indicates capability of the plurality of storage nodes to provide snapshot service, data replication service, data deduplication service, and data compression service;
B. monitoring performance values and utilization values from the storage system and a host;
C. storing the performance values and utilization values;
D. predicting a traffic status of data access in a particular time in the future according to the stored performance values and utilization values;
E. generating rules determining mapping of a workload requirement to a corresponding storage configuration and defining mapping of service capability to a corresponding storage node of the storage system based on the storage feature information;
F. checking if the traffic status of data access in the particular time fulfills the workload requirement under the current storage configuration;
G. re-configuring the current storage configuration to the corresponding storage configuration according to the rules, and alarming that the storage nodes need to be upgraded or extra storage nodes need to be added in the particular time in the future, when the workload requirement is not able to be fulfilled by the current storage configuration,
wherein the workload requirement comprises requirement of data service including snapshot service, data replication service, data deduplication service or data compression service.
1. A system for optimizing storage configuration, comprising:
a monitoring module, electrically connected to a workload host and a storage system having a plurality of storage nodes of at least two different specifications to provide different data services, the plurality of storage nodes being arranged under a current storage configuration, wherein performance values, utilization values and feature information are collected from each of the storage nodes by the monitoring module;
a storage recording module, electrically connected to the monitoring module, wherein the performance values and utilization values are allowed to be manually updated and stored in the storage recording module;
a traffic modeling module, electrically connected to the storage recording module, wherein the traffic modeling module generates a prediction of a traffic status of data access in a particular time in the future calculated based on the stored performance values and utilization values to check if the traffic status of data access in the particular time fulfills a workload requirement under the current storage configuration;
a rule-based decision module, electrically connected to the traffic modeling module, wherein the rule-based decision module provides rules determining mapping of the workload requirement to a corresponding storage configuration and defining mapping of service capability to a corresponding storage node of the storage system based on the feature information which indicates capability of the plurality of storage nodes to provide snapshot service, data replication service, data deduplication service, and data compression service; and
a storage management module, electrically connected to the rule-based decision module and the storage system, wherein the storage management module provides the workload requirement from the workload host to the rule-based decision module and reconfigures the current storage configuration to the corresponding storage configuration according to the rules provided from the rule-based decision module when the workload requirement is not able to be fulfilled by the current storage configuration,
wherein the workload requirement comprises requirement of data service including snapshot service, data replication service, data deduplication service or data compression service.
2. The system according to
3. The system according to
4. The system according to
5. The system according to
6. The system according to
7. The system according to
8. The system according to
9. The system according to
10. The system according to
11. The system according to
12. The system according to
13. The system according to
14. The system according to
15. The system according to
16. The system according to
17. The system according to
18. The system according to
a workload requirement managing unit, for receiving the workload requirement;
an action rule generating unit, for creating rules defining the mapping of a workload requirement to a storage configuration of the storage system,
a configuration rule generating unit, for creating rules defining the mapping of service capability to storage nodes of the storage system, and
a rule matching unit, connecting to the workload requirement managing unit, the action rule generating unit and the configuration rule generating unit, for determining the mapping of the received workload requirement to the rules.
20. The method according to
The present invention relates to a method and a system for storage configuration. More particularly, the present invention relates to a method and a system for optimizing storage configuration for future demand.
In data centers and enterprises, IT teams routinely provision more storage resources than their systems need. Usually, a cluster of spindles (Hard Disk Drives, HDDs), an all-flash array (Solid-State Disks, SSDs), or a hybrid of HDDs and SSDs is dedicated to a workload, e.g., Virtual Desktop Infrastructure (VDI). Such over-provisioning exists only to absorb unpredictable peaks of performance or growth of users. Dedicating storage to a single workload may waste a large portion of unused storage capacity, along with the money spent on an “imagined” peak performance which may never happen. Therefore, enterprises and cloud service providers have difficulty planning budgets and choosing the right configuration for their storage systems.
A solution for the above problem is provided in U.S. Pat. No. 8,706,962, which discloses a method of configuring a multi-tiered storage system. The storage system comprises a number of storage tiers, and each of the storage tiers includes storage devices of a particular type of storage. Data access information for storage extents (small portions of a storage volume) to be stored in the storage system is received. Resource information for available storage tiers in the storage system on which to place the storage extents is also received. A cost incurred by the storage system for placing each of the storage extents on each of the storage tiers is determined. The cost is based on a consumption of storage resources for storing a storage extent in a storage tier and is calculated using the data access and resource information. Each storage extent is assigned to the particular storage tier that would incur the lowest cost to the storage system for storing that extent. For each storage tier, a minimum number of storage devices that would satisfy the data access and capacity requirements for all storage extents assigned to that tier is then selected.
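The lowest-cost placement step of the '962 method can be sketched as follows. This is an illustrative re-creation, not the patented implementation; the cost function is a placeholder supplied by the caller, whereas the patent derives it from the data access and resource information.

```python
# Sketch of cost-based extent placement: each storage extent is assigned
# to the tier whose computed cost is lowest. The cost callable and all
# names here are hypothetical.
def assign_extents(extents, tiers, cost):
    """extents: iterable of extent ids; tiers: iterable of tier names;
    cost(extent, tier) -> float. Returns {extent: cheapest tier}."""
    return {e: min(tiers, key=lambda t: cost(e, t)) for e in extents}
```

For example, a frequently accessed ("hot") extent whose cost is lowest on an SSD tier lands there, while a "cold" extent lands on the HDD tier.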
The method of '962 finds the minimum cost of storing each storage extent, and under that cost a combination of storage devices is selected for use. It is an economical way to utilize available storage. However, it does not mean that a storage system run by the method saves the most money. Fluctuation in the use of the storage system in the future remains unpredicted. Purchase of storage therefore stays conservative, and a large, unnecessary portion of storage will still be reserved.
Hence, a method and a system for optimizing storage configuration for the future are desired. Preferably, the method or system provides a plan that allocates just-enough storage resources for a period of time in the future. Once the current storage infrastructure cannot satisfy the requirements in that period, a warning message can be provided to the IT team members, and a recommended plan for new storage procurement can be made available as well.
This paragraph extracts and compiles some features of the present invention; other features will be disclosed in the follow-up paragraphs. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims.
According to an aspect of the present invention, a system for optimizing storage configuration is disclosed. The system includes: a monitoring module, electrically connected to a workload host and a storage system, for collecting performance values and utilization values from the workload host and/or the storage system having a plurality of storage nodes, and scanning feature information from each of the storage nodes; a storage recording module, electrically connected to the monitoring module, for storing the performance values and utilization values from the monitoring module or by manual update, and providing the stored performance values and utilization values; a traffic modeling module, electrically connected to the storage recording module, for providing a traffic status of data access in a particular time in the future according to the performance values and utilization values from the storage recording module; a rule-based decision module, electrically connected to the traffic modeling module, for providing rules determining mapping of a workload requirement to a storage configuration of the storage system; and a storage management module, electrically connected to the rule-based decision module and the storage system, for providing one workload requirement from the workload host to the rule-based decision module and performing one storage configuration to the storage system according to the rule provided from the rule-based decision module.
According to the present invention, the storage nodes are Hard Disk Drives (HDDs), Solid State Drives (SSDs), or a combination of HDD(s) and SSD(s). Therefore, the storage configuration can be one of a specific HDD, a specific SSD, a combination of HDDs, a combination of SSDs, and a combination of HDDs and SSDs. The storage nodes have at least two different specifications. The performance value mentioned above may be read/write Input/output Operations Per Second (IOPS), read/write latency, read/write throughput, or read/write cache hit. The utilization value may be the free size in a storage node in the storage system or the Central Processing Unit (CPU) load of the workload host.
Preferably, the workload requirement comprises current and future performance values, current and future utilization values, and data services. The data services include processing snapshots of a portion or all of one storage node, meeting the quality of service of operation of the storage system, replication of data in one storage node, deduplication of data in one storage node, and compression of data in one storage node. The feature information comprises the online status of each storage node, the offline status of each storage node, the property of snapshots of a portion or all of one storage node, the quality of service of operation of the storage system, the property of replication of data in one storage node, the property of deduplication of data in one storage node, and the property of compression of data in one storage node.
In detail, the rule-based decision module further has a database with specifications and costs for all storage nodes, and provides a total cost of one storage configuration. The storage management module further provides a procurement plan for extra storage nodes that are required to fill the gap between the traffic status of data access in a particular time in the future and the maximum capacity the current storage configuration in the storage system can provide.
In order to provide a warning signal, the storage management module further raises an alarm when the traffic status of data access in a particular time in the future exceeds the maximum capacity that all storage nodes in the storage system, or the currently set storage configuration, can provide.
It should be noticed that the monitoring module, the storage recording module, the traffic modeling module, the rule-based decision module, or the storage management module may be a physical device or software running on at least one server. Some or all of the monitoring module, the storage recording module, the traffic modeling module, the rule-based decision module, and the storage management module can be installed in a single server. Communication between two connected modules is achieved by Inter-Process Communication (IPC), such as Remote Procedure Call (RPC).
According to the present invention, the rule mentioned above is manually created and/or provided by the rule-based decision module. In some embodiments, a preset threshold value for each performance value or utilization value is provided so that when one monitored performance value or utilization value exceeds the corresponding preset threshold value, the traffic modeling module initiates a new process to provide a traffic status of data access in a particular time in the future and the storage management module performs one storage configuration to the storage system according to a new rule provided from the rule-based decision module. When the monitoring module scans and discovers new feature information, the traffic modeling module initiates a new process to provide a traffic status of data access in a particular time in the future and the storage management module performs one storage configuration to the storage system according to a new rule provided from the rule-based decision module. When the feature information is further provided manually, the traffic modeling module likewise initiates a new process to provide a traffic status of data access in a particular time in the future and the storage management module performs one storage configuration to the storage system according to a new rule provided from the rule-based decision module.
The rule-based decision module can further include the following units: a workload requirement managing unit, for receiving the workload requirement; an action rule generating unit, for creating rules defining the mapping of a workload requirement to a storage configuration of the storage system; a configuration rule generating unit, for creating rules defining the mapping of storage properties to storage nodes of the storage system; and a rule matching unit, connecting to the workload requirement managing unit, the action rule generating unit and the configuration rule generating unit, for determining the mapping of the received workload requirement to the rules.
According to an aspect of the present invention, a method for optimizing storage configuration is disclosed. The method includes the steps of: collecting storage feature information from a storage system; monitoring performance values and utilization values from the storage system and a host; storing the performance values and utilization values; generating a traffic status of data access in a particular time in the future according to the performance values and utilization values; mapping a workload requirement to a storage configuration according to rules; checking if the existing storage nodes meet the requirement of the traffic status of data access in the particular time in the future; re-configuring the existing storage nodes by the storage configuration if the existing storage nodes do not meet the requirement of the traffic status of data access in the particular time in the future; and alarming that the existing storage nodes need to be upgraded or extra storage nodes need to be added in the particular time in the future if the existing storage nodes do not meet that requirement. Preferably, a last step can be recommending a plan of extra storage added to the existing storage nodes, or of adding new storage nodes, to meet the requirement of the traffic status of data access in the particular time in the future.
The present invention will now be described more specifically with reference to the following embodiments.
Please refer to
Preferably, the storage system 300 is a software-defined storage. Software-defined storage refers to computer data storage technologies which separate storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment provides policy management for feature options, such as deduplication, replication, thin provisioning, snapshots, and backup. Therefore, logically, the configuration of storage nodes may support many aspects of data access. Even though the storage nodes are the same kind of device (HDD or SSD), they may have different specifications, e.g., at least two specifications. Some of them may have a storage capacity of 1 TB while others may have 2 TB. It has nothing to do with manufacturers, as long as the communication specifications for every storage node from different manufacturers are unified and accepted by the storage system 300.
The storage configuring system 100 has five main parts: a monitoring module 110, a storage recording module 120, a traffic modeling module 130, a rule-based decision module 140, and a storage management module 150. Each module may be a physical device, such as a server. The modules can also be software running on one or more servers (workload hosts); the latter is applied in the present embodiment. The five modules can be installed in one server to function. They can also be separated into several individual servers, while two or more modules can be in the same server. The way the modules are located is not limited by the present invention. Functions of each module will be introduced sequentially.
The monitoring module 110 is electrically connected to workload hosts 200, 201, and 202 and the storage system 300. Here, a number of workload hosts are used to provide a certain cloud service, including different workloads, such as Virtual Desktop Infrastructure (VDI) or an Oracle database. The monitoring module 110 can collect performance values and utilization values from the workload host 200, 201, or 202 and/or the storage system 300. A performance value refers to real data which can be obtained when the workload hosts access the storage system 300. Typically, the performance value can be read/write Input/output Operations Per Second (IOPS), read/write latency, read/write throughput, or read/write cache hit. One or some of the performance values can be received by the monitoring module 110 for further computing. A utilization value is a measurement of the current status of the hardware of the storage system 300 and the workload hosts 200, 201, and 202. For example, the utilization value can be the free size in a storage node in the storage system 300 or the Central Processing Unit (CPU) load of the workload host 202. Unlike the performance values, the utilization values should be collected as comprehensively as possible. Once there is any change of a utilization value or a performance value, or one of the two values exceeds a preset threshold value, the traffic modeling module 130 initiates a new process to provide a traffic status of data access in a particular time in the future, and the storage management module 150 performs one storage configuration to the storage system 300 according to a new rule provided from the rule-based decision module 140. This will be illustrated in detail later.
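The threshold check described above can be sketched as follows. The metric names and threshold values are illustrative assumptions, not taken from the disclosure; the point is only that ceilings (e.g., IOPS, CPU load) and floors (e.g., free size) both count as "exceeding" a preset threshold and thus trigger a new prediction.

```python
# Hypothetical thresholds for monitored performance/utilization values.
THRESHOLDS = {
    "read_iops": 4000,       # performance value (ceiling)
    "write_latency_ms": 20,  # performance value (ceiling)
    "cpu_load_pct": 80,      # utilization value of a workload host (ceiling)
    "free_size_gb": 100,     # utilization value of a storage node (floor)
}

def exceeds_threshold(metric: str, value: float) -> bool:
    """Return True when a monitored value crosses its preset threshold."""
    limit = THRESHOLDS[metric]
    # Free size is a floor; the other metrics are ceilings.
    return value < limit if metric == "free_size_gb" else value > limit

def check_samples(samples: dict) -> list:
    """Collect the metrics whose current samples should trigger a new
    traffic prediction and a re-configuration request."""
    return [m for m, v in samples.items()
            if m in THRESHOLDS and exceeds_threshold(m, v)]
```

In a real system this check would run on every collection cycle; any non-empty result would wake the traffic modeling module.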
The monitoring module 110 also scans feature information from each of the storage nodes. The feature information includes, but is not limited to, the online status of each storage node, the offline status of each storage node, the property of snapshots of a portion (volume) or all of one storage node, the quality of service of operation of the storage system 300, the property of replication of data in one storage node, the property of deduplication of data in one storage node, and the property of compression of data in one storage node. “Property” means whether or not the storage node can provide the related service; namely, if a storage node has the property, the service can be provided. The feature information is obtained so that an overall running status of the storage system 300 can be available, and a decision for future operation can refer to the feature information.
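One way to represent the scanned feature information is a small record per node, with each "property" as a boolean. This is a hypothetical sketch; the field names are illustrative and not prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class NodeFeatures:
    """Feature information scanned from one storage node (illustrative)."""
    node_id: str
    online: bool
    snapshot: bool       # "property" = the node can provide the related service
    replication: bool
    deduplication: bool
    compression: bool

    def supports(self, service: str) -> bool:
        """True when the node is online and has the named property."""
        return self.online and getattr(self, service)
```

A rule that maps service capability to a storage node would then simply query `supports("snapshot")` and the like for each candidate node.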
The storage recording module 120 is electrically connected to the monitoring module 110. It stores the performance values and utilization values from the monitoring module 110. It can also receive and store performance values and utilization values which are manually updated. Manual updates (perhaps by IT team members) cover cases where data are incomplete due to a connection error between modules yet can still be found by someone who supervises the storage system 300, or where historical or empirical data can be provided. The storage recording module 120 can also provide the stored performance values and utilization values.
The traffic modeling module 130 is electrically connected to the storage recording module 120. It can provide a traffic status of data access in a particular time in the future according to the performance values and utilization values from the storage recording module 120. The traffic status of data access is a forecasted performance value at the particular time in the future. Since the performance value is a key value for checking whether a future requirement of the workload can be fulfilled by the current storage configuration, obtaining trusted forecasted performance values is very important. For example, the traffic status of data access may be the IOPS at any time 5 minutes or more ahead. The traffic status of data access must be derived by collecting historically accumulated IOPS and analyzing them. Any suitable methods, algorithms, or modules that provide such service can be applied. It is best to utilize a storage traffic modeling system provided by the same inventor in U.S. patent application Ser. No. 14/290,533, which is herein incorporated by reference in its entirety.
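The actual model is deferred to Ser. No. 14/290,533; as a minimal stand-in, the input/output contract of this step can be illustrated with an ordinary least-squares trend over historical IOPS samples. This is an assumption-laden sketch, not the claimed modeling system.

```python
# Fit a linear trend to historically accumulated IOPS samples and
# extrapolate to a future time. Purely illustrative of the contract:
# history in, forecasted performance value out.
def forecast_iops(history, future_t):
    """history: list of (minute, iops) samples; returns predicted IOPS at future_t."""
    n = len(history)
    sum_t = sum(t for t, _ in history)
    sum_y = sum(y for _, y in history)
    sum_tt = sum(t * t for t, _ in history)
    sum_ty = sum(t * y for t, y in history)
    # Ordinary least-squares slope and intercept.
    slope = (n * sum_ty - sum_t * sum_y) / (n * sum_tt - sum_t ** 2)
    intercept = (sum_y - slope * sum_t) / n
    return slope * future_t + intercept
```

With samples of 1000, 1100, and 1200 IOPS at minutes 0, 1, and 2, the extrapolation at minute 5 is 1500 IOPS.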
The rule-based decision module 140 is electrically connected to the traffic modeling module 130. It is in charge of providing rules that determine the mapping of a workload requirement to a storage configuration of the storage system 300. A rule is manually created by IT team members and/or generated automatically according to available feature information. A workload requirement includes current and future performance values, current and future utilization values, and data services. A data service is a requirement from the workload host 200, 201, or 202 that the storage system 300 can support. For example, a data service may be processing snapshots of a portion (volume) or all of one storage node, fulfilling the quality of service of operation of the storage system 300, replicating data in one storage node, deduplicating data in one storage node, or compressing data in one storage node. The storage configuration refers to a specific HDD, a specific SSD, a combination of HDDs, a combination of SSDs, or a combination of HDDs and SSDs.
The rule-based decision module 140 is further composed of four main units. Please see
Please see
The workload requirements from the workload host 200 are added as knowledge. The knowledge triggers the rules. Therefore, CR1 and CR3 are triggered. Then, action rules of AR1 and AR5 are triggered. The rule-based decision module 140 requests the storage management module 150 to change current storage configuration with the volume 1 of the HDD2 added, which has snapshot capability.
The rule-based decision module 140 further has a database 145 with specifications and costs for all storage nodes. The database 145 can also provide a total cost of requested storage nodes in a storage configuration. Another example showing how the rule-based decision module 140 operates is illustrated below.
Please refer to
Obviously, CR1, CR3 and CR5 are triggered. The volume 1 of the HDD2 is able to take care of these workload requirements. The volume 1 of the HDD2 is picked, but no action is taken at this point. Then, AR1, AR5, and AR8 are triggered, and the volume 1 of the HDD2 is selected by the function in AR1, AR5, and AR8.
Since the storage configuring system 100 can deal with more than two workload requirements at the same time, another example of how it works is shown in
The configuration rules are changed significantly to fulfill multiple workloads. As one can see, CR1 means that if a workload requirement is IOPS<4000, then the volume 1 of the HDD2 is provided for the requirement. CR2 means that if a workload requirement is IOPS<4000, then the volume 2 of the HDD2 is provided for the requirement. CR3 means that if a workload requirement is IOPS≥4000, then the volume 3 of the SSD3 is provided for the requirement. CR4 to CR6 mean that if a workload requirement needs snapshots, then the volume 1 of the HDD2, the volume 2 of the HDD2, and the volume 3 of the SSD3 can all be provided for the requirement. CR7 means that if the volume 1 of the HDD2 is selected and the total cost is requested by one workload requirement, then the cost of the volume 1 of the HDD2 is USD 1000. CR8 means that if the volume 2 of the HDD2 is selected and the total cost is requested by one workload requirement, then the cost of the volume 2 of the HDD2 is USD 1500. CR9 means that if the volume 3 of the SSD3 is selected and the total cost is requested by one workload requirement, then the cost of the volume 3 of the SSD3 is USD 2000. For the action rules, only AR1 is amended. The new AR1 says that if IOPS<4000 is requested, another storage node has been selected and is ready to be combined into a storage configuration, a maximum total cost is given, and the cost of the combined storage configuration is still within that maximum, then the node can be used together with the selected one; otherwise, no snapshot service for the IOPS<4000 workload is provided.
Again, it is obvious that all the configuration rules are triggered. All of the volume 1 of the HDD2, the volume 2 of the HDD2, and the volume 3 of the SSD3 are qualified for the requirements R1, R2, and R3. Meanwhile, the action rules AR1, AR5 and AR8 are triggered, too. However, at this stage, the volume 2 of the HDD2 is not qualified. Because the volume 3 of the SSD3 is a must-pick for R2, only one of the volume 1 of the HDD2 and the volume 2 of the HDD2 should be used to fulfill R1. Since the total cost of the volume 2 of the HDD2 and the volume 3 of the SSD3, USD 3500, is greater than the maximum total cost, USD 3000, the storage configuration is set to be the volume 1 of the HDD2 and the volume 3 of the SSD3 for the two workloads.
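The multi-workload example above can be re-created as a small matching sketch: candidate volumes with their IOPS ceilings, snapshot capability, and costs (following CR1 to CR9), plus a budget cap standing in for the amended AR1. The names (HDD2 volume 1, etc.) follow the text; the greedy cheapest-first logic is an illustrative assumption, not the claimed rule engine.

```python
# Candidate volumes per the configuration rules; max_iops is an
# exclusive ceiling (CR1/CR2: IOPS<4000 -> HDD2; CR3: IOPS>=4000 -> SSD3).
VOLUMES = {
    "HDD2.vol1": {"max_iops": 4000, "snapshot": True, "cost": 1000},          # CR1/CR4/CR7
    "HDD2.vol2": {"max_iops": 4000, "snapshot": True, "cost": 1500},          # CR2/CR5/CR8
    "SSD3.vol3": {"max_iops": float("inf"), "snapshot": True, "cost": 2000},  # CR3/CR6/CR9
}

def configure(requirements, budget):
    """Pick one qualifying volume per workload requirement, cheapest first,
    keeping the total cost within the budget (the amended AR1)."""
    chosen, total = [], 0
    for req in requirements:
        candidates = sorted(
            (v for v, spec in VOLUMES.items()
             if v not in chosen
             and req["iops"] < spec["max_iops"]
             and (not req.get("snapshot") or spec["snapshot"])),
            key=lambda v: VOLUMES[v]["cost"])
        for v in candidates:
            if total + VOLUMES[v]["cost"] <= budget:
                chosen.append(v)
                total += VOLUMES[v]["cost"]
                break
    return chosen, total
```

Under a USD 3000 cap, a snapshot workload below 4000 IOPS plus a 5000 IOPS workload yields HDD2 volume 1 and SSD3 volume 3, matching the outcome described above; HDD2 volume 2 is rejected because it would push the total to USD 3500.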
The storage management module 150 is electrically connected to the rule-based decision module 140 and the storage system 300. It can provide one workload requirement from the workload host 200, 201, and/or 202 to the rule-based decision module 140 to request a corresponding storage configuration to be performed on the storage system 300 according to the rule provided from the rule-based decision module 140. As mentioned above, the storage configuration will be done according to the action rules AR1, AR5, and AR8.
Due to the database 145, the storage management module 150 can further provide a procurement plan for extra storage nodes that are required to fill the gap between the traffic status of data access in a particular time in the future and the maximum capacity the current storage configuration in the storage system 300 can provide. For example, if the traffic modeling module 130 forecasts that the IOPS required 48 hours later will be 5000 while the current storage configuration can only support an IOPS of 4000 or less, a procurement plan is proposed as a solution for this requirement. Likewise, the storage management module 150 raises an alarm when the traffic status of data access in a particular time in the future exceeds the maximum capacity that all storage nodes in the storage system 300, or the currently set storage configuration, can provide. The alarm can be set as an action rule. It should be noticed that the connection between any two connected modules mentioned above is achieved by Inter-Process Communication (IPC), such as Remote Procedure Call (RPC).
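The gap check behind the alarm and procurement plan can be sketched as follows, echoing the 48-hour example (a 5000 IOPS forecast against a 4000 IOPS ceiling). The per-node IOPS and cost figures are hypothetical placeholders; a real plan would draw them from the database 145.

```python
def plan_procurement(forecast_iops, current_max_iops,
                     node_iops=2000, node_cost=2000):
    """Return (alarm, nodes_to_add, cost) needed to close the IOPS gap.
    node_iops/node_cost are hypothetical per-node figures."""
    gap = forecast_iops - current_max_iops
    if gap <= 0:
        return False, 0, 0       # current configuration suffices: no alarm
    nodes = -(-gap // node_iops)  # ceiling division: whole nodes only
    return True, nodes, nodes * node_cost
```

For the example above, the 1000 IOPS gap triggers the alarm and a one-node plan; a forecast at or below the current ceiling produces no alarm.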
In another embodiment, some interactive modes are disclosed. A preset threshold value for each performance value or utilization value is provided. Thus, when one monitored performance value or utilization value exceeds the corresponding preset threshold value, the traffic modeling module 130 initiates a new process to provide a traffic status of data access in a particular time in the future, and the storage management module 150 performs one storage configuration to the storage system 300 according to a new rule provided from the rule-based decision module 140. If the monitoring module 110 scans and discovers new feature information, the traffic modeling module 130 initiates a new process to provide a traffic status of data access in a particular time in the future, and the storage management module 150 performs a new storage configuration to the storage system 300 according to a new rule provided from the rule-based decision module 140. In addition, when the feature information is further provided manually by the IT team members, the traffic modeling module 130 initiates a new process to provide a traffic status of data access in a particular time in the future, and the storage management module 150 performs a new storage configuration to the storage system 300 according to a new rule provided from the rule-based decision module 140.
It is clear from the above that the embodiments show a physical system for optimizing storage configuration for future requirements. A method for achieving the same goal can be derived. Please refer to
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention needs not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded with the broadest interpretation so as to encompass all such modifications and similar structures.
Chen, Wen Shyen, Huang, Ming Jen, Shih, Tsung Ming, Huang, Chun Fang
Assignments recorded: On Sep 17 2014, Huang, Ming Jen; Huang, Chun Fang; Shih, Tsung Ming; and Chen, Wen Shyen each assigned their interest to Prophetstor Data Services, Inc. (Reel/Frame 033868/0905). The application was filed on Oct 01 2014 by Prophetstor Data Services, Inc.