SSD wear-level data (320) is generated on managed nodes (202) having SSDs (206). The wear-level data is collected by a management node (204).
1. A method comprising:
determining solid-state disk (SSD) wear-level data for SSDs on managed nodes, wherein the wear-level data includes frequency of writes;
determining a recommendation to replace at least one of the SSDs with a hard disk or a replacement SSD based on the frequency of writes, wherein if performance is determined to be less than maximal, the recommendation is to replace the at least one SSD with the hard disk; otherwise the recommendation is to replace the at least one SSD with the replacement SSD; and
balancing, by a management node, loads among the managed nodes based on the wear-level data.
6. A management node comprising:
a processor to:
receive SSD wear-level data for each SSD of a plurality of SSDs on nodes, wherein the SSD wear-level data for each SSD includes frequency of writes;
compute wear-rate based on the frequency of writes for each SSD from the SSD wear-level data for the SSD, wherein the wear-rate is a rate of change of wear calculated based on previous SSD wear-level data and current wear-level data determined from the SSD wear-level data;
determine a recommendation to replace at least one of the SSDs with a hard disk or a replacement SSD based on the frequency of writes, wherein if performance is determined to be less than maximal, the recommendation is to replace the at least one SSD with the hard disk; otherwise the recommendation is to replace the at least one SSD with the replacement SSD; and
redistribute loads among the nodes based on wear-rate of each SSD of the plurality of SSDs to extend the lifetimes of some SSDs.
10. A computer system comprising:
a processor; and
a non-transitory computer readable medium storing machine readable instructions executable by the processor to:
collect SSD wear-level data for each of a plurality of SSDs from managed nodes having the plurality of SSDs, wherein the SSD wear-level data for each SSD includes frequency of writes;
compute wear-rate for each SSD based on the frequency of writes from the SSD wear-level data for the SSD, wherein the wear-rate is a rate of change of wear calculated based on previous SSD wear-level data and current wear-level data determined from the SSD wear-level data; and
generate a recommendation to replace an SSD of the plurality of SSDs with a hard disk or a replacement SSD based on the wear-rate for the SSD, wherein if performance is less than a maximal value determined based on the wear-rate for the SSD, the recommendation is to replace the SSD with the hard disk; otherwise the recommendation is to replace the SSD with the replacement SSD.
2. The method of
storing the SSD wear-level data in a database of SSD wear data, wherein the stored SSD wear-level data comprises an identifier for each managed node and the wear-level data for each managed node, and the wear-level data is associated with the identifier of the corresponding managed node.
3. The method of
determining workload redistributions among the SSDs so as to stagger times at which each SSD is replaced.
4. The method of
receiving, at one of the managed nodes, the wear-level data from each of the other managed nodes, wherein the wear-level data of each of the SSDs is measured at the managed node including the SSD.
5. The method of
receiving, at the management node, the wear-level data for each of the SSDs, wherein the wear-level data of each of the SSDs is measured at the managed node including the SSD.
7. The management node of
8. The management node of
9. The management node of
11. The computer system of
The present application is a Continuation of co-pending U.S. patent application Ser. No. 13/822,249, filed Mar. 11, 2013, which is a national stage filing under 35 U.S.C. § 371 of PCT application number PCT/US2010/055158, having an international filing date of Nov. 2, 2010, the disclosures of which are hereby incorporated by reference in their entireties.
Large computer installations can have thousands of components subject to failure and replacement. Accordingly, some computer management approaches monitor computer system health, e.g., by keeping track of data errors (even if they were corrected). Devices that generate excessive data errors can be targeted for replacement prior to a predicted failure. For example, hard disks can include controllers that detect and correct data errors and then log the occurrence of the error. Such techniques may also be applied to solid-state disks (SSDs), which provide higher performance than hard disks at increasingly affordable cost.
A process 100, flow charted in
For example, a computer system 200 includes managed nodes 202 and a management node 204, as shown in
In addition, the SSD wear data of table 212 can include wear-rate data, which can be calculated by comparing previous and current wear-level data for an SSD. Also, the SSD wear data of table 212 can include end dates by which respective SSDs are to be replaced based on the current wear level and rate. Table 212 also includes an FRU (field-replaceable unit) field for identifying the recommended replacement component (e.g., SSD type and size or hard disk speed and size) for each SSD. In other embodiments, wear-level data is represented in other forms, and different parameters are associated with the wear-level data; for example, wear-level data can be represented for different time slots.
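By way of illustration only, one row of such a wear-data table could be represented as in the following sketch; the field names and types are assumptions chosen for clarity, not a schema given by the patent.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SsdWearRecord:
    """Illustrative row of an SSD wear-data table such as table 212."""
    node_id: str                  # identifier of the managed node reporting the data
    ssd_id: str                   # identifier of the SSD within that node
    wear_level_pct: float         # current wear as a percent of estimated lifespan
    wear_rate_pct_per_day: float  # rate of change, from previous vs. current wear levels
    end_date: Optional[date]      # projected date by which the SSD is to be replaced
    fru: str                      # recommended replacement, e.g. SSD type/size or hard disk speed/size
```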
As shown in
SSD monitor 312 can be implemented as a hardware-software combination. Embedded firmware on a drive controller, such as RAID (Redundant Array of Inexpensive Disks), HBA (Host Bus Adapter), etc., for node 302 can read the wear data from any SSD drives present and push this to an embedded Baseboard Management Chip (BMC). The BMC can log this data and make it visible over an out-of-band network to management node 204 via a Web-GUI (graphical user interface) and/or IPMI OEM (Intelligent Platform Management Interface, Original Equipment Manufacturer) command, or another industry standard mechanism. Also, storage management agents on each managed node can extract the SSD drive wear level data and communicate it to other nodes, e.g., management node 204 and/or load-balancing node 304 over an in-band network.
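The patent does not specify the agent's implementation, but a minimal in-band sketch, assuming a smartctl-readable wear attribute and a hypothetical management-node HTTP endpoint (both of which vary by vendor and deployment), might look like this:

```python
import json
import re
import subprocess
import urllib.request

# Hypothetical endpoint; a real deployment might instead use IPMI OEM commands
# through the BMC over the out-of-band network.
MANAGEMENT_URL = "http://management-node.example/ssd-wear"

def read_wear_level(device: str) -> float:
    """Parse a wear indicator from smartctl output.

    Attribute names differ by vendor (Wear_Leveling_Count,
    Media_Wearout_Indicator, NVMe "Percentage Used", etc.), so the
    regular expression here is only a sketch.
    """
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"Percentage Used:\s+(\d+)%", out)
    if not match:
        raise RuntimeError(f"no wear attribute found for {device}")
    return float(match.group(1))  # percent of rated life consumed

def report_wear(node_id: str, device: str) -> None:
    """Push one wear-level sample to the management node."""
    payload = json.dumps({"node": node_id,
                          "device": device,
                          "wear_level_pct": read_wear_level(device)}).encode()
    req = urllib.request.Request(MANAGEMENT_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```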
For example, SSD wear-level monitor 312 can transmit SSD wear-level data for SSDs 306 and 308 to load-balancing node 304. Load-balancing node 304 is configured to distribute incoming requests 315 evenly among plural application nodes running instances of application 310. A load balancer 314 can base load-balancing determinations in part on wear-level data from the nodes it distributes to, e.g., to balance the rates at which SSD storage is written to across the nodes or to extend the remaining lifetimes of SSDs with more advanced wear levels.
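One way such a policy could be expressed, assuming each node reports a wear level and wear rate as percentages, is to weight request routing toward nodes whose SSDs have more remaining life; the weighting formula below is an illustrative choice, not one prescribed by the patent.

```python
def routing_weights(nodes):
    """Compute relative routing weights from SSD wear data.

    `nodes` maps node id -> (wear_level_pct, wear_rate_pct_per_day).
    Nodes with more remaining life and slower wear receive more traffic.
    """
    raw = {}
    for node_id, (wear_pct, rate) in nodes.items():
        remaining = max(100.0 - wear_pct, 0.0)
        raw[node_id] = remaining / (1.0 + rate)   # penalize fast-wearing SSDs
    total = sum(raw.values()) or 1.0
    return {node_id: w / total for node_id, w in raw.items()}

# Example: the node with the fresher, slower-wearing SSD receives most requests.
print(routing_weights({"node302": (80.0, 0.5), "node304": (20.0, 0.1)}))
```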
Node 304 has its own SSD 316 and its own wear-level monitor 318, similar to wear-level monitor 312. Wear-level monitors, e.g., monitors 312 and 318, of managed nodes 202 can transmit their respective wear level data 320 to management node 204 via networks 322. Networks 322 can include an in-band network and an out-of-band management network connected to BMC-bearing internal lights out modules of managed nodes 202.
Wear-level data 320 is received by SSD data manager 324 of management node 204. Data manager 324 stores collected wear-level data in table 212 in association with the SSDs and the nodes which generated the data. The wear-level and wear-rate data in table 212 can be extrapolated to project ends of the useful lifetimes of respective SSDs. The projected end dates can be used by SSD purchase agent 326 of management node 204 in making purchase recommendations 328 for SSD replacements. The projections may be made far enough in advance to take advantage of sales and quantity discounts.
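Under a simple linear model (an assumption; the patent says only that the data can be extrapolated), the projected replacement date follows directly from the current wear level and wear rate:

```python
from datetime import date, timedelta
from typing import Optional

def projected_end_date(wear_level_pct: float,
                       wear_rate_pct_per_day: float,
                       today: Optional[date] = None) -> date:
    """Linearly extrapolate the date at which wear reaches 100%."""
    today = today or date.today()
    if wear_rate_pct_per_day <= 0:
        raise ValueError("wear rate must be positive to project an end date")
    days_left = (100.0 - wear_level_pct) / wear_rate_pct_per_day
    return today + timedelta(days=days_left)

# An SSD at 60% wear, wearing 0.1% per day, has roughly 400 days of life left.
print(projected_end_date(60.0, 0.1, date(2010, 11, 2)))
```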
A workload manager 330 can make use of the data in table 212 in planning workload redistributions 332. For example, if SSD 306 is suffering from an advanced wear level and a high wear rate, workload manager 330 can replace the workload running on node 302 with a less demanding workload to extend the useful life (in days) of SSD 306. Also, workload manager 330 can manage workloads so that not all SSDs are replaced at once. For example, workload manager 330 can act to ensure that the date SSD 306 is to be replaced is not close to the date SSD 308 is to be replaced. This will allow node 302 to continue uninterrupted operation while a replacement SSD is purchased and hot-swapped for SSD 306. Other wear-level management programs can be implemented in accordance with the desires of the owner of computer system 200.
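A minimal sketch of the staggering check, with an assumed minimum-gap policy knob, could flag any pair of SSDs whose projected replacement dates fall too close together, after which the workload manager would shift load away from one of them:

```python
from datetime import date
from itertools import combinations

def staggering_conflicts(end_dates, min_gap_days=30):
    """Return pairs of SSDs whose projected replacement dates are too close.

    `end_dates` maps SSD id -> projected end date; `min_gap_days` is an
    illustrative policy threshold, not a value taken from the patent.
    """
    conflicts = []
    for (a, da), (b, db) in combinations(end_dates.items(), 2):
        if abs((da - db).days) < min_gap_days:
            conflicts.append((a, b))
    return conflicts

# SSDs 306 and 308 projected to expire nine days apart would be flagged.
print(staggering_conflicts({"ssd306": date(2011, 3, 1), "ssd308": date(2011, 3, 10)}))
```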
As shown in
Code 408 is configured to implement a process 404, flow charted in
At process segment 412, SSD data manager 324 receives, collects, and stores data from the managed nodes. SSD data manager 324 organizes the collected data and stores it in table 212. At process segment 413, wear rates can be computed by comparing current wear levels with past wear levels. Also, projected end dates can be computed from wear-levels and wear rates.
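For instance, a wear rate can be derived from two stored samples as a simple difference over elapsed time; the sketch below assumes wear levels expressed as percentages and daily units, neither of which is mandated by the patent.

```python
from datetime import date

def wear_rate_pct_per_day(prev_pct: float, prev_date: date,
                          curr_pct: float, curr_date: date) -> float:
    """Rate of change of wear between two recorded samples."""
    days = (curr_date - prev_date).days
    if days <= 0:
        raise ValueError("current sample must be newer than the previous sample")
    return (curr_pct - prev_pct) / days

# 2% of additional wear over 20 days corresponds to 0.1% per day.
print(wear_rate_pct_per_day(58.0, date(2010, 10, 13), 60.0, date(2010, 11, 2)))
```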
At process segment 414, workload manager 330 uses the wear-level and wear-rate data as factors in determining periodic workload redistributions. Other factors may include processor utilization, communications bandwidth utilization, power consumption, etc. The workload redistribution can be used to extend the lifetimes of heavily used SSDs, to implement a conveniently staggered replacement policy, or implement other management policies.
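A placement score combining those factors could look like the following sketch; the weights, the normalization, and the sign convention are assumptions, since the patent does not prescribe how the factors are combined.

```python
def placement_score(wear_level_pct, wear_rate, cpu_util, bandwidth_util, power,
                    weights=(0.4, 0.2, 0.2, 0.1, 0.1)):
    """Lower score = better candidate to receive additional work.

    The caller normalizes all inputs to comparable 0-100 scales; the
    default weight vector is illustrative only.
    """
    factors = (wear_level_pct, wear_rate, cpu_util, bandwidth_util, power)
    return sum(w * f for w, f in zip(weights, factors))

# The node with the less-worn SSD and lighter load scores lower, so the
# workload manager would move work toward it.
print(placement_score(20, 5, 30, 40, 50) < placement_score(80, 20, 70, 60, 50))
```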
At process segment 415, SSD purchase agent 326 uses the wear-level and wear-rate data in making purchase recommendations. The wear-level and wear-rate data can be extrapolated to project an end-of-useful-life date for each SSD. If SSDs are to be replaced in batches, process segment 415 can provide for identifying which SSDs are to be replaced in the next batch rather than in some batch to be purchased further in the future. Also, process segment 415 can make recommendations to replace an SSD with a hard disk, e.g., where there are frequent writes and performance can be less than maximal.
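That decision, replace with a hard disk when writes are frequent and less-than-maximal performance is acceptable, otherwise with a replacement SSD, could be expressed roughly as follows; the write-frequency threshold is an assumed parameter.

```python
def replacement_recommendation(writes_per_day: float,
                               needs_maximal_performance: bool,
                               frequent_write_threshold: float = 1e6) -> str:
    """Recommend a replacement FRU for a worn SSD.

    `frequent_write_threshold` is an illustrative cutoff; the patent says
    only that the recommendation considers frequency of writes and whether
    performance may be less than maximal.
    """
    if writes_per_day >= frequent_write_threshold and not needs_maximal_performance:
        return "hard disk"      # heavy writes wear SSDs quickly; a hard disk avoids that wear
    return "replacement SSD"    # otherwise keep SSD-class performance

print(replacement_recommendation(5e6, needs_maximal_performance=False))  # -> hard disk
```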
Herein, “storage media” refers to non-transitory tangible computer-readable storage media. Herein “code” refers to computer-readable data and computer-executable instructions. Herein, a “processor” is hardware configured to execute computer-executable instructions, whether that hardware is embodied in a single element (e.g. an integrated circuit) or distributed among plural elements. Herein, a “communications device” is a hardware element used to receive data into a node or transmit data from a node or both. Herein, a “node” is a computer element including a processor, storage media, and at least one communications device.
Herein, “SSD wear level data” includes data indicating the wear level of an SSD, e.g., in terms of a number of write operations or as a percent of estimated SSD lifespan. “SSD wear level data” also encompasses associated data such as data identifying the relevant SSD, node, and workload. “SSD wear data” encompasses SSD wear level data and other data (e.g., wear rate and end date) computed using the SSD wear level data.
Herein, a “system” is a set of interacting elements, wherein the elements can be, by way of example and not of limitation, mechanical components, electrical elements, atoms, instructions encoded in storage media, and process segments. In this specification, related art is discussed for expository purposes. Related art labeled “prior art”, if any, is admitted prior art. Related art not labeled “prior art”, is not admitted prior art. The illustrated and other described embodiments, as well as modifications thereto and variations thereupon are within the scope of the following claims.
Patent | Priority | Assignee | Title |
10049757, | Aug 11 2016 | SK HYNIX INC | Techniques for dynamically determining performance of read reclaim operations |
10437488, | Dec 08 2015 | KYOCERA Document Solutions Inc. | Electronic device and non-transitory computer readable storage medium |
11385800, | Jul 31 2018 | Kioxia Corporation | Information processing system for controlling storage device |
Patent | Priority | Assignee | Title |
7809900, | Nov 24 2006 | Seagate Technology LLC | System, method, and computer program product for delaying an operation that reduces a lifetime of memory |
7865761, | Jun 28 2007 | EMC IP HOLDING COMPANY LLC | Accessing multiple non-volatile semiconductor memory modules in an uneven manner |
8010738, | Jun 27 2008 | EMC IP HOLDING COMPANY LLC | Techniques for obtaining a specified lifetime for a data storage device |
8239617, | Feb 12 2010 | EMC Corporation | Enterprise data storage system using multi-level cell flash memory |
9195588, | Nov 02 2010 | Hewlett Packard Enterprise Development LP | Solid-state disk (SSD) management |
20040177143, | |||
20080082725, | |||
20090055465, | |||
20090063895, | |||
20090300277, | |||
20100082890, | |||
20100088461, | |||
20100174851, | |||
20100250831, | |||
20100262793, | |||
20110010487, | |||
20110035535, | |||
20110060865, | |||
20110307679, | |||
20130179624, | |||
20140068153, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Nov 02 2010 | CEPULIS, DARREN J | Hewlett-Packard Development Company, LP | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 037962 | /0460 | |
Oct 16 2015 | Hewlett Packard Enterprise Development LP | (assignment on the face of the patent) | / | |||
Oct 27 2015 | Hewlett-Packard Development Company, LP | Hewlett Packard Enterprise Development LP | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 038076 | /0001 |
Date | Maintenance Fee Events |
Jun 23 2021 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Date | Maintenance Schedule |
Jan 16 2021 | 4 years fee payment window open |
Jul 16 2021 | 6 months grace period start (w surcharge) |
Jan 16 2022 | patent expiry (for year 4) |
Jan 16 2024 | 2 years to revive unintentionally abandoned end. (for year 4) |
Jan 16 2025 | 8 years fee payment window open |
Jul 16 2025 | 6 months grace period start (w surcharge) |
Jan 16 2026 | patent expiry (for year 8) |
Jan 16 2028 | 2 years to revive unintentionally abandoned end. (for year 8) |
Jan 16 2029 | 12 years fee payment window open |
Jul 16 2029 | 6 months grace period start (w surcharge) |
Jan 16 2030 | patent expiry (for year 12) |
Jan 16 2032 | 2 years to revive unintentionally abandoned end. (for year 12) |