A polling interval of a poller in a data storage system is dynamically adjusted to balance latency and resource utilization. Performance metrics are regularly estimated or measured, and derived values are calculated, including a latency share and a cost per event, based on a latency metric for polling latency and a cost metric for use of processing resources. A target latency share is adjusted by (1) an increase based on a CPU utilization metric being above a utilization threshold, and (2) a reduction based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold. The polling interval is adjusted by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share.
Claims
1. A method of dynamically adjusting a polling interval of a poller to balance latency and resource utilization of the poller for detecting I/O operation events in a data storage system, comprising:
regularly estimating or measuring performance metrics for the poller and calculating derived values including a latency share and a cost per event, the performance metrics including at least a latency metric and a cost metric, the latency metric reflecting latency related to the polling interval, the cost metric reflecting use of processing resources by the poller, the latency share being the latency metric divided by an overall synchronous I/O operation latency value, the cost per event being the cost metric divided by an events count;
adjusting a target latency share by (1) increasing the target latency share based on a CPU utilization metric being above a utilization threshold, and (2) reducing the target latency share based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold; and
adjusting the polling interval by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share.
11. A data storage system having storage devices and storage processing circuitry configured and operative to execute computer program instructions to dynamically adjust a polling interval of a poller to balance latency and resource utilization of the poller for detecting I/O operation events in the data storage system, by:
regularly estimating or measuring performance metrics for the poller and calculating derived values including a latency share and a cost per event, the performance metrics including at least a latency metric and a cost metric, the latency metric reflecting latency related to the polling interval, the cost metric reflecting use of processing resources by the poller, the latency share being the latency metric divided by an overall synchronous I/O operation latency value, the cost per event being the cost metric divided by an events count;
adjusting a target latency share by (1) increasing the target latency share based on a CPU utilization metric being above a utilization threshold, and (2) reducing the target latency share based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold; and
adjusting the polling interval by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
maintaining a recent history of values of the polling interval, latency share and cost per event; and
determining, based on the recent history of values, whether decreasing the polling interval has increased the cost per event without reducing the latency share, and if so then (a) increasing the polling interval to a recent increased value and (b) preventing the polling interval from being decreased for some number of subsequent cycles notwithstanding that the value of the target latency share would otherwise result in decreasing the polling interval.
10. The method of
12. The data storage system of
13. The data storage system of
14. The data storage system of
15. The data storage system of
16. The data storage system of
17. The data storage system of
18. The data storage system of
19. The data storage system of
maintain a recent history of values of the polling interval, latency share and cost per event; and
determine, based on the recent history of values, whether decreasing the polling interval has increased the cost per event without reducing the latency share, and if so then (a) increase the polling interval to a recent increased value and (b) prevent the polling interval from being decreased for some number of subsequent cycles notwithstanding that the value of the target latency share would otherwise result in decreasing the polling interval.
20. The data storage system of
Description
The present invention is related to the field of data storage systems, and in particular to data storage systems employing polling for detecting and processing events during data storage operations such as reads and writes.
A method is disclosed of dynamically adjusting a polling interval of a poller to balance latency and resource utilization of the poller for detecting I/O operation events in a data storage system.
The method includes regularly estimating or measuring performance metrics for the poller and calculating derived values including a latency share and a cost per event, wherein the performance metrics include at least a latency metric and a cost metric, the latency metric reflecting latency related to the polling interval, and the cost metric reflecting use of processing resources by the poller. The latency share is the latency metric divided by an overall synchronous I/O operation latency value, and the cost per event is the cost metric divided by an events count.
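As an illustration of the two derived values, the following minimal sketch computes them as simple ratios (the function and variable names are hypothetical, chosen for readability, and do not appear in the disclosure):

```python
# Minimal sketch of the two derived values; all names are illustrative.

def latency_share(poller_latency_us: float, overall_sync_io_latency_us: float) -> float:
    """Fraction of overall synchronous I/O latency attributable to the poller."""
    return poller_latency_us / overall_sync_io_latency_us

def cost_per_event(poller_cpu_time_us: float, events_count: int) -> float:
    """Average processing cost per detected event."""
    return poller_cpu_time_us / max(events_count, 1)  # guard against a zero event count
```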
A target latency share is adjusted by (1) increasing the target latency share based on a CPU utilization metric being above a utilization threshold, and (2) reducing the target latency share based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold.
The polling interval is then adjusted by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share.
The foregoing and other objects, features and advantages will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views.
Overview
Storage system IO processing time may be divided into two parts:
1) CPU processing time: this part includes actual CPU processing time, i.e., all the periods in which CPU cycles are spent processing the IO.
2) Waiting time: this part includes waiting on pollers. IO processing involves initiating operations and waiting for their completion; the storage system uses pollers that poll the interfaces for the new events the IO is waiting for.
The “polling” design, as opposed to an “interrupt” design, is optimized for a storage system that requires low latency and high IOPS (I/Os per second), since it avoids the context switches that occur when interrupts are involved. The current disclosure assumes use of a polling design.
To achieve low latency, the storage system preferably allows the pollers to run at a high frequency (optimally all the time) to detect events as soon as they occur.
However, running the pollers consumes CPU cycles that could otherwise be used for IO processing. Therefore, running the pollers at a high frequency may waste CPU cycles when there are no new events to poll. Moreover, if the system is completely idle, running the pollers all the time (at an unreasonable frequency) may substantially (and unjustifiably) increase power consumption.
On the other hand, running the pollers at a low frequency will impact (i.e., increase) the IO latency, because an IO may wait longer to receive an event before it can resume processing.
Therefore, there is a tradeoff in running the pollers, between IO latency on one hand and wasted CPU cycles and power consumption on the other. The higher the frequency, the lower the IO latency, but at the cost of CPU cycles; the lower the frequency, the fewer CPU cycles are wasted, but at the cost of increased IO latency.
The relative impact of the polling rate may be more critical in scenarios where the system is underloaded. When the system is more heavily loaded, the IO latency includes long waits for different types of resources (locks, backend, etc.), so the latency is generally high and the portion spent waiting for a poll event is not as significant.
Existing naïve approaches either poll constantly (in which case significant CPU cycles may be wasted) or poll based only on CPU utilization or some other simplistic model (e.g., reduce poller frequency when there are no IOs and increase it when IOs arrive), which may significantly impact IO latency.
It should be noted that the current system load state may not be a good metric for reaching an optimal poller frequency. Storage systems often optimize performance by postponing certain operations (such as dedupe, compression, garbage collection, etc.) to a later time when the system is idle; this allows the system to reach very high IOPS during peak hours, but the system accumulates “debt” that must be handled/paid within a certain time interval. Thus, even if the IO rate is low, the system may need CPU to handle high system debt.
Furthermore, no fixed poller frequency could be optimal for all cases, since IO load changes over time, background load changes, etc.
To address the above problems, a technique is used to dynamically adjust each poller's frequency individually, to optimally balance IO latency against wasted CPU cycles, which can help improve overall system performance and efficiency.
Embodiments
Components of Waiting Time
At a high level, data storage system IO processing may be divided into two general activities: (1) actual CPU processing of the IO, and (2) waiting for events detected by the pollers 22, as described in the Overview above.
Thus, the pollers 22 are operated in a manner providing for dynamic adjustment of their polling frequency/intervals to provide a desired balance between IO latency and polling-related “cost” (CPU utilization), which can improve overall system performance and efficiency. The dynamic adjustment may be performed by specialized functionality of the pollers 22 themselves or another component of the storage processing 12.
At 40, the process regularly estimates and/or measures performance metrics for the poller and calculates derived values including a latency share and a cost per event. The performance metrics include at least a latency metric and a cost metric, where the latency metric reflects latency related to the polling interval, and the cost metric reflects use of processing resources by the poller. The latency share is the latency metric divided by an overall I/O operation latency value, and the cost per event is the cost metric divided by an events count.
The various values used in step 40 and the remaining steps of
At 42, the process adjusts a target latency share by (1) increasing the target latency share based on a CPU utilization metric being above a utilization threshold, and (2) reducing the target latency share based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold.
At 44, the polling interval is conditionally adjusted by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share. As described below, it may be helpful to include some hysteresis by using high and low limits of a small range about the target latency share (i.e., increase interval only if current latency share is sufficiently below the target latency share, and decrease interval only if current latency share is sufficiently above the target latency share). Generally, if neither condition (1) nor (2) is met, then the polling interval can be left as it is, i.e., no adjustment.
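Putting steps 40, 42, and 44 together, one adjustment cycle might look like the following sketch. All thresholds, step sizes, and attribute names here are assumed placeholders rather than values from the disclosure; the hysteresis band is omitted for brevity and is shown separately below.

```python
# Illustrative single adjustment cycle combining steps 40, 42, and 44.
# All constants are assumed placeholders, not values from the disclosure.

UTILIZATION_THRESHOLD = 0.70    # assumed CPU utilization threshold
COST_PER_EVENT_THRESHOLD = 1.0  # assumed threshold, microseconds per event
SHARE_STEP = 0.01               # assumed step for the target latency share
INTERVAL_STEP_US = 5.0          # assumed step for the polling interval

def adjustment_cycle(poller, cpu_utilization: float) -> None:
    # Step 40: derive the latency share and cost per event from measured metrics.
    share = poller.latency_metric / poller.overall_io_latency
    cpe = poller.cost_metric / max(poller.events_count, 1)

    # Step 42: adjust the target latency share.
    if cpu_utilization > UTILIZATION_THRESHOLD:
        poller.target_share += SHARE_STEP  # CPU is scarce; tolerate more polling latency
    elif cpe < COST_PER_EVENT_THRESHOLD:
        poller.target_share -= SHARE_STEP  # polling is cheap; demand lower latency

    # Step 44: move the polling interval toward the target latency share.
    if share < poller.target_share:
        poller.interval_us += INTERVAL_STEP_US  # under budget; poll less often
    elif share > poller.target_share:
        poller.interval_us -= INTERVAL_STEP_US  # over budget; poll more often
```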
Illustrative Example
A more specific example and detailed description of the general process of
Polling frequency for each poller 22 is dynamically adjusted based on the following metrics that are measured/updated in an ongoing regular manner:
The dynamic adjustment is intended to find a good balance between IO latency impact and the CPU time spent delivering each poller event for the specific time point/conditions (i.e., CPU utilization, IO latency, etc.). For each poller 22, this is done using the following variables:
Based on the Target_Extra_Latency_Share and the current poller Extra_Latency_Share, the Poll Interval (the time between consecutive polls) is adjusted (increased or decreased) to move the poller latency in the direction of the Target_Extra_Latency_Share. The checking and adjusting are performed for each of the pollers 22 individually. Note that this process may be applied to only those pollers 22 that are involved in the IO synchronous path and thus impact the IO latency (other pollers may have less overall performance impact and thus may be operated more statically).
I. Dynamically Update the Target_Extra_Latency_Share
1) The following metrics (described above) are regularly monitored:
2) Additionally, the following derivations of the metrics above are calculated:
3) Initialization:
The remaining steps below are performed for each poller individually.
4) If the Poller_Cost_Per_Event is below a predefined threshold (e.g., 1 us), then set the Target_Extra_Latency_Share for this poller to some minimum value (e.g., a Minimal_Extra_Latency_Share, i.e., a percentage that adds only negligible extra IO latency).
5) For other cases: at regular intervals (e.g., every 1-5 seconds), check if the Target_Extra_Latency_Share needs to be either increased or decreased (default is no adjustment):
The thresholds above may be configurable and/or dynamically tunable. The use of Normalized Depth in the above test can help with accuracy in a system in which the CPU Utilization is also used separately to manage the level of background activity.
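A hedged sketch of this target-share update follows. Since the step-5 increase/decrease conditions are not reproduced in full above, they are approximated here from the earlier summary (CPU utilization versus a utilization threshold, and cost per event versus a cost-per-event threshold), and every constant is an assumed placeholder.

```python
# Sketch of steps 4 and 5; constants and attribute names are assumed.

MINIMAL_EXTRA_LATENCY_SHARE = 0.005  # assumed "negligible" share
CHEAP_POLLER_THRESHOLD_US = 1.0      # the 1 us example from step 4
UTILIZATION_THRESHOLD = 0.70         # assumed
SHARE_STEP = 0.01                    # assumed

def update_target_extra_latency_share(poller, cpu_utilization: float) -> None:
    # Step 4: a very cheap poller is simply pinned at the minimal share.
    if poller.cost_per_event_us < CHEAP_POLLER_THRESHOLD_US:
        poller.target_share = MINIMAL_EXTRA_LATENCY_SHARE
        return

    # Step 5: otherwise nudge the target up or down; the default is no change.
    if cpu_utilization > UTILIZATION_THRESHOLD:
        poller.target_share += SHARE_STEP
    elif poller.cost_per_event_us < poller.cost_per_event_threshold_us:
        poller.target_share -= SHARE_STEP

    # Keep the target from falling below the minimal share.
    poller.target_share = max(poller.target_share, MINIMAL_EXTRA_LATENCY_SHARE)
```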
II. Dynamically Adjust the Poll Interval to Reach the Target_Extra_Latency_Share
Every time window, check if the Polling Interval needs to be adjusted. To avoid jitter and provide hysteresis, a range about the Target_Extra_Latency_Share is used that is defined by two thresholds, High_Target_Extra_Latency_Share and Low_Target_Extra_Latency_Share, where High_Target_Extra_Latency_Share is always higher than Low_Target_Extra_Latency_Share.
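As a sketch of this hysteresis, assuming the two thresholds are placed a fixed fraction above and below the target (the 10% spacing, step size, and clamping limits are all assumptions):

```python
# Sketch of the hysteresis band; spacing, step, and limits are assumed.

def adjust_poll_interval(poller) -> None:
    high = poller.target_share * 1.10  # High_Target_Extra_Latency_Share (assumed +10%)
    low = poller.target_share * 0.90   # Low_Target_Extra_Latency_Share (assumed -10%)

    if poller.extra_latency_share > high:
        # Too much added latency: poll more often (shorter interval).
        poller.interval_us = max(poller.interval_us - poller.step_us,
                                 poller.min_interval_us)
    elif poller.extra_latency_share < low:
        # Comfortably under target: poll less often to save CPU cycles.
        poller.interval_us = min(poller.interval_us + poller.step_us,
                                 poller.max_interval_us)
    # Within the band: leave the interval unchanged to avoid jitter.
```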
The above operation effects a dynamic adjustment of polling interval based on balancing polling latency with polling cost. It will be appreciated that various specific values may be changed or tuned in a given embodiment or instance (e.g., thresholds, step sizes, etc.).
The basic technique as described above may be augmented to address potential special cases.
For example, in some conditions it may be that increasing polling frequency does not actually reduce latency share, so the increase represents wasted additional processing. In this case, a recent history of relevant values can be maintained (i.e., the Poll Interval as well as the Poller_Cost_Per_Event and Extra_Latency_Share, for the last several (e.g., 3-5) adjustment cycles). If it is detected that increasing the Poll frequency causes growth of Poller_Cost_Per_Event without reducing Extra_Latency_Share, it means that increasing the poller frequency is resulting in wasted extra polling. In this case, the poll frequency can be reduced to the value it was a few cycles ago and then prevented from being increased over some number (e.g., 10-50) of succeeding cycles, even if the basic processing as described above recommends an increase.
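One way to sketch this guard, assuming a history length of five cycles and a hold-off of 20 cycles (within the 10-50 range mentioned above):

```python
from collections import deque

HISTORY_LEN = 5      # assumed: the "last several" adjustment cycles
HOLDOFF_CYCLES = 20  # assumed value within the 10-50 range mentioned above

class WastedPollingGuard:
    """Detects when faster polling only adds cost, then backs off."""

    def __init__(self):
        # Each sample: (poll_interval_us, cost_per_event_us, extra_latency_share)
        self.history = deque(maxlen=HISTORY_LEN)
        self.holdoff = 0

    def record(self, interval_us: float, cpe_us: float, share: float) -> None:
        self.history.append((interval_us, cpe_us, share))

    def may_increase_frequency(self) -> bool:
        # While holding off, the base algorithm must not shorten the interval,
        # even if the target latency share would otherwise call for it.
        if self.holdoff > 0:
            self.holdoff -= 1
            return False
        return True

    def check_and_revert(self, poller) -> None:
        if len(self.history) < HISTORY_LEN:
            return
        oldest, newest = self.history[0], self.history[-1]
        if (newest[0] < oldest[0]            # interval shrank (faster polling) ...
                and newest[1] > oldest[1]    # ... cost per event grew ...
                and newest[2] >= oldest[2]): # ... and latency share did not improve
            poller.interval_us = oldest[0]   # revert to the earlier, larger interval
            self.holdoff = HOLDOFF_CYCLES    # block frequency increases for a while
```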
Additionally, some pollers 22 may handle multiple events during processing of an IO request, which increases their poll interval's impact on the average IO latency. This could be taken into account by calculating the average number of poller events per IO request and using it as a coefficient on the Extra_Latency_Share of each poller, as in the sketch below.
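For example, the coefficient could be applied as in this minimal sketch, where avg_events_per_io is assumed to be measured elsewhere:

```python
def weighted_extra_latency_share(extra_latency_share: float,
                                 avg_events_per_io: float) -> float:
    # A poller that delivers several events per IO request contributes its
    # interval delay multiple times, so its latency share is weighted up.
    return extra_latency_share * avg_events_per_io
```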
While various embodiments of the invention have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims.
Kamran, Lior, Shveidel, Vladimir, Alkalay, Amitai