Based on a count of the number of dirty pages in a cache memory, the dirty pages are written from the cache memory to a storage array at a rate having a component proportional to the rate of change in the number of dirty pages in the cache memory. For example, a desired flush rate is computed by adding a first term to a second term. The first term is proportional to the rate of change in the number of dirty pages in the cache memory, and the second term is proportional to the number of dirty pages in the cache memory. The rate component has a smoothing effect on incoming I/O bursts and permits cache flushing to occur at a higher rate closer to the maximum storage array throughput without a significant detrimental impact on client application performance.
13. A non-transitory computer readable storage medium storing computer instructions that, when executed by a data processor, perform writeback to a storage array of a number of dirty pages of data in a cache memory by the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes:
computing a first term proportional to the rate of change in the dirty pages in the cache memory, and adding the first term to a second term to compute a desired rate at which the dirty pages are written from the cache memory to the storage array; and writing the dirty pages from the cache memory to the storage array at the desired rate.
7. A data storage system comprising:
a cache memory,
a storage array, and
a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, and
a non-transitory computer readable storage medium storing computer instructions that, when executed by the data processor, perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes:
computing a first term proportional to the rate of change in the dirty pages in the cache memory, and adding the first term to a second term to compute a desired rate at which the dirty pages are written from the cache memory to the storage array; and writing the dirty pages from the cache memory to the storage array at the desired rate.
1. A computer-implemented method of cache writeback in a data storage system, the data storage system including a cache memory, a storage array, and a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, the method comprising the data processor executing computer instructions stored on a non-transitory computer-readable storage medium to perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes:
computing a first term proportional to the rate of change in the dirty pages in the cache memory, and adding the first term to a second term to compute a desired rate at which the dirty pages are written from the cache memory to the storage array; and writing the dirty pages from the cache memory to the storage array at the desired rate.
10. A data storage system comprising:
a cache memory,
a storage array, and
a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, and
a non-transitory computer readable storage medium storing computer instructions that, when executed by the data processor, perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes computing a change in the number of dirty pages in the cache memory over an interval of time, and computing a number of the dirty pages to be written from the cache memory to the storage array over the interval of time from the computed change in the number of dirty pages in the cache memory over the interval of time, and writing, from the cache memory to the storage array, the computed number of the dirty pages to be written from the cache memory to the storage array over the interval of time.
4. A computer-implemented method of cache writeback in a data storage system, the data storage system including a cache memory, a storage array, and a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, the method comprising the data processor executing computer instructions stored on a non-transitory computer-readable storage medium to perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes computing a change in the number of dirty pages in the cache memory over an interval of time, and computing a number of the dirty pages to be written from the cache memory to the storage array over the interval of time from the computed change in the number of dirty pages in the cache memory over the interval of time, and writing, from the cache memory to the storage array, the computed number of the dirty pages to be written from the cache memory to the storage array over the interval of time.
11. A data storage system comprising:
a cache memory,
a storage array, and
a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, and
a non-transitory computer readable storage medium storing computer instructions that, when executed by the data processor, perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes computing the rate of change in the number of dirty pages in the cache memory over a first interval of time, and the method further includes determining whether the change in the number of dirty pages over the first interval of time plus the count of the number of dirty pages in the cache memory obtained in step (a) exceeds a threshold, and upon determining that the change in the number of dirty pages over the first interval of time plus the count of the number of dirty pages in the cache memory obtained in step (a) exceeds a threshold, computing a number of dirty pages to be written from the cache memory to the storage array over a second interval of time less than the first interval of time, and writing, from the cache memory to the storage array over the second interval of time, the computed number of the dirty pages to be written from the cache memory to the storage array over the second interval of time.
5. A computer-implemented method of cache writeback in a data storage system, the data storage system including a cache memory, a storage array, and a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean, the method comprising the data processor executing computer instructions stored on a non-transitory computer-readable storage medium to perform the steps of:
(a) obtaining a count of the number of dirty pages in the cache memory; and
(b) based on the count of the number of dirty pages in the cache memory obtained in step (a), writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory;
wherein step (b) includes computing the rate of change in the number of dirty pages in the cache memory over a first interval of time, and the method further includes determining whether the change in the number of dirty pages over the first interval of time plus the count of the number of dirty pages in the cache memory obtained in step (a) exceeds a threshold, and upon determining that the change in the number of dirty pages over the first interval of time plus the count of the number of dirty pages in the cache memory obtained in step (a) exceeds a threshold, computing a number of dirty pages to be written from the cache memory to the storage array over a second interval of time less than the first interval of time, and writing, from the cache memory to the storage array over the second interval of time, the computed number of the dirty pages to be written from the cache memory to the storage array over the second interval of time.
2. The computer-implemented method as claimed in
3. The computer-implemented method as claimed in
6. The computer-implemented method as claimed in
8. The data storage system as claimed in
9. The data storage system as claimed in
12. The data storage system as claimed in
14. The non-transitory computer readable storage medium as claimed in
15. The non-transitory computer readable storage medium as claimed in
16. The non-transitory computer readable storage medium as claimed in
17. The non-transitory computer readable storage medium as claimed in
The present invention relates generally to a storage system having a buffer cache and a storage array. The present invention relates specifically to a cache write-back method for the storage system.
A storage system typically includes a programmed data processor, a storage array, and a buffer cache. Most buffer caches are write-back in nature for asynchronous or fast write operations. In response to an asynchronous write request, pages of data are written to the cache, the pages in the cache are marked dirty, an acknowledgement of the write operation is returned to the originator of the write request, and some time later the pages are written to the storage array and the pages in the buffer cache are marked clean. In this fashion, the cache hides high disk latencies and avoids performance loss from disk seeks during random access.
Typically a write-back task in a cache manager program ensures that no dirty page remains in the cache for more than a certain length of time. Typically the write-back task also is designed to provide that the cache will have more than a certain minimum number of free memory pages for servicing fast write requests unless the storage system is overloaded. Typically the maximum persistence time of a dirty page in the cache is thirty seconds, and the minimum number of free pages is based on the memory capacity of the cache and is a small fraction of the memory capacity of the cache.
For example, traditional UNIX® file system managers used a periodic update policy, writing back all delayed-write data once every 30 seconds. Such a method of periodic update is easy to implement but degrades synchronous read-write response times at these periodic intervals. Some cache managers also have used high and low watermarks to address space exhaustion. If the number of dirty pages in the cache hit an upper threshold, called the high water mark, then the cache manager wrote back to the storage array as many dirty pages as needed until the number of dirty pages in the cache reached a lower threshold, called the low water mark. The dirty pages were kept on a least recently used (LRU) list so that the least recently used dirty pages were written back before the most recently used dirty pages.
Early NFS clients used relatively small local caches and frequently initiated the flush of dirty pages to the disk using commit operations sent to the storage system. The early storage systems also had relatively small caches, so that the commit operations usually occurred at a rate greater than once every three seconds. Under these conditions, the periodic update policy provided acceptable performance because a relatively small fraction of the write-back operations were performed during the periodic update. Nevertheless, it was recognized that the use of synchronous writes or commit operations limited the performance of application I/O.
Modern NFS clients, with larger local caches, improved their performance by sending multiple I/Os in parallel to the storage system, using delayed writes, without sending any commit operations and counting on the server to flush data to disk at its own leisure. In this case the server flushed the dirty pages based on its cache occupancy state, disconnected from the application intent. It was recognized that under these conditions, the periodic update policy would interfere with application I/O during periodic flushes. It was also recognized that watermark based policies would also interfere with application I/O while the number of dirty pages was being reduced from the high watermark to the low watermark. Therefore designers of data storage systems have been investigating alternative write-back cache update policies that would improve write-back performance and reduce interference with application I/O.
Some proposed update policies trickle periodic flush operations of a small number of pages as a background task with lowest priority, to keep the number of dirty pages at a lower level and prevent the cache from filling up too fast. Other similar policies schedule the user data flushes as asynchronous disk writes as soon as a modification fills an entire page. At the other end of the spectrum, some proposed policies modify the watermark technique by flushing dirty pages based on retention time. Other similar policies attempt to update the retention time based on disk performance characteristics, for example by reducing the retention time to match the ratio between the buffer cache size and the disk transfer rate. Others replace the time-driven update policies with dynamic policies, which choose when to schedule disk writes based on the system load and the disk queue length.
More recently, the combination of periodic update and watermark based writeback policies used in the vast majority of storage systems has not been keeping up with the higher speeds of application incoming writes, resulting in a significant performance penalty. With larger caches and dramatic increases in the storage system workloads, the performance penalty has become more detrimental to file system and user application performance. Therefore there is renewed interest in alternative write-back cache update policies that do not require extensive changes to existing cache management programs yet would improve writeback performance and reduce interference with file system and user application performance.
The present invention recognizes that in a storage system, it is desirable to flush dirty pages from a cache memory to a storage array at a rate proportional to the rate at which the dirty pages are generated by incoming I/O requests. Existing cache managers, however, do not measure the rate at which dirty pages are generated by incoming I/O requests. Moreover, the rate at which dirty pages are generated by incoming I/O requests is dependent upon a number of factors, such as the rate of incoming I/O requests that are write requests, the size of each I/O write request, alignment of each I/O write request with the pages in the cache, and whether each write request is accessing a clean or dirty page. Sometimes a full page is cached if any part of it is accessed by a write request, and sometimes a dirty page is accessed by multiple write requests before the dirty page is flushed from cache memory to the storage array.
The present invention recognizes that the number of dirty pages in the cache is maintained by existing cache managers, and the number of dirty pages in the cache can be used to control the rate of flushing of dirty pages from the cache memory to disk storage to obtain good cache writeback performance comparable to measuring the rate at which incoming I/O requests generate dirty pages and flushing the dirty pages from the cache at a rate proportional to the rate at which the dirty pages are generated by incoming I/O requests. Comparable writeback performance can be obtained because the rate at which pages become dirty in the cache memory is equal to the rate of change in the number of dirty pages in the cache memory minus the rate at which dirty pages are flushed from the cache. Therefore adjustment of the rate of flushing of dirty pages from the cache memory to the storage array based on the rate of change in the number of dirty pages in the cache memory is a kind of feedback control of the rate of flushing in response to the rate of creation of the dirty pages in the cache by incoming I/O requests. Feedback control system theory can be applied to the properties of the cache and storage array and the characteristics of the incoming I/O requests to obtain a feedback control function that is most responsive to the incoming I/O requests for good writeback performance and minimal interference with processing of the incoming I/O requests.
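As an illustrative restatement of this feedback relation (the symbols here are introduced only for illustration and do not appear in the description): if D(t) is the number of dirty pages, G(t) the rate at which incoming I/O requests make pages dirty, and F(t) the flush rate, then the count evolves as dD/dt = G(t) − F(t), so G(t) can be recovered as dD/dt + F(t). Driving the flush rate F from the observed dD/dt therefore acts as feedback on the unmeasured generation rate G without measuring G directly.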
In accordance with one aspect, the invention provides a computer-implemented method of cache writeback in a data storage system. The data storage system includes a cache memory, a storage array, and a data processor coupled to the cache memory and coupled to the storage array for modifying pages of data in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean. The method includes the data processor executing computer instructions stored on a non-transitory computer-readable storage medium to perform the steps of obtaining a count of the number of dirty pages in the cache memory, and based on the count of the number of dirty pages in the cache memory, writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory.
In accordance with another aspect, the invention provides a data storage system including a cache memory, a storage array, a data processor, and a non-transitory computer readable storage medium. The data processor is coupled to the cache memory and coupled to the storage array for modifying pages in the cache memory so that a number of the pages in the cache memory become dirty, and writing the dirty pages from the cache memory to the storage array so that the dirty pages become clean. The non-transitory computer readable storage medium stores computer instructions that, when executed by the data processor, perform the steps of obtaining a count of the number of dirty pages in the cache memory, and based on the count of the number of dirty pages in the cache memory, writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory.
In accordance with a final aspect, the invention provides a non-transitory computer readable storage medium. The non-transitory computer readable storage medium stores computer instructions that, when executed by a data processor, perform writeback to a storage array of a number of dirty pages in a cache memory by the steps of obtaining a count of the number of dirty pages in the cache memory, and based on the count of the number of dirty pages in the cache memory, writing the dirty pages from the cache memory to the storage array at a rate having a component proportional to a rate of change in the number of dirty pages in the cache memory.
Additional features and advantages of the invention will be described below with reference to the drawings, in which:
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown in the drawings and will be described in detail. It should be understood, however, that it is not intended to limit the invention to the particular forms shown, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims.
With reference to
The storage server 21 includes a data processor 31, program memory 32, random access memory 33, network adapters 34, 35 for linking the data processor 31 to the data network 20, and disk adapters 36, 37, 38, 39, and 40 for linking the data processor 31 to respective strings of the disk drives 41 to 50 in the storage array. The data processor 31 is a general purpose digital computer data processor including one or more core central processing units (CPUs) for executing computer program instructions stored in the program memory 32.
The program memory 32 is a non-transitory computer readable storage medium, such as electrically erasable and programmable read-only memory (EEPROM). In general, non-transitory computer readable storage medium is a physical device or physical material which serves to store computer-readable data on a permanent or semi-permanent basis. Examples of other kinds of non-transitory computer readable storage medium include magnetic disks, magnetic tape, and optical disks. The program memory 32 stores an operating system program 51 and a program stack 52 executed under control of the operating system program 51.
The random access memory 33 includes a cache memory 53 storing pages of data 54. A page of data in the cache is a unit of granularity of data that is read from the storage array, written into the cache memory, modified in the cache memory in response to a write request from a client so that the page of data becomes dirty in the cache memory, and later written from the cache memory back to the storage array so that the dirty page becomes clean. For example, each of the disk drives 41 to 50 performs an atomic write operation upon a block of data known as a disk block or sector, the cache memory is configured to have a fixed page size, and the fixed page size includes a whole number of one or more disk blocks. In a file server, for example, the page size is the same size as a file system block, the file system block size is four kilobytes, each file system block includes eight disk blocks, and each disk block includes 512 bytes.
Typically the storage server 21 is configured to respond to an asynchronous or fast write request from a client by writing new data to a page of data in the cache memory, and then immediately returning an acknowledgement of the write request to the client, before the new data is written back to the storage array 26. Under normal circumstances it is desired to write each dirty page of data from the cache memory 53 back to the storage array within a certain time after the dirty page becomes dirty. A typical way of ensuring such a maximum persistence time of a dirty page in the cache is to place each dirty page onto the tail of a dirty LRU list 55 kept in the cache memory 53. A background writeback task services the dirty LRU list by writing the dirty page at the head of the dirty LRU list from the cache memory 53 back to the storage array 26, and then removing the dirty page from the head of the dirty LRU list.
The command interpreter 62 is responsive to configuration and control commands from a systems administrator. For example, the systems administrator may set up access controls limiting the clients that have access to the storage array, and may also partition the storage array into logical volumes or logical units of storage (LUN) accessible to particular clients.
The program stack 52 includes a network stack 63, a metadata manager 64, a cache manager 65, a logical-to-physical translation module 66, and disk drivers 67. The network stack 63 implements network protocols such as TCP/IP for communication with the clients (22, 23 in
The cache manager 65 is responsible for maintaining cache state information such as a list of free pages of the cache memory, a hash index for determining whether data at a specified logical storage address is found in a page of data in the cache and if so the cache memory address of the page, and a least recently used list of all used pages of the cache memory. The cache manager 65 includes a writeback task 68 responsible for maintaining the dirty LRU list while writing dirty cache pages from the cache memory to the storage array.
The logical-to-physical translation module 66 maps logical addresses used by the cache manager to physical addresses of storage blocks such as disk sectors in the storage array (26 in
The disk drivers 67 provide an interface between the logical-to-physical translation module 66 and the disk adapters (36-40.)
The present invention more particularly concerns programming the writeback task so that the number of dirty pages in the cache controls the rate of flushing of dirty pages from the cache memory to disk storage in such a way as to obtain good cache writeback performance and minimal interference with client I/O write operations. A primary goal is to minimize the number of dirty pages in the cache while preventing stoppage of incoming client I/O request streams and maintaining smooth utilization of disk I/O throughputs to maximize application performance. A secondary goal is to increase the scalability of storage arrays by preventing bursty I/O operations.
As introduced above, most storage array caches have used a watermark type of flushing procedure because it is very simple and easy to implement, but the watermark type of flushing procedure has frustrated disk throughput, which is measured in bytes per unit of time. For file servers, so long as the ratio between metadata pages and client data dirty pages was similar, the watermark type of flushing procedure was good enough to ensure smooth disk I/O access. Since the total amount of cached user data has become much higher in the new storage arrays, the watermark type of flushing procedure has become a performance bottleneck for the client applications. There is a disconnect between the large number of dirty pages cached in very large buffer caches and the time required to flush them to disk. The flush time is critical to the client application performance because flushing triggered by a high watermark causes stoppage of client application I/Os. The I/O throughput of the disk is limited and as long as the amount of data to be flushed is small, corresponding to small caches, the stoppage time is acceptable to applications that have a certain tolerance to stoppage before failure. Because the cache size has continued to increase much faster than the disk throughput, the stoppage time during watermark flushing has been increasing and is becoming unacceptable to applications and may result in frequent failures.
The problem of flushing client data dirty pages depends on the maximum I/O throughput of the storage array, and the maximum throughput of the storage array will limit application I/O performance. But the required behavior of the storage server is defined and provisioned to ensure that the client application performance will not be bounded before the maximum provisioned throughput is reached. This requirement translates into minimal or no application I/O stoppage even in the case of a large change in the incoming application I/O throughput to storage until the maximum storage array throughput is reached. The problem is complicated by the fact that the watermark methods wait for a trigger before starting the flush to disk. They have no ability to predict and compensate a priori for rapid changes in the incoming I/O rate as they are insensitive to the change in the number of dirty pages in the buffer cache. In general there is no correlation between the maximum disk speeds and the rate of change in the application I/O speeds.
As a result of the above analysis it is clear that flushing using watermark techniques is incapable of ensuring steady application performance as compared to say I/O rate based cache writeback techniques. Thus, in light of the fact that disk performance is measured in throughput and not in the amount of bytes written, it is desired to flush proportional to the I/O throughput. By controlling the disk throughput the effect of big changes in the incoming I/O stream will be mitigated to prevent I/O stoppage to the application. Of course a storm of I/Os in a very short period of time will be exposed to possible I/O stoppages, but for a limited period of time application failure is prevented.
It is desired to address the problem by using I/O rate proportional procedures with the goal of reducing I/O stoppage to an acceptable level for the majority of applications, including those that do not send any I/O commits to the storage server. This can be done by using the rate of change of the number of user data dirty pages in the buffer cache as an indication of the I/O rate because there is no convenient way of directly measuring the number of dirty pages generated by an application until the I/O write requests become dirty pages in the cache memory. Direct measurement of the number of dirty pages generated by an application is especially difficult because the size of the application I/Os are different from the cache page size and sometimes full pages are cached even if only part of them were written by the application, and some parts of a dirty page may be re-written multiple times in the cache before the dirty page is written to disk.
Assuming that almost all of the dirty pages in cache are generated by client application I/Os, the rate of change in the number of dirty pages in the buffer cache is a good indication of the rate at which the client application I/Os generate dirty pages in the buffer cache because the difference between the two is essentially the rate at which dirty pages are flushed from the cache memory to the storage array. In the absence of an overload condition, the rate at which dirty pages are flushed from the cache memory to the storage array can be estimated based on current conditions, such as the number of dirty pages to be flushed by the cache flushing task scheduled for execution.
In addition, in the absence of a significant change in the rate at which client application IOs generate dirty pages in the buffer cache, the dirty pages can be flushed at a relatively constant rate proportional to the number of dirty pages in the cache so that each dirty page is flushed within a desired persistence time since the page became dirty in the cache. The persistence time can be less than a certain persistence time limit and is calculated as the number of the dirty pages in the cache divided by the storage array bandwidth (B) used for writeback of the dirty pages from the cache memory to the disk array. The combination of the rate proportional flushing and the water level proportional flushing in effect predicts and compensates in a stochastic manner for unexpected very large fluctuations in the application I/O rate. This stochastic approach will ensure with high probability a minimum I/O stoppage time for the application without measuring instantaneous storage array throughput, but with knowledge of a maximum storage array disk throughput. For example, the forgetting factor is proportional to the ratio between the average disk write I/O latency and the maximum latency of the disk array.
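As a hedged numeric illustration (the figures below are assumptions chosen for clarity, not values from the description): with about one million dirty 4 KB pages in the cache (roughly 4 GB of dirty data) and a writeback bandwidth B of 50,000 pages per second, the persistence time computed as the number of dirty pages divided by B would be about 20 seconds, comfortably below a 30 second persistence limit of the kind mentioned earlier.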
By performing writeback of dirty pages based on the rate of change in the number of dirty pages in the cache memory, the writeback task uses a closed-loop feedback control method. Thus, the cache memory can be viewed as part of a dynamic closed loop system including the client and the storage server. In the traditional case the client-server loop was closed by the use of commits issued by the client to the server after the I/O was acknowledged by the server.
As shown in the closed loop model of
The write I/Os 72 in cache are equal to the write I/Os 71 from the client minus the I/Os 73 flushed. The dirty pages and buffer cache dynamics 74 may cause least recently used pages to be flushed on a priority basis, for example when the cache gets full of dirty pages and free pages are needed for servicing read or write requests. A writeback procedure 77 also flushes dirty pages 78 in response to application commits 76. The sum of the dirty pages flushed by the buffer cache dynamics 74 and the dirty pages 78 flushed by the writeback procedure 77 is the total dirty pages flushed 73.
The server must ensure that the metadata associated with the file was updated before sending the acknowledgement of the commit to the client. As a result the client sends the I/O and the commit and waits for the acknowledgement from the server before sending additional write I/Os. So this is a closed loop behavior as the error is zero only at the time when the dirty pages were flushed and only then will the client send the next write.
We can consider the error 72 as the difference between the number of write I/Os 71 sent by the client to the server and the number of pages 73 flushed to disk. The error will represent the amount of dirty pages in the buffer cache of the server. The closed loop ensures that the level of dirty pages in the buffer cache is driven to zero, with a delay of the round trip of the write I/O and the commit acknowledgements.
In order to improve write performance modern applications send multiple I/Os without sending to the server or the file system any commits requiring the server to writeback the dirty pages. Typically the flushing of dirty pages is based on a watermark of the dirty pages in the cache.
There is a disconnect between the time when the client sends the I/O and the moment when the dirty page is flushed to disk as the flush is based on the watermark of dirty pages in the buffer cache crossing a certain threshold value. In this case the dirty page generated by the client at time t0 is flushed at a much later time depending on the speed of arrival of dirty pages and the speed of the flush to disk at time t1.
As a result of this delay (t1−t0) and nonlinear behavior, the system enters into an oscillatory behavior reducing the speed of the I/Os. In
The ideal case would be to measure the number of I/Os sent by the clients to the server. But even if the client I/Os were measured on each client, the global number of dirty pages in the system would have to be assembled in a distributed manner, and that aggregate would be inaccurate by the time it is sent to the server. An alternative would be to measure the number of I/Os before the dirty pages are sent to the cache memory. This also is not feasible as the network layer does not know that the request will result in a dirty page until the page is marked as dirty in the cache memory.
In order to be able to reduce the delay in the system we should flush the dirty pages proportionally to the number of arriving dirty pages or the rate of incoming I/Os. While we cannot measure accurately the generation of dirty pages by the clients, we can accurately estimate the rate of arrival of dirty pages with a much lower delay from the generation time at the clients. The flush should then be done proportional to the arrival rate, assuming that the throughput to storage is lower than the maximum throughput of the array. If this can be achieved then the performance will increase up to the point when the storage throughput equals the maximum.
Any other flushing mechanism will keep the delay in the system high enough to keep the loop open. For example a “trickle” mechanism can be optimized for a certain maximum throughput. As shown in
The number of dirty pages in the cache will continue to increase at the rate proportional to the difference between the rate (N) of dirty pages arriving in the cache and the rate (n) of flushing dirty pages using the “trickle” mechanism. As “n” is constant and “N” can vary, for high arriving I/O rates, we can end up with N>n and then the number of dirty pages will increase at the speed of N−n. If we want to ensure that the dirty page water level is kept constantly low in the worst case scenario we should make “n” equal to the maximum inbound throughput arriving from the network (Nmax). The problem with this approach is that the flush will still be un-throttled depending on the incoming rate of dirty pages. So, a “trickle writeback” mechanism will just reduce the delay in the loop by flushing dirty pages arriving at the server faster than when watermarks are used.
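For example (with illustrative numbers only, not values from the text): if dirty pages arrive at N = 30,000 pages per second while a trickle mechanism flushes a fixed n = 10,000 pages per second, the dirty page count grows at N − n = 20,000 pages per second, so a 5 GB cache of 4 KB pages (about 1.3 million pages) would fill in roughly a minute.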
In order to evaluate the non-linear effect of high delays we instrumented an EMC Corporation CELERRA® brand NFS server to allow measuring the level of dirty pages in the buffer cache of the server (for 5 GB cache size) as well as the dirty page arrival rate.
We also analyzed the spectrum of both water level and rate of change of dirty pages in the buffer cache.
In view of the above observations, we developed and examined a number of different cache writeback procedures, each designed to more closely match the flush rate to the rate of incoming modifications. In general we chose to look primarily at writeback procedures that were self tuning. This was because choosing tuning parameters can be difficult and may not work well in highly dynamic situations. These writeback procedures sample the count of dirty pages in the cache repetitively over a small sampling period, as part of the criteria for determining the output flush rate. For all of the traces we examined the input rate of modification is significantly lower than the maximum output bandwidth. In general this means that there is sufficient write bandwidth available to write back the number of dirty pages scheduled by these writeback procedures.
The cache writeback procedures that we developed and examined include a modified trickle flush, a fixed interval procedure, a variable interval procedure, a quantum flush, and a rate of change proportional procedure. Simulation and experimental results, as further described below, indicate that the rate proportional method is reasonably easy to implement and provides a low stoppage time. The simulations also show that the variable interval procedure also has good behavior under rapid input changes. Therefore specific examples of the rate of change proportional procedure and the variable interval procedure will be described first using knowledge gained from the simulations and experiments, and then the specific procedures used in the simulations and experiments will be described, followed by the results of the simulations and experiments.
In a specific example, the first factor (k1) is a constant of about 1.0 and the second factor (k2) determines the persistence time of the dirty pages in the cache. Although the second factor could be a constant, it is desirable for the second factor to decrease with an increasing rate of change in the number of dirty pages in the cache so that when the rate of change in the number of dirty pages in the cache is equal to the maximum storage array throughput, dirty pages will be written from the cache memory back to the storage array at the maximum storage array throughput independent of the level of the cache. These conditions will occur if the first factor (k1) is equal to one and the second factor (k2) is equal to a constant (a) times (B−R)/B, where “R” is the rate of change in the number of dirty pages in the cache, and “B” is the maximum storage array throughput for flushing dirty pages from the cache. The constant (a) determines the persistence time of a dirty page in the cache.
In step 124, a new desired flush rate (F) is computed based on the rate of change (R) in the number of dirty pages in the cache. In this example, the computation includes computing a first term (k1*R) proportional to the rate of change (R) in the number of dirty pages in the cache memory, and computing a second term (k2*D) proportional to the number of dirty pages in the cache memory. The factor (k1), for example, is equal to one, and the second term (k2*D) is proportional to the number of dirty pages in the cache memory by a factor (k2) proportional to a difference (T1*B−R) between the maximum bandwidth (T1*B) of the storage array (in units of pages per sampling interval) and the rate of change (R) in the number of dirty pages in the cache memory (also in units of pages per sampling interval). Thus, the units of B in this example are pages per second.
In step 125, dirty pages are written from the cache memory to the storage array at the new flush rate (F), for example by initiating a new task to flush (F) dirty pages over an iteration interval (T1). In step 126, the value of the variable (d) is set equal to the value of the variable (D). In step 127, execution of the periodic writeback control task is suspended for the remainder of the sampling interval (T1) and then execution resumes at the next sampling time and loops back to step 122.
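A minimal sketch of this periodic control loop (steps 122 through 127) follows. The cache object with dirty_page_count() and flush_pages() methods is a hypothetical stand-in for the cache manager's internals, the default parameter values are illustrative assumptions, and the per-interval normalization of the second factor is an assumption made for dimensional consistency with the description above.

import time

def rate_proportional_writeback_task(cache, T1=1.0, B=50000.0, k1=1.0, a=0.1):
    # Sketch of the periodic writeback control loop of steps 122 through 127.
    # cache.dirty_page_count() and cache.flush_pages(n, interval) are hypothetical
    # stand-ins for the cache manager's internals.  T1 is the sampling interval in
    # seconds, B the maximum storage array throughput in pages per second, k1 the
    # constant factor on the rate term, and a the constant that sets persistence time.
    d = cache.dirty_page_count()                 # count carried over from the previous interval
    while True:
        start = time.monotonic()
        D = cache.dirty_page_count()             # step 122: current count of dirty pages
        R = D - d                                # step 123: change over the last interval (pages per interval)
        k2 = a * (T1 * B - R) / (T1 * B)         # second factor shrinks as R approaches the per-interval maximum
        F = max(0.0, k1 * R + k2 * D)            # step 124: desired number of pages to flush
        cache.flush_pages(int(F), T1)            # step 125: write F dirty pages over interval T1
        d = D                                    # step 126: remember the count for the next iteration
        time.sleep(max(0.0, T1 - (time.monotonic() - start)))  # step 127: wait out the interval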
For the periodic write control task of
Abnormal loading conditions sometimes occur in a data network having a large number of clients sharing a storage system. Therefore, in such an environment it may be advantageous to provide a variable sampling interval (T1) so that the sampling interval is relatively long under low loading conditions, and relatively short under high loading conditions. For example, it may be desirable to use a shorter than normal sampling interval (T1) and give the flushing task a higher priority (just below the priority of servicing a client I/O) if the shorter than normal sampling interval and the higher priority flushing task would help prevent the cache from becoming filled with dirty pages.
For example, it is desired under these circumstances to reduce the sampling interval by one half, and to increase the flushing rate so that at the end of the next iteration cycle (at a time T+½T1) the number of dirty pages in the cache would not exceed the high water mark (H) less a certain number (ε) of dirty pages assuming that the current rate of creation of dirty pages would continue throughout the next iteration cycle. Therefore, under the circumstances of
In a first step 131 of
In step 135, a new desired number of dirty pages to flush (F) is computed as k2←a*(T1*B−R)/B and F←k1*R+k2*D. In step 136, the sampling interval (T1) is compared to the maximum value (TMAX), and if the sampling interval is not less than TMAX, then execution continues to step 137. Otherwise, in step 136, if the current count (D) of dirty pages is not less than a low water mark (L), then execution also continues to step 137. In step 137 a new task is initiated to flush (F) dirty pages over the interval (T1). In step 138, the value of (d) is set equal to the value of (D). In step 139, execution of the writeback control task is suspended for an interval of time (T1), and then execution loops back to step 132.
In step 134, if the sampling interval (T1) is greater than the minimum value (TMIN) and the sum D+R is greater than the high water mark (H), then execution continues to step 140. In step 140, the sampling interval (T1) is decreased by a factor of one half, and a new desired number of pages to flush (F) over the reduced sampling interval is computed so that an expected number of dirty pages in the cache memory at the end of the reduced sampling interval is less than the threshold (H) by a certain amount (ε). For example, T1←½T1, η←(D+½R)−(H−ε), and F←η+½F. Execution continues from step 140 to step 137.
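A sketch of the writeback control task with this adaptive sampling interval (steps 130 through 140) appears below. The cache object, the specific water marks, and all default parameter values are assumptions for illustration, and the per-interval normalization of k2 is carried over from the earlier sketch.

import time

def variable_interval_writeback_task(cache, B=50000.0, k1=1.0, a=0.1,
                                     TMIN=0.125, TMAX=1.0, H=1000000, L=100000, eps=1000):
    # Sketch of steps 130-140: the sampling interval T1 ratchets down toward TMIN
    # under heavy load and back up toward TMAX under light load.  H is the high
    # water mark, L the low water mark, and eps the safety margin below H; the
    # default values here are illustrative assumptions.
    T1 = TMAX
    F = 0.0
    d = cache.dirty_page_count()
    while True:
        start = time.monotonic()
        D = cache.dirty_page_count()                  # step 132: current count of dirty pages
        R = D - d                                     # step 133: change over the last interval
        if T1 > TMIN and D + R > H:                   # step 134: about to exceed the high water mark
            T1 = T1 / 2.0                             # step 140: halve the interval and flush enough
            eta = (D + R / 2.0) - (H - eps)           #           extra pages to stay eps below H
            F = eta + F / 2.0
        else:
            k2 = a * (T1 * B - R) / (T1 * B)          # step 135: normal rate-plus-level computation
            F = max(0.0, k1 * R + k2 * D)
            if T1 < TMAX and D < L:                   # step 136: lightly loaded
                T1 = T1 * 2.0                         # step 130: lengthen the interval again
                F = F * 2.0
        cache.flush_pages(int(F), T1)                 # step 137
        d = D                                         # step 138
        time.sleep(max(0.0, T1 - (time.monotonic() - start)))  # step 139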
Steps 134 and 140 may cause the sampling interval T1 to ratchet down from TMAX to TMIN under high loading conditions. Steps 136 and 130 may cause the sampling interval T1 to ratchet back up to TMAX under low loading conditions. In step 136, if T1<TMAX and D<L, then execution continues to step 130. In step 130, the sampling interval is increased by a factor of two, and the desired number of dirty pages to flush (F) is increased by a factor of two. Execution continues from step 130 to step 137.
Following is a description of the cache writeback procedures that were simulated. The first procedure that was simulated was a modified trickle flush. Initial implementations of trickle flush used background scheduled tasks to write back dirty pages with lower priority compared to the metadata and user committed write I/Os. As a result the efficiency of the trickle flush was limited to low I/O traffic. Similar trickle implementations used a wakeup time of 5-10 sec and flushed a limited number of pages, Q, as a background task.
As an improved version we measured the average rate of change of dirty pages for each interval between wakeups, reduced the wakeup interval to 1 sec, and increased the number of flushed buckets in proportion to the rate of change, that is, the number of dirty pages arriving at the server, d. The procedure still flushed a limited number of pages, a proportion, p, of the total number of dirty pages at wakeup time. Also we modified the flush tasks to be continuous, not limited to the wakeup time but given higher priority between the timeouts. One more enhancement was to change the number of flush threads, Th, to be proportional to the rate of change of dirty pages.
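A rough sketch of one wakeup of this modified trickle flush is given below; the pages_per_thread constant is a hypothetical sizing rule (the text only says the thread count is proportional to the rate of change), while the 1/128 proportion and the 32-thread cap match the simulation settings described later.

def modified_trickle_wakeup(D, d, p=1.0/128, pages_per_thread=4096, max_threads=32):
    # D is the total number of dirty pages at wakeup time, and d is the number of
    # dirty pages that arrived since the last (1 second) wakeup.  Returns the number
    # of pages to flush and the number of flush threads to run until the next wakeup.
    pages_to_flush = int(D * p)                                       # flush a proportion p of the dirty pages
    flush_threads = min(max_threads, max(1, d // pages_per_thread))   # threads scale with the arrival rate
    return pages_to_flush, flush_threads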
We used as input the sampled values from the real server that uses the current writeback behavior, including the current trickle flush implementation. We simulated the new procedure using Matlab SIMULINK, with the input as defined before.
The second cache writeback procedure that we simulated was a Fixed Interval (FI) procedure. This procedure depends on having a “goal” number of dirty pages in the system. The goal may change over time as memory usage patterns change; however, the procedure still tracks against the goal in force at the time of the interval expiration.
The fixed interval procedure chooses the number of pages to flush based on how much the actual number of dirty pages exceeds the goal number. The procedure is implemented using the following set of variables:
G: the goal or maximum desired number of dirty pages
D: the actual number of dirty pages
d: the number of pages to flush
where
d={0, for D<=G; 2*(D−G), for D>G} (1)
Thus if at the expiry the number of dirty pages D is less than the goal number G, we do not schedule any pages for flushing. Otherwise, we schedule to flush twice the number of pages by which D exceeds G.
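Equation (1) can be restated as a one-line sketch (D and G as defined above; the return value is the number of pages scheduled for flushing at the interval expiration):

def fixed_interval_pages_to_flush(D, G):
    # Equation (1): flush nothing while at or below the goal G,
    # otherwise flush twice the number of pages by which D exceeds G.
    return 0 if D <= G else 2 * (D - G)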
The third cache writeback procedure that we simulated was a Variable Interval (VI) procedure. The VI procedure is somewhat more complicated as it attempts to estimate the modification rate to be used in calculating an output rate. It operates in a different manner than FI in the way it treats its goal value. Where FI tries to converge on the goal, VI tries to never exceed the goal value.
VI uses the net change in the number of dirty pages (i.e., all newly modified pages less any pages that are flushed). It then uses the calculated modification rate and the current number of dirty pages to predict when the goal would be reached (or exceeded). The flush rate is then calculated such that the flushing within the next interval would be sufficient to match the input rate assuming the input rate remains constant. The variables of the procedure were:
The VI procedure is as follows:
I=d/t (2)
F=T+(M−D)/I (3)
If F<T+t then,
c=(M−D)+σ (4)
t=ti (5)
else,
t=t/2 (6)
c=t*I+σ (7)
The VI procedure attempts to choose a flush task workload that will prevent the modified count from reaching the high water mark. By increasing the number of pages flushed by σ the procedure slightly exceeds the recent input rate. When the input rate becomes high enough that we would reach the high water mark in less than time t, the time interval is halved. This has the effect of doubling the effect of σ and negating the increased modification rate.
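The following sketch transcribes equations (2) through (7) as printed. The variable meanings are inferred from the surrounding text rather than taken from an explicit list, and, since the prose says the interval is halved when the goal would be reached within time t, equations (5) and (6) may have been transposed in the printed listing; the sketch is therefore an interpretation, not a definitive implementation.

def variable_interval_step(D, d, t, T, M, sigma, t_init):
    # Inferred meanings: D = current dirty pages, d = net change over the last
    # interval, t = current sampling interval, T = current time, M = goal
    # (maximum desired dirty pages), sigma = small extra page count, and
    # t_init = the initial/default interval.  Returns (c, t): pages to flush
    # over the next interval and the next interval length.
    I = d / t                                           # (2) estimated modification (input) rate
    F = T + (M - D) / I if I > 0 else float("inf")      # (3) predicted time at which the goal is reached
    if F < T + t:                                       # goal would be reached within the next interval
        c = (M - D) + sigma                             # (4) flush the remaining headroom plus a margin
        t = t_init                                      # (5)
    else:
        t = t / 2                                       # (6)
        c = t * I + sigma                               # (7) slightly exceed the recent input rate
    return c, t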
The fourth cache writeback procedure that we simulated was a Quantum Flush. This procedure is based on the idea of a goal where each dirty page will be flushed within a given time period. This period is referred to as the quantum, Q. The flush time is a goal and there are no actual timestamps associated with the buffers. Instead, the scheduling of flushes is done based on an approximation, which assumes a relatively uniform distribution of arrival times. Thus the procedure wakes up n times per Quantum or on a sub-quantum time of q, where q is Q/n. At the expiration of each sub-quantum, 1/nth of the dirty pages are scheduled for flushing.
This procedure can be applied to either client data or metadata, but was particularly designed for metadata, because there is a requirement to flush all metadata within a certain time period.
The procedure depends on having an ordered list of pages to be flushed. The pages are kept in a First Modified order. Thus, pages that were modified first will be flushed first, regardless of whether there were more recent references to the page. This ordering is consistent with the ordering required to be able to release file journal entries.
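A brief sketch of one sub-quantum expiration of the quantum flush is shown below; the list handling and names are illustrative simplifications, not the actual data structures.

def quantum_flush_subquantum(dirty_pages_first_modified_order, n):
    # The procedure wakes up n times per quantum Q (every q = Q/n seconds) and
    # schedules roughly 1/n of the currently dirty pages for flushing, taking
    # them in first-modified order so the oldest modifications are written first.
    count = (len(dirty_pages_first_modified_order) + n - 1) // n
    return dirty_pages_first_modified_order[:count]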
The fifth cache writeback procedure that we simulated was a rate of change proportional procedure. This is the simplest procedure that we implemented and it is probably the most efficient, except for the fact that the rate of change cannot be measured beyond an average value estimated from the change of dirty pages in the buffer cache during the sample interval. The measurement of the level in the buffer cache is an indirect measurement of the difference between the incoming dirty pages and the speed of write to disk. The procedure combines rate based and water level based flushing to disk. The procedure is rate based to respond to large unexpected increases in the rate of write I/O arrival in the cache. The procedure is also water level based to account for the fact that the number of dirty pages is bounded and the maximum writeback rate is limited by the maximum throughput of the storage array or backend disk. The procedure used the following parameters:
ti: previous sample time
t: last sample time
W: count of dirty pages in the buffer cache
R: rate of change of W during last sample time in pages/sec
c: pages to flush in the next time interval
B: maximum bandwidth of the storage array in pages/sec
μ: forgetting factor accounting for fast changes in R
α: ratio of average/maximum disk latency (0<α≦1)
The rate of change proportional procedure will flush dirty pages proportional to the rate of change of the water level in the buffer cache, plus an additional portion of the current water level W. The second term accounts for the case when there is a rapid change in the rate of dirty pages generated by the application.
c=R*(t−ti)+W*μ (8)
μ=α*(B−R)/B (9)
The rate of change proportional procedure assumes that the average rate of dirty pages in the buffer cache is lower than the maximum flush speed to the storage array over a long period of time, while the instantaneous rate of dirty pages may be higher than the maximum storage array throughput.
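Equations (8) and (9) translate directly into a short sketch using the parameters listed above:

def rate_proportional_pages_to_flush(W, R, t, ti, B, alpha):
    # W = dirty pages in the buffer cache, R = rate of change of W during the
    # last sample interval (pages/sec), t and ti = last and previous sample
    # times, B = maximum storage array bandwidth (pages/sec), alpha = ratio of
    # average to maximum disk latency (0 < alpha <= 1).
    mu = alpha * (B - R) / B          # (9) forgetting factor shrinks as R approaches B
    c = R * (t - ti) + W * mu         # (8) rate term plus a portion of the current water level
    return max(0, int(c))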
In order to evaluate the value of the five procedures discussed above, we instrumented an EMC Corporation CELERRA® brand file server to measure the number of dirty pages in its buffer cache with 4 msec time resolution. We used eight NFS clients running random write I/Os to the server using the Iometer performance tool. The I/O pattern was random write and the start time of the I/O on each client was scheduled in such a manner as to generate bursts of I/Os by increasing the number of clients starting I/Os at the same time. At the time the test started there were dirty pages in the buffer cache from previous I/O on a single client. We sampled the change of the buffer cache level during the entire test and used this data as input to our simulation.
In order to simulate the behavior of the dirty pages for different procedures we wrote a simulation program in Matlab SIMULINK. The input to the simulation was the change in the number of dirty pages during the sample interval of 4 msec. This is an indirect measure of the number of user I/Os but was a good enough approximation for evaluation of our five cache writeback procedures.
Additionally we instrumented the CELERRA® server code to simulate in line the expected behavior of the cache writeback procedure by computing the flush commands during I/O operations of the server running SPEC sfs2008 benchmark without changing the current procedures used by the server. The results of the in-line simulation validated the Simulink simulation results.
We ran the simulation for the trickle flush procedure for two values of the wakeup time: 5 sec, as currently implemented in the server, and 1 sec. We started a number of threads proportional to the number of dirty pages that arrived during the time between wakeup instances, with a maximum of 32 threads for flushing dirty pages at each wakeup instance. We flushed 1/128 of the total number of dirty pages in the buffer cache at the wakeup time. If the active threads didn't finish their work by the next wakeup time we continued the work items and added more items according to the new number of dirty pages, but not more than 32. The results of the simulation of the two cases are presented in
The results show that the wakeup time of 1 sec kept a lower level of dirty pages in the buffer cache because we did not reduce the maximum number of threads proportionally to the wakeup time. As a result the flush command issued at each wakeup in the 1 sec case is smaller than in the 5 sec case, yet the number of dirty pages remains lower than in the 5 sec case. The 1 sec case shows that the water level in the buffer cache is kept low, but in both cases the response to sharp changes in the incoming application I/O is poor. This procedure has the potential of improving the behavior for large changes in the input but may require more CPU usage for very large caches. It is more suitable for the slower changing metadata flush.
We repeated the same simulation as before but we changed the flushing procedure to the fixed interval (FI) procedure. We used the same input data and same simulation model as before.
So far the simulation results show that flushing dirty pages based only on the number of dirty pages cannot cope well with large rapid changes in the input. To address this problem we added a new component to the flush procedure; namely, a term proportional to the rate of change of the dirty pages in the buffer cache, which is proportional to the change in the number of user I/Os. The VI procedure is an attempt to combine flushing by goal and flushing by rate equally.
We simulated the rate proportional flush for two values of the forgetting factor α assuming different maximum disk throughput: 0.08 and 0.16.
The results show that the water level in the buffer cache is kept very low regardless of the spikes in the incoming I/O rate. For the case of 0.08, representing a faster disk, shown in
After we compared the five different cache writeback procedures using the simulation we decided to implement some of them in the CELERRA® server and run SPEC sfs2008 NFS benchmark. We decided to implement two of the procedures; the modified trickle that was easy to implement as a small modification of the current trickle implementation, and the rate proportional procedure that was the most promising. The target of the experiment was to observe the impact of both procedures on the stoppage of application I/Os. We ran the benchmark at the peak performance point which is very close to the maximum I/O throughput of the storage array.
The results of the tests are not conclusive about the occurrence of any I/O stoppage, as we could not monitor the NFS clients to see whether the I/O actually stopped. Instead, we monitored the derivative of the number of dirty pages in the buffer cache and treated a negative derivative as indicating a virtual stoppage of incoming I/Os. In reality, the derivative will also be negative whenever the flush rate is higher than the rate of incoming application I/Os, which is itself enough of a performance penalty that it should be reduced to a minimum.
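The monitoring rule we relied on can be sketched as follows: flag the sample windows in which the dirty-page count decreases, since a negative derivative means flushing is outrunning the incoming application I/Os. The function name and return convention are illustrative assumptions.

def possible_stoppage_windows(dirty_counts):
    # dirty_counts: successive samples of the number of dirty pages in the buffer cache.
    # Returns the sample indices at which the derivative is negative, i.e. the
    # windows counted as possible (virtual) stoppages of incoming I/O.
    flagged = []
    for i in range(1, len(dirty_counts)):
        if dirty_counts[i] - dirty_counts[i - 1] < 0:
            flagged.append(i)
    return flagged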
The results of the test show that both procedures track the incoming NFS I/Os closely, but the possible stoppage time is much lower for the rate proportional procedure. As
This is good evidence that the flush rate is very close to the rate of the incoming I/Os, which means that a server using the rate proportional procedure can achieve higher performance because there is more room for flushing at the higher I/O throughput available to the storage array. Additionally, the burstiness of the I/O to the storage is reduced when flushing in proportion to the incoming I/O rate. We believe that both the simulation and the experimental results show that the rate proportional procedure is the best candidate for a cache writeback procedure for reducing the number of dirty pages and keeping them low, as long as the incoming application I/Os arrive at a rate lower than the maximum I/O throughput of the storage array.
As described above, we introduced five cache writeback procedures that flush at a rate based on a count of the number of dirty pages in the cache. We compared these five cache writeback procedures using SIMULINK simulation and in-line simulation on the server. We also performed experiments on a real NFS server running a performance benchmark using two of the cache writeback procedures.
A comparison of all five procedures shows that the rate component has a smoothing effect on incoming I/O bursts.
The forgetting factor makes a significant contribution to reducing the level of dirty pages when there is enough disk I/O throughput available, due to its adaptive nature, which allows more pages to be flushed when the storage array is less busy.
As a result of the simulations, we changed the performance goal of the cache writeback technique from reducing the number of dirty pages in the cache toward reducing the stoppage of application I/O to the servers and file systems, and even preventing I/O stoppage completely. Additionally, we expect that any flush procedure used will have minimal impact and incur minimal to no application performance degradation until the disk becomes the bottleneck. We also observed that rate proportional procedures have an advantage over watermark procedures, as they can track the I/O behavior of the application more accurately and send it to the storage, preventing unexpected burstiness of the I/O to the storage array. In addition, the rate proportional flush procedures cope better with sharp changes in the incoming I/O rate while preventing stoppages in I/O by smoothing the flush to the storage array. Moreover, adding a proportional forgetting factor will help predict bursts of incoming I/Os and flush the dirty pages more evenly.
Inventors: Faibish, Sorin; Armangau, Philippe; Forecast, John; Pawar, Sitaram; Bixby, Peter