A data storage system can employ a read destructive memory configured to fill a first cache with a first data set from a data repository prior to populating a second cache with a second data set describing the first data set, with the first and second caches each having non-volatile ferroelectric memory cells. An entirety of the first cache may be read in response to a cache hit in the second cache, with the cache hit responsive to a data read command from a host and with the first cache being read without a refresh operation restoring the data of the first cache.
1. An apparatus comprising:
a first cache consisting of read destructive memory cells;
a map connected to the first cache, the map consisting of read destructive memory cells storing information about data resident in the first cache; and
a controller connected to the first cache and map, the controller configured to output less than an entirety of the data stored in the first cache in response to a map hit by a host data request, the entirety of the first cache and map subsequently filled with different data identified by the controller.
2. The apparatus of
3. The apparatus of
4. The apparatus of
5. The apparatus of
6. The apparatus of
7. The apparatus of
8. The apparatus of
9. A method comprising:
filling a cache with a first data set from a data repository;
populating a cache map with a second data set describing the first data set, the cache and cache map each comprising read destructive memory cells;
reading an entirety of the cache in response to a cache hit to the cache map by a host request, the cache read without a refresh operation restoring the data of the cache; and
filling the cache with data predicted by a controller to be requested by a host in the future.
10. The method of
11. The method of
13. The method of
15. The method of
16. The method of
17. A method comprising:
filling a cache with a first data set from a data repository;
populating a cache map with a second data set describing the first data set, the cache and cache map each comprising read destructive memory cells;
reading an entirety of the cache in response to a cache hit to the cache map by a host request, the cache read without a refresh operation restoring the data of the cache;
filling the cache with data identified by a controller as compromised; and
applying an error correction code to the data identified as compromised while the data is resident in the cache.
The present application makes a claim of domestic priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application No. 63/212,403 filed Jun. 18, 2021, the contents of which are hereby incorporated by reference.
The present disclosure is generally directed to a memory employing read destructive memory cells as an intelligent cache.
A data storage system, in some embodiments, employs a read destructive memory and is configured to fill a first cache with a first data set from a data repository prior to populating a second cache with a second data set describing the first data set, with the first and second caches each having non-volatile ferroelectric memory cells. An entirety of the first cache is read in response to a cache hit in the second cache, with the cache hit responsive to a data read command from a host and with the first cache being read without a refresh operation restoring the data of the first cache.
Various embodiments of the present disclosure are generally directed to the intelligent use of cache constructed of read destructive memory cells as part of a data storage system.
While volatile memories, like dynamic random access memory (DRAM), have been utilized for years to provide various buffer and cache functions, the volatile nature of the memory has challenges, particularly after power loss. Non-volatile memories have been attempted for caching purposes, but have had physical and/or operational challenges. For instance, using flash memory as a cache/buffer can suffer from degradation over time and relatively slow data access times. Other non-volatile memories may be too physically large, or may lack the density, to provide realistic next-generation cache functions.
The use of ferroelectric memory as a cache can provide fast operation and relatively small power consumption, but suffers from read destruction as sensing data corresponds with the cell losing some, or all, of the charge that stores a logical state. Such read destruction can be accommodated by simply refreshing each read cell with charge to reinstate the programmed logical state. However, refreshing cells can occupy valuable time and consume valuable power.
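For illustration only, the read-then-refresh accommodation described above can be modeled with a minimal Python sketch; the cell model, names, and behavior are simplifying assumptions rather than any actual FME circuit.

```python
# A toy model (assumed, for illustration) of a read destructive memory cell:
# sensing returns the stored bit but clears the cell, so the controller must
# write the value back if the data is to survive the read.
class FMECell:
    def __init__(self, bit=0):
        self.bit = bit                # programmed polarization as a logic state

    def destructive_read(self):
        sensed = self.bit
        self.bit = None               # charge lost: the cell no longer holds data
        return sensed

    def write(self, bit):
        self.bit = bit                # reprogramming costs time and power


def read_with_refresh(cell):
    # The conventional accommodation: every read is followed by a write that
    # reinstates the programmed state, occupying time and consuming power.
    value = cell.destructive_read()
    cell.write(value)
    return value


cell = FMECell(bit=1)
assert read_with_refresh(cell) == 1
assert cell.bit == 1                  # the state survives only due to the refresh
```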
Generally, data storage devices can come in many forms, but such devices usually have a top-level controller and a memory. The controller manages overall operation of the device and can include one or more programmable processors and associated memory into which firmware (FW) is loaded and executed by the processor(s). The memory can take any number of forms, such as flash memory in a conventional SSD.
So-called front-end memory is often provided as part of the controller. This front-end memory can take a variety of forms (e.g., SRAM, DRAM, NOR flash, NAND flash, etc.) and serves various storage functions to support the transfers of data between the memory and an external client. In one traditional arrangement, an SSD can include an internal data cache formed of SRAM that is incorporated within a system on chip (SOC) alongside the processors and is primarily used to cache first level map metadata describing data stored in the memory. The traditional SSD arrangement can further incorporate an external memory that is separate from, but linked to, the SOC; this memory is often formed of DRAM and can be used to store the processor FW and cached metadata (such as second level map metadata).
Another portion of the external DRAM, or another portion of volatile memory, is designated as a read buffer. This is a location to which retrieved data from the main memory are stored pending transfer to the requesting client. Finally, a separate write cache can be formed of non-volatile memory (NVM) such as NAND or NOR flash memory. The write cache serves to temporarily store cached write data that has been received from the external client and is awaiting transfer to the main memory. With these aspects in mind, it is noted that numerous other local memory types and locations can be utilized as well, such as local buffers, keystores, etc.
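The traditional front-end arrangement outlined above can be summarized in a short illustrative sketch; the region names and roles follow the preceding paragraphs, while the dictionary structure itself is merely an assumed summary, not a product configuration.

```python
# An assumed summary of the traditional SSD front-end memories described above.
FRONT_END = {
    "sram_in_soc":   ("SRAM, volatile",      "internal cache for first level map metadata"),
    "external_dram": ("DRAM, volatile",      "processor FW and cached second level map metadata"),
    "read_buffer":   ("DRAM, volatile",      "retrieved data pending transfer to the client"),
    "write_cache":   ("NAND/NOR flash, NVM", "cached write data awaiting transfer to main memory"),
}

for region, (kind, role) in FRONT_END.items():
    print(f"{region:14s} {kind:22s} -> {role}")
```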
An example data storage system 100 in which assorted embodiments of an MLC memory unit and cell can be utilized is depicted as a block representation.
While the originally programmed states can be read by a local, or remote, controller 164 before the respective cells 162 become unreadable, the controller 164 can be tasked with refreshing the programmed states to the respective cells 162 by conducting one or more write operations. In comparison to other, non-read destructive solid-state memory cells that can conduct a read operation without refreshing the programmed states with a write operation, ferroelectric memory element (FME) cells 162 experience greater volumes of write operations as data outputted from a read operation is refreshed by the controller 164 from a local cache 166 or some other data storage location. Such cell writing activity can produce relatively high volumes of heat while consuming relatively large amounts of power and processing time compared to other types of memory. In addition, cell 162 health and wear are difficult to track and/or detect, which makes FME memory cell usage over time difficult and unreliable with current operation and management.
The read destructive cache cells 162 can be contrasted with non-read destructive cache cells 168.
It is noted that the non-read destructive cells 168 may involve refreshing over time to retain accurate polarization, even though programmed polarizations are not destroyed by a read operation. In addition, the non-read destructive cells 168 can require an erase operation to remove or replace a stored polarization/logic state. These challenges, along with the inefficiencies of data access and power consumption compared to read destructive cells 162, have prompted the utilization of read destructive cells 162 with the operational addition of refresh operations after a cache is wholly, or partially, read. Accordingly, various embodiments are generally focused on mitigating the operational challenges of utilizing read destructive cache cells 162 by intelligently arranging and utilizing a cache structure where a relatively small cache map describes the content of data contained in one or more relatively large cache repositories.
A non-limiting cache arrangement of a data storage system 170 is next described as a block representation.
Regardless of the number, type, and size of the assorted caches of the system 170, a cache map 176 can provide higher level correlation of logical data addresses to physical data addresses. Although not required, the cache map 176 can be configured with a smaller physical size (capacity) than the cache which it describes, storing physical and/or logical addresses of data requested by an upstream host. The cache map 176 may be constructed of any type of memory cells, but has read destructive FME cells in some embodiments.
The cache map 176 can be configured to track the location of data, which avoids directly reading a cache 172/174 to determine if requested data is present. Hence, the cache map 176 describing the location of data in the downstream caches 172/174 prevents the read destructive nature of FMEs from causing additional write operations to be conducted on the larger capacity caches 172/174 merely to determine the presence and location of cached data. As such, the smaller capacity cache map 176 can be read and re-written more quickly and more reliably with less power consumption. It is contemplated that multiple separate cache maps 176 can be concurrently present and selectively utilized to describe the location of data in the downstream caches 172/174.
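As a minimal sketch (the data structures and names are assumptions, not the disclosed circuitry), the benefit of the cache map 176 can be shown as a presence check that never touches the read destructive data caches:

```python
# Assumed structure: a small cache map correlating logical addresses to a
# downstream cache and slot, so presence checks never destructively read the
# larger data caches.
cache_map = {}                              # logical address -> (cache id, slot)

def lookup(logical_addr):
    """Consult only the small map; a miss never costs a data cache read."""
    return cache_map.get(logical_addr)      # None indicates a cache miss

cache_map[0x1000] = ("cache_172", 5)
assert lookup(0x1000) == ("cache_172", 5)   # hit: safe to read the data cache
assert lookup(0x2000) is None               # miss: data caches left untouched
```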
It is noted that the depicted cache arrangement is not required or limiting, as various numbers, types, and sizes of caches can be utilized.
With caches traditionally operating as depositories where data is read many times, the use of read destructive cells poses inefficiencies. Various embodiments use ferroelectric memory to form one or more caches/buffers for a storage device. A particularly valuable configuration is to form the write cache of ferroelectric memory (rather than flash memory as currently implemented). In some cases, the read buffer may also be formed of ferroelectric memory. In still further cases, a large ferroelectric memory may be utilized (such as a 2D or 3D structure) with portions of the memory separately allocated for use as distinct write cache and read buffer locations. It will be noted that other aspects of the local memory can also be implemented using ferroelectric memory.
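One way to picture that last configuration is a sketch that carves a single large ferroelectric memory into separately allocated regions; the sizes and names here are purely assumptions for illustration.

```python
# Assumed, illustrative partitioning of one large ferroelectric memory into
# distinct write cache and read buffer regions.
FE_DEVICE_MIB = 1024

ALLOCATION = [
    ("write_cache", 0, 640),      # cached client writes awaiting main memory
    ("read_buffer", 640, 1024),   # retrieved data pending transfer to the client
]

for name, start, end in ALLOCATION:
    print(f"{name:12s} occupies MiB [{start}, {end}) of the {FE_DEVICE_MIB} MiB device")
```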
The fact that a read operation will destroy the data stored in ferroelectric cells prompts various embodiments to utilize a multi-tier cache of ferroelectric cells. A top-level cache map 176 can be configured to identify what data is stored in a lower-level cache, which prevents the lower-level cache from being polled and rewritten repeatedly. The smaller size of the top-level map 176 corresponds with a faster refresh and lower refresh power consumption compared to refreshing the lower-level caches 172/174. In the event of a cache miss from the top-level map 176, the map is simply refreshed. In the event of a cache hit from the top-level map 176, some, or all, of the lower-level cache 172/174 can be read out to a host without a refresh operation, which results in both levels of cache being read destroyed and essentially cleared.
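A minimal Python sketch can convey the hit/miss behavior just described; the two dictionaries stand in for the read destructive tiers, and the model (like its names) is an assumption for illustration rather than the patented structure.

```python
# Assumed two-tier model: a miss refreshes only the small top-level map, while
# a hit destructively reads the lower-level cache with no refresh, leaving
# both tiers effectively cleared and ready to accept different data.
class TwoTierCache:
    def __init__(self):
        self.map = {}            # small top tier: addr -> presence
        self.cache = {}          # large lower tier: addr -> data

    def fill(self, data):
        self.cache = dict(data)
        self.map = {addr: True for addr in data}   # map describes cache content

    def request(self, addr):
        entries = self.map       # sensing the map destroys its contents
        self.map = {}
        if addr not in entries:
            self.map = entries   # miss: simply refresh the small map (cheap)
            return None
        data = self.cache        # hit: entire lower cache read destructively...
        self.cache = {}          # ...with no refresh, so both tiers are cleared
        return data


tiers = TwoTierCache()
tiers.fill({0x10: b"alpha", 0x20: b"beta"})
assert tiers.request(0x99) is None and tiers.map    # miss refreshed the map
out = tiers.request(0x10)
assert out[0x10] == b"alpha" and not tiers.cache    # hit cleared both tiers
```

A subsequent fill() with different, potentially speculative, data can then take advantage of the free erase left behind by the hit.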
While a host query to the cache map 192 may correspond with no retrieval of data from a cache 194/196, various embodiments respond to a host query/request to the cache map 192 by outputting data from a cache 194 to a read buffer 198. It is contemplated that data can be outputted directly to a host without involving the read buffer 198, but various embodiments utilize the read buffer 198 to accumulate requested data from multiple separate caches 194/196 to promote efficiency and boost performance.
The cache map 192 may be utilized to facilitate the access and retrieval of data from multiple separate caches 194/196 concurrently, or sequentially, which allows different data to be accessed and transferred to a requesting host. With the respective caches 194/196 each constructed of read destructive solid-state memory cells, passage of a read voltage corresponds with an output of stored logic states as well as the effective erasing of all programmed data in the cache 194/196.
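A minimal sketch (the function and structures are assumptions) of the read buffer 198 accumulating requested data from multiple destructively-read caches before a single transfer to the host:

```python
# Assumed model of a read buffer gathering requested data from multiple
# separate read destructive caches before one accumulated host transfer.
def gather_to_read_buffer(caches, wanted):
    read_buffer = {}
    for cache in caches:                 # concurrent or sequential access
        drained = dict(cache)            # destructive read empties the cache
        cache.clear()
        for addr in wanted:
            if addr in drained:
                read_buffer[addr] = drained[addr]
    return read_buffer                   # one accumulated transfer to the host


cache_194, cache_196 = {1: b"a", 2: b"b"}, {3: b"c"}
buf = gather_to_read_buffer([cache_194, cache_196], wanted={2, 3})
assert buf == {2: b"b", 3: b"c"} and not cache_194 and not cache_196
```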
Although not required or limiting, the speculative filling of a cache 194/196 after a data read operation can move non-requested data from other data repositories to the read-destructive cache 194/196 to allow for optimal responsiveness and efficiency for future host requests to that data. Such speculative filling of a cache 194/196 will correspond with an update of the cache map 192 to reflect what data is stored in the caches 194/196. Hence, satisfaction of a read operation from one or more read-destructive caches 194/196 can result in writing of all caches 194/196 and maps 192 with preexisting and/or speculative data.
The type, size, and number of data that are speculatively written to a cache 194/196 are not limited, but can be intelligently chosen by a controller, such as controller 164, based on one or more criteria. For instance, a controller can identify trends, patterns, addresses, and/or types of data that are predicted to be requested by one or more hosts in the near future. The prediction of what speculative data to send to a cache 194/196 can involve the analysis of model information and/or logged system activity. Some embodiments configure a controller to maintain a list of data that is most ripe for speculative movement to a cache 194/196 based on predetermined criteria, such as frequency of data requests, source of data, data size, or data security level. The tracking of criteria for data can provide a nearly seamless filling of a cache 194/196 with speculative data in response to a read operation.
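The selection logic can be sketched as a simple ranking over the criteria named above; the weights, field names, and capacity value are illustrative assumptions, not disclosed values.

```python
# Assumed ranking of repository data most "ripe" for speculative movement into
# a just-cleared cache, scored on request frequency, source, size, and
# security level (weights are illustrative only).
def speculative_candidates(catalog, capacity_kib):
    def score(entry):
        return (entry["request_freq"] * 4                    # hot data first
                + (2 if entry["source"] == "hot_host" else 0)
                - entry["size_kib"] / 1024                   # prefer cheap fits
                + entry["security_level"])                   # keep protected data close
    chosen, used = [], 0
    for entry in sorted(catalog, key=score, reverse=True):
        if used + entry["size_kib"] <= capacity_kib:
            chosen.append(entry)
            used += entry["size_kib"]
    return chosen


catalog = [
    {"addr": 0x10, "request_freq": 9, "source": "hot_host",  "size_kib": 64,  "security_level": 1},
    {"addr": 0x20, "request_freq": 2, "source": "cold_host", "size_kib": 512, "security_level": 0},
]
print([hex(e["addr"]) for e in speculative_candidates(catalog, capacity_kib=256)])
```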
The ability to fill portions of a cache 194/196 with data predicted to be requested takes advantage of having a free erase operation provided by a cache read to enable optimal data access latency for the speculative data, upon host request. That is, compared to retrieving data from a non-read destructive repository or filling a cache 194/196 with data after a host request, the speculative writing of intelligently selected data to cache 194/196 can provide the lowest data access latency to satisfy a host request.
The availability of error correction codes (ECC) can be leveraged after a cache 202 is read, and consequently cleared of stored data, to repair compromised data that is written to the cache 202. For example, a cache 202 can be filled with data that is known, or predicted, to be compromised by error, failure, or security breach, which allows ECC to be selected and applied to repair the data and make it available for transfer to a host or storage in a permanent data repository of the system 200. It is noted that the availability of ECC does not require its application to data, as illustrated by the satisfaction of a host request from a first cache 202 while data of a second cache 202 is repaired with ECC. The ability to selectively utilize ECC allows the cache to be read and refreshed less, but does require the generation, or storage, of ECC when necessary.
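The selective use of ECC can be sketched as follows; the per-byte XOR parity is a stand-in that only detects a mismatch, not whatever error correction code a device would actually generate or store.

```python
# A minimal sketch of the selective ECC idea: data flagged as compromised is
# parked in a just-cleared cache and checked while resident, while a host
# request is satisfied from a different cache without paying that cost.
# The XOR parity below is a detection-only stand-in for a real ECC.
def parity(data: bytes) -> int:
    acc = 0
    for b in data:
        acc ^= b                      # toy checksum standing in for ECC parity
    return acc

def check_resident_data(cache: dict, addr: int, stored_parity: int) -> bool:
    """Apply the (stand-in) code only to data identified as compromised."""
    return parity(cache[addr]) == stored_parity


cache_a = {0x10: b"clean data"}       # satisfies the host request immediately
cache_b = {0x20: b"suspect data"}     # examined with ECC while resident
print(check_resident_data(cache_b, 0x20, parity(b"suspect data")))   # True
```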
Ferroelectric memory combines the speed advantages of DRAM with the non-volatility and stackability of flash. Ferroelectric memory is based on FMEs. FMEs can take a large variety of forms (e.g., FTJs, FeFETs, 1T1FC, 1T4FC, 2T2FCs, etc.). Regardless of configuration, an FME stores data in relation to a programmed electric polarity impressed upon an embedded ferroelectric layer. Because FMEs provide read/programming speeds in the nanosecond range, these elements provide a number of potentially valuable usages in the front end of an otherwise conventional storage device.
With speculative cache fetching and faster cache reads (compared to having to refresh after each cache read), volumes of cache space can accommodate current and expected write data caching. Overall, the multi-tier cache can utilize a ferroelectric memory front end intelligently in such a way as to maintain certain QoS and data throughput performance levels for the client. Hence, embodiments are directed to both multi-tier caching and speculative caching, particularly various multi-tier cache operations for hits and misses, speculative caching of write data, variable ECC, and selective refreshing of the second (top) tier cache.
El-Batal, Mohamad, Viraraghavan, Praveen, Dykes, John W., Trantham, Jon D., Mehta, Darshana H., Gilbert, Ian J., Kalarickal, Sangita Shreedharan, Totin, Matthew J.
Patent | Priority | Assignee | Title |
10338835, | Sep 13 2016 | Kioxia Corporation | Memory device |
5600587, | Jan 27 1995 | NEC Corporation | Ferroelectric random-access memory |
6711078, | Jul 01 2002 | GLOBALFOUNDRIES U S INC | Writeback and refresh circuitry for direct sensed DRAM macro |
7203084, | Sep 15 2000 | SanDisk Technologies LLC | Three-dimensional memory device with ECC circuitry |
7203794, | Apr 25 2002 | International Business Machines Corporation | Destructive-read random access memory system buffered with destructive-read memory cache |
7552255, | Jul 30 2003 | Intel Corporation | Dynamically partitioning pipeline resources |
7721048, | Mar 15 2006 | The Board of Governors for Higher Education, State of Rhode Island and Providence Plantations; Rhode Island Board of Education, State of Rhode Island and Providence Plantations | System and method for cache replacement |
8014186, | Apr 28 2008 | Rohm Co., Ltd. | Ferroelectric memory device and operating method for the same |
8554982, | Oct 27 2004 | Sony Corporation | Storage device and information processing system |
8677072, | Dec 15 2009 | Cisco Technology, Inc | System and method for reduced latency caching |
8886880, | May 29 2012 | Dot Hill Systems Corporation | Write cache management method and apparatus |
8966181, | Dec 11 2008 | Seagate Technology LLC | Memory hierarchy with non-volatile filter and victim caches |
9202548, | Dec 22 2011 | Intel Corporation | Efficient PCMS refresh mechanism |
9269438, | Dec 21 2011 | TAHOE RESEARCH, LTD | System and method for intelligently flushing data from a processor into a memory subsystem |
9317429, | Sep 30 2011 | Intel Corporation | Apparatus and method for implementing a multi-level memory hierarchy over common memory channels |
9740616, | Sep 26 2013 | International Business Machines Corporation | Multi-granular cache management in multi-processor computing environments |
20030058681, | |||
20050273575, | |||
20060277367, | |||
20170068621, | |||
20220083475, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
May 19 2022 | TRANTHAM, JON D | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
May 19 2022 | VIRARAGHAVAN, PRAVEEN | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
May 19 2022 | GILBERT, IAN J | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
May 19 2022 | KALARICKAL, SANGITA SHREEDHARAN | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
May 19 2022 | TOTIN, MATTHEW J | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
May 19 2022 | MEHTA, DARSHANA H | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
Jun 20 2022 | Seagate Technology LLC | (assignment on the face of the patent) | / | |||
Jun 20 2022 | DYKES, JOHN W | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 | |
Aug 30 2022 | EL-BATAL, MOHAMAD | Seagate Technology LLC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 060965 | /0598 |
Date | Maintenance Fee Events |
Jun 20 2022 | BIG: Entity status set to Undiscounted (note the period is included in the code). |
Date | Maintenance Schedule |
Feb 13 2027 | 4 years fee payment window open |
Aug 13 2027 | 6 months grace period start (w surcharge) |
Feb 13 2028 | patent expiry (for year 4) |
Feb 13 2030 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 13 2031 | 8 years fee payment window open |
Aug 13 2031 | 6 months grace period start (w surcharge) |
Feb 13 2032 | patent expiry (for year 8) |
Feb 13 2034 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 13 2035 | 12 years fee payment window open |
Aug 13 2035 | 6 months grace period start (w surcharge) |
Feb 13 2036 | patent expiry (for year 12) |
Feb 13 2038 | 2 years to revive unintentionally abandoned end. (for year 12) |