A data storage system may be configured at least with a primary memory that is coupled to a host via a controller and coupled to at least one external interface. The controller may be adapted to passively partition a secondary memory into cache and user memory space regions in response to the secondary memory engaging the at least one external interface and the cache region can be allocated as cache for the primary memory by the controller.
1. An apparatus comprising a primary memory coupled to a host via a controller and to at least one external interface, the controller adapted to store data to the primary memory alone then passively partition a secondary memory into cache and user memory space regions with a first ratio in response to the secondary memory engaging the at least one external interface, the primary memory being visible and accessible by a host while the secondary memory is passively partitioned, the controller adapted to subsequently passively partition the secondary memory into a different second ratio of user memory space and cache regions to adapt to changing operating conditions in the primary memory, the cache region allocated as cache for the primary memory by the controller.
10. A data storage system comprising a primary memory coupled to a host via a controller and to at least one external interface, the controller adapted to store data to the primary memory alone then passively partition a secondary memory from a first user memory capacity into cache and user memory space regions with a second user memory capacity in response to the secondary memory engaging the at least one external interface, the primary memory being visible and accessible by a host while the secondary memory is passively partitioned, the controller adapted to subsequently passively partition the secondary memory into different cache and user memory space regions with a third user memory capacity to adapt to changing operating conditions in the primary memory, the third user memory capacity being different than the first and second user memory capacities, the second user memory capacity being less than the first user memory capacity, the cache region allocated as cache for the primary memory by the controller to form a hybrid data storage device.
16. A method comprising:
coupling a primary memory to a host via a controller and to at least one external interface, the primary memory having a first cache region;
storing data to the first cache region of the primary memory alone;
engaging the at least one external interface with a secondary memory;
partitioning the secondary memory passively from a first user memory capacity into a second cache region and a user memory space region having a first ratio with a second user memory capacity via the controller in response to the secondary memory engaging the at least one external interface, the primary memory being visible and accessible by a host while the secondary memory is passively partitioned;
testing the first and second cache regions for data access speeds;
allocating the second cache region as cache for the primary memory with the controller in response to the second cache region having a faster data access speed than the first cache region;
removing the secondary memory from the at least one external interface;
connecting the secondary memory to the at least one external interface;
partitioning the secondary memory passively into cache and user memory space regions having a second ratio and a third user memory capacity to adapt to changing operating conditions in the primary memory, the first and second ratios being different.
3. The apparatus of
4. The apparatus of
5. The apparatus of
6. The apparatus of
7. The apparatus of
8. The apparatus of
9. The apparatus of
11. The data storage system of
12. The data storage system of
13. The data storage system of
14. The data storage system of
15. The data storage system of
17. The method of
18. The method of
19. The method of
20. The method of
Assorted embodiments may configure a data storage system with a primary memory that is coupled to a host via a controller and coupled to at least one external interface. The controller may be adapted to passively partition a secondary memory into cache and user memory space regions in response to the secondary memory engaging the at least one external interface and the cache region can be allocated as cache for the primary memory by the controller.
The continued industry and consumer emphasis on mobile computing systems has stressed the physical size, computing speed, and connectivity of a data storage system. The advent of solid-state memories to supplement rotating data storage means has increased data storage capacity and data access speeds, but such supplemental memories can add physical bulk, provide relatively small data capacity, and carry an increased risk of data loss. For example, flash memory used to supplement a hard disk drive can wear out over a number of data accesses, which can jeopardize the integrity of data stored therein as well as degrade data storage and computing performance. Hence, industry continues to strive for more efficient and reliable manners of complementing a primary data storage memory with a secondary data storage memory.
Accordingly, a data storage system may be configured with at least a primary memory that is coupled to a host via a controller and coupled to at least one external interface. The controller may be adapted to passively partition a secondary memory into cache and user memory space regions in response to the secondary memory engaging the at least one external interface and to allocate the cache region as cache for the primary memory. The ability to supplement the primary memory with secondary memory at will can provide additional data storage capabilities that can increase computing performance. Moreover, the passive partitioning of a secondary memory can allow seamless integration of additional data storage capabilities that can be selectively utilized to optimize data access speed while maintaining data integrity.
While a secondary memory can be configured to supplement an unlimited variety of different data storage environments,
The primary memory 106 can be accessed by the local controller 104 to facilitate a diverse variety of data storage capabilities, such as temporary and permanent storage of host data. The data storage device 102 can have at least one external interface 108, such as a universal serial bus (USB), serial attached Small Computer System Interface (SAS), or serial advanced technology attachment (SATA), to allow one or more secondary memories 110 to be used exclusively or in conjunction with the primary memory 106. The ability to utilize a secondary memory 110 that exhibits different data storage characteristics, such as data access speed and capacity, can complement the primary memory 106 and provide a user with greater data storage capacity and faster data access speeds.
While the secondary memory 110 can be physically connected to the data storage device 102, additional capacity, control, and processing can be provided by a remote host 112 via a network 114 that is wired or wireless and accessed via appropriate protocols. The remote host 112 can allow supplemental computing capabilities, like virtual machines, and network connectivity, like access to local area networks (LAN), to optimize data storage performance in the data storage device 102. However, network availability cannot always be guaranteed, which poses a practical difficulty in increasingly mobile computing systems like smartphones, laptop computers, and tablet computing devices.
Such lack of guaranteed access to additional computing capabilities over a network along with other considerations renders the example data storage device 120 of
While not limiting or required, the solid-state memory 124 can have NAND flash, resistive random access memory, spin-torque random access memory, and programmable metallization memory cells 128 organized as a cross-point array. The cells 128 may be selectively accessed as individual cells 128, pages of cells, and garbage collection units via bit 130 and source 132 drivers. In other words, the bit 130 and source 132 drivers can operate at the discretion of a local controller 134 to create a data access circuit through one or more memory cells 128 to program data to and read data from the selected memory cells 128.
The rotating hard disk drive 126 is also not limiting or required as shown, but can have, in accordance with various embodiments, a stack of data storage media 136 that are rotated at a predetermined speed by a central spindle motor 138 to create an air bearing on which a portion of an actuating assembly 140 flies to access one or more data bits 142 from the respective data storage media 136. The choreographed movement of the actuating assembly 140 and spindle motor 138 can be enabled by a hard drive controller 144 that facilitates the transfer of pending data accesses, such as data reads and writes, from a local cache 146 to a designated portion of the media stack 136 at opportune times.
The combination of different types of memory in the hybrid data storage device 122 can provide complementary data storage capabilities, such as the use of the solid-state memory 124 as a cache memory for temporary data, such as pending data requests, system overhead information, personalized user metadata like passwords and encryption keys, and maintenance data, while the hard disk drive 126 permanently stores data from various cache memories. The utilization of the solid-state memory 124 as a cache can take advantage of the fast data access speed of solid-state memory cells to complement the long-term reliability of the hard disk drive 126 to deliver enhanced data storage to a host.
Even with the enhanced capabilities of the hybrid data storage device 122, various conditions can promote the use of at least one external interface 148 to connect a secondary memory 150 to the hybrid data storage device 122. The secondary memory 150 may be connected via a network, but is physically connected via the external interface 148 in assorted embodiments to allow the selective engagement, and disengagement, of the secondary memory 150. The secondary memory 150 can take an unlimited variety of forms, such as a flash drive, rotating hard disk drive, or hybrid device, to supplement the hybrid data storage device 122.
In some embodiments, the secondary memory 150 supplements the permanent data storage capacity of the hybrid data storage device 122 by allowing a host to selectively program and read data that may or may not be resident in the solid-state 124 and hard disk drive 126 memories. Other non-limiting embodiments can utilize the secondary memory 150 to supplement the caching capacity of the solid-state memory 124, in which a controller stores and retrieves data from the secondary memory 150 in accordance with a predetermined caching scheme that may or may not involve host selection. These secondary memory 150 capabilities can extend the usefulness of the hybrid data storage device 122 by expanding capacity and the number of data access operations. Yet the use of multiple types of memory can occupy additional space compared to singular data storage devices, which can be problematic in the continually decreasing form factors of mobile computing systems.
Operation of the mobile data storage system 160 may involve the controller 164 directing pending data commands to and from the volatile cache 168 via the buffer manager 166, which may be a supplemental controller dedicated to caching. A secondary non-volatile memory 174, such as a flash drive connected to an interface external to the SOC, can be added to operate with the primary data storage device 172 to form a hybrid data storage arrangement 176. The controller 164, alone or in concert with the buffer manager 166 and volatile cache 168, can utilize the secondary non-volatile memory 174 as additional cache, temporary storage, or permanent storage to optimize data storage for the mobile system 160.
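The placement choice just described might be pictured in software with a minimal sketch along the following lines. This is only an illustrative model under assumed names; the function place(), its category strings, and the returned destination labels are hypothetical and do not come from the disclosure.

def place(data_kind: str, secondary_present: bool) -> str:
    """Pick a destination for a piece of data in a sketch of the mobile system 160."""
    if data_kind == "pending_command":
        return "volatile_cache"               # staged through the buffer manager 166
    if data_kind == "temporary" and secondary_present:
        return "secondary_nonvolatile_cache"  # extra cache once memory 174 is engaged
    return "primary_storage"                  # permanent storage on device 172

# Example: temporary data lands on the secondary memory only when it is engaged.
assert place("temporary", secondary_present=True) == "secondary_nonvolatile_cache"
assert place("temporary", secondary_present=False) == "primary_storage"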
Despite the ability for a user to create the hybrid data storage arrangement 176 with the engagement of the secondary non-volatile memory 174 with the controller 164, the exclusive use of the secondary non-volatile memory 174 as cache or as user available memory can be insufficient to service some mobile computing environments. For example, a mobile computing device having minimal user data and caching capacities can be stressed by mobile applications, like streaming video, audio file storage, and picture intensive software, which cannot be fully accommodated by dedicated secondary non-volatile memory 174 use. With these issues in mind, various embodiments configure the controller 164 to partition a secondary non-volatile memory 174 into cache and user memory space regions to allow concurrent storage of temporary cache data and user selected permanent data, which supplements both the volatile cache 168 and primary data storage device 172.
In step 186, at least one secondary memory is selectively engaged with an interface that is cabled to the SOC so as to be external to the circuitry of the SOC. Engagement of the secondary memory in step 186 triggers step 188 to passively partition the secondary memory into at least cache and user memory space regions. While the secondary memory can be partitioned actively through a user designating the number and size of each partitioned region, the exemplary embodiment of
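As a rough illustration of the passive partitioning triggered by engagement, the sketch below splits an engaged device by a predetermined ratio without any user input. The class name, field names, and the one-quarter default cache share are assumptions made for the example, not values taken from the disclosure.

from dataclasses import dataclass

# Hypothetical default split applied without user designation: one quarter cache, the rest user space.
DEFAULT_CACHE_RATIO = 0.25

@dataclass
class SecondaryMemory:
    capacity_bytes: int
    cache_bytes: int = 0
    user_bytes: int = 0

def on_secondary_engaged(mem: SecondaryMemory, cache_ratio: float = DEFAULT_CACHE_RATIO) -> SecondaryMemory:
    """Passively partition an engaged secondary memory into cache and user memory space regions."""
    mem.cache_bytes = int(mem.capacity_bytes * cache_ratio)
    mem.user_bytes = mem.capacity_bytes - mem.cache_bytes
    return mem

# Example: a 64 GB flash drive engaged with the external interface.
drive = on_secondary_engaged(SecondaryMemory(capacity_bytes=64 * 2**30))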
The passive partitioning of step 188 can be conducted by formatting a portion of the secondary memory in a format that is dissimilar to the user memory space region, but can allow efficient temporary data caching in concert with other caches, such as a volatile cache and a solid-state cache of a hybrid primary memory. With the secondary memory passively partitioned, step 190 can then test the data access speed of the cache region by executing at least one test pattern, which may involve reading and writing data to the secondary memory. The results of the data access speed test can then be used to establish a cache hierarchy in the event other cache locations are being used in conjunction with the secondary cache. However, the secondary cache may, in some embodiments, be used exclusively as cache for data transfer to the user memory space region, which can be particularly useful in the event the secondary memory is a hard disk drive.
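The access-speed test of step 190 could be approximated as below, where a simple write-then-read pattern is timed against a file backing each cache region; the block size, block count, and file paths are assumptions made for illustration.

import os
import time

def test_access_speed(path: str, block_size: int = 1 << 20, blocks: int = 16) -> float:
    """Time a write-then-read test pattern and return an approximate throughput in MB/s."""
    payload = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return (2 * blocks * block_size) / (1 << 20) / elapsed

# Hypothetical mount points for the two cache regions being compared; step 192 could
# then rank the caches using results like these.
# use_secondary_first = test_access_speed("/mnt/secondary/test.bin") > test_access_speed("/mnt/primary/test.bin")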
Addition of the cache region of the secondary memory to previously existing cache can allow for cache optimization based on a variety of different variables, such as the data access speeds tested in step 190, capacity, and temperature, which can be used to assign priority to the various caches in step 192. Next, step 194 can transfer data and pending data requests between the various caches and eventually to the primary memory as dictated by a controller. The ability to seamlessly add a secondary memory that is automatically integrated into the caching structure of a data storage device while maintaining user data in the user available memory space can concurrently increase the capacity and data access performance of a data storage system.
The example operational plot of
Returning to the example embodiment where a dissimilar cache-to-user memory space ratio is employed, the greater user available data capacity of the hard disk drive would result in the different cache-to-user available memory capacities shown in
The cache and user available memory capacities can be further adapted over time to accommodate data caching schemes where the secondary data storage devices can be disengaged from the system 200 at any time.
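One way to picture that adaptation is to derive the cache share of a secondary device from how full the primary memory currently is, as in the sketch below; the thresholds and the fill-level heuristic are illustrative assumptions rather than a policy stated in the disclosure.

def choose_cache_ratio(primary_fill_fraction: float) -> float:
    """Illustrative policy: a fuller primary memory gets a larger secondary cache share."""
    if primary_fill_fraction > 0.9:
        return 0.50
    if primary_fill_fraction > 0.6:
        return 0.25
    return 0.10

def repartition(capacity_bytes: int, primary_fill_fraction: float) -> tuple:
    """Return a new (cache_bytes, user_bytes) split for a passively re-partitioned device."""
    cache_bytes = int(capacity_bytes * choose_cache_ratio(primary_fill_fraction))
    return cache_bytes, capacity_bytes - cache_bytes

# Example: the same 64 GB device gives up more user space once the primary memory is 95% full.
cache_bytes, user_bytes = repartition(64 * 2**30, primary_fill_fraction=0.95)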
While step 224 is not limited to a particular manner of assigning the cache hierarchy, assorted embodiments sort cache by capacity, such as smallest cache capacity being L1 cache while progressively larger capacities being L2, L3, etc. Other embodiments may sort cache by data access speed or memory type. It should be noted that the cache hierarchy resulting from step 224 can consider the removable capability of the cache regions of the secondary data storage devices, which can reduce the cache's position in the hierarchy due to the potential loss of the memory space at any time. It is contemplated that the cache hierarchy can be split between physical locations. For instance and in no way required or limiting, a tier of cache memory can be split between partial portions of different physical devices, such as volatile cache, non-volatile cache affixed on an SOC, and secondary non-volatile data storage devices.
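The hierarchy assignment of step 224 might be modeled as a single sorting pass like the one below, ordering fixed caches ahead of removable secondary cache regions and smaller capacities ahead of larger ones; the field names and example devices are hypothetical.

from dataclasses import dataclass

@dataclass
class CacheRegion:
    name: str
    capacity_bytes: int
    removable: bool  # secondary cache regions can disappear from the system at any time

def assign_tiers(regions: list) -> list:
    """Order cache regions into tiers: fixed before removable, then smallest capacity first."""
    return sorted(regions, key=lambda r: (r.removable, r.capacity_bytes))

hierarchy = assign_tiers([
    CacheRegion("volatile cache", 256 * 2**20, removable=False),
    CacheRegion("on-SOC solid-state cache", 8 * 2**30, removable=False),
    CacheRegion("secondary device cache region", 16 * 2**30, removable=True),
])
# hierarchy[0] is treated as L1, hierarchy[1] as L2, and so on.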
Step 226 proceeds to move cache data into the cache hierarchy assigned in step 224 by mirroring cache data first to the highest level of cache, which may be the largest or fastest cache memory space in the system according to some embodiments. Such mirroring can be followed by deletion of the cache data resident in the lower cache tier, or the redundant cache data may be kept in certain situations, such as when the higher cache tier is a removable secondary data storage device. The movement of cache data to the highest cache tier in step 226 can allow step 228 to then transfer the cache data to lower tiers of cache, or the primary memory, according to predetermined cache hierarchy standards, such as the highest cache tier being full, data being stale, and data being present for a certain amount of time.
In the event a piece of cache data reaches the lowest tier of the cache hierarchy, step 230 can transfer the data to the primary memory, if step 228 has not already conducted the transfer. There are various situations where step 228 can transfer data directly to the primary memory without passing through each level of the cache hierarchy, such as cache data being very active, having high priority, and regarding security operations of the overarching system. The storage of cache data and pending data requests in the cache hierarchy can provide efficient use of system resources as steps 226, 228, and 230 can be conducted during selected times, such as system standby modes and slow processing times, so as not to inhibit the performance or capabilities of the system.
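Taken together, steps 226 through 230 amount to mirroring new cache data into the highest tier and trickling it down as tiers fill or entries go stale, ending at the primary memory. The sketch below models each tier as a simple mapping; the fullness and staleness thresholds are assumptions.

import time

# Each tier maps a logical block address to a (payload, last_access_time) pair.
def cache_write(tiers: list, lba: int, payload: bytes) -> None:
    """Step 226: mirror new cache data into the highest tier."""
    tiers[0][lba] = (payload, time.time())

def demote(tiers: list, primary: dict, max_entries: int = 1024, max_age_s: float = 300.0) -> None:
    """Steps 228 and 230: push entries down the hierarchy when a tier is full or an
    entry is stale; entries leaving the lowest tier land in the primary memory."""
    now = time.time()
    for level, tier in enumerate(tiers):
        lower = tiers[level + 1] if level + 1 < len(tiers) else primary
        for lba, (payload, last_access) in list(tier.items()):
            if len(tier) > max_entries or now - last_access > max_age_s:
                lower[lba] = (payload, last_access)
                del tier[lba]

# Example: two cache tiers draining into a dictionary standing in for the primary memory.
tiers, primary = [{}, {}], {}
cache_write(tiers, lba=42, payload=b"pending write")
demote(tiers, primary)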
The movement of cache data throughout the hierarchy in combination with the addition and removal of secondary cache regions can result in large amounts of residual, redundant data that can degrade caching efficiency. As such, step 232 can flush one or more of the caches routinely, randomly, or selectively. In the example embodiment shown in
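The flush of step 232 could then be little more than committing selected tiers downstream and clearing them, as in this sketch; the selection rule is a placeholder, and the tier and primary structures reuse the shapes assumed in the previous sketch.

def flush_caches(tiers: list, primary: dict, select=lambda level: True) -> None:
    """Step 232: flush every selected tier into the primary memory and clear it."""
    for level, tier in enumerate(tiers):
        if select(level):
            primary.update(tier)
            tier.clear()

# Example: flush only the lowest tier, such as a secondary cache region about to be removed.
tiers, primary = [{1: (b"hot data", 0.0)}, {2: (b"stale data", 0.0)}], {}
flush_caches(tiers, primary, select=lambda level: level == len(tiers) - 1)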
Step 234 may be conducted individually, in some embodiments, without the flushing of a cache tier, such as when a secondary cache region is removed. Along those same lines, various steps of scheme 220 can be conducted individually in response to secondary data storage devices engaging and disengaging a data storage system. For instance, step 224 can assign a newly partitioned secondary cache region to an existing cache tier without steps 226 and 228 transferring cache data in response. Regardless of the number of times a step is performed individually or collectively with other steps of the scheme 220, the mapping and assigning of cache tiers along with the transferring of cache data through some or all of the cache hierarchy can allow large amounts of data and pending data requests to be serviced while appearing seamless to a user.
Returning to the concept of assigning cache into a hierarchical structure from step 224,
Cache data may further be transferred to a second secondary data storage device cache region 246 by a controller when predetermined data conditions are met, such as the priority and staleness of the cache data. The controller may next transfer the cache data and any pending data requests in the second secondary cache region 246 to the primary memory 248, which is a hard disk drive in assorted embodiments. It should be noted that cache data and pending data requests can be transferred to and from the primary memory 248 from any cache tier, without limitation, but can be successively passed through the assigned cache tiers in some embodiments.
Decision 266 may then evaluate and determine if multiple partitions are to be created within the secondary data storage device engaged in step 264. If multiple partitions are in order, step 268 selects the number and type of regions, such as one cache region and two user memory space regions, before step 270 passively partitions the secondary data storage device. In the event a single partition is sufficient, step 270 is triggered to configure the secondary data storage device with cache and user memory space regions in a predetermined ratio. The partitioning of the secondary data storage device can be done by deleting portions of existing data on the secondary data storage device as well as through formatting some or all of the secondary data storage device. Irrespective of the manner in which the secondary data storage device is partitioned, step 272 can subsequently format the cache region exclusively to allow efficient transfer of data between different caches.
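Decision 266 and steps 268 through 272 could be sketched as a partition planner followed by a cache-specific format pass; the region dictionaries, the even split of the user space, and the "cache-optimized" format label are assumptions made for the example.

def plan_partitions(capacity_bytes: int, multiple: bool, cache_ratio: float = 0.25) -> list:
    """Steps 268 and 270: size one cache region plus one or more user memory space regions."""
    cache_bytes = int(capacity_bytes * cache_ratio)
    user_bytes = capacity_bytes - cache_bytes
    regions = [{"type": "cache", "bytes": cache_bytes}]
    if multiple:
        # Hypothetical split of the user space into two regions.
        regions.append({"type": "user", "bytes": user_bytes // 2})
        regions.append({"type": "user", "bytes": user_bytes - user_bytes // 2})
    else:
        regions.append({"type": "user", "bytes": user_bytes})
    return regions

def format_cache_region(region: dict) -> dict:
    """Step 272: give the cache region a format of its own so cache transfers stay efficient."""
    region["format"] = "cache-optimized"  # placeholder label, not an actual on-media format
    return region

plan = plan_partitions(128 * 2**30, multiple=True)  # one cache region and two user regions
format_cache_region(plan[0])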
With the secondary data storage device partially or fully configured, step 274 can begin automatically populating the cache region with cache data and pending data requests. The automatic population of step 274 is conducted without user selection or manipulation and can be conducted in accordance with predetermined cache hierarchical structure and assignments. Concurrently with step 274 or separately, step 276 can selectively populate the user memory space region of the secondary data storage device at the discretion of the host. In contrast to the automatic population of the cache region in step 274, step 276 is conducted in response to host prompting, which allows data to be retained in the user memory space region before, during, and after steps 270 and 272 establish the cache region.
It is contemplated that routine 260 continually conducts steps 274 and 276 to populate the cache and user memory space regions over time before the secondary data storage device is disengaged from the external interface in step 278. Even with the disengaging of the secondary data storage device in step 278, steps 274 and 276 can be conducted on another secondary data storage device or upon the original secondary data storage device being reengaged with the interface. That is, once a secondary data storage device has been partitioned and formatted, there is no requirement that the device go through the entire routine 260 again, which can save time and processing power as the same secondary data storage device is cyclically engaged and disengaged with the external interface.
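That re-engagement shortcut might be modeled as below, where the system remembers which devices it has already prepared; the device identifier string and the in-memory set are hypothetical stand-ins for whatever bookkeeping the controller keeps.

_prepared_devices = set()

def on_engage(device_id: str) -> str:
    """Run the full partition-and-format routine only the first time a device is seen;
    a re-engaged device resumes cache and user population (steps 274 and 276) directly."""
    if device_id in _prepared_devices:
        return "resume population"
    _prepared_devices.add(device_id)
    return "partition, format, then populate"

print(on_engage("usb-ABC123"))  # first engagement: full routine 260
print(on_engage("usb-ABC123"))  # re-engagement: skip partitioning and formatting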
Through the various embodiments of the present disclosure, at least one secondary data storage device can be utilized to supplement an existing computing system. The ability to tune the secondary data storage device for both caching and user available memory allows the data capacity to be increased while performance in servicing pending data requests is optimized. Moreover, the ability to automatically respond to engagement of a secondary data storage device allows a system to passively optimize performance without inhibiting a user's computing experience, such as by losing user data resident on the secondary data storage device upon engagement with an interface.
It is to be understood that even though numerous characteristics and configurations of various embodiments of the present disclosure have been set forth in the foregoing description, together with details of the structure and function of various embodiments, this detailed description is illustrative only, and changes may be made in detail, especially in matters of structure and arrangements of parts within the principles of the technology to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, the particular elements may vary depending on the particular application without departing from the spirit and scope of the present disclosure.