A system is provided for facilitating crash recovery. The system receives an input/output (I/O) request for data associated with a logical block address. The system retrieves, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address, wherein the first mapping table is stored in a random access memory which comprises a block device, and wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device. The system accesses the physical location to execute the I/O request. Responsive to determining a crash associated with the driver, the system restarts the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses.

Patent: 11354233
Priority: Jul 27 2020
Filed: Jul 27 2020
Issued: Jun 07 2022
Expiry: Jul 28 2040
Extension: 1 day
Entity: Large
Status: currently ok
21. A computer-implemented method, comprising:
determining a first mapping table associated with a first storage drive,
wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and
wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device; and
responsive to determining a crash associated with the driver for the block device, restarting the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses.
1. A computer-implemented method, comprising:
receiving an input/output (I/O) request for data associated with a logical block address;
retrieving, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address,
wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and
wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device;
accessing the physical location to execute the I/O request;
determining, for each of the plurality of storage drives, a size of a mapping table associated with a respective storage drive; and
appending, based on a sequenced order of the storage drives, a plurality of mapping tables associated with the plurality of storage drives to obtain a mapping file.
10. A computer system, comprising:
a processor; and
a memory coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform a method, the method comprising:
receiving an input/output (I/O) request for data associated with a logical block address;
retrieving, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address,
wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device,
wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device;
accessing the physical location to execute the I/O request;
determining, for each of the plurality of storage drives, a size of a mapping table associated with a respective storage drive; and
appending, based on a sequenced order of the storage drives, a plurality of mapping tables associated with the plurality of storage drives to obtain a mapping file.
19. A non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method, the method comprising:
receiving an input/output (I/O) request for data associated with a logical block address;
retrieving, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address,
wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and
wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device;
accessing the physical location to execute the I/O request;
responsive to determining a crash associated with the driver for the block device, restarting the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses.
2. The method of claim 1,
wherein the size of the mapping table is based on a capacity of the associated respective storage drive;
wherein the mapping file includes the first mapping table; and
wherein the method further comprises storing the mapping file in the block device.
3. The method of claim 1, wherein the I/O request for data is associated with one or more logical block addresses which include the logical block address, and wherein the method further comprises retrieving the physical location corresponding to the one or more logical block addresses by:
identifying the first mapping table associated with the first storage drive;
identifying a first starting point of the first mapping table based on a summation of the sizes of mapping tables associated with storage drives in the sequenced order which precede the first storage drive;
determining a first offset based on a value of a first logical block address of the one or more logical block addresses; and
determining a first length associated with the one or more logical block addresses, wherein each logical block address in a respective mapping table corresponds to metadata of a same size, and wherein the first mapping table includes logical block addresses which are sequentially ordered based on values of the included logical block addresses, and
wherein accessing the physical location to execute the I/O request is based on accessing the first storage drive at the first starting point plus the first offset for a number of units equal to the first length.
4. The method of claim 3,
wherein the first length is equal to a number of the one or more logical block addresses scaled by a predetermined size for metadata stored in the first mapping table.
5. The method of claim 1,
wherein a content management module communicates with the driver and the block device to manage the mapping file which includes the appended mapping tables associated with the plurality of storage drives, and
wherein the content management module comprises a granularity modulator, an access pattern analyzer, a random engine, and a sequential engine.
6. The method of claim 5, further comprising:
determining, by the access pattern analyzer, an access pattern for the requested I/O data;
adjusting, by the granularity modulator, a size of a unit to access; and
determining whether the I/O request is associated with a random read/write operation or a sequential read/write operation.
7. The method of claim 6, further comprising:
responsive to determining a random read or write operation, accessing, by the random engine, the mapping file of the block device with a granularity of a first size, wherein the random engine includes a read cache; and
responsive to determining a sequential read or write operation, accessing, by the sequential engine, the mapping file of the block device with a granularity of a second size which is greater than the first size,
wherein the second size is determined based on a prediction of how much data to pre-fetch from the mapping file of the block device.
8. The method of claim 1,
wherein the driver communicates with the block device and the plurality of storage drives,
wherein an application communicates with the driver based on one or more of:
a communication between the application and the driver; and
a communication between the application and the driver via a hypervisor,
wherein the hypervisor communicates with the driver based on one or more of:
a communication between the hypervisor and the driver; and
a communication between the hypervisor and the driver via a distributed file system.
9. The method of claim 1, further comprising:
responsive to determining a crash associated with the driver for the block device, restarting the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses,
wherein a flash translation layer program running in the driver manages the first mapping table,
wherein determining the crash associated with the driver comprises determining a crash associated with the flash translation layer program, and
wherein restarting the driver comprises relaunching the flash translation layer program to recover access to the first mapping table.
11. The computer system of claim 10,
wherein the size of the mapping table is based on a capacity of the associated respective storage drive;
wherein the mapping file includes the first mapping table; and
wherein the method further comprises storing the mapping file in the block device.
12. The computer system of claim 10, wherein the I/O request for data is associated with one or more logical block addresses which include the logical block address, and wherein the method further comprises retrieving the physical location corresponding to the one or more logical block addresses by:
identifying the first mapping table associated with the first storage drive;
identifying a first starting point of the first mapping table based on a summation of the sizes of mapping tables associated with storage drives in the sequenced order which precede the first storage drive;
determining a first offset based on a value of a first logical block address of the one or more logical block addresses; and
determining a first length associated with the one or more logical block addresses, wherein each logical block address in a respective mapping table corresponds to metadata of a same size, and wherein the first mapping table includes logical block addresses which are sequentially ordered based on values of the included logical block addresses, and
wherein accessing the physical location to execute the I/O request is based on accessing the first storage drive at the first starting point plus the first offset for a number of units equal to the first length.
13. The computer system of claim 12,
wherein the first length is equal to a number of the one or more logical block addresses scaled by a predetermined size for metadata stored in the first mapping table.
14. The computer system of claim 10,
wherein a content management module communicates with the driver and the block device to manage the mapping file which includes the appended mapping tables associated with the plurality of storage drives, and
wherein the content management module comprises a granularity modulator, an access pattern analyzer, a random engine, and a sequential engine.
15. The computer system of claim 14, wherein the method further comprises:
determining, by the access pattern analyzer, an access pattern for the requested I/O data;
adjusting, by the granularity modulator, a size of a unit to access; and
determining whether the I/O request is associated with a random read/write operation or a sequential read/write operation.
16. The computer system of claim 15, wherein the method further comprises:
responsive to determining a random read or write operation, accessing, by the random engine, the mapping file of the block device with a granularity of a first size, wherein the random engine includes a read cache; and
responsive to determining a sequential read or write operation, accessing, by the sequential engine, the mapping file of the block device with a granularity of a second size which is greater than the first size,
wherein the second size is determined based on a prediction of how much data to pre-fetch from the mapping file of the block device.
17. The computer system of claim 10,
wherein the driver communicates with the block device and the plurality of storage drives,
wherein an application communicates with the driver based on one or more of:
a communication between the application and the driver; and
a communication between the application and the driver via a hypervisor,
wherein the hypervisor communicates with the driver based on one or more of:
a communication between the hypervisor and the driver; and
a communication between the hypervisor and the driver via a distributed file system.
18. The computer system of claim 10, wherein the method further comprises:
responsive to determining a crash associated with the driver for the block device, restarting the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses,
wherein a flash translation layer program running in the driver manages the first mapping table,
wherein determining the crash associated with the driver comprises determining a crash associated with the flash translation layer program, and
wherein restarting the driver comprises relaunching the flash translation layer program to recover access to the first mapping table.
20. The non-transitory computer-readable storage medium of claim 19, wherein the method further comprises:
determining the crash associated with the driver for the block device;
wherein a flash translation layer program running in the driver manages the first mapping table,
wherein determining the crash associated with the driver comprises determining a crash associated with the flash translation layer program, and
wherein restarting the driver comprises relaunching the flash translation layer program to recover access to the first mapping table.

This disclosure is generally related to the field of data storage. More specifically, this disclosure is related to a method and system for facilitating a fast crash recovery in a storage device.

Today, various storage systems are being used to store and access the ever-increasing amount of digital content. A storage system can include various storage devices which can provide persistent memory, e.g., a solid state drive (SSD) and a hard disk drive (HDD). An open-channel SSD is a type of SSD which can provide transparency and flexibility in managing Not-And (NAND) flash memory. In an open-channel SSD, the flash translation layer (FTL), along with an associated mapping table of logical address information to physical address information, resides in the host side (e.g., in the kernel mode or in the user space). This host-based FTL can allow for the sharing of internal SSD information with the software, and can further provide an optimization of FTL operations along with the execution of host applications. This sharing and optimization can result in an improvement in the performance, cost, reliability, and operation of the SSD and the overall storage system.

In a host-based FTL open-channel SSD, the FTL is a program which generally runs in the system memory. The FTL program is responsible for maintaining the mapping table. The FTL program may crash due to various reasons, e.g., memory issues, host crash, etc. In the event that the FTL program crashes, the system can recover the content of the mapping table by reading a large amount of data from the SSD in order to rebuild or reconstruct the mapping table. This recovery process may be time-consuming, and may result in difficulties in ensuring service recovery in a time sufficient to meet the requirements of a service level agreement (SLA).

Thus, while the host-based FTL can provide transparency and flexibility in managing the physical media of a storage drive, some challenges exist when handling an FTL program crash which results in the time-consuming process of rebuilding the mapping table.

One embodiment provides a system which facilitates crash recovery. The system receives an input/output (I/O) request for data associated with a logical block address. The system retrieves, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address, wherein the first mapping table is stored in a random access memory which comprises a block device, and wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device. The system accesses the physical location to execute the I/O request.

In some embodiments, the system determines, for each of a plurality of storage drives, a size of a mapping table associated with a respective storage drive, wherein the storage drives include the first storage drive, and wherein the size of the mapping table is based on a capacity of the associated respective storage drive. The system appends, based on a sequenced order of the storage drives, a plurality of mapping tables associated with the plurality of storage drives to obtain a mapping file, wherein the mapping file includes the first mapping table. The system stores the mapping file in the block device.

In some embodiments, the I/O request for data is associated with one or more logical block addresses which include the logical block address. The system retrieves the physical location corresponding to the one or more logical block addresses by the following operations. The system identifies the first mapping table associated with the first storage drive. The system identifies a first starting point of the first mapping table based on a summation of the sizes of mapping tables associated with storage drives in the sequenced order which precede the first storage drive. The system determines a first offset based on a value of a first logical block address of the one or more logical block addresses. The system determines a first length associated with the one or more logical block addresses. Each logical block address in a respective mapping table corresponds to metadata of a same size. The first mapping table includes logical block addresses which are sequentially ordered based on values of the included logical block addresses. Accessing the physical location to execute the I/O request is based on accessing the first storage drive at the first starting point plus the first offset for a number of units equal to the first length.

In some embodiments, the first length is equal to a number of the one or more logical block addresses scaled by a predetermined size for metadata stored in the first mapping table.

In some embodiments, a content management module communicates with the driver and the block device to manage the appended mapping tables of the mapping file. The content management module comprises a granularity modulator, an access pattern analyzer, a random engine, and a sequential engine.

In some embodiments, the system determines, by the access pattern analyzer, an access pattern for the requested I/O data. The system adjusts, by the granularity modulator, a size of a unit to access. The system determines whether the I/O request is associated with a random read/write operation or a sequential read/write operation.

In some embodiments, responsive to determining a random read or write operation, the system accesses, by the random engine, the mapping file of the block device with a granularity of a first size, wherein the random engine includes a read cache. Responsive to determining a sequential read or write operation, the system accesses, by the sequential engine, the mapping file of the block device with a granularity of a second size which is greater than the first size. The second size is determined based on a prediction of how much data to pre-fetch from the mapping file of the block device.

In some embodiments, the driver communicates with the block device and the storage drives. An application communicates with the driver based on one or more of: a communication between the application and the driver; and a communication between the application and the driver via a hypervisor. The hypervisor communicates with the driver based on one or more of: a communication between the hypervisor and the driver; and a communication between the hypervisor and the driver via a distributed file system.

In some embodiments, responsive to determining a crash associated with the driver for the block device, the system restarts the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses. A flash translation layer program running in the driver manages the first mapping table. Determining the crash associated with the driver comprises determining a crash associated with the flash translation layer program. Restarting the driver comprises relaunching the flash translation layer program to recover access to the first mapping table.

In another embodiment, the system determines a first mapping table associated with a first storage drive, wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device. Responsive to determining a crash associated with the driver for the block device, the system restarts the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses.

FIG. 1 illustrates an architecture of an exemplary environment for data storage, in accordance with the prior art.

FIG. 2A illustrates an exemplary environment, where a flash translation layer runs in the system memory, in accordance with the prior art.

FIG. 2B illustrates an exemplary environment, where a flash translation layer runs in a RAM block device, in accordance with an embodiment of the present application.

FIG. 3 illustrates an exemplary access hierarchy, in accordance with an embodiment of the present application.

FIG. 4A illustrates a diagram of a recovery procedure subsequent to a host-FTL crash, including a mapping table reconstruction, in accordance with the prior art.

FIG. 4B illustrates a diagram of a recovery procedure subsequent to a host-FTL crash, in accordance with an embodiment of the present application.

FIG. 5 illustrates an environment with a content management module which provides a dynamic access granularity, in accordance with an embodiment of the present application.

FIG. 6 presents a flowchart illustrating a method for facilitating recovery subsequent to a crash, in accordance with an embodiment of the present application.

FIG. 7A presents a flowchart illustrating a method for facilitating management and access of a mapping file, in accordance with an embodiment of the present application.

FIG. 7B presents a flowchart illustrating a method for facilitating management and access of a mapping file, in accordance with an embodiment of the present application.

FIG. 8 illustrates an exemplary computer system that facilitates recovery, in accordance with an embodiment of the present application.

FIG. 9 illustrates an exemplary apparatus that facilitates recovery, in accordance with an embodiment of the present application.

In the figures, like reference numerals refer to the same figure elements.

The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the embodiments described herein are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.

Overview

The embodiments described herein provide a system which places the FTL mapping file (as a plurality of appended mapping tables corresponding to a plurality of storage drives) in a random access memory (RAM) disk which comprises a block device, where a driver for the block device is stored in system memory separately from the FTL mapping file or mapping tables, thus avoiding the need to reconstruct the mapping tables by reading data from the storage drives and extracting mapping relations between logical addresses and physical addresses.

As described above, in a host-based FTL (such as an open-channel SSD), the associated mapping table of logical address information to physical address information resides in the host side (e.g., in the kernel mode or in the user space). This host-based FTL can allow for the sharing of internal SSD information with the software, and can further provide an optimization of FTL operations along with the execution of host applications. This sharing and optimization can result in an improvement in the performance, cost, reliability, and operation of the SSD and the overall storage system.

In a host-based FTL open-channel SSD, the FTL is a program which generally runs in the system memory. The FTL program is responsible for maintaining the mapping table. The FTL program may crash due to various reasons, e.g., memory issues, host crash, etc. In the event that the FTL program crashes, the system can recover the content of mapping table by reading a large amount of data from the SSD in order to rebuild or reconstruct the mapping table. This recovery process may be time-consuming, and may result in difficulties in ensuring service recovery in a time sufficient to meet the requirements of a service level agreement (SLA).

Thus, while the host-based FTL can provide transparency and flexibility in managing the physical media of a storage drive, some challenges exist when handling an FTL program crash which results in the time-consuming process of rebuilding the mapping table.

One solution is to provide a device-based FTL, in which the FTL operates as part of an individual or embedded microprocessor and DRAM of a storage device. This device-based FTL can be separate from the host CPU and DRAM, which allows the FTL to remain independent of the applications. If a host application crashes or runs out of order, the FTL is not affected as it is isolated from the host CPU and DRAM. This isolation can thus provide a decoupling of the relationship between the applications and the FTL.

However, as described below in relation to FIG. 1, this solution is limited by several constraints. First, because the FTL operates on the device side, the host lacks visibility into the FTL, which can make management and operation of the physical storage media (e.g., NAND) more difficult. Second, microprocessors are developing increasingly complicated architectures, and in-drive DRAM is growing in capacity; these developments can increase both power consumption and cost. Third, developing and debugging firmware (i.e., software running on the device's microprocessor) can be more complicated than developing and debugging host-side software, and debugging may be particularly difficult because only limited information is dumped when a failure occurs. Fourth, if the device-based FTL program itself crashes, the system may need to perform a restart operation, which can involve the time-consuming process of rebuilding the mapping table (e.g., by reading data from the storage media and extracting the logical-to-physical mapping relationships).

The embodiments described herein address the challenges associated with both the conventional device-based FTL solution and recovering from a crash in a host-based FTL by placing the FTL mapping information in a RAM disk which is a block device. The block device can store the FTL mapping information, and can be managed or operated by a block device driver (such as an open-channel SSD driver, as described below in relation to FIG. 3). The block device driver can include an FTL driver program. The system can store the FTL mapping information (e.g., mapping tables and files) in the block device separately from the FTL driver program, instead of storing the FTL mapping information together with the FTL driver program in system memory. When the system experiences a crash associated with the block device driver, the system need only restart the driver in order to recover access to the FTL file, as described below in relation to FIG. 4B. This can eliminate or avoid the need to reconstruct the mapping tables by reading data from the storage drives and extracting mapping relations between logical addresses and physical addresses. That is, the recovery process can involve restarting the driver to recover access to the FTL mapping file/tables absent of reconstruction of the FTL mapping file/tables. The block device can store the FTL mapping information, e.g., as a mapping file which includes a plurality of mapping tables corresponding to a plurality of storage drives.
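As an illustration of this recovery path, the following sketch shows how a restarted driver might regain access to the mapping file through ordinary system calls on the RAM block device rather than by rebuilding it from the storage drives. It is only a minimal sketch: the device path /dev/ram0, the function name recover_mapping_access, and the mmap-based access are assumptions for illustration, not details specified by this disclosure.

```c
/*
 * Illustrative sketch (not the patented implementation): after the block
 * device driver restarts, the FTL program regains access to the mapping
 * file with ordinary system calls instead of rebuilding it from the drives.
 * The device path and sizes are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RAM_BLOCK_DEVICE "/dev/ram0"   /* hypothetical RAM disk node */

static void *recover_mapping_access(size_t mapping_file_bytes)
{
    /* Reopen the RAM block device whose contents survived the driver crash. */
    int fd = open(RAM_BLOCK_DEVICE, O_RDWR);
    if (fd < 0) {
        perror("open");
        return NULL;
    }

    /* Map the stored FTL mapping file directly; no data is read from the
     * storage drives and no logical-to-physical relations are re-extracted. */
    void *mapping_file = mmap(NULL, mapping_file_bytes,
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return mapping_file == MAP_FAILED ? NULL : mapping_file;
}
```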

The mapping tables stored in the FTL mapping file in the block device can be appended based on an ordered sequence, as described below in relation to FIG. 5. An I/O request can be a request for data associated with one or more logical block addresses. The data can be accessed by retrieving, from a given mapping table corresponding to the physical storage drive (on which the requested data is stored or is to be written), a physical location associated with the one or more logical block addresses. A first LBA can correspond to a first-occurring LBA of the one or more LBAs based on an ascending order. The physical location can be determined by: identifying the starting point of the given mapping table (based on sizes of the preceding mapping tables in the ordered sequence of mapping tables in the FTL file); using the value of the first LBA as an index into the given mapping table to determine an offset from the starting point; and determining a length associated with the one or more logical block addresses (based on a fixed size for each LBA entry in the given mapping table). Exemplary communications for accessing information in the mapping tables of the FTL file stored in a block device are described below in relation to FIGS. 3 and 5.

In the embodiments described herein, the system can also include a content management layer which operates between the open-channel SSD driver and the block device. The content management layer can manage the mapping table files, and can also adjust the access granularity (e.g., the I/O size) based on a pattern of access (e.g., an access frequency). The content management layer can perform these functions using a random engine and a sequential engine. An exemplary content management layer is described below in relation to FIG. 5.

Thus, the embodiments described herein provide an improvement to the time-consuming process of rebuilding the mapping table in the event of a crash by placing the FTL file (as an appended plurality of FTL mapping tables corresponding to storage drives in the system) in a RAM disk which is a block device, where an FTL driver program is stored in system memory separately from the FTL file. The system further uses a content management layer to manage and operate the block device based on access granularity. The FTL mapping file is shared between the storage drives and can be accessed based on the ordered sequence of the appended mapping tables as well as an offset and a length associated with one or more logical block addresses of an I/O request.

A “storage system” refers to the overall set of hardware and software components used to facilitate storage for a system. A storage system can include multiple clusters of storage servers and other servers. A “storage server” refers to a computing device which can include multiple storage devices or storage drives. A “storage device” or a “storage drive” refers to a device or a drive with a non-volatile memory which can provide persistent storage of data, e.g., a solid state drive (SSD), a hard disk drive (HDD), or a flash-based storage device.

A “computing device” refers to any server, device, node, entity, drive, or any other entity which can provide any computing capabilities.

A “mapping table” refers to a data structure which maps a logical address to a physical address or a physical location, as described below in relation to FIGS. 3 and 5. An “FTL mapping file” refers to a file or other data structure which includes a plurality of appended mapping tables.

A “block device” or “RAM disk” refers to a block device formed from random access memory which is part of system memory. In this disclosure, the FTL mapping file is stored in the block device. Subsequent to a crash associated with a driver which controls, manages, interfaces, or communicates with the block device, the FTL mapping file can be quickly and efficiently accessed again via a system call.

The term “crash recovery” refers to a process by which a driver program returns to a consistent and usable state. In this disclosure, a crash recovery process can include a system call to the FTL mapping file or FTL program running in the block device.

An “open-channel SSD” refers to a storage device which is part of a storage system in which the FTL program does not reside in the storage device (as in a device-based FTL), and instead resides in the host (as in a host-based FTL). In this disclosure, the host-based FTL is managed by a content management layer and the driver of the block device which stores the FTL mapping file.

Architecture of an Exemplary System in the Prior Art

FIG. 1 illustrates an architecture of an exemplary environment 100 for data storage, in accordance with the prior art. Environment 100 can include a host which includes central processing units (CPUs) 110 and 130. Each CPU can include multiple cores and can be coupled to multiple dual in-line memory modules (DIMMs). For example, CPU 110 can include cores 112, 114, and 116, and can be coupled to DIMMs 120, 122, and 124. Similarly, CPU 130 can include cores 132, 134, and 136, and can be coupled to DIMMs 140, 142, and 144. The host can communicate with a storage device (such as an SSD 150) via a host interface 170 and communications 172 and 174. SSD 150 can also include: a microprocessor 152; DRAMs 162 and 164; and NANDs 154, 156, 158, and 160.

SSD 150 can include an FTL program running on (embedded) microprocessor 152 and stored in DRAMs 162 and 164. In this device-based FTL of environment 100, any issues with host applications generally do not affect the running of the device-based FTL in SSD 150, because the device-based FTL is isolated from the host. While this solution can shield the device-based FTL from suffering due to issues associated with host applications, several limitations remain.

First, because the FTL operates on the device side, the host lacks visibility into the FTL, which can make management and operation of the physical storage media (e.g., NAND) more difficult. Second, microprocessors are developing increasingly complicated architectures, and in-drive DRAM is growing in capacity; these developments can increase both power consumption and cost. Third, developing and debugging firmware (i.e., software running on the device's microprocessor) can be more complicated than developing and debugging host-side software, and debugging may be particularly difficult because only limited information is dumped when a failure occurs. Fourth, if the device-based FTL program itself crashes, the system may need to restart and perform the time-consuming process of rebuilding the mapping table (e.g., by reading data from the storage media and extracting the logical-to-physical mapping relationships).

Thus, while the current solution of the device-based FTL can isolate the FTL from the host, several challenges still remain.

FTL in System Memory in the Prior Art vs FTL in a RAM Block Device

FIG. 2A illustrates an exemplary environment 200, where a flash translation layer runs in the system memory, in accordance with the prior art. Environment 200 can include a CPU 210 with a cache 212. CPU 210 can communicate with or be coupled to a main memory 220 (e.g., a DRAM DIMM), which can include: an operating system 222; applications 224; and an FTL/mapping table 226. In environment 200, FTL 226 can be a module or program which allocates memory and updates the mapping table in the allocated region. However, when the FTL/mapping table 226 program crashes (e.g., a system crash or an application-related crash), the mapping table disappears or is no longer available. In order to recover from such a crash, the system must rebuild the mapping table in the time-consuming conventional manner, as described above in relation to FIG. 1.

FIG. 2B illustrates an exemplary environment 230, where a flash translation layer runs in a RAM block device 256, in accordance with an embodiment of the present application. Environment 230 can include a CPU 240 with a cache 242. CPU 240 can communicate with or be coupled to a main memory 250 (e.g., a DRAM DIMM), which can include: an operating system 252; applications 254; and a RAM disk 256, which can store an FTL/mapping table 258. In environment 230, RAM disk 256 is a block device formed by the DRAM of the system memory (e.g., main memory 250). If the system experiences a crash associated with an application or with a driver which controls RAM disk 256, the system can quickly regain access to FTL/mapping table 258 by relaunching the FTL program, e.g., by making a system call to FTL program 258. Thus, the system can avoid the time-consuming process of reconstructing the mapping table(s) of FTL mapping file 258 stored in RAM disk 256.

Exemplary Access Hierarchy

FIG. 3 illustrates an exemplary access hierarchy 300, in accordance with an embodiment of the present application. Hierarchy 300 can include: applications 302; a hypervisor 304; a distributed file system 306; an open-channel SSD driver 308; a RAM disk 310; and open-channel SSDs 312, 314, and 316. During operation, applications 302 can communicate with or directly operate open-channel SSD driver 308 (via a communication 346). Applications 302 can also communicate with open-channel SSD driver 308 via hypervisor 304 (via a communication 324) for virtualization. Hypervisor 304 can communicate with or directly operate open-channel SSD driver 308 (via a communication 344) or can communicate with open-channel SSD driver 308 via distributed file system 306 (via communications 324 and 326).

Open-channel SSD driver 308 can communicate with RAM disk 310, which can be a block device formed from system memory, via a communication 330. Open-channel SSD driver 308 can also communicate with a plurality of open-channel SSDs, e.g., open-channel SSDs 312, 314, and 316 (via communications 332, 334, and 336, respectively).

RAM disk 310 can store FTL mapping file 340, which can include appended mapping tables 342, 344, and 346 associated with open-channel SSDs 312-316, respectively, stored in the same ordered sequence as the associated storage drives. The ordered sequence can be, e.g., as depicted from left to right: SSD 312, SSD 314, and SSD 316. Each individual mapping table can be stored as a single unit or chunk as part of FTL mapping file 340. Note that FTL mapping file 340 can be stored in RAM disk 310 separately from an FTL driver program of open-channel SSD driver 308, which can be stored in system memory.

The size of each mapping table can be based on the size or capacity of the corresponding storage drive or SSD. The size or capacity of each SSD may be different, and the size of the associated mapping tables may also be different. For example, given a ratio of 1000:1, if a capacity of SSD 312 is 4 Terabytes (TB), a size 352 of mapping table 342 (associated with SSD 312) can be 4 Gigabytes (GB). Similarly, if a capacity of SSD 314 is 2 TB, a size 354 of mapping table 344 (associated with SSD 314) can be 2 GB. In addition, if a capacity of SSD 316 is 8 TB, a size 356 of mapping table 346 (associated with SSD 316) can be 8 GB.

Each mapping table can be organized based on an ascending order of LBA values, where each LBA can correspond to metadata of a predetermined size for the mapping table. For example, the metadata may indicate information about the physical location (including, e.g., a physical block address (PBA)) where data corresponding to a given LBA value is stored, and the metadata itself as stored in the mapping table may be of a fixed size, e.g., 10 bytes. Using this fixed size for the metadata in a given mapping table with LBA values in ascending order (e.g., {LBA_1, LBA_2, LBA_3, . . . , LBA_n}): LBA_1 may correspond to bytes 1 through 10; LBA_2 may correspond to bytes 11 through 20; LBA_3 may correspond to bytes 21 through 30; and LBA_n may correspond to bytes ((n*10)−9) through (n*10). This allows the system to utilize the known data format and size of data stored on the block device (i.e., in each mapping table) to build the addressing. That is, the system can build the mapping tables and the address information based on the data structure of the mapping tables, without requiring a complex content management layer such as a file system. An exemplary content management layer which communicates with the driver and the block device is described below in relation to FIG. 5.

Thus, based on the ascending order of the LBA values in a given mapping table and based on the known or predetermined size of metadata stored in the given mapping table, the system can determine an offset and a length. The system can determine the offset based on a value of an LBA associated with an incoming I/O request (or with the first LBA of one or more LBAs associated with an incoming I/O request). The system can determine the length based on a number of the one or more LBAs associated with the incoming I/O request and the predetermined or fixed size of metadata stored in the given mapping table (e.g., length=number of LBAs*fixed size of metadata for the given mapping table).
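For concreteness, this offset and length arithmetic can be expressed as two small helpers. This is a sketch only; the function names and the fixed per-entry size parameter are assumptions carried over from the 10-byte example above.

```c
#include <stddef.h>
#include <stdint.h>

/* Byte offset of the first requested LBA within its mapping table,
 * given a fixed per-entry metadata size (e.g., 10 bytes). */
static inline size_t mapping_offset(uint64_t first_lba, size_t entry_size)
{
    return (size_t)first_lba * entry_size;
}

/* Length of the metadata run covering `num_lbas` consecutive LBAs:
 * length = number of LBAs * fixed metadata size for the table. */
static inline size_t mapping_length(size_t num_lbas, size_t entry_size)
{
    return num_lbas * entry_size;
}
```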

Furthermore, the system can determine a starting point for each mapping table based on the cumulative sizes of the preceding mapping tables, where the preceding mapping tables are determined based on the ordered sequence. That is, the system can identify a starting point for a given mapping table associated with a given storage drive based on a summation of the sizes of mapping tables associated with storage drives which precede the given mapping table in the sequence.

A starting point 362 of mapping table 342 can have a value of zero, as mapping table 342 is the first table in the appended plurality of corresponding mapping tables in FTL mapping file 340. A starting point 364 of mapping table 344 can have a value equal to the sum of the sizes of the preceding mapping tables, i.e., a value of length or size 352 of preceding mapping table 342. A starting point 366 of mapping table 346 can have a value equal to the sum of the sizes of the preceding mapping tables, i.e., a value equal to at least size 352 of preceding mapping table 342 plus size 354 of preceding mapping table 344 (plus the sizes of any other preceding mapping tables subsequent to mapping table 344 and prior to mapping table 346, not shown).

Furthermore, when processing an incoming I/O request, the system can determine an offset and a length associated with the I/O request, and access a physical location or physical block address in the given storage drive by starting at the identified starting point plus the offset for a number of units (e.g., bytes) equal to the length. For example, assume that LBA_e corresponds to or is mapped to metadata 376, LBA_f corresponds to or is mapped to metadata 378, and LBA_g corresponds to or is mapped to metadata 380. While processing an I/O request for data associated with logical block addresses LBA_e, LBA_f, and LBA_g (“the three incoming LBAs”), open-channel SSD driver 308 can receive the I/O request and determine that SSD 314 is the storage device to be accessed. Driver 308 can access RAM disk 310 to retrieve from mapping table 344 (associated with SSD 314) the physical location associated with the incoming LBAs. The system can identify starting point 364 for mapping table 344 (as described above), and can determine an offset 392 (determined based on the value of LBA_e, the first of the three incoming LBAs, as corresponding to metadata 376). The system can start reading data from starting point 364 plus offset 392, for a length 394 (determined based on the number (three) of LBAs of the incoming LBAs multiplied by the fixed size of metadata for mapping table 344).
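The walkthrough above can be condensed into the following sketch, which sums the sizes of the preceding mapping tables to find the target table's starting point, adds the LBA-derived offset, and reads the metadata run from the RAM block device with a single pread call. The struct layout, the fixed 10-byte entry size, and the pread-based access are illustrative assumptions rather than the exact behavior of driver 308.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define META_ENTRY_SIZE 10               /* assumed fixed metadata size per LBA */

struct drive_table {
    uint64_t table_bytes;                /* size of this drive's mapping table  */
};

/* Starting point of drive `idx`'s table: sum of the sizes of all tables
 * that precede it in the sequenced order (e.g., SSD 312, 314, 316). */
static uint64_t table_start(const struct drive_table *tables, int idx)
{
    uint64_t start = 0;
    for (int i = 0; i < idx; i++)
        start += tables[i].table_bytes;
    return start;
}

/* Read the metadata for `num_lbas` consecutive LBAs beginning at `first_lba`
 * from the mapping file stored on the RAM block device. */
static ssize_t read_mapping_run(int ramdisk_fd,
                                const struct drive_table *tables, int drive_idx,
                                uint64_t first_lba, uint32_t num_lbas, void *buf)
{
    uint64_t start  = table_start(tables, drive_idx);          /* e.g., 364 */
    uint64_t offset = first_lba * META_ENTRY_SIZE;              /* e.g., 392 */
    size_t   length = (size_t)num_lbas * META_ENTRY_SIZE;       /* e.g., 394 */

    /* For the LBA_e..LBA_g example above: drive_idx selects mapping table 344,
     * first_lba is the value of LBA_e, and num_lbas is 3. */
    return pread(ramdisk_fd, buf, length, (off_t)(start + offset));
}
```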

Thus, hierarchy 300 depicts both the hierarchy of communications in the described embodiments, as well as the manner of accessing a given mapping table stored in the FTL mapping file as part of a plurality of ordered and appended mapping tables associated with an ordered sequence of the storage drives.

Mapping Table Reconstruction in the Prior Art vs. Fast Recovery From Crash Using Block Device

FIG. 4A illustrates diagram 400 of a recovery procedure subsequent to a host-FTL crash, including a mapping table reconstruction, in accordance with the prior art. In diagram 400, when the system experiences a crash (e.g., a host-side FTL crash) associated with an open-channel SSD driver (old) 402, the system must restart the FTL program (as indicated by “CRASH→RESTART”). An open-channel SSD driver (new) 412 must re-allocate memory and load data from an open-channel SSD 414 (via a communication 420) in order to rebuild a mapping table (memory allocation) 404 (as indicated by “CRASH→REBUILD”). As described above, rebuilding or reconstructing the mapping table in this manner can require a time-consuming process which may not sufficiently meet the terms of an SLA.

FIG. 4B illustrates a diagram 440 of a recovery procedure subsequent to a host-FTL crash, in accordance with an embodiment of the present application. In diagram 440, when the system experiences a crash (e.g., a host-side FTL crash) associated with an open-channel SSD driver (old) 442, the system need only relaunch or restart the FTL program (as indicated by “CRASH→RESTART”). Instead of reallocating memory and loading data from an open-channel SSD 454 in order to rebuild or reconstruct the mapping table (as in prior art diagram 400 of FIG. 4A), an open-channel SSD driver (new) 452 need only make a system call 462 to access a mapping table 444 (as stored in RAM block device 444). As a result, the system essentially loses only communication 460 associated with open-channel SSD driver (old) 442, and does not need to communicate with SSD 454 at all in order to rebuild or reconstruct the mapping table (as indicated by the absence of a label for “CRASH→REBUILD” in FIG. 4B). Note that the FTL mapping table stored in RAM block device 444 can be stored separately from, rather than together with, an FTL driver program of the SSD driver or block device driver (442 or 452), which can be stored in system memory. Thus, the system can eliminate the need to rebuild or reconstruct the mapping table, as the mapping table remains unaffected, stored, and quickly accessible via system call 462, based on its placement in RAM block device 444, which can facilitate a fast crash recovery.

In this manner, the system of FIG. 4B can avoid, eliminate, or be absent of the reconstruction of the mapping table (as indicated by an improvement 430 over the prior art environment of diagram 400), thus eliminating the need for a time-consuming process which may not sufficiently meet the terms of an SLA.

The described embodiments provide a solution and improvement for the scenario in which the host-side FTL crashes (as in an open-channel SSD). In the infrequent event that the entire server crashes (such as during a power cycle), the server will require time to properly restart. Because the FTL mapping file is stored in the block device, which is a RAM disk running on volatile system memory, the system can rebuild the mapping tables as needed during the time required for the whole server to restart. That is, the embodiments described herein are directed to the situation in which the FTL program, the FTL mapping file of the block device, or the associated block device driver experiences a crash, and to the improvements thereon.

Content Management Module For Dynamic Granularity Access

FIG. 5 illustrates an environment 500 with a content management module 512 which provides a dynamic access granularity, in accordance with an embodiment of the present application. In environment 500, an open-channel driver 510 communicates with a block device 514 via content management module 512. Content management module 512 can include: a granularity modulator 520, which adjusts a size of a unit of data to access in the FTL mapping file stored in block device 514; an access pattern analyzer 522, which determines an access pattern for given or requested data or its corresponding metadata; a random engine 524; and a sequential engine 526. Block device 514 can store an FTL mapping file 540, which can include metadata of mapping tables associated with one or more storage drives. For example, FTL mapping file 540 can include metadata corresponding to LBAs which are sequentially ordered based on an ascending order of LBA values, such as metadata 542, 544, 546, 548, 550, 552, and 554.

The system can determine whether an I/O request is associated with a random read/write operation or a sequential read/write operation. The system can use random engine 524 responsive to determining a random read or write operation. Random engine 524 can access FTL mapping file 540 of block device 514 based on a granularity of a first size (e.g., a small size). Random engine 524 can also include a read cache (not shown) to increase the hit rate of data and to reduce the number of queries to block device 514. The system can use sequential engine 526 responsive to determining a sequential read or write operation, e.g., by predicting a relatively larger I/O size for which to pre-fetch mapping information (e.g., metadata or physical location information) corresponding to one or more incoming LBAs. Sequential engine 526 can access FTL mapping file 540 of block device 514 based on a granularity of a second size (e.g., a large size, or a size greater than the first size).
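The following sketch illustrates one way the granularity decision could look in code. The specific sizes, the prefetch heuristic, and the struct fields are assumptions for illustration rather than the behavior defined for random engine 524 and sequential engine 526.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RANDOM_GRANULARITY   16u     /* small unit for random lookups (assumed) */
#define SEQ_MIN_GRANULARITY  4096u   /* larger unit for sequential streams      */

struct io_request {
    uint64_t first_lba;
    uint32_t num_lbas;
    bool     is_sequential;          /* verdict from the access pattern analyzer */
};

/* Granularity modulator: decide how many bytes of the mapping file to fetch. */
static size_t choose_granularity(const struct io_request *req,
                                 size_t predicted_prefetch_bytes)
{
    if (!req->is_sequential) {
        /* Random engine: small accesses; its read cache (not shown) absorbs
         * repeated lookups and reduces queries to the block device. */
        return RANDOM_GRANULARITY;
    }
    /* Sequential engine: pre-fetch a larger span of mapping metadata,
     * sized by a prediction of how much the stream will consume. */
    return predicted_prefetch_bytes > SEQ_MIN_GRANULARITY
               ? predicted_prefetch_bytes
               : SEQ_MIN_GRANULARITY;
}
```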

As described above, when processing an I/O request and accessing FTL mapping file 540 stored in block device 514, the system can determine a starting point of a given mapping table in FTL mapping file 540 (described above; not shown in FIG. 5). The system can also determine an {offset, length} 560 associated with the incoming I/O data. The system can retrieve data starting from a location 562 (which can include the starting point plus offset 560) for a number of units equal to length 560 (indicated as size or length 564 in FTL mapping file 540). In some embodiments, offset 560 can incorporate the starting point; that is, the system determines a single offset by first identifying the starting point of the given mapping table and then moving to the correct location within the given mapping table based on the offset determined by the specific LBA value (or by the first of a plurality of LBA values).

Method For Facilitating Recovery of Data

FIG. 6 presents a flowchart 600 illustrating a method for facilitating recovery subsequent to a crash, in accordance with an embodiment of the present application. During operation, the system receives an input/output (I/O) request for data associated with a logical block address (operation 602). The system retrieves, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address, wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device (operation 604). The system accesses the physical location to execute the I/O request (operation 606). If the system does not determine a crash associated with the driver for the block device (decision 608), the operation returns.

If the system does determine a crash associated with the driver for the block device (decision 608), the system restarts the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses (operation 610). The operation returns.

Method For Facilitating Management of Mapping Files and Dynamic Access Granularity

FIG. 7A presents a flowchart 700 illustrating a method for facilitating management and access of a mapping file, in accordance with an embodiment of the present application. During operation, the system determines, for each of a plurality of storage drives, a size of a mapping table associated with a respective storage drive, wherein the storage drives include at least a first storage drive, and wherein the size of the mapping table is based on a capacity of the associated respective storage drive (operation 702). The system appends, based on a sequenced order of the storage drives, a plurality of mapping tables associated with the plurality of storage drives to obtain a mapping file, wherein the mapping file includes at least a first mapping table (operation 704). The system stores the mapping file in a random access memory (RAM) which comprises a block device, wherein a driver for the block device is stored in a system memory separately from the mapping file stored in the block device (operation 706). The system receives an I/O request for data associated with one or more logical block addresses (operation 708). The system retrieves, from the mapping file, a physical location corresponding to the one or more logical block addresses (operation 710). The operation continues at Label A of FIG. 7B.
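A minimal sketch of operations 702-706 follows, assuming the 1000:1 capacity-to-table-size ratio from the earlier example and a straightforward pwrite of each table into the RAM block device in the sequenced drive order; the struct and function names are hypothetical.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define CAPACITY_TO_TABLE_RATIO 1000ull   /* e.g., a 4 TB drive -> a 4 GB table */

struct drive {
    uint64_t capacity_bytes;     /* capacity of the storage drive              */
    uint64_t table_bytes;        /* derived size of its mapping table (op 702) */
    const void *table;           /* in-memory mapping table to be appended     */
};

/* Append the drives' mapping tables, in sequenced order, into one mapping
 * file stored on the RAM block device (operations 704 and 706). Returns the
 * total mapping-file size in bytes, or 0 on error. */
static uint64_t build_mapping_file(int ramdisk_fd, struct drive *drives, int n)
{
    uint64_t cursor = 0;
    for (int i = 0; i < n; i++) {
        drives[i].table_bytes = drives[i].capacity_bytes / CAPACITY_TO_TABLE_RATIO;
        if (pwrite(ramdisk_fd, drives[i].table,
                   drives[i].table_bytes, (off_t)cursor) < 0)
            return 0;
        cursor += drives[i].table_bytes;  /* next table's starting point */
    }
    return cursor;
}
```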

FIG. 7B presents a flowchart 720 illustrating a method for facilitating management and access of a mapping file, in accordance with an embodiment of the present application. The system identifies the first mapping table associated with the first storage drive (operation 722). The system identifies a first starting point of the first mapping table based on a summation of the sizes of mapping tables associated with storage drives in the sequenced order which precede the first storage drive (operation 724). The system determines a first offset based on a value of a first logical block address of the one or more logical block addresses (operation 726). This “first” LBA is the first LBA value which occurs in an ordered sequence of the one or more LBAs. The system determines a first length associated with the one or more logical block addresses, wherein each logical block address in a respective mapping table corresponds to metadata of a same size, and wherein the first mapping table includes logical block addresses which are sequentially ordered based on values of the included logical block addresses (operation 728).

The system determines a physical location based on the one or more logical block addresses, the first starting point, the first offset, and the first length (operation 730). The system accesses the physical location to execute the I/O request based on accessing the first storage drive at the first starting point plus the first offset for a number of units equal to the first length (operation 732). The first length can be equal to a number of the one or more logical block addresses scaled by a predetermined size of metadata stored in the first mapping table. The operation continues at operation 608 of FIG. 6.

Determining and accessing the physical location may involve one or more of an access pattern analyzer, a granularity modulator, a random engine, and a sequential engine, as described above in relation to FIG. 5.

Exemplary Computer System and Apparatus

FIG. 8 illustrates an exemplary computer system that facilitates recovery, in accordance with an embodiment of the present application. Computer system 800 includes a processor 802, a volatile memory 806, and a storage device 808. In some embodiments, computer system 800 can include a controller 804 (indicated by the dashed lines). Volatile memory 806 can include, e.g., random access memory (RAM) that serves as managed memory. Volatile memory 806 can be used to store one or more memory pools and to form a block device. Storage device 808 can include persistent storage which can be managed or accessed via processor 802 (or controller 804). Furthermore, computer system 800 can be coupled to peripheral input/output (I/O) user devices 810, e.g., a display device 811, a keyboard 812, and a pointing device 814. Storage device 808 can store an operating system 816, a content-processing system 818, and data 836.

Content-processing system 818 can include instructions, which when executed by computer system 800, can cause computer system 800 or processor 802 to perform methods and/or processes described in this disclosure. Specifically, content-processing system 818 can include instructions for receiving and transmitting data packets, including data to be read or written, an input/output (I/O) request (e.g., a read request or a write request), metadata, a logical block address (LBA), and a physical block address (PBA) or a physical location (communication module 820).

Content-processing system 818 can further include instructions for receiving an input/output (I/O) request for data associated with a logical block address (communication module 820). Content-processing system 818 can include instructions for retrieving, from a first mapping table associated with a first storage drive, a physical location corresponding to the logical block address, wherein the first mapping table is stored in a random access memory (RAM) which comprises a block device, and wherein a driver for the block device is stored in system memory separately from the first mapping table stored in the block device (mapping file-managing module 822). Content-processing system 818 can also include instructions for accessing the physical location to execute the I/O request (physical location-accessing module 824). Content-processing system 818 can include instructions for, responsive to determining a crash associated with a driver which controls the block device (driver-crash determining module 826), restarting the driver to recover access to the first mapping table absent of reconstruction of the first mapping table which involves reading data from the first storage drive and extracting mapping relations between logical addresses and physical addresses (driver-restarting module 828).

Content-processing system 818 can additionally include instructions for determining sizes of mapping tables, appending mapping tables associated with storage drives to obtain a mapping file, and storing the mapping file in a block device (mapping file-managing module 822). Content-processing system 818 can include instructions for retrieving the physical location by identifying a starting point and determining an offset and a length (physical location-accessing module 824).

Content-processing system 818 can also include instructions for determining an access pattern for the requested I/O data (access pattern-analyzing module 832). Content-processing system 818 can include instructions for adjusting a size of a unit to access (granularity-adjusting module 830). Content-processing system 818 can include instructions for determining whether an I/O request is associated with a random read/write operation or a sequential read/write operation (data-processing module 834). Content-processing system 818 can further include instructions for, responsive to determining a random read or write operation (data-processing module 834), accessing, by the random engine, the mapping file of the block device with a granularity of a first size, wherein the random engine includes a read cache (mapping file-managing module 822). Content-processing system 818 can include instructions for, responsive to determining a sequential read or write operation (data-processing module 834), accessing, by the sequential engine, the mapping file of the block device with a granularity of a second size which is greater than the first size (mapping file-managing module 822).
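
A minimal Python sketch of the random-engine/sequential-engine dispatch described above follows; the specific granularities (4 KB and 128 KB), class names, and cache policy are assumptions made only for illustration.

    # Minimal sketch: dispatching mapping-file access to a random engine
    # (smaller units, backed by a read cache) or a sequential engine (larger
    # units), based on the detected access pattern.
    RANDOM_GRANULARITY = 4 * 1024        # assumed first (smaller) size
    SEQUENTIAL_GRANULARITY = 128 * 1024  # assumed second (larger) size

    class RandomEngine:
        def __init__(self, mapping_file):
            self.mapping_file = mapping_file
            self.read_cache = {}             # read cache of small mapping chunks

        def read(self, position):
            key = position - (position % RANDOM_GRANULARITY)
            if key not in self.read_cache:
                self.mapping_file.seek(key)
                self.read_cache[key] = self.mapping_file.read(RANDOM_GRANULARITY)
            chunk = self.read_cache[key]
            return chunk[position - key:]

    class SequentialEngine:
        def __init__(self, mapping_file):
            self.mapping_file = mapping_file

        def read(self, position):
            self.mapping_file.seek(position)
            return self.mapping_file.read(SEQUENTIAL_GRANULARITY)

    def access_mapping_file(is_sequential, engines, position):
        # Route the access to the engine matching the detected access pattern.
        engine = engines["sequential"] if is_sequential else engines["random"]
        return engine.read(position)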

Data 836 can include any data that is required as input or generated as output by the methods and/or processes described in this disclosure. Specifically, data 836 can store at least: data; an I/O request; metadata; data associated with a logical block address (LBA); a logical block address (LBA); a physical block address (PBA); a physical location; a mapping table; a mapping file or an FTL mapping file; a logical-to-physical mapping; an identifier or indicator of storage drive or an associated mapping table; an order; a sequential, ascending, or sequenced order; a starting point; an offset; a length; a size; a number of units; a number of LBAs; an identifier or indicator of a content management module, an access pattern analyzer, a granularity modulator, a random engine, or a sequential engine; an identifier or indicator of an application, a driver, a hypervisor, a distributed file system, or a block device; a flash translation layer program and related information; and a system call to restart a driver.

FIG. 9 illustrates an exemplary apparatus 900 that facilitates recovery, in accordance with an embodiment of the present application. Apparatus 900 can comprise a plurality of units or apparatuses which may communicate with one another via a wired, wireless, quantum light, or electrical communication channel. Apparatus 900 may be realized using one or more integrated circuits, and may include fewer or more units or apparatuses than those shown in FIG. 9. Furthermore, apparatus 900 may be integrated in a computer system, or realized as a separate device or devices capable of communicating with other computer systems and/or devices.

Apparatus 900 can comprise modules or units 902-916 which are configured to perform functions or operations similar to modules 820-834 of computer system 800 of FIG. 8, including: a communication unit 902; a mapping file-managing unit 904; a physical location-accessing unit 906; a driver crash-determining unit 908; a driver-restarting unit 910; a granularity-adjusting unit 912; an access pattern-analyzing unit 914; and a data-processing unit 916.

The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.

The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.

Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.

The foregoing embodiments described herein have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the embodiments described herein to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the embodiments described herein. The scope of the embodiments described herein is defined by the appended claims.
