A two-dimensional self-RAID method of protecting page-based storage data in a multiple-level-cell (MLC) flash memory device. The protection scheme includes reserving one parity sector in each data page, reserving one parity page as the column parity, selecting a specific number of pages to form a parity group, and writing into the parity page a group parity value for data stored in the pages of the parity group. The parity sector represents applying a RAID technique in a first dimension. The group parity represents applying a RAID technique in a second dimension. Data protection is achieved because a corrupted data sector can likely be recovered from the two-dimensional RAID data.
1. A method of managing a multiple level cell flash memory that is organized logically into one or more blocks each having a plurality of pages, each page including a plurality of sectors, the multiple level cell flash memory further including sense circuitry, the method comprising:
detecting charge levels of a plurality of multiple level cells in the multiple level cell flash memory using a first sense voltage, and compiling a first correlation table of correlations between the first sense voltage and the detected charge levels of the plurality of multiple level cells;
selecting a second sense voltage that is lower than the first sense voltage and detecting charge levels of the plurality of multiple level cells using the second sense voltage, and compiling a second correlation table of correlations between the second sense voltage and the detected charge levels of the plurality of multiple level cells; and
replacing the first correlation table by the second correlation table.
15. A multiple level cell flash memory data storage device, comprising:
a flash memory array having a plurality of blocks, each block in the plurality of blocks comprising an erase unit and having a plurality of pages, a respective block including a plurality of groups of pages, each group of pages in the respective block including an assigned parity page;
each page of the respective block having a plurality of sectors, including an assigned parity sector;
sense circuitry;
wherein the device is operable to:
detect charge levels of a plurality of multiple level cells in the multiple level cell flash memory using a first sense voltage, and compile a first correlation table of correlations between the first sense voltage and the detected charge levels of the plurality of multiple level cells;
select a second sense voltage that is lower than the first sense voltage and detect charge levels of the plurality of multiple level cells using the second sense voltage, and compile a second correlation table of correlations between the second sense voltage and the detected charge levels of the plurality of multiple level cells; and
replace the first correlation table by the second correlation table.
2. The method of
programming and erasing data on a page at a predetermined speed;
detecting an error rate for each page of a block and identifying a group of high error pages based on the error rates; and
applying a speed slower than the predetermined speed in programming and erasing data on the identified high error pages.
3. The method of
(a) identifying a parity block;
(b) choosing a parity sector in each page of a respective block;
(c) assigning the pages of the respective block into a plurality of groups;
(d) for each page of the respective block, calculating a sector parity value for data stored in the sectors in the page and storing the sector parity value in the parity sector of the page;
(e) prior to completing data writing into all pages of a group in the respective block, calculating a subset group parity for a subset of pages in the group, the subset of pages comprising a plurality of pages in the group in the respective block; and
(f) storing the subset group parity in the parity block.
4. The method of
(g) repeating steps (e) to (f) for an additional subset of pages in a second group distinct from the first group.
5. The method as in
(h) selecting a column parity page and calculating, for each sector number, a column parity for all sectors assigned the sector number in the pages of the respective block.
7. The method as in
10. The method as in
11. The method as in
12. The method as in
14. The method of
(a) choosing a parity sector in each page of a respective block;
(b) assigning the pages of the respective block into a plurality of groups, each group in the respective block including an assigned parity page;
(c) for each page of the respective block, calculating a sector parity value for data stored in the sectors in the page and storing the sector parity value in the parity sector of the page;
(d) calculating a group parity value for data stored in the pages of the group; and
(e) storing the group parity value in the assigned parity page.
16. The multiple level cell flash memory data storage device of
program and erase data on a page at a predetermined speed;
detect an error rate for each page of a block and identify a group of high error pages based on the error rates; and
apply a speed slower than the predetermined speed in programming and erasing data on the identified high error pages.
17. The multiple level cell flash memory data storage device of
wherein the device is further operable, for a respective page,
to store data in data sectors of the respective page;
to store, in the assigned parity sector of the respective page, a sector parity value of the data stored in the data sectors of the respective page; and
wherein the device is further operable, for a respective group of pages in the respective block,
to store data in data pages of the respective group of pages; and
to store, in a parity block, a subset group parity value calculated for a subset of pages in the respective group prior to completing data writing into all pages of the respective group.
18. The multiple level cell flash memory data storage device of
19. The multiple level cell flash memory data storage device of
20. The multiple level cell flash memory data storage device of
21. The multiple level cell flash memory data storage device of
22. The multiple level cell flash memory data storage device of
23. The multiple level cell flash memory data storage device of
24. The multiple level cell flash memory data storage device of
25. The multiple level cell flash memory data storage device of
26. The multiple level cell flash memory data storage device of
27. The multiple level cell flash memory data storage device of
28. The multiple level cell flash memory data storage device of
29. The multiple level cell flash memory data storage device of
30. The multiple level cell flash memory data storage device of
wherein the device is further operable, for a respective page,
to store data in data sectors of the respective page;
to store, in the assigned parity sector of the respective page, a sector parity value of the data stored in the data sectors of the respective page; and
wherein the device is further operable, for a respective group of pages in the respective block,
to store data in data pages of the respective group of pages; and
to store, in the parity page of the respective group, a group parity value of the data stored in the data pages of the respective group.
This application is a divisional of U.S. application Ser. No. 12/726,200, filed Mar. 17, 2010, which is incorporated by reference herein in its entirety.
This application is related to the following co-pending patent applications: (1) U.S. patent application Ser. No. 13/535,237, filed Jun. 27, 2012; and (2) U.S. patent application Ser. No. 13/535,243, filed Jun. 27, 2012, which are incorporated by reference herein in their entirety.
This application also relates to subject matter in the following co-pending patent applications: U.S. patent application Ser. No. 12/082,202, filed on Apr. 8, 2008, entitled “System and Method for Performing Host Initiated Mass Storage Commands Using a Hierarchy of Data Structures”; U.S. patent application Ser. No. 12/082,205, filed on Apr. 8, 2008, entitled “Flash Memory Controller Having Reduced Pinout”; U.S. patent application Ser. No. 12/082,221, filed on Apr. 8, 2008, entitled “Multiprocessor Storage Controller”; U.S. patent application Ser. No. 12/082,207, filed on Apr. 8, 2008, entitled “Storage Controller for Flash Memory Including a Crossbar Switch Connecting a Plurality of Processors with a Plurality of Internal Memories”; U.S. patent application Ser. No. 12/082,220, filed on Apr. 8, 2008, entitled “Flash Memory Controller and System Including Data Pipelines Incorporating Multiple Buffers”; U.S. patent application Ser. No. 12/082,206, filed on Apr. 8, 2008, entitled “Mass Storage Controller Volatile Memory Containing Metadata Related to Flash Memory Storage”; U.S. patent application Ser. No. 12/082,204, filed on Apr. 8, 2008, entitled “Patrol Function Used in Flash Storage Controller to Detect Data Errors”; U.S. patent application Ser. No. 12/082,223, filed on Apr. 8, 2008, entitled “Flash Storage Controller Execute Loop”; U.S. patent application Ser. No. 12/082,222, filed on Apr. 8, 2008, entitled “Metadata Rebuild in a Flash Memory Controller Following a Loss of Power”; and U.S. patent application Ser. No. 12/082,203, filed on Apr. 8, 2008, entitled “Flash Memory Controller Garbage Collection Operations Performed Independently in Multiple Flash Memory Groups,” which are incorporated by reference herein in their entirety.
The invention described herein relates to data storage management in semiconductor flash memories, and in particular to a data storage protection method that prevents data corruption in multiple level cell (MLC) memory devices in the event of a power interruption.
Current enterprise-level mass storage relies on hard drives that are typically characterized by a 3.5″ form factor, a 15,000 rpm spindle motor and a storage capacity between 73 GB and 450 GB. The mechanical design follows the traditional hard drive with a single actuator and 8 read/write heads moving across 8 surfaces. The constraints of the head/media technology limit the read/write capabilities to using only one active head at a time. All data requests that are sent to the drive are handled in a serial manner, with long delays between operations, as the actuator moves the read/write head to the required position and the media rotates to place the data under the read/write head.
A solid state memory device is attractive in an enterprise mass-storage environment. For that environment, flash memory is a good candidate among solid state memory devices, since it does not have the mechanical delays associated with hard drives, thereby allowing higher performance, commensurately lower cost, and better use of power and space.
Flash memory is a form of non-volatile memory, i.e., EEPROM (electronically erasable programmable read-only memory). A memory cell in a flash memory array generally includes a transistor having a control gate and drain and source diffusion regions formed in a substrate. The transistor has a floating gate under the control gate, thus forming an electron storage device. A channel region lies under the floating gate, isolated by an insulation layer (e.g., a tunnel oxide layer) between the channel and the floating gate. The energy barrier imposed by the insulating layer against charge carrier movement into or out of the floating gate can be overcome by applying a sufficiently high electric field across the insulating layer. The charge stored in the floating gate determines the threshold voltage (Vt) of the cell, which represents the stored data of the cell. Charge stored in the floating gate causes the cell to have a higher Vt. To change the Vt of a cell to a higher or lower value, the charge stored in the floating gate is increased or decreased by applying appropriate voltages at the control gate, the drain and source diffusion regions, and the channel region. The appropriate voltages cause charge to move between one or more of these regions and through the insulation layer to the floating gate.
A single-level cell (SLC) flash memory device has a single threshold voltage Vt and can store one bit of data per cell. A memory cell in a multiple-level cell (MLC) flash memory device has multiple threshold voltages and, depending on the amount of charge stored in the floating gate, can represent more than one bit of data. Because an MLC flash memory device enables the storage of multiple data bits per cell, high density mass storage applications (such as 512 Mb and beyond) are readily achievable. In a typical four-level two-bit MLC flash memory device, the cell threshold voltage Vt can be set at any of four levels to represent data “00”, “01”, “10”, and “11”. To program the memory cell to a given level, the cell may be programmed multiple times. Before each write, a flash memory array is erased to reset every cell in the array to a default state. As a result, multiple data bits that share the same cell, and hence their electronic states (their threshold voltages Vt), are interdependent to the point that an unexpected power interruption can have unpredictable consequences. In a real system, variations in the electronic states of the memory cells also produce ranges of threshold voltages rather than discrete values. Table 1 below shows the electronic states and the threshold voltage ranges in a two-bit MLC.
TABLE 1
Threshold voltages and bit values in a two-bit MLC memory cell
Vt | Bit 1 | Bit 2
−4.25 V to −1.75 V | 1 | 1
−1.75 V to 0.75 V | 1 | 0
0.75 V to 3.25 V | 0 | 1
3.25 V to 5.75 V | 0 | 0
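For illustration, the following sketch decodes a sensed threshold voltage into the two bits of Table 1. The function name and the half-open boundary handling are assumptions made for this sketch, not part of any device specification.

```python
# Illustrative decode of a sensed threshold voltage (in volts) into the two
# bits of a two-bit MLC cell, using the ranges of Table 1.
def decode_two_bit_cell(vt: float) -> str:
    if -4.25 <= vt < -1.75:
        return "11"          # Bit 1 = 1, Bit 2 = 1
    if -1.75 <= vt < 0.75:
        return "10"          # Bit 1 = 1, Bit 2 = 0
    if 0.75 <= vt < 3.25:
        return "01"          # Bit 1 = 0, Bit 2 = 1
    if 3.25 <= vt <= 5.75:
        return "00"          # Bit 1 = 0, Bit 2 = 0
    raise ValueError("threshold voltage outside the ranges of Table 1")

print(decode_two_bit_cell(1.2))  # -> "01"
```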
In spite of the advantages of MLC over SLC, MLC flash memory devices have not traditionally been used because of certain technical constraints, among which data corruption presents one of the most severe challenges.
All flash memories have a finite number of erase-write cycles. MLC flash memory devices are more vulnerable to data corruption than SLC flash memory devices. The specified erase cycle limit for each flash memory page is typically on the order of 100,000 cycles for SLC flash memory devices and on the order of 10,000 cycles for MLC devices. The lower cycle limit in MLC flash memory devices poses particular problems for data centers that operate with unpredictable data streams. The unpredictable data streams may cause “hot spots”, resulting in certain highly-used areas of memory being subject to a large number of erase cycles.
In addition, various factors in normal operation can also affect flash memory integrity, including read disturbs and program disturbs. These disturbs lead to unpredictable loss of data bits in a memory cell as a result of the reading or writing of memory cells adjacent to the disturbed cell. Sudden data losses in MLC flash memory devices due to unexpected power interruptions require frequent data recoveries. Because some data levels require more than one write operation to achieve, and because more than one bit of data shares the same memory cell, a power change or a program error during a write operation leaves the data in an incorrect state. When the power returns, the memory cell can be in an erratic state. Therefore, a power interruption is a major risk to the integrity of data stored in MLC flash memory devices.
Flash media typically are written in units called “pages”; each page typically includes between 2000 bytes and 8000 bytes. Flash media typically are erased in units called “blocks”; each block typically includes between 16 and 64 pages. Pages in MLC flash memory devices are coupled into paired pages. The number of paired pages may be two for the 2-bit MLC and may go up to 3, 4, or higher for higher-bit MLCs. The paired pages may reside in shared MLC flash memory cells. If a power failure occurs while the MLC is in the middle of an operation that changes the contents of the flash media (e.g., in the middle of writing a page of data or in the middle of erasing a block of data), the electrical states of the interrupted page or block are unpredictable after the device is powered up again. The electrical states can even be random, because some of the affected bits may already be in the states assigned to them by the operation at the time power is interrupted, while other bits may be lagging behind and have not yet reached their target values. Furthermore, some bits might be caught in intermediate states and thus be in an unreliable mode, so that reading these bits returns different results under different read operations. Therefore, a power loss while programming a given page can corrupt its paired page.
In the prior art, error correction codes (ECC) and Redundant Array of Inexpensive Disks (RAID) techniques have been used to mitigate data corruption. In one instance, data corruption is prevented by writing parity pages at a different page address. Those techniques require either additional memory or complicated error-searching and data-rebuilding procedures after power returns. Such requirements make the process costly to implement and place significant strain on the processing power of a conventional flash memory controller, which generally includes only a single processor. Furthermore, if a power failure occurs during the writing of a page, the paired page data can become corrupt in an MLC flash memory device. Therefore, even the conventional paired page technique is susceptible to a sudden power interruption. In fact, the severity of the possible corruption is high; in some cases, every 10th data bit can be lost. Relying on conventional ECC techniques alone to make an MLC flash memory system reliable would be impractical.
NAND flash memory data corruption can also result from program/erase cycle wear-out. Electrons are injected and removed by tunneling through thin-film oxide insulators. Repeated program/erase cycles damage the oxide and reduce its effectiveness. As device dimensions (e.g., oxide film thickness) shrink, data integrity problems from device wear-out can become more severe. One factor that influences this wear-out process is the speed at which the program and erase cycles are performed. However, if the speed of programming and erasing is slowed to reduce wear, overall performance can be impacted significantly.
Currently, a technique exists that applies a lower sense voltage to measure the charge states of the flash memory, in order to extend the lifetime of the memory device. A flash memory device is a charge-trap device that uses sense circuits to detect whether a cell contains a given charge level. However, as the device wears out, its ability to store a charge is compromised. A worn-out memory device allows the stored charge on the floating gate to leak. Consequently, a sense circuit will detect a reduced voltage from the device. One current recovery mechanism reduces the sense voltage that is used to determine the logic value a cell contains. However, a lower sense voltage also returns a lower detected voltage, resulting in incorrect charge tracking.
The present invention provides a two-dimensional self-RAID method of protecting, following a power loss, page-based storage data in a multiple-level-cell flash memory device. The process includes reserving a parity sector in each data page under a first application of a RAID technique (the “first dimensional RAID”), forming parity groups each containing a predetermined number of pages, and repeating the parity grouping for subsequent data pages under a second application of a RAID technique (the “second dimensional RAID”). Thus, if a subsequent write corrupts a paired page, the lost data can be recovered using the two-dimensional RAID data.
The first dimensional parity in the present invention is associated with a data page. One sector within the page is reserved for the first dimensional RAID data. This parity sector allows the recovery of any single sector within the ECC capability of that sector. This level of RAID data can be calculated from the available data at the time the controller transfers the data to a chip buffer.
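As a minimal sketch of the first dimensional RAID, the example below computes an XOR parity over the data sectors of a page and stores it in the reserved parity sector. The sector count, sector size, and function names are assumptions made for illustration, not limitations of the method.

```python
from functools import reduce

SECTORS_PER_PAGE = 8   # assumed layout: 7 data sectors + 1 reserved parity sector
SECTOR_SIZE = 512      # assumed sector size in bytes

def xor_sectors(sectors):
    """Bytewise XOR of equal-length sectors."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sectors)

def write_page(data_sectors):
    """Return the page image: the data sectors followed by the row parity sector."""
    assert len(data_sectors) == SECTORS_PER_PAGE - 1
    assert all(len(s) == SECTOR_SIZE for s in data_sectors)
    return list(data_sectors) + [xor_sectors(data_sectors)]

def recover_sector(page, bad_index):
    """Rebuild one corrupted sector from the remaining sectors and the parity."""
    return xor_sectors([s for i, s in enumerate(page) if i != bad_index])
```

Because the parity is a plain XOR over sectors already held in the controller, it can indeed be computed at the time the data is transferred to the chip buffer, with no additional reads.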
The second dimensional parity in the present invention is calculated across a column of sectors in a predetermined number of pages. When the specific number of pages is selected carefully, paired page faults can be recovered.
Full data protection against power interruption is achieved because any corrupted data sector can be recovered by the RAID data, either from the within-page sector parity or from the cross-page sector parity.
The present invention provides a method of preserving page-based flash memory integrity during writing, in the event of a power loss. The present invention can be used to manage a flash memory having multiple level cells (MLC).
According to an embodiment of the present invention, a method is provided to protect a MLC flash memory data which includes numerous memory pages. A method of managing a multiple level cell flash memory that includes a plurality of pages, each page including a plurality of sequentially numbered sectors, the method comprising: (a) choosing a sector in each page as a parity sector; (b) writing data into each page and calculating a parity value of the sectors in each page and storing the parity value in the reserved parity sectors; (c) dividing data pages into a plurality of groups, wherein each group, except the first group and the last group, consists of a first predetermined number of pages; (d) reserving a page in each group as a group parity page and writing data into each page of the group, calculating a parity value of the group and storing the parity value in the reserved group parity page; (e) repeating (a) to (d) for each group; (f) reserving a new page to store a column parity of all sectors sharing the same sector number.
According to another embodiment of the present invention, a method is provided to protect a MLC flash memory data which includes numerous memory pages. A method of managing a multiple level cell flash memory that includes a plurality of pages, each page including a plurality of sequentially numbered sectors, the method comprising: (a) reserving a parity block; (b) choosing a sector in each page as a parity sector; (c) writing data into each page and calculating a parity value of the sectors in each page and storing the parity value in the chosen parity sector; (d) dividing data pages into a plurality of groups, wherein each group, except the last group, consists of a second predetermined number of pages; (e) writing data into pages in a subset of a group and calculating the subset group parity; (f) storing the subset group parity value in the reserved parity block; (g) writing data into the remaining pages of the group; (h) repeating (b) to (g); (i) reserving a new page to store a column parity of sectors sharing the same sector number but residing in different pages.
According to the present invention, a data storage system is provided on which the above methods can be practiced.
According to the present invention, a method for reducing data corruption from device wear-out is provided by extending the programming and erasing time on selected weak cells. Weak cells are identified by the rate at which they generate errors, and the blocks and pages associated with the weak cells can then be programmed and erased at a slower rate than other cells. By tracking the weaker blocks and treating them differently than other, more robust blocks, endurance can be enhanced. Because the slower programming and erasure processes are performed only on the relatively few weak blocks, overall performance is not significantly compromised. A method of managing a multiple level cell flash memory with numerous pages, the method comprising: (a) programming and erasing data on a page at a predetermined speed; (b) detecting an error rate for each of the pages and identifying the pages associated with error rates that exceed a predetermined value; (c) programming and erasing the identified high error pages at a speed that is slower than the predetermined speed.
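A minimal sketch of this wear-management idea follows. The error-rate threshold, the speed labels, and the tracker interface are hypothetical; they only illustrate how high-error pages can be singled out for slower programming and erasing.

```python
# Hypothetical sketch of programming-speed selection based on observed error rates.
NORMAL_SPEED = "normal"
SLOW_SPEED = "slow"
ERROR_RATE_THRESHOLD = 1e-4   # assumed "predetermined value"

class WearTracker:
    def __init__(self):
        self.error_rates = {}            # page address -> observed error rate
        self.high_error_pages = set()    # pages to program/erase slowly

    def record_error_rate(self, page, rate):
        self.error_rates[page] = rate
        if rate > ERROR_RATE_THRESHOLD:
            self.high_error_pages.add(page)

    def speed_for(self, page):
        # Identified weak pages are programmed and erased at the slower speed,
        # trading a little performance on a few pages for better endurance.
        return SLOW_SPEED if page in self.high_error_pages else NORMAL_SPEED
```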
According to another embodiment of the present invention, a method for overcoming leakage-induced charge level shifts is provided. A method of managing a multiple level cell flash memory that includes sense circuitry, the method comprising: (a) selecting a sense voltage; (b) detecting charge levels of memory cells in the multiple level cell flash memory with the selected sense voltage, and making a first table that correlates the selected sense voltage and the sensed charge levels; (c) reducing the sense voltage, detecting charge levels of memory cells in the multiple level cell flash memory using the reduced sense voltage, and making a second table that correlates the reduced sense voltage with the sensed charge levels; (d) replacing the first table with the second table.
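The sketch below illustrates the table-replacement idea under stated assumptions: read_cell_charge() stands in for the sense circuitry, and the voltage step is arbitrary.

```python
# Hypothetical sketch of the sense-voltage adjustment of this embodiment.
def build_correlation_table(cells, sense_voltage, read_cell_charge):
    """Map each cell to the charge level detected at the given sense voltage."""
    return {cell: read_cell_charge(cell, sense_voltage) for cell in cells}

def refresh_sense_tables(cells, first_voltage, read_cell_charge, step=0.05):
    # (a)-(b): detect charge levels at the selected sense voltage, compile table 1.
    table = build_correlation_table(cells, first_voltage, read_cell_charge)
    # (c): select a lower sense voltage and compile table 2.
    second_voltage = first_voltage - step        # step size is an assumption
    second_table = build_correlation_table(cells, second_voltage, read_cell_charge)
    # (d): the second table replaces the first.
    table = second_table
    return second_voltage, table
```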
The present invention is better understood upon consideration of the detailed description below in conjunction with the accompanying drawings.
An example of a conventional multiple level (MLC) flash memory cell is illustrated in
Variations of the electronic states generate ranges of threshold voltages in a real MLC system.
Pages of data sharing the same multiple level cells are called “shared pages”. Each manufacturer may use a different distance between its shared pages. Many memory vendors prefer to set the distance at four. For example, at a pair distance of 4, page 0 is paired with page 4, page 1 is paired with page 5, page 2 is paired with page 6, and page 3 is paired with page 7.
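For a pair distance of four, the example above corresponds to a mapping like the following sketch. Treating the pairing as symmetric within each run of eight pages is an assumption made for illustration; the actual pairing is vendor specific.

```python
PAIR_DISTANCE = 4   # vendor-specific; 4 in the example above

def paired_page(page: int) -> int:
    """Return the page paired with `page`, assuming symmetric pairing within
    each group of 2 * PAIR_DISTANCE pages, e.g. 0<->4, 1<->5, 2<->6, 3<->7."""
    group = 2 * PAIR_DISTANCE
    base = page - (page % group)
    offset = page % group
    return base + (offset + PAIR_DISTANCE) % group
```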
The paired pages may share the same memory cells in an MLC flash memory system (e.g., in a 2-bit MLC flash memory, bit 0 and bit 1 of a memory cell are bits from the first and second pages of the paired pages, respectively). When a program operation is abnormally aborted, for example during a power down or a reset, not only may the page data being programmed be damaged, but the data in the paired page may also be damaged, even though it may have been written correctly at a previous time.
According to one embodiment of the present invention, RAID techniques are applied in a method along two dimensions. In the first dimension (“the first dimensional RAID”), the method preserves parity information on the same page. The first dimensional RAID uses row parity, or sector parity, which is calculated using data from the first sector to the last sector in the same page. As shown in
In the second dimension (“the second dimensional RAID”), the method preserves parity data calculated over a number of pages in a parity group. Such parity data is referred to as group parity. The number of pages in each parity group is variable. In one implementation, for example, the number of pages in a parity group is 8 pages.
Group parity that is calculated for corresponding sectors over all pages in a block is referred to as column parity.
Group parity in the second dimensional RAID provides additional parity protection in a flash memory device. A group parity for a parity group that includes fewer than all pages of a block provides a higher level of protection than column parity, which is illustrated in
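A sketch of the second dimensional parity follows: for each sector position, the corresponding sectors of the pages already written in the group are XORed into a running parity page, which also illustrates the parity cache mentioned below. The group size and helper names are assumptions for illustration.

```python
# Hypothetical sketch of group (second dimensional) parity: for each sector
# position, XOR the corresponding sectors of the data pages in the parity group.
# The accumulator doubles as a simple parity cache for the pages being written.
PAGES_PER_GROUP = 8   # exemplary group size used in this description

def group_parity(pages):
    """pages: list of pages, each a list of equal-length sectors (bytes objects).
    Returns one parity sector per sector position (the group parity page)."""
    sectors_per_page = len(pages[0])
    parity_page = [bytes(len(pages[0][0]))] * sectors_per_page   # all-zero start
    for page in pages:
        parity_page = [
            bytes(a ^ b for a, b in zip(acc, sector))
            for acc, sector in zip(parity_page, page)
        ]
    return parity_page
```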
Another advantage of this type of RAID protection is that it does not require reads to generate the parity data on writes. All that is required is a parity cache for the pages being written. This simplifies the algorithm required for parity generation and does not cause a write performance penalty. The only time performance is affected is during the rebuild of data in the event that a hard error is encountered.
Although in the detailed description of the current invention an exemplary number of 8 is used as the number of pages in a group, the invention does not limit the number of pages in a group to 8.
To implement the scheme shown in
In a more severe power interruption scenario, when a number of paired pages are affected in a single page or in a single column, the corrupted data can be recovered by combining the row parity, column parity, and group parity. One example of the recovery scheme is illustrated in
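The combined recovery can be pictured with the sketch below, which rebuilds a page that the row parity alone cannot repair (for example, a corrupted paired page) sector by sector from the group parity page and the surviving pages of its group. It reuses the illustrative helpers above and is not the claimed algorithm itself.

```python
# Hypothetical recovery of a whole lost page from the group parity and the
# surviving pages of its parity group. Single-sector errors would instead be
# repaired with the in-page row parity, as sketched earlier.
def recover_lost_page(surviving_pages, parity_page):
    rebuilt = []
    for pos, parity_sector in enumerate(parity_page):
        acc = parity_sector
        for page in surviving_pages:
            acc = bytes(a ^ b for a, b in zip(acc, page[pos]))
        rebuilt.append(acc)
    return rebuilt
```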
In one embodiment of the present invention, where a set-aside parity block outside the data pages is used to store the group parity, an algorithm can be written for the data writing process so that the data is protected against write corruption at a power interruption. One such sequence for a parity group of 8 pages is sketched below.
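This is a minimal sketch under stated assumptions: an 8-page parity group whose first half is written, whose subset group parity is then saved in the set-aside parity block, and whose remaining pages are written afterwards. The controller calls (flash.program, flash.program_at) and the subset size are hypothetical, and write_page() and group_parity() are the illustrative helpers sketched above.

```python
# Hypothetical write sequence for one 8-page parity group using a set-aside
# parity block. The subset group parity is stored before the remaining pages
# (the paired pages) are written, so a later write-corrupt stays recoverable.
PAGES_PER_GROUP = 8
SUBSET_SIZE = 4               # assumed subset: the first half of the group

def write_parity_group(flash, group_data, parity_block_addr):
    """group_data: list of PAGES_PER_GROUP lists of data sectors."""
    written = []
    for data_sectors in group_data[:SUBSET_SIZE]:
        page = write_page(data_sectors)          # adds the row parity sector
        flash.program(page)                      # hypothetical controller call
        written.append(page)
    # Save the subset group parity in the set-aside parity block.
    flash.program_at(parity_block_addr, group_parity(written))
    for data_sectors in group_data[SUBSET_SIZE:]:
        page = write_page(data_sectors)
        flash.program(page)
        written.append(page)
    return written
```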
The foregoing description is intended to illustrate, but not to limit, the scope of the invention, which is defined by the scope of the appended claims. Other embodiments are within the scope of this disclosure.
Olbrich, Aaron K., Prins, Doug