The present disclosure describes technologies and techniques for use by a data storage controller—such as a controller for use with a NAND or other non-volatile memory (NVM)—to provide a user-expandable memory space. In examples described herein, a customer may choose to purchase access to only a portion of the total available memory space of a consumer device, such as a smartphone. Later, the customer may expand the user-accessible memory space. In one example, the customer submits suitable payment via a communication network to a centralized authorization server, which returns an unlock key. Components within the data storage controller of the consumer device then use the key to unlock additional memory space within the device. In this manner, if the initial amount of memory the consumer paid for becomes full, the consumer may conveniently expand the amount of user-accessible memory.
Claims
1. A data storage device, comprising:
a non-volatile memory (NVM) device; and
a processing circuit configured to:
enable access to a first amount of memory within the NVM device while restricting access to a second amount of memory,
authorize access to the second amount of memory within the NVM device based on an indication of authorization received from a host device,
enable access to the second amount of memory in response to the indication of authorization, and
perform a program erase cycle (PEC) balancing operation on blocks within both the first and second amounts of memory in response to the indication of authorization.
8. A method for use by a data storage device having a data storage controller and a non-volatile memory (NVM) device, the method comprising:
enabling access to a first amount of memory within a total amount of memory within the NVM device;
receiving an indication that access is permitted to a second amount of memory within the total amount of memory within the NVM device;
enabling access to the second amount of memory within the NVM device in response to the indication; and
performing a program erase cycle (PEC) balancing operation on blocks within both the first and second amounts of memory in response to the indication that access is permitted.
2. The data storage device of
3. The data storage device of
wherein the indication of authorization includes an authorization key, and
wherein the processing circuit is further configured to verify authorization using a pre-stored comparison key.
4. The data storage device of
wherein the processing circuit is further configured to restrict access to a preselected amount of memory within the NVM device, and
wherein the second amount of memory comprises at least a portion of the preselected amount of memory.
5. The data storage device of
wherein the second amount of memory is larger than the first amount of memory, and
wherein the processing circuit is further configured to expand the amount of memory that a user can access from the first amount to the second amount.
6. The data storage device of
wherein the first amount of memory corresponds to a first logical partition of a physical memory of the NVM device, and
wherein the second amount of memory corresponds to a second logical partition of the physical memory of the NVM device.
7. The data storage device of
wherein the physical memory further comprises a portion of reserved memory.
9. The method of
10. The method of
wherein enabling access to the first amount of memory includes enabling access to a first logical partition of a physical memory of the NVM device, and
wherein enabling access to the second amount of memory includes enabling access to a second logical partition of the physical memory of the NVM device.
11. The method of
12. The method of
wherein enabling access to the first amount of memory includes providing a first logical block address (LBA) range corresponding to the first amount of memory, and
wherein enabling access to the second amount of memory includes expanding the LBA range to additionally include the second amount of memory.
13. The method of
wherein enabling access to the first amount of memory includes executing a partitioning request to enable a first partition within a memory space of the NVM device that corresponds to the first amount of memory, and
wherein enabling access to the second amount of memory includes executing a repartitioning request to the NVM device to repartition the memory space of the NVM device to include the first amount of memory and the second amount of memory.
14. The method of
wherein receiving the indication includes receiving an authorization key from an authorization server via a host device coupled to the data storage device, and
wherein enabling access to the second amount of memory includes confirming authorization using a pre-stored comparison key.
15. The method of
Description
This application is a continuation of U.S. patent application Ser. No. 16/136,174, filed Sep. 19, 2018, entitled "EXPANDABLE MEMORY FOR USE WITH SOLID STATE SYSTEMS AND DEVICES," the entire content of which is incorporated herein by reference.
The subject matter described herein relates to solid state systems and devices. More particularly, the subject matter relates, in some examples, to expandable memories for use with non-volatile memory (NVM) devices or other solid state memory devices.
Solid state device (SSD) data storage systems, such as flash drive data storage systems, often utilize an NVM composed of NAND storage components or arrays (hereinafter “NANDs”). Many such SSDs are installed as embedded components in mobile devices, such as smartphones. To save money or for other reasons, mobile device customers often purchase a relatively inexpensive device with a relatively modest SSD storage capacity. Later, the customer may wish to store larger quantities of data within the device, especially if new applications (apps) have been installed in the device that require the storage of larger amounts of data. Once the memory of the SSD is full, the customer may then need to store the data elsewhere, such as in a cloud-based storage system, which can be inconvenient and costly, or may need to buy a new mobile device with a larger SSD storage capacity, which can also be inconvenient and costly.
It would be desirable to provide solutions that would permit mobile device users, or the users of other devices employing SSDs, to more conveniently and inexpensively expand or otherwise change the amount of storage available to them within the device.
One embodiment of the present disclosure provides a data storage device, including: an NVM device; a memory access enablement controller configured to enable access to a first amount of memory within the NVM device; an authorization controller configured to authorize access to a second amount of memory within the NVM device based on an indication of authorization received from a host device; and a memory access adjustment controller configured to enable access to the second amount of memory in response to the indication of authorization.
Another embodiment of the present disclosure provides a data storage device, including: an NVM device; and a processing circuit configured to enable access to a first amount of memory within the NVM device while restricting access to a second amount of memory, authorize access to the second amount of memory within the NVM device based on an indication of authorization received from a host device, and enable access to the second amount of memory in response to the indication of authorization.
Yet another embodiment of the present disclosure provides a system, including: a host device; an authorization server; and a data storage device coupled to the host device, the data storage device including an NVM device, a memory access enablement controller configured to enable access to a first amount of memory within the NVM device, an authorization controller configured to authorize access to a second amount of memory within the NVM device based on an indication of authorization received from the authorization server via the host device, and a memory access adjustment controller configured to enable access to the second amount of memory in response to the indication of authorization.
Still another embodiment of the present disclosure provides a method for use by a data storage device having a data storage controller and an NVM device, the method including: enabling access to a first amount of memory within a total amount of memory within the NVM device; receiving an indication that access is permitted to a second amount of memory within the total amount of memory within the NVM device; and enabling access to the second amount of memory within the NVM device in response to the indication.
Yet another embodiment of the present disclosure provides an apparatus for use in a data storage system having an NVM device, the apparatus including: means for enabling access to a first portion of memory within the NVM device, the first portion of memory less than a total available memory of the NVM device; means for receiving an indication of authorization to expand access to include a second portion of memory within the NVM device; and means, operative in response to the indication of authorization, for expanding access to include the second portion of memory within the NVM device.
The subject matter described herein will now be explained with reference to the accompanying drawings.
In the following detailed description, reference is made to the accompanying drawings, which form a part thereof. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description. The description of elements in each figure may refer to elements of preceding figures. Like numbers may refer to like elements in the figures, including alternate embodiments of like elements.
Overview
Aspects of the present disclosure provide various apparatus, devices, systems and methods for use by solid state devices (SSDs) such as flash drive data storage systems. The main examples described herein relate to non-removable (i.e. embedded) non-volatile memory (NVM) storage systems configured for use as components of smartphones or the like. However, aspects and features described herein are applicable to other data storage systems and components.
As noted in the Introduction Section above, issues can arise if a customer or user purchases a relatively inexpensive smartphone (or other device) with a relatively modest NVM storage capacity. Once the embedded memory of the device is filled, the user may then need to store additional data elsewhere, such as in a cloud-based storage system, or may need to buy a new device with larger capacity, which can also be inconvenient and costly. Cloud storage also raises issues of security and privacy since data is shared with the third party operating the cloud storage system. Moreover, even if the physical memory of the device can be expanded by adding an additional separate memory (e.g. a second flash drive), the user still might not be able to store large files that would exceed the size of the individual memory components.
These and other issues may be addressed within SSDs (or similar systems or devices) by providing user-expandable memory within the device. Within examples described herein, consumer devices such as smartphones are equipped with a relatively large embedded NVM capacity (e.g. 256 gigabytes (GB) of physical NAND, which is large by current standards), but to save money the customer may initially purchase access to only a portion of that total available memory (e.g. 128 GB). As such, the user is initially authorized to access only a portion of the available physical memory space. Later, the customer may expand the accessible memory space by purchasing access to more of the physical memory space. This may be accomplished by having the user submit a purchase authorization to a centralized authorization server, which returns an authorization (unlock) key. Components within the data storage controller of the device verify the key and unlock the remaining physical memory space (or whatever amount of additional memory space the user has purchased access to via the expansion procedure). As such, the user need not purchase a new device, or resort to storing excess data in a cloud-based system, but may more conveniently expand the accessible memory space within the device. Moreover, even if the user never purchases the upgrade to the larger memory, benefits may nevertheless be gained because wear leveling can be performed over all physical memory blocks, and so endurance can be enhanced (as compared to a device that is equipped with less physical memory). The values of 128 GB and 256 GB are merely exemplary. The total amount of physical memory might instead be larger (e.g. 512 GB, 1 terabyte (TB), 2 TB, etc.) or smaller (e.g. 128 GB, 64 GB, 32 GB, etc.).
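To make the gating concrete, the following is a minimal sketch in C of the capacity arrangement just described; the structure, names, and the single "licensed" field are illustrative assumptions, and a real controller would advertise capacity through its host interface rather than through a getter function like this:

```c
/* Minimal sketch of capacity gating (illustrative sizes and names). */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_PHYSICAL_GB   256u  /* installed NAND (hypothetical)  */
#define INITIAL_LICENSED_GB 128u  /* capacity purchased up front    */

struct capacity_state {
    unsigned licensed_gb;  /* capacity the host is allowed to see */
    bool expanded;         /* set once an unlock key is verified  */
};

static unsigned advertised_capacity_gb(const struct capacity_state *s)
{
    return s->licensed_gb;  /* never more than the user has paid for */
}

/* Called only after the unlock key has been verified (see below). */
static void unlock_full_capacity(struct capacity_state *s)
{
    s->licensed_gb = TOTAL_PHYSICAL_GB;
    s->expanded = true;
}

int main(void)
{
    struct capacity_state st = { INITIAL_LICENSED_GB, false };
    printf("advertised: %u GB of %u GB physical\n",
           advertised_capacity_gb(&st), TOTAL_PHYSICAL_GB);
    unlock_full_capacity(&st);
    printf("after unlock: %u GB\n", advertised_capacity_gb(&st));
    return 0;
}
```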
Among many other functions, the data storage controller 106 controls the amount of memory within the NVM 108 that the host components 104 are permitted to access on behalf of the user (or owner or operator) of the device.
A memory expansion controller 122 is configured to expand the amount of memory the host device components 104 are permitted to access, in response to receipt of proper authorization, from the initial user accessible partition 118 to an expanded user accessible partition 124. Thereafter, the host device components 104 (such as video camera components or the like) are permitted to store a greater amount of data. In this manner, the user may conveniently expand the accessible memory of the device without purchasing a new device or resorting to cloud storage. Notably, although the user may be authorized initially to access only, for example, one half of the total memory space of the NVM device 108, the data storage controller 106 may distribute that data over the entire physical memory of the NVM device 108. That is, the initial memory partition 118 and the expanded memory partition 124 may be logical memory partitions that specify an amount of data that can be stored, rather than physical memory partitions that specify where the data is to be stored within the physical NAND storage elements. By employing logical memory partitions, rather than physical memory partitions, wear leveling and other functions may be applied to the entire NVM physical memory space even if the user is authorized to access only a portion of that total physical memory space.
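The quota idea can be sketched as follows; the block and LBA counts, names, and round-robin allocator are illustrative assumptions standing in for a real flash translation layer, but they show how host access is gated only by a logical limit while physical allocation spans the entire NAND:

```c
/* Sketch of a logical (quota-style) partition: the host is gated by a
 * logical limit, while physical blocks are drawn from the whole pool,
 * so wear leveling can touch every block. All values are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_PHYS_BLOCKS 1024u        /* entire NAND (hypothetical)    */

static unsigned licensed_lbas = 512u;  /* logical limit: half the space */
static unsigned next_free_block = 0u;

/* Host accesses are checked against the logical quota only. */
static bool lba_is_accessible(unsigned lba)
{
    return lba < licensed_lbas;
}

/* Allocation rotates over every physical block, regardless of quota. */
static unsigned allocate_physical_block(void)
{
    unsigned blk = next_free_block;
    next_free_block = (next_free_block + 1u) % TOTAL_PHYS_BLOCKS;
    return blk;
}

int main(void)
{
    printf("LBA 600 accessible? %d\n", lba_is_accessible(600u));  /* 0 */
    licensed_lbas = 1024u;             /* after authorized expansion    */
    printf("LBA 600 accessible? %d\n", lba_is_accessible(600u));  /* 1 */
    printf("first physical block used: %u\n", allocate_physical_block());
    return 0;
}
```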
At 506, if the initial authorization is for only a portion of the total available memory space of the NVM (e.g. an initial 32 GB partition), the host device subsequently prompts the user (e.g. via an input/output (I/O) display screen of the host device) when the embedded NVM is nearly full (or using some other criteria) to determine whether the user wants to upgrade to a larger memory, and then may receive an upgrade request from the user (via the I/O screen). In one particular example, the NVM is deemed "nearly full" if the user has used 90% or more of the initial logical memory partition. As can be appreciated, many criteria may be used to trigger the user notification. The determination of whether the NVM is nearly full may be made by the data storage controller or, in some examples, by host software.
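A minimal sketch in C of such a "nearly full" check, assuming the 90% criterion mentioned above (the LBA counts in the example are hypothetical):

```c
/* Sketch of the "nearly full" trigger, assuming the 90% criterion. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NEARLY_FULL_PERCENT 90u

static bool nearly_full(uint64_t used_lbas, uint64_t licensed_lbas)
{
    /* Multiply before dividing to avoid integer truncation. */
    return used_lbas * 100u >= (uint64_t)NEARLY_FULL_PERCENT * licensed_lbas;
}

int main(void)
{
    uint64_t licensed = 67108864u;  /* 32 GB of 512-byte LBAs */
    uint64_t used     = 60398000u;  /* just past the 90% mark */
    if (nearly_full(used, licensed))
        printf("prompt user: upgrade to a larger memory?\n");
    return 0;
}
```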
At 508, the host device forwards the upgrade request to a central authorization server (along with a credit card/debit card authorization or other payment details) for payment verification and upgrade authorization. At 510, the host device receives the upgrade authorization from the central authorization server (assuming that authorization is granted) along with an authorization (unlock) key that is specific to the particular host device. At 512, the host device forwards the key to the data storage controller, which checks it against an internally stored copy to verify that the unlock key is authentic and is not an attempt to "jailbreak" or "hack" the device to expand the memory without proper payment. Such keys may be managed and maintained by the authorization server and uploaded to the end-user host device upon initial purchase of the device for storage in the NVM system. In some examples, the host device and the authorization server perform a key exchange while the host device is still in the factory to set up the unlock key that is bound to the unique device ID. The unlock key and the unique ID are then burned into the ROM. In other examples, the key exchange might be performed later. Preferably, at least some key components or IDs are stored in hardware to provide top-tier protection from hacks.
Examples of device IDs and corresponding unlock keys are shown in Table I:
TABLE I
ID                Key
SN023875203875    c8ed26c3b72e4709de2c7b6c8e7022b6e92edb2a
SN023875203876    0909ebd8a02d4644d66811755ba488a126b31bf7
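A minimal sketch in C of such a key check, assuming the simplest arrangement consistent with the claims: the relayed key is compared against a pre-stored comparison key (the values here echo Table I and would really live in ROM). The constant-time comparison is a conventional hardening choice, not something the disclosure specifies:

```c
/* Sketch of the unlock-key check against a pre-stored comparison key. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical pre-stored values (cf. Table I); real ones live in ROM. */
static const char device_id[]      = "SN023875203875";
static const char comparison_key[] = "c8ed26c3b72e4709de2c7b6c8e7022b6e92edb2a";

/* Constant-time comparison: does not leak how many leading bytes match. */
static int keys_match(const char *a, const char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

int main(void)
{
    const char *unlock_key = "c8ed26c3b72e4709de2c7b6c8e7022b6e92edb2a";
    if (strlen(unlock_key) == strlen(comparison_key) &&
        keys_match(unlock_key, comparison_key, strlen(comparison_key)))
        printf("unlock key valid for %s: expand memory\n", device_id);
    else
        printf("key check failed: refuse expansion\n");
    return 0;
}
```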
Assuming authorization is verified at 512, the controller expands the amount of accessible memory based on the authorization (e.g. from an initial 32 GB logical partition to an expanded 64 GB logical partition). In some examples, the unlock key serves to unlock a wider range of LBAs to allow user access to more memory space (so that the additional LBAs are no longer transparent to the host device). Once the data storage controller has expanded the LBA range, the host may respond by mounting and/or expanding any partitions it maintains.
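A sketch of the LBA-range widening, under the assumption that access is enforced through a single maximum-LBA bound; the field names and sizes are illustrative:

```c
/* Sketch of widening the accessible LBA range after key verification. */
#include <stdbool.h>
#include <stdint.h>

struct lba_window {
    uint64_t max_lba;       /* highest LBA the host may address */
    uint64_t physical_max;  /* highest LBA the NAND could back  */
};

static bool host_access_allowed(const struct lba_window *w, uint64_t lba)
{
    return lba <= w->max_lba;
}

/* Called only after the unlock key has been verified. */
static void expand_lba_range(struct lba_window *w, uint64_t new_max)
{
    if (new_max > w->max_lba && new_max <= w->physical_max)
        w->max_lba = new_max;  /* formerly hidden LBAs become addressable */
}

int main(void)
{
    /* 32 GB exposed, 64 GB physically present (512-byte LBAs). */
    struct lba_window w = { .max_lba = 67108863u, .physical_max = 134217727u };
    expand_lba_range(&w, 134217727u);
    return host_access_allowed(&w, 100000000u) ? 0 : 1;  /* exits 0 */
}
```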
Eventually, at 814, the data storage controller notifies the host device components 806 when the NVM 809 is nearly full. At 816, the host device components 806 notify the user and receive or input a memory upgrade request from the user (if the user wishes to pay for the memory expansion). At 818, the host device components 806 forward the upgrade request (with payment authorization) to the centralized authorization server 804, which receives the upgrade request, at 820, with payment authorization. At 822, centralized authorization server 804 verifies payment and confirms the upgrade request with an authorization (unlock) key. At 824, the host components 806 receive the upgrade authorization along with the key and relay the authorization and the key to the data storage controller 808, which, at 826, confirms the key and expands user access to, e.g., the full memory of the NVM 809.
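The exchange can be summarized as a simple pass/fail flow keyed to the numbered steps above; the stub predicates are placeholders for the real user prompt, payment verification, and key check:

```c
/* Sketch of the upgrade exchange as a pass/fail flow (steps 814-826). */
#include <stdbool.h>
#include <stdio.h>

enum outcome { UPGRADED = 0, DECLINED = 1, KEY_REJECTED = 2 };

/* Stub decisions standing in for the real user, server, and key check. */
static bool user_accepts_upgrade(void) { return true; }
static bool payment_verified(void)     { return true; }
static bool unlock_key_valid(void)     { return true; }

static enum outcome run_upgrade_flow(void)
{
    /* 814/816: controller reports nearly full; host prompts the user. */
    if (!user_accepts_upgrade()) return DECLINED;
    /* 818-822: host forwards the request; server verifies payment.    */
    if (!payment_verified())     return DECLINED;
    /* 824/826: host relays the key; controller confirms and expands.  */
    if (!unlock_key_valid())     return KEY_REJECTED;
    return UPGRADED;
}

int main(void)
{
    printf("upgrade flow result: %d (0 = upgraded)\n", run_upgrade_flow());
    return 0;
}
```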
However, if the expansion is authorized, then a "block device" controller 908 within the host device sends a partition request, at 910, to its embedded NVM storage device controller 912 that specifies the new (expanded) memory size. (Herein, "block device controller" refers to those components within the host device that interact with the NVM system as a block storage device, i.e. a storage device where the details of the particular memory components are transparent to the host). Many embedded NVMs are configured for use as block devices so the designers of host devices need not concern themselves with the details of the particular storage technology. An example of such an NVM block device is the iNAND device provided by SanDisk (where iNAND is a trademark of SanDisk LLC.) At 914, front end (FE) components of the device controller 912 perform a repartitioning of the NVM to allow access to the new (expanded) memory size. Since repartitioning is handled by the FE, back end (BE) components need not be affected. That is, the repartitioning is transparent to components of the device controller that perform wear leveling, garbage collection, etc. As already explained, such operations may be performed over the entire physical memory of the NVM, even if the user is authorized to use only a portion of the memory, thus providing better overall endurance. Terabytes written (TBW) over the lifetime of the device can thus be increased.
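A sketch of such a front-end repartition handler, assuming that the exported capacity is the only front-end state that changes, which is what makes the operation transparent to the back end; the names and the authorization bound are illustrative:

```c
/* Sketch of an FE repartition handler; BE modules are untouched. */
#include <stdbool.h>
#include <stdint.h>

struct fe_state {
    uint64_t exported_lbas;    /* capacity visible to the block device */
    uint64_t authorized_lbas;  /* capacity unlocked so far             */
};

/* Returns true if the repartition request can be honored. */
static bool fe_handle_repartition(struct fe_state *fe, uint64_t requested_lbas)
{
    if (requested_lbas > fe->authorized_lbas)
        return false;              /* more than the user has unlocked   */
    fe->exported_lbas = requested_lbas;
    return true;                   /* BE wear leveling and GC already   */
                                   /* span all physical blocks          */
}

int main(void)
{
    struct fe_state fe = { .exported_lbas = 100u, .authorized_lbas = 200u };
    return fe_handle_repartition(&fe, 200u) ? 0 : 1;  /* exits 0 */
}
```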
The block device controller 1008 requests, at 1010, the NVM device controller 1012 to expand the memory (which, as discussed above, may involve sending a partitioning request to the controller that specifies the new larger partition size). At 1014, a key check is performed using the unlock key obtained from the server and the storage device ID to make sure that the unlock key is legitimate (and is not an attempt to "jailbreak" or "hack" the device). If the key check indicates that the expansion authorization is valid, the FE components of the device controller 1012 add an extra LBA range to thereby achieve a repartitioning of the NVM to allow user access to the new (expanded) memory size. Once the LBA range is extended, the memory expansion is a success, as indicated at 1016, and a suitable success notification can be provided to the user by, e.g., the NVM (NAND) app 1002. If any of the decision blocks yields a "no," e.g. payment is not made or the key check fails, the memory expansion fails, as indicated at 1018, and a suitable failure warning is provided to the user by the app 1002.
Exemplary NVM System
The controller 1102 (which may be a flash memory controller) can take the form of processing circuitry, a microprocessor or processor, and/or a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, and an embedded microcontroller, for example. The controller 1102 can be configured with hardware and/or firmware to perform the various functions described herein and shown in the flow diagrams. Also, some of the components shown herein as being internal to the controller can also be stored external to the controller, and other components can be used. Additionally, the phrase “operatively in communication with” can mean directly in communication with or indirectly (wired or wireless) in communication with through one or more components, which may or may not be shown or described herein.
As used herein, a flash memory controller is a device that manages data stored on flash memory and communicates with a host, such as a computer or electronic device. A flash memory controller can have functionality in addition to the specific functionality described herein. For example, the flash memory controller can format the flash memory to ensure the memory is operating properly, map bad flash memory cells, and allocate spare cells to be substituted for future failed cells. Some portion of the spare cells can be used to hold firmware to operate the flash memory controller and implement other features. In operation, when a host needs to read data from or write data to the flash memory, it communicates with the flash memory controller. If the host provides a logical address to which data is to be read/written, the flash memory controller converts the logical address received from the host to a physical address in the flash memory. The flash memory controller can also perform various memory management functions, such as wear leveling (i.e. distributing writes to avoid wearing out specific blocks of memory that would otherwise be repeatedly written to) and garbage collection (i.e. after a block is full, moving only valid pages of data to a new block, so the full block can be erased and reused).
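A minimal sketch of the logical-to-physical translation just described, using a flat table indexed by LBA as a stand-in for a real mapping structure:

```c
/* Sketch of logical-to-physical (L2P) translation with a flat table. */
#include <stdint.h>

#define NUM_LBAS    4096u
#define INVALID_PPA UINT32_MAX

static uint32_t l2p[NUM_LBAS];  /* LBA -> physical page address */

static void l2p_init(void)
{
    for (uint32_t i = 0; i < NUM_LBAS; i++)
        l2p[i] = INVALID_PPA;   /* unmapped until first write */
}

/* On a host write, the controller picks a fresh physical page (wear
 * leveling decides where) and records the new mapping; the page that
 * previously held the LBA becomes stale and is later reclaimed by
 * garbage collection. */
static void l2p_update(uint32_t lba, uint32_t new_ppa)
{
    l2p[lba] = new_ppa;
}

static uint32_t l2p_lookup(uint32_t lba)
{
    return l2p[lba];
}

int main(void)
{
    l2p_init();
    l2p_update(42u, 777u);        /* host writes LBA 42 -> page 777 */
    return l2p_lookup(42u) == 777u ? 0 : 1;  /* exits 0 */
}
```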
An NVM die 1104 may include any suitable non-volatile storage medium, including NAND flash memory cells and/or NOR flash memory cells. The memory cells can take the form of solid-state (e.g., flash) memory cells and can be one-time programmable, few-time programmable, or many-time programmable. The memory cells can also be single-level cells (SLC), multiple-level cells (MLC), triple-level cells (TLC), or use other memory technologies, now known or later developed. Also, the memory cells can be arranged in a two-dimensional or three-dimensional fashion (as will be discussed further below).
The interface between controller 1102 and NVM die 1104 may be any suitable flash interface, such as a suitable toggle mode. In the primary embodiments described herein, memory system 1100 is an embedded memory system. In alternative embodiments, memory system 1100 might be a card-based system, such as a secure digital (SD) or a micro secure digital (micro-SD) card.
Modules of the controller 1102 may include a data management module 1112 that handles the scheduling of maintenance and host write operations to balance the consumption of space with the creation of free space. In embodiments having an NVM with a plurality of NVM dies, each NVM die may be operated asynchronously and independently such that multiple NVM dies may concurrently have schedule cycles balancing consumption and creation of free space in each respective NVM die. A memory expansion controller (e.g. control module) 1113 of the FE module 1108 is provided to perform or control the above-described memory expansion-related operations, such as by initially configuring LBAs to correspond to only a portion (e.g. half) of the physically-available memory space of the NVM die 1104 and then expanding the LBA range to correspond to the full memory space to thereby permit the user to access the full memory.
A buffer manager/bus controller 1114 manages buffers in volatile memory, such as a random access memory (RAM) 1116, and controls the internal bus arbitration of controller 1102. A read only memory (ROM) 1118 stores system boot code and the unique device ID discussed above. A dynamic RAM (DRAM) 1140 may also be provided.
FE module 1108 also includes a host interface 1120 and a physical layer interface (PHY) 1122 that provide the electrical interface with the host or next level storage controller. The choice of the type of host interface 1120 can depend on the type of memory being used. Examples of host interfaces 1120 include, but are not limited to, SATA, SATA Express, SAS, Fibre Channel, USB, PCIe, and NVMe. See, for example, NVM Express standard, Revision 1.3a, Oct. 24, 2017. However, aspects described herein are applicable to other data storage systems or protocols. The host interface 1120 typically facilitates transfer of data, control signals, and timing signals. Note that, although the memory expansion control module 1113 is shown as part of the front end module 1108, it may instead be located elsewhere within the data storage controller.
Back end module 1110 includes an error correction code (ECC) engine 1124 that encodes the data bytes received from the host, and decodes and error-corrects the data bytes read from the NVM. A low level command sequencer 1126 generates command sequences, such as program and erase command sequences, to be transmitted to NVM die 1104. A RAID (Redundant Array of Independent Drives) module 1128 manages generation of RAID parity and recovery of failed data. The RAID parity may be used as an additional level of integrity protection for the data being written into the NVM die 1104. In some cases, the RAID module 1128 may be a part of the ECC engine 1124. A memory interface 1130 provides the command sequences to NVM die 1104 and receives status information from NVM die 1104. In one embodiment, memory interface 1130 may be a double data rate (DDR) interface. A flash control layer 1132 controls the overall operation of back end module 1110.
Additional components of system 1100 are illustrated in the figures.
These systems and procedures are particularly useful within embedded (non-removable) data storage devices equipped for NVMe, but aspects of the systems and procedures might be exploited in removable NVMe storage devices as well, and in devices that do not use NVMe.
The data storage controller 1400 is shown in the figures.
An authorization (unlock) key input/receive controller 1518 receives or inputs an authorization key from an external authorization server (not shown).
A repartitioning controller 1524 is configured to provide, define or request that the NVM 1504 be repartitioned to encompass or correspond to the expanded user-accessible NVM partition 1522 of a second (larger) size. As discussed above, the expanded user-accessible partition may correspond to all of the non-reserved memory of the NVM. To facilitate memory access operations, the processing system 1510 may also include an LBA mapping controller 1526 configured to (a) provide an initial LBA range corresponding to the initial NVM partition and (b) to expand the LBA range to additionally include the second NVM partition (once authorization to expand the memory is confirmed). A wear leveling controller 1528 may be provided for applying wear-leveling to the entire NVM (to, for example, improve NVM lifetime even if the user never requests memory expansion). A program erase cycle (PEC) controller 1530 may be provided for PEC balancing all memory blocks when user access is expanded to the larger second partition.
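A sketch of PEC balancing, assuming it steers new writes toward the least-erased blocks so that erase counts equalize across the long-used region and the newly unlocked region; the counts and block numbers are fabricated for illustration:

```c
/* Sketch of PEC balancing: route new writes to the least-erased block. */
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8u

static uint32_t erase_count[NUM_BLOCKS] = { 40, 38, 41, 39, 0, 0, 1, 0 };
/* blocks 0-3: long in service; blocks 4-7: just unlocked               */

static uint32_t pick_block_for_write(void)
{
    uint32_t best = 0;
    for (uint32_t b = 1; b < NUM_BLOCKS; b++)
        if (erase_count[b] < erase_count[best])
            best = b;
    return best;  /* least-worn block absorbs the next erase/program */
}

int main(void)
{
    uint32_t b = pick_block_for_write();
    printf("next write goes to block %u (PEC %u)\n",
           (unsigned)b, (unsigned)erase_count[b]);
    return 0;
}
```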
In at least some examples, means may be provided for performing the functions illustrated in the figures.
As further examples, the apparatus may include additional means (such as controllers 1512 and 1406) for performing or controlling the memory expansion operations described above.
The subject matter described herein may be implemented in hardware, software, firmware, or any combination thereof. As such, the terms “function,” “node” or “module” as used herein refer to hardware, which may also include software and/or firmware components, for implementing the feature being described. In one exemplary implementation, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer-readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms. These are just some examples of suitable means for performing or controlling the various functions.
In at least some examples, a machine-readable storage medium may be provided having one or more instructions which, when executed by a processing circuit, cause the processing circuit to perform the functions illustrated in the figures.
As further examples, instructions may be provided for enabling access to a first logical partition of a physical memory of the NVM device. The instructions for expanding access may include instructions for enabling access to a second logical partition of the physical memory of the NVM device. Instructions may be provided for wear leveling the NVM device that apply wear leveling to a total amount of physical memory of the NVM device even if access is enabled only for the first portion of memory. Instructions may be provided for performing program erase cycle balancing on a total amount of physical memory of the NVM device when access is enabled to the second amount of memory within the NVM device. As still further examples, instructions may be provided for: enabling access to a first amount of memory within the NVM device (where the first amount of memory may be less than a total available amount of memory within the NVM device); authorizing access to a second amount of memory within the NVM device based on an indication of authorization received from an external device; and enabling access to the second amount of memory in response to the indication of authorization (where the second amount of memory may be larger than the first amount).
Additional Implementation Aspects and Configurations
The subject matter described herein can be implemented in any suitable NAND flash memory, including 2D or 3D NAND flash memory. Semiconductor memory devices include volatile memory devices, such as DRAM or static random access memory (SRAM) devices, non-volatile memory devices, such as resistive random access memory (ReRAM), electrically erasable programmable read only memory (EEPROM), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (FRAM), and magnetoresistive random access memory (MRAM), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration.
The memory devices can be formed from passive and/or active elements, in any combinations. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Further by way of non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles, or a charge storage dielectric material.
Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., a NOR memory array. NAND and NOR memory configurations are exemplary, and memory elements may be otherwise configured. The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure.
In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two dimensional memory structure, memory elements are arranged in a plane (e.g., in an x-z direction plane) which extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layer of the memory elements are formed or it may be a carrier substrate which is attached to the memory elements after they are formed. As a non-limiting example, the substrate may include a semiconductor such as silicon. The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arrayed in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.
A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate). As a non-limiting example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another non-limiting example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction) with each column having multiple memory elements in each column. The columns may be arranged in a two dimensional configuration, e.g., in an x-z plane, resulting in a three dimensional arrangement of memory elements with elements on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.
By way of non-limiting example, in a three dimensional NAND memory array, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device levels. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements which span through multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. As a non-limiting example, the substrate may include a semiconductor such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
Then again, two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. The substrates may be thinned or removed from the memory device levels before stacking, but as the memory device levels are initially formed over separate substrates, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
Associated circuitry is typically required for operation of the memory elements and for communication with the memory elements. As non-limiting examples, memory devices may have circuitry used for controlling and driving memory elements to accomplish functions such as programming and reading. This associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements. One of skill in the art will recognize that the subject matter described herein is not limited to the two dimensional and three dimensional exemplary structures described but cover all relevant memory structures within the spirit and scope of the subject matter as described herein and as understood by one of skill in the art.
While the above descriptions contain many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as examples of specific embodiments thereof. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents. Moreover, reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive and/or mutually inclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise. Furthermore, as used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. By way of example, “at least one of: A, B, or C” is intended to cover A, B, C, A-B, A-C, B-C, and A-B-C, as well as multiples of the same members (e.g., any lists that include AA, BB, or CC). Likewise, “at least one of: A, B, and C” is intended to cover A, B, C, A-B, A-C, B-C, and A-B-C, as well as multiples of the same members. Similarly, as used herein, a phrase referring to a list of items linked with “and/or” refers to any combination of the items. As an example, “A and/or B” is intended to cover A alone, B alone, or A and B together. As another example, “A, B and/or C” is intended to cover A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together.
Aspects of the present disclosure have been described above with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and computer program products according to embodiments of the disclosure. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor or other programmable data processing apparatus, create means for implementing the functions and/or acts specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated figures. Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment.
The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method, event, state or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described tasks or events may be performed in an order other than that specifically disclosed, or multiple tasks or events may be combined in a single block or state. The example tasks or events may be performed in serial, in parallel, or in some other suitable manner. Tasks or events may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.
Various details of the presently disclosed subject matter may be changed without departing from the scope of the presently disclosed subject matter. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.
Inventors: Amir Shaharabany; Liran Sharoni