Embodiments of an improved memory architecture for processing data inside of a device are described. In some embodiments, the device can store the layers of a neural network, such as a systolic flow engine, in non-volatile memory and/or in a separate first memory. A processor of a host system can delegate the execution of a neural network to the device. Advantageously, neural network processing in the device can be scalable, with the ability to process large amounts of data.
16. A method of performing neural network computations in a device, the method comprising:
receiving at least one data transfer command from a host system;
storing data in at least one of a first memory or a second memory of the device, and retrieving data from at least one of the first memory or the second memory in response to the at least one data transfer command;
performing neural network computations for a plurality of neural networks in the second memory by applying neural network layers to input data received from the host system, wherein a first result of neural network computations for a first neural network is used as input data for a successive neural network; and
storing a result of the neural network computations in the first memory for retrieval by the host system.
1. A device configured to perform neural network computations, the device comprising:
a first memory;
a second memory configured to store one or more layers of a neural network; and
means for:
storing data in at least one of the first memory or the second memory and retrieving data from at least one of the first memory or the second memory in response to at least one data transfer command received from a host system;
performing neural network computations in the second memory by applying one or more neural network layers to input data received from the host system; and
asynchronously storing a result of the neural network computations in the first memory for retrieval by the host system before completion of neural network computations for all neural network layers stored in the second memory.
11. A device configured to perform neural network computations, the device comprising:
a first memory;
a second memory configured to store one or more layers of a neural network; and
means for:
storing data in at least one of the first memory or the second memory and retrieving data from at least one of the first memory or the second memory in response to at least one data transfer command received from a host system;
performing neural network computations in the second memory by applying one or more neural network layers to input data received from the host system; and
synchronously storing a result of the neural network computations in the first memory for retrieval by the host system following completion of neural network computations for all neural network layers stored in the second memory.
3. The device of
performing neural network computations for a plurality of neural networks; and
using a result of neural network computations for a first neural network as input data for a successive neural network.
4. The device of
6. The device of
7. The device of
8. The device of
9. The device of
12. The device of
setting a locked state of the data before inputting the data into the neural network; and
setting an unlocked state of the data after making the output of the neural network available, wherein the locked state prevents changing the data.
13. The device of
14. The device of
17. The method of
receiving a request to initiate neural network computations comprising a type of data processing; and
identifying neural network configuration parameters based on the type of data processing.
18. The method of
19. The method of
receiving neural network configuration parameters and input data for the neural network computations; and
defining one or more neural network layers based on the neural network configuration parameters.
20. The method of
receiving a request to perform a data processing function comprising a type of data processing;
identifying neural network configuration parameters based on the type of data processing; and
defining one or more neural network layers based on the neural network configuration parameters.
This application is a continuation of application Ser. No. 16/363,661, filed on Mar. 25, 2019, titled “ENHANCED MEMORY DEVICE ARCHITECTURE FOR MACHINE LEARNING”, the contents of which are hereby incorporated by reference in their entirety. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
The present disclosure relates to memory device architecture, and more particularly, to data processing inside the memory device for machine learning.
Machine learning techniques, such as neural networks, are frequently utilized by modern computing systems. These techniques can operate on large data sets and thus can require large amounts of storage space. However, current memory architectures do not allow for scalable big data analysis. The present disclosure addresses these and other problems.
The innovations described in the claims each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of the claims, some prominent features of this disclosure will now be briefly described.
While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the scope of protection.
Various embodiments of this disclosure provide a memory device (or storage device) configured to perform neural network computations, the device comprising: a volatile memory; a non-volatile memory configured to store one or more layers of a neural network; and a controller configured to: store data in at least one of the volatile memory or the non-volatile memory and retrieve data from at least one of the volatile memory or the non-volatile memory in response to at least one data transfer command received from a host system; perform neural network computations in the non-volatile memory by applying one or more neural network layers to input data received from the host system; and store a result of the neural network computations in the volatile memory for retrieval by the host system.
In the memory device of the preceding paragraph or any paragraphs herein, the input data can be stored in the volatile memory.
In the memory device of the preceding paragraph or any paragraphs herein, the controller can be further configured to perform neural network computations for a plurality of neural networks and use a result of neural network computations for a first neural network as input data for a successive neural network.
In the memory device of the preceding paragraph or any paragraphs herein, the controller can be further configured to reconfigure the first neural network as the successive neural network before inputting the data into the successive network.
In the memory device of the preceding paragraph or any paragraphs herein, the controller can be a sole controller of the memory device.
In the memory device of the preceding paragraph or any paragraphs herein, the controller can be further configured to provide the result of the neural network computations to the host system asynchronously.
In the memory device of the preceding paragraph or any paragraphs herein, provision of the result asynchronously can comprise at least one of polling a state of memory pages in the non-volatile memory or issuing an interrupt.
In the memory device of the preceding paragraph or any paragraphs herein, polling can comprise periodic polling of the state of memory pages.
In the memory device of the preceding paragraph or any paragraphs herein, the result of the neural network computations can be configured to be retrieved synchronously.
In the memory device of the preceding paragraph or any paragraphs herein, the memory device can be further configured to receive a request to initiate neural network computations, the request comprising neural network configuration parameters and input data for neural network computations.
In the memory device of the preceding paragraph or any paragraphs herein, the request to initiate neural network computations can comprise a type of data processing, and the controller can be further configured to identify neural network configuration parameters based on the type of data processing.
Various embodiments of this disclosure provide a method of performing neural network computations in a memory device, the method comprising, by a controller of the memory device: storing data in at least one of a volatile memory or a non-volatile memory of the memory device, and retrieving data from at least one of the volatile memory or the non-volatile memory, in response to at least one data transfer command received from a host system; performing neural network computations in the non-volatile memory by applying one or more neural network layers to input data received from the host system; and storing a result of the neural network computations in the volatile memory for retrieval by the host system.
The method of the preceding paragraph or any paragraphs herein can include setting a locked state of the data before inputting the data into the neural network, and setting an unlocked state of the data after making the output of the neural network available, wherein the locked state can prevent changing the data.
The method of the preceding paragraph or any paragraphs herein can include configuring the neural network to perform the data processing function on the data based on at least one of a number of nodes or a type of activation function.
The method of the preceding paragraph or any paragraphs herein can include inputting the data into the neural network by initiating back propagation on the neural network, and the output of the neural network can include an adjusted weighting for one or more nodes of the neural network.
Various embodiments of this disclosure provide a data storage device configured to perform neural network computations, the data storage device comprising a volatile memory, non-volatile memory, and a sole controller configured to: store data in at least one of the volatile memory or the non-volatile memory and retrieve data from at least one of the volatile memory or the non-volatile memory in response to at least one data transfer command received from a host system; perform neural network computations in the non-volatile memory by applying one or more neural network layers to input data received from the host system and stored in the volatile memory; and store a result of the neural network computations in the volatile memory for retrieval by the host system.
In the device of the preceding paragraph or any paragraphs herein, the request to initiate neural network computations can comprise a type of data processing, and the controller can be further configured to identify neural network configuration parameters based on the type of data processing.
In the device of the preceding paragraph or any paragraphs herein, the neural network may not be directly accessible by a processor of the host system.
In the device of the preceding paragraph or any paragraphs herein, the request to perform the data processing function can comprise neural network configuration parameters and input data for the neural network computations, and the controller can be further configured to define the one or more neural network layers based on the neural network configuration parameters.
In the device of the preceding paragraph or any paragraphs herein, the request to perform the data processing function can comprise a type of data processing, and the controller can be further configured to identify neural network configuration parameters based on the type of data processing and define the one or more neural network layers based on the neural network configuration parameters.
Overview
Memory types such as non-volatile memory (NVM), magnetic random-access memory (MRAM), resistive random-access memory (ReRAM), Nantero random-access memory (NRAM), and the like can have low latency properties, providing opportunities to increase the performance of computer systems dramatically. However, traditional memory architectures are unable to efficiently take advantage of such non-volatile memory. Traditional memory architectures suffer from critical drawbacks: in particular, if data is not pre-fetched into the page cache, persistent data must be transferred from persistent storage into dynamic random-access memory (DRAM) before it can be processed.
Furthermore, current memory chip architectures do not allow for scalability of big data analysis. With such architectures, large amounts of data have to be transferred back and forth between the DRAM and the persistent storage devices. As such, simply increasing the number of cores for increased data processing does not address the issues described herein. For example, the storage device may have to copy data to the host side, and the host side may have to process the data: one set of data is copied into DRAM, the CPUs process that set of data, and the next set of data is then copied in for processing. This creates a large performance bottleneck and cannot scale for large data processing, so the data processing takes a large amount of time and resources and produces large overhead in the software stack. Furthermore, with separate CPU cores, each CPU can be dedicated to, and can modify, its own subset of the data, resulting in an inconsistent state of the data across the CPUs. Moreover, increasing the size of the DRAM also comes with inefficiencies, such as an increase in power consumption. Furthermore, the CPU may not be able to address a DRAM over a certain size, and thus the DRAM is not scalable.
Advantageously, non-volatile memory can enable scalability for large data processing and can reduce power requirements relative to DRAM. However, introducing non-volatile memory can create new issues. The number of CPU cores cannot simply be increased because of inefficiencies created in the task scheduler: the task scheduler's activity in assigning time slices for thread execution increases, and the number of context switches increases as well. However, if data processing can be offloaded into memory pages of a smart memory device, then the task scheduler does not need to manage the shared CPU cores for that processing. Moreover, there are cache coherence issues: data from DRAM is copied into a CPU's L1/L2 cache for data processing, and the same portion of data can also be copied into the L1/L2 cache of another CPU core; if one core modifies the data, the DRAM then contains an inconsistent state of the data. As described herein, disclosed embodiments solve at least these problems.
Communication Between Processor and Smart Memory
Generally, some embodiments of systems and methods described herein improve memory architecture by processing data inside of the memory device.
The improved memory architecture of the smart memory device can transfer data from the storage device into a smart memory device, and thus, the smart memory device can process data internally. Advantageously, data processing on the smart memory device can be scalable, with the ability to process large amounts of data. The smart memory device 406 can store one or more layers of a neural network, as described herein.
Data Processing Via Neural Network in Non-Volatile Memory
In some embodiments, the non-volatile memory can configure and/or reconfigure one or more neural networks, and/or store preconfigured neural networks. The non-volatile memory can configure a neural network based on certain received parameters, such as a number of nodes, layers, weights, a desired inference operation, and/or the like.
In some embodiments, the CPU 502 (and/or a controller) can communicate with the DRAM 504 without knowledge of the underlying data processing via the neural network in the non-volatile memory. For example, the CPU 502 can use the DRAM 504 to perform a particular operation on a set of data. The CPU 502 can determine whether to perform the operation internally or to send the data to the non-volatile memory to process the data. The particular operation can be an inference (or training) operation of a neural network that may require substantial processing. The non-volatile memory 506 can receive the input data from the DRAM 504, configure the neural network to perform the inference (or training) operation, process the data through the neural network, and send (or store) the output data to the DRAM 504. The CPU 502 can subsequently retrieve the results of the inference operation from the DRAM 504. Advantageously, the CPU 502 can offload the execution of the inference operation to a separate non-volatile memory 506. Moreover, the non-volatile memory 506 can execute inference operations of the neural network in parallel or substantially in parallel with the other operations being performed in the DRAM 504.
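By way of non-limiting illustration, the following Python sketch models this offload flow under simplifying assumptions; the names (SmartMemoryDevice, run_inference, the DRAM addresses) are hypothetical and are not part of the disclosed implementation. The host writes input data to a DRAM buffer, the controller applies the layers held in the non-volatile region, and the result is written back to DRAM for the host to retrieve.

```python
# Hypothetical sketch of the CPU -> DRAM -> non-volatile memory offload flow.
# Class and method names are illustrative assumptions, not the disclosed device.

class SmartMemoryDevice:
    def __init__(self):
        self.dram = {}          # stands in for DRAM 504 (memory pages keyed by address)
        self.nvm_layers = []    # stands in for neural network layers stored in NVM 506

    def load_network(self, layers):
        # Store the layers (here: plain Python callables) in the "non-volatile" region.
        self.nvm_layers = layers

    def write_dram(self, addr, data):
        self.dram[addr] = data

    def read_dram(self, addr):
        return self.dram[addr]

    def run_inference(self, in_addr, out_addr):
        # Controller reads input from DRAM, applies each layer in NVM, writes the result back.
        data = self.dram[in_addr]
        for layer in self.nvm_layers:
            data = layer(data)
        self.dram[out_addr] = data


if __name__ == "__main__":
    device = SmartMemoryDevice()
    # Two toy "layers": scale then threshold, standing in for real neural network layers.
    device.load_network([lambda xs: [2 * x for x in xs],
                         lambda xs: [1 if x > 3 else 0 for x in xs]])
    device.write_dram(0x100, [1.0, 2.5, 0.5])   # host stores input data in DRAM
    device.run_inference(in_addr=0x100, out_addr=0x200)
    print(device.read_dram(0x200))              # host retrieves the result: [0, 1, 0]
```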
Data Processing in Layers of a Neural Network
The neural network engine used by the disclosed embodiments can be configured to implement any type of neural network. The neural network engine can define a neural network based on one or more factors, including (1) the number of nodes in each layer, (2) the number of hidden layers, (3) the type of activation function, and/or (4) the matrix of weights for every connection between nodes of adjacent layers. In some embodiments, the neural network can be defined based on a functionality, and the neural network engine can retrieve a predefined neural network corresponding to the desired functionality.
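As a rough illustration only, the following sketch builds a small fully connected network from the factors listed above (nodes per layer, activation type, and per-connection weight matrices); the helper names and the random weight initialization are assumptions and do not describe the disclosed engine.

```python
# Minimal sketch of defining a network from configuration parameters; purely illustrative.
import math
import random

def build_network(nodes_per_layer, activation="sigmoid", seed=0):
    """Create a weight matrix for every connection between adjacent layers."""
    rng = random.Random(seed)
    weights = []
    for n_in, n_out in zip(nodes_per_layer, nodes_per_layer[1:]):
        weights.append([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)])
    return {"weights": weights, "activation": activation}

def activate(x, kind):
    return 1.0 / (1.0 + math.exp(-x)) if kind == "sigmoid" else max(0.0, x)

def forward(network, inputs):
    values = inputs
    for matrix in network["weights"]:
        values = [activate(sum(w * v for w, v in zip(row, values)), network["activation"])
                  for row in matrix]
    return values

# Example: 3 inputs, one hidden layer of 4 nodes, 2 outputs, sigmoid activation.
net = build_network([3, 4, 2])
print(forward(net, [0.2, 0.7, 0.1]))
```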
In some embodiments, a controller, such as the external CPU and/or a controller of the non-volatile memory, can configure the neural network, such as by defining the type of neural network used for processing the data. The controller can identify the appropriate input data. For example, the input data may include a picture that is sent into a neural network, such as a systolic flow engine, that is trained to identify people in pictures. The systolic flow engine may output an output stream that provides an indication of whether a person was identified in the picture of the input stream.
The DRAM 602 can receive and store the input data (e.g. N Bytes of input data) and push the data into the neural network. The non-volatile memory can include the layers of the neural network 604A, 604B, . . . 604N. The output of the neural network can be stored back into the DRAM 602. In some embodiments, an output of one neural network can be fed into an input of another neural network. In some embodiments, the DRAM can feed multiple neural networks in non-volatile memory for data processing of multiple functionalities.
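The chaining of one network's output into a successive network can be pictured with the following minimal sketch; the dictionary standing in for the DRAM and the two toy networks are purely hypothetical.

```python
# Illustrative sketch (assumed names) of a DRAM buffer feeding one network and
# passing that network's output on as the input of a second network.
def run_network(layers, data):
    for layer in layers:
        data = layer(data)
    return data

dram = {"input": [0.4, 0.9, 0.1]}                      # input data held in DRAM

feature_net = [lambda xs: [x * 10 for x in xs]]        # first neural network (e.g., feature extraction)
decision_net = [lambda xs: [sum(xs) / len(xs)],        # successive network consuming those features
                lambda xs: ["match" if xs[0] > 3 else "no match"]]

dram["features"] = run_network(feature_net, dram["input"])     # output stored back into DRAM
dram["result"] = run_network(decision_net, dram["features"])   # fed as input to the next network
print(dram["result"])
```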
In some embodiments, the CPU can lock the corresponding input data as the input data is pushed into the neural network. Thus, if the neural network is still processing the input data, the CPU can wait for the neural network to complete its computations before modifying the input data. The CPU can access the data without modification, such as by performing a read operation.
In some embodiments, the CPU or DRAM's controller can copy the corresponding input data, and push the copy of the data into the neural network. In such cases, the CPU can modify the original input data while the copy of the data is being processed. The circuitry between the neural network layers can include one or more memory cells to store the outputs of a previous layer as inputs to the next layer.
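A simplified sketch of these two input-handling strategies follows; the lock flag and the copy semantics shown are illustrative assumptions rather than the device's actual mechanism.

```python
# Hedged sketch of the two strategies described above: lock the original input
# during processing, or process a copy so the original may be modified.
import copy

class InputBuffer:
    def __init__(self, data):
        self.data = data
        self.locked = False

    def write(self, new_data):
        if self.locked:
            raise RuntimeError("buffer locked: neural network still processing")
        self.data = new_data

# Strategy 1: lock the original data for the duration of the computation.
buf = InputBuffer([1, 2, 3])
buf.locked = True                 # set before pushing the data into the neural network
# ... neural network processes buf.data; reads remain allowed, writes are blocked ...
buf.locked = False                # cleared once the output has been made available

# Strategy 2: process a copy so the CPU may modify the original immediately.
snapshot = copy.deepcopy(buf.data)
buf.write([4, 5, 6])              # original can change while `snapshot` is being processed
```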
In some embodiments, the DRAM 602 can serve as the input layer and/or the output layer for the neural network. In other embodiments, the DRAM 602 can input the data into an input layer of a neural network and/or receive the output of an output layer of a neural network.
In some embodiments, the non-volatile memory can include all of the layers of the neural network. In other embodiments, the non-volatile memory (e.g., 408 in FIG. 4) can include a subset of the layers of the neural network.
In some embodiments, a controller can control the receiving and/or sending of data to and/or from the DRAM 602. The controller can configure the non-volatile memory for a particular neural network. The controller can facilitate data processing through the neural network stored in the non-volatile memory.
In some embodiments, data can be back-propagated through the layers of the non-volatile memory for training purposes. For example, training data can be forward propagated through the neural network. Based on the output of the neural network, the controller can back propagate through each layer by increasing the weight for the nodes that contributed to the desired output and vice versa.
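As a toy illustration of this forward-then-backward weight adjustment, the following single-weight gradient-descent example shows a weight being nudged according to its contribution to the error; it is not the disclosed training procedure, and the learning rate and loss are assumptions.

```python
# Toy illustration (one weight, squared error) of forward propagation followed
# by a backward weight adjustment; not the disclosed training procedure.
weight = 0.5
learning_rate = 0.1
sample_input, target = 2.0, 3.0

for step in range(20):
    prediction = weight * sample_input            # forward propagation through the "layer"
    error = prediction - target                   # compare output with the desired output
    gradient = error * sample_input               # d(0.5 * error**2) / d(weight)
    weight -= learning_rate * gradient            # adjust the weight against the gradient
print(round(weight, 4))                           # approaches 1.5, since 1.5 * 2.0 = 3.0
```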
Repurposing Non-Volatile Memory for Multiple Neural Networks
At step 3, the data can be processed through the layers of the neural network 702A, 702B, . . . 702N. The output of the non-volatile memory can be inputted back into the non-volatile memory for processing by a subsequent layer. In some cases, multiple neural networks can be used to process data in sequence. For example, at step L, the result of processing by a particular neural network can be stored in memory, such as a temporary memory or buffer (which can be part of the DRAM). At step L+1, a subsequent neural network can be configured for the non-volatile memory, and at step L+2, the stored output can be inputted back into the non-volatile memory and processed through the subsequent neural network.
In step 704, the process can store input data in the DRAM and cause the input data to be provided to the neural network stored in the non-volatile memory. In step 706, the data can be processed by the neural network. In step 708, the process can receive the output of the neural network.
In step 710, the process can determine whether another neural network is to further process the data or if the data processing is complete. If data processing is complete, then the process ends at step 722.
If there are further neural network processing operations, at step 712, the process can define the type of neural network. The process can determine whether the same neural network can be rerun and/or whether a different neural network is needed.
In step 714, the process can retrieve the stored data from the previous neural network, and in step 716, can input the saved output data from the previous neural network into the newly configured neural network. In step 718, the data can be processed through the neural network. In step 720, the process can save the output of the neural network, for example in the DRAM. Then, the process can continue to step 710, where the process can determine whether another neural network is to further process the data or if the data processing is complete.
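The loop described in steps 704 through 722 can be summarized, under assumed function names and a simplified stopping rule, by the following sketch.

```python
# Schematic Python rendering of the loop described above (steps 704-722); the
# function names and the fixed list of network configurations are assumptions.
def run_network(layers, data):
    for layer in layers:
        data = layer(data)
    return data

def process_pipeline(input_data, network_configs):
    dram = {"data": input_data}                       # step 704: store input data in DRAM
    for layers in network_configs:                    # step 712: define/configure the next network
        saved = dram["data"]                          # step 714: retrieve the stored data
        dram["data"] = run_network(layers, saved)     # steps 716-718: input and process the data
        # step 720: output saved back to DRAM; the loop decides whether more processing is needed
    return dram["data"]                               # step 722: processing complete

configs = [[lambda xs: [x + 1 for x in xs]],          # first neural network
           [lambda xs: [x * x for x in xs]]]          # reconfigured subsequent network
print(process_pipeline([1, 2, 3], configs))           # [4, 9, 16]
```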
Multiple Neural Networks Configured in Non-Volatile Memory
In some embodiments, a smart memory device can process neural networks in series, such as the example shown in
Smart Memory Device Architecture
If neural network processing is requested, the CPU 904 can send configuration parameters of the desired neural network to a non-volatile memory controller 908. The controller 908 can process the data through the layers 902A, 902B, . . . 902C of the neural network implemented in the non-volatile memory and send the output of the neural network to the memory page 906 of the DRAM (or another area of DRAM) at step 4. In step 5, the controller 908 can indicate to the CPU 904 that the neural network operation is complete. This can be performed by setting or activating an interrupt. In other embodiments, the CPU 904 can poll the controller 908 for a status of the neural network operation.
If the request requires a neural network operation, at step 920 the CPU can send characteristics of a neural network to a controller 918. The controller 918 can determine the corresponding neural network based on the received characteristics, and at step 922, input the data stored in memory into the neural network. The neural network engine can process the data through the neural network in step 924. In step 926, the controller 918 can send the output of the neural network to the DRAM, and at step 928, the DRAM 910 can store the output data into memory for the CPU to access.
In some embodiments, the memory device can process the data synchronously, and the CPU can wait for the neural network operations to complete. The CPU can optionally send an end function to stop the processing of data through the neural network during data processing. Otherwise, the CPU can poll the memory device. With asynchronous processing, advantageously, the CPU does not have to wait for the neural network data processing to complete.
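The synchronous and asynchronous retrieval styles can be contrasted with the following illustrative sketch, in which a completion event stands in for an interrupt and a sleep loop stands in for periodic polling; the class and method names are assumptions, not the disclosed interface.

```python
# Assumed-name sketch contrasting synchronous waiting with polling-based
# completion detection for a neural network job running on a separate worker.
import threading
import time

class NeuralJob:
    def __init__(self):
        self.done = threading.Event()
        self.result = None

    def start(self, data):
        def work():
            time.sleep(0.1)                    # stands in for processing in non-volatile memory
            self.result = [x * 2 for x in data]
            self.done.set()                    # analogous to raising a completion interrupt
        threading.Thread(target=work).start()

job = NeuralJob()
job.start([1, 2, 3])
job.done.wait()                                # synchronous style: block until completion
print("sync result:", job.result)

job2 = NeuralJob()
job2.start([4, 5, 6])
while not job2.done.is_set():                  # asynchronous style: poll periodically,
    time.sleep(0.01)                           # doing other work between polls if desired
print("async result:", job2.result)
```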
CPU Delegation of Processing to the Non-Volatile Memory
In step 2, the task scheduler 1002 can manage the delegation of tasks, such as by assigning a time slice to the CPU 1004 to perform a certain task, where the CPU activity is split between time slices. The CPU 1004 can initiate data processing in step 3 by sending the request to a controller 1012 of a smart memory device. The controller 1012 can configure a neural network 1008 to perform the neural network operation(s), receive the input data from memory 1006 (such as DRAM), process the data through the neural network, and send the output data to a DRAM memory page, as described herein.
In some embodiments, while the data is being processed by the neural network, the CPU 1004 can indicate to the task scheduler 1002 to put the process into a sleep state (for example, because the CPU 1004 is waiting for completion of the neural network processing). The task scheduler 1002 then does not assign time slices to that process. In some embodiments, the CPU 1004 can perform other tasks while the controller 1012 is managing the neural network processing.
After the neural network processing is finished in step 5, in step 6 the task scheduler 1002 can return the process to a ready state. Advantageously, offloading the neural network processing from the CPU to the smart memory device can dramatically improve system performance by freeing the CPU's resources. In addition, the whole memory space can be used for large data processing without affecting system performance. Also, power consumption can be reduced, for example, because processing-intensive neural network computations are performed by the non-volatile memory device rather than the CPU.
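A simplified, purely illustrative rendering of this scheduler interaction follows, with assumed state names; the requesting process sleeps while the device computes and becomes ready again upon completion.

```python
# Simplified state sketch (assumed names) of the scheduler interaction around
# an offloaded neural network computation.
class TaskScheduler:
    def __init__(self):
        self.states = {}

    def set_state(self, pid, state):
        self.states[pid] = state
        print(f"process {pid}: {state}")

scheduler = TaskScheduler()
pid = 42

scheduler.set_state(pid, "running")     # CPU issues the neural network request
scheduler.set_state(pid, "sleeping")    # no time slices assigned while the device computes
# ... smart memory device controller performs the neural network processing ...
scheduler.set_state(pid, "ready")       # completion reported; the process is schedulable again
```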
Other Variations
Any of the embodiments disclosed herein can be used with any of the concepts disclosed in co-pending U.S. patent application Ser. No. 16/363,744, titled “ENHANCED STORAGE DEVICE MEMORY ARCHITECTURE FOR NEURAL NETWORK PROCESSING”, filed on Mar. 25, 2019, and hereby incorporated by reference in its entirety.
Those skilled in the art will appreciate that in some embodiments additional system components can be utilized, and disclosed system components can be combined or omitted. Although some embodiments describe video data transmission, disclosed systems and methods can be used for transmission of any type of data. In addition, although some embodiments utilize erasure coding, any suitable error correction schemes can be used. The actual steps taken in the disclosed processes may differ from those shown in the figures. Depending on the embodiment, certain of the steps described above may be removed, others may be added. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the protection. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the protection. For example, the systems and methods disclosed herein can be applied to hard disk drives, hybrid hard drives, and the like. In addition, other forms of storage (such as, DRAM or SRAM, battery backed-up volatile DRAM or SRAM devices, EPROM, EEPROM memory, etc.) may additionally or alternatively be used. As another example, the various components illustrated in the figures may be implemented as software and/or firmware on a processor, ASIC/FPGA, or dedicated hardware. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of this disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Further, references to “a method” or “an embodiment” throughout are not intended to mean the same method or same embodiment, unless the context clearly indicates otherwise.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the various embodiments of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of this disclosure. The example embodiments were chosen and described in order to best explain the principles of this disclosure and the practical application, and to enable others of ordinary skill in the art to understand this disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.
Franca-Neto, Luiz M., Dubeyko, Viacheslav