Disclosed are a method and device for concurrently performing a plurality of data manipulation operations on data being transferred via a Direct Memory Access (DMA) channel managed by a DMA controller/engine. A Control Data Block (CDB) that specifies where the data is retrieved from, where it is delivered, and how the plurality of data manipulation operations are performed may be fetched by the DMA controller. A CDB processor operating within the DMA controller may read the CDB and set up the data reads, data manipulation operations, and data writes in accord with the contents of the CDB. Data may be provided from one or more sources, and data/modified data may be delivered to one or more destinations. While data is being channeled through the DMA controller, the DMA controller may concurrently perform a plurality of data manipulation operations on the data, such as, but not limited to: hashing, HMAC, fill pattern, LFSR, EEDP check, EEDP generation, XOR, encryption, and decryption. The data modification engines that perform the data manipulation operations may be implemented on the DMA controller such that they use local RAM, avoiding the need to access external memory during data manipulation operations.
9. A direct memory access (DMA) controller that performs a plurality of data modification operations on data being transferred via a direct memory access (DMA) channel managed by said DMA controller comprising:
a control data block (CDB) processor subsystem that fetches a control data block (CDB), said CDB containing instructions for reading said data from at least one data source, performing said plurality of data modification operations on said data, and writing said data to at least one destination;
a fill subsystem that retrieves said data from at least one data source in accord with said instructions encoded in said CDB;
a plurality of data modification engines within said DMA controller that each perform at least one of a variety of data modification operations for each data modification operation of said plurality of data modification operations on said data received by said fill subsystem in accord with said instructions encoded in said CDB such that at least two of said plurality of data modification operations are performed concurrently by said DMA controller and a first data modification engine of said plurality of data modification engines creates first data results that are used as a basis of computation for a second data modification engine of said plurality of data modification engines to perform at least one data modification operation of said plurality of data modification operations; and
a drain subsystem that sends results of said plurality of data modification operations to at least one destination in accord with said instructions encoded in said CDB.
1. A method to perform a plurality of data modification operations on data being transferred via a direct memory access (DMA) channel managed by a DMA controller comprising:
providing a plurality of data modification engines within said DMA controller that each perform at least one of a variety of data modification operations;
fetching a control data block (CDB) by said DMA controller, said CDB containing instructions for reading said data from at least one data source, performing said plurality of data modification operations on said data, and writing said data to at least one destination;
retrieving by said DMA controller said data from at least one data source in accord with said instructions encoded in said CDB;
performing within said DMA controller each data modification operation of said plurality of data modification operations on said data received by said DMA controller using one of said plurality of data modification engines provided on said DMA controller for each data modification operation in accord with said instructions encoded in said CDB such that at least two of said plurality of data modification operations are performed concurrently by said DMA controller and a first data modification engine of said plurality of data modification engines creates first data results that are used as a basis of computation for a second data modification engine of said plurality of data modification engines to perform at least one data modification operation of said plurality of data modification operations; and
sending by said DMA controller results of said plurality of data modification operations to at least one destination in accord with said instructions encoded in said CDB.
17. A direct memory access (DMA) controller that performs a plurality of data modification operations on data being transferred via a direct memory access (DMA) channel managed by said DMA controller comprising:
means for providing a plurality of data modification engines within said DMA controller that each perform at least one of a variety of data modification operations;
means for fetching a control data block (CDB) by said DMA controller, said CDB containing instructions for reading said data from at least one data source, performing said plurality of data modification operations on said data, and writing said data to at least one destination;
means for retrieving by said DMA controller said data from at least one data source in accord with said instructions encoded in said CDB;
means for performing within said DMA controller each data modification operation of said plurality of data modification operations on said data received by said DMA controller using one of said plurality of data modification engines provided on said DMA controller for each data modification operation in accord with said instructions encoded in said CDB such that at least two of said plurality of data modification operations are performed concurrently by said DMA controller and a first data modification engine of said plurality of data modification engines creates first data results that are used as a basis of computation for a second data modification engine of said plurality of data modification engines to perform at least one data modification operation of said plurality of data modification operations; and
means for sending by said DMA controller results of said plurality of data modification operations to at least one destination in accord with said instructions encoded in said CDB.
2. The method of
creating read commands within said dma controller in accord with said instructions encoded in said CDB to read said data from said at least one data source;
sending said read commands from said dma controller to said at least one data source; and
receiving at said dma controller said data from said at least one data source sent by said at least one data source in accord with said read commands.
3. The method of
creating write commands within said dma controller in accord with said instructions encoded in said CDB to write said results of said plurality of data modification operations to said at least one destination; and
sending by said dma controller said results of said plurality of data modification operations to said at least one destination in accord with said write commands.
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The DMA controller of
11. The DMA controller of
12. The DMA controller of
13. The DMA controller of
14. The DMA controller of
15. The DMA controller of
16. The DMA controller of
Direct Memory Access (DMA) is an essential feature of modern computers. DMA permits particular hardware subsystems of a computer to have read and/or write access to system memory independently of the Central Processing Unit (CPU). Some example hardware systems that may use DMA include, but are not limited to: disk drive controllers, RAID (Redundant Array of Independent Disks)-On-a-Chip (ROC) controllers, graphics cards, network cards, and sound cards. DMA may also be used for intra-chip data transfer on multi-core processors. Management and implementation of a DMA channel is typically performed by a DMA controller. Many times the DMA controller is equipped with local memory such that the DMA controller transfers data between the local DMA memory and external main memory. Since the DMA controller, and not the computer CPU, manages the transfer of data, data transfers that use DMA consume much less CPU processing time, thus increasing the effective computing power of a computer having a DMA controller. Without DMA, communication with peripheral devices or between cores of a multi-core system may fully occupy the CPU during the entire read/write operation, making the CPU unavailable for other computing tasks. With DMA, the CPU initiates the transfer, performs other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation has been completed. Freeing the CPU from performing the data transfer with peripheral devices is especially important because communication with peripheral devices is typically slower than with normal system Random Access Memory (RAM), so without a DMA channel managed by a DMA controller the CPU would be unavailable for even longer periods during communication with peripheral devices.
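By way of illustration, a minimal bare-metal C sketch of this initiate-then-interrupt flow is shown below; the register addresses, register names, and the dma_copy_async()/dma_irq_handler() interface are hypothetical assumptions for illustration, not those of any particular controller.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped DMA controller registers (illustrative only). */
#define DMA_SRC_REG    (*(volatile uint32_t *)0x40001000u)
#define DMA_DST_REG    (*(volatile uint32_t *)0x40001004u)
#define DMA_LEN_REG    (*(volatile uint32_t *)0x40001008u)
#define DMA_CTRL_REG   (*(volatile uint32_t *)0x4000100Cu)
#define DMA_CTRL_START 0x1u

static volatile bool dma_done;      /* set by the completion interrupt */

/* Interrupt service routine the DMA controller raises on completion. */
void dma_irq_handler(void)
{
    dma_done = true;
}

/* The CPU programs the transfer and returns immediately; the DMA controller
 * moves the data while the CPU does other work. */
void dma_copy_async(uint32_t src, uint32_t dst, uint32_t len)
{
    dma_done     = false;
    DMA_SRC_REG  = src;
    DMA_DST_REG  = dst;
    DMA_LEN_REG  = len;
    DMA_CTRL_REG = DMA_CTRL_START;  /* hand the copy off to the controller */
}

static void do_other_work(void)
{
    /* placeholder for useful CPU work that overlaps the transfer */
}

void example(void)
{
    dma_copy_async(0x20000000u, 0x20010000u, 4096u);
    while (!dma_done) {
        do_other_work();            /* CPU stays productive until the interrupt fires */
    }
}
```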
A typical DMA transfer copies a block of memory from one device to another. The CPU initiates the data transfer, but the CPU does not execute the data transfer itself. For an Industry Standard Architecture (ISA) bus, the data transfer is performed by a DMA controller, which is typically incorporated as part of the computer motherboard chipset. A Peripheral Component Interconnect (PCI) bus uses bus-mastering DMA, in which the peripheral device takes control of the bus and performs the transfer itself.
An embedded processor may include a DMA engine/controller within the chip to allow the processing element to issue a data transfer and continue performing other tasks while the transfer proceeds. When the DMA controller is incorporated within a chip, the DMA controller is commonly referred to as a DMA engine. Multi-core embedded processors often include one or more DMA engines, in combination with local DMA memory, as subsystems within the multi-core processor chip.
An embodiment of the present invention may comprise a method to perform a plurality of data manipulation operations on data being transferred via a Direct Memory Access (DMA) channel managed by a DMA controller comprising: providing a plurality of data manipulation engines within the DMA controller that each perform at least one of a variety of data manipulation operations; fetching a Control Data Block (CDB) by the DMA controller, the CDB containing instructions for reading the data from at least one data source, performing the plurality of data manipulation operations on the data, and writing the data to at least one destination; retrieving by the DMA controller the data from at least one data source in accord with the instructions encoded in the CDB; performing within the DMA controller each data manipulation operation of the plurality of data manipulation operations on the data received by the DMA controller using one of the plurality of data manipulation engines provided on the DMA controller for each data manipulation operation in accord with the instructions encoded in the CDB such that at least two of the plurality of data manipulation operations are performed concurrently by the DMA controller; and sending by the DMA controller results of the plurality of data manipulation operations to at least one destination in accord with the instructions encoded in the CDB.
An embodiment of the present invention may further comprise a Direct Memory Access (DMA) controller that performs a plurality of data manipulation operations on data being transferred via a Direct Memory Access (DMA) channel managed by the DMA controller comprising: a Control Data Block (CDB) processor subsystem that fetches a Control Data Block (CDB), the CDB containing instructions for reading the data from at least one data source, performing the plurality of data manipulation operations on the data, and writing the data to at least one destination; a fill subsystem that retrieves the data from at least one data source in accord with the instructions encoded in the CDB; a plurality of data manipulation engines within the DMA controller that each perform at least one of a variety of data manipulation operations for each data manipulation operation of the plurality of data manipulation operations on the data received by the fill subsystem in accord with the instructions encoded in the CDB such that at least two of the plurality of data manipulation operations are performed concurrently by the DMA controller; and a drain subsystem that sends results of the plurality of data manipulation operations to at least one destination in accord with the instructions encoded in the CDB.
An embodiment of the present invention may further comprise a Direct Memory Access (DMA) controller that performs a plurality of data manipulation operations on data being transferred via a Direct Memory Access (DMA) channel managed by the DMA controller comprising: means for providing a plurality of data manipulation engines within the DMA controller that each perform at least one of a variety of data manipulation operations; means for fetching a Control Data Block (CDB) by the DMA controller, the CDB containing instructions for reading the data from at least one data source, performing the plurality of data manipulation operations on the data, and writing the data to at least one destination; means for retrieving by the DMA controller the data from at least one data source in accord with the instructions encoded in the CDB; means for performing within the DMA controller each data manipulation operation of the plurality of data manipulation operations on the data received by the DMA controller using one of the plurality of data manipulation engines provided on the DMA controller for each data manipulation operation in accord with the instructions encoded in the CDB such that at least two of the plurality of data manipulation operations are performed concurrently by the DMA controller; and means for sending by the DMA controller results of the plurality of data manipulation operations to at least one destination in accord with the instructions encoded in the CDB.
As modern computer systems become more sophisticated, additional data manipulation operations are expected to be performed on data being transferred within a computer system and to peripheral devices connected to the computer system. For instance, a RAID (Redundant Array of Independent Disks)-On-a-Chip (ROC) system may be expected to perform a number of data manipulation operations on data being transferred as part of the ROC system operations. An ROC system typically includes a Direct Memory Access (DMA) engine/controller as a subsystem within the ROC architecture. Some typical data manipulation operations include: hash, Hash Message Authentication Code (HMAC), hash/HMAC combined, fill pattern, Linear Feedback Shift Register (LFSR), End-to-End Data Protection (EEDP) check/add/update/remove, exclusive OR (XOR), encryption, and decryption. The DMA engine/controller of an embodiment may perform various data manipulation operations on the data being transferred within the DMA engine/controller itself, reducing the processing requirements for other systems/subsystems within a computer system. Further, the DMA engine/controller of an embodiment may concurrently perform a plurality of data manipulations on the data being transferred, so that the plurality of data manipulations are completed more quickly than if they were performed one at a time in serial fashion. Thus, an embodiment allows data to move from source to destination with as little interruption as possible. The data manipulation operations performed on the data being transferred by the DMA engine/controller of an embodiment may also store and retrieve data/modified data in memory local to the DMA engine/controller (i.e., local memory), avoiding the unnecessary overhead of accessing memory external to the DMA engine/controller (i.e., external memory).
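The embodiments do not mandate a particular CDB layout; the following C sketch is only one hypothetical way a CDB might encode source descriptors, destination descriptors, and operation flags. All field and flag names below are assumptions for illustration, not the actual CDB format.

```c
#include <stdint.h>

/* Hypothetical operation flags a CDB might carry (illustrative names). */
enum cdb_op_flags {
    CDB_OP_HASH       = 1u << 0,
    CDB_OP_HMAC       = 1u << 1,
    CDB_OP_FILL       = 1u << 2,
    CDB_OP_LFSR       = 1u << 3,
    CDB_OP_EEDP_CHECK = 1u << 4,
    CDB_OP_EEDP_ADD   = 1u << 5,
    CDB_OP_XOR        = 1u << 6,
    CDB_OP_ENCRYPT    = 1u << 7,
    CDB_OP_DECRYPT    = 1u << 8,
};

/* One source or destination descriptor: where data lives and how much. */
struct cdb_segment {
    uint64_t address;       /* external-memory or device address            */
    uint32_t length;        /* bytes to read or write                       */
    uint32_t dest_mask;     /* for results: which destinations receive them */
};

/* Sketch of a control data block: the CDB processor reads this to set up
 * the reads, the concurrent data manipulation operations, and the writes. */
struct cdb {
    uint32_t           op_flags;     /* bitwise OR of cdb_op_flags          */
    uint8_t            num_sources;
    uint8_t            num_dests;
    struct cdb_segment source[4];    /* up to 4 sources in this sketch      */
    struct cdb_segment dest[4];      /* up to 4 destinations in this sketch */
    uint8_t            key_index;    /* encryption key slot, if used        */
};
```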
For example, the DMA engine/controller of an embodiment may encrypt and store data in a first data buffer located in local DMA memory using an encryption engine on the DMA engine/controller. While the encrypted data is being placed into the first data buffer, an XOR engine on the DMA engine/controller may concurrently perform an XOR operation on the encrypted data and store the XOR data in a second data buffer located in local DMA memory without any interruption to the data flow of the encrypted data to the first data buffer. By performing the XOR operation on the encrypted data as the encrypted data is being produced, an embodiment may avoid the latency of waiting for the encrypted data to be completely written to a data buffer before performing the XOR operation. In addition, the DMA engine/controller of an embodiment may also perform EEDP check, add, and/or remove operations in parallel with the encryption and XOR data manipulation operations, further reducing latency involved in performing a plurality of data manipulation operations on data being transferred by a DMA engine/controller.
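A software analogue of this overlap (not the hardware implementation itself) might process the stream in fixed-size chunks and hand each encrypted chunk to the XOR stage as soon as it is produced; the chunk size, the stand-in cipher, and the function names below are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define CHUNK 512u   /* illustrative chunk size */

/* Placeholder transforms standing in for the hardware engines. */
static void encrypt_chunk(const uint8_t *in, uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ 0xA5;          /* stand-in for a real cipher */
}

static void xor_into(uint8_t *acc, const uint8_t *in, size_t n)
{
    for (size_t i = 0; i < n; i++)
        acc[i] ^= in[i];                /* accumulate RAID-style parity */
}

/* Encrypt src into enc_buf chunk by chunk and XOR each encrypted chunk into
 * the parity buffer as soon as it is produced, instead of waiting for the
 * whole encryption pass to finish. In hardware the two engines overlap in
 * time; this loop only shows the streaming structure of that overlap. */
void encrypt_and_xor(const uint8_t *src, uint8_t *enc_buf,
                     uint8_t *parity, size_t len)
{
    for (size_t off = 0; off < len; off += CHUNK) {
        size_t n = (len - off < CHUNK) ? (len - off) : CHUNK;
        encrypt_chunk(src + off, enc_buf + off, n);   /* "first data buffer"  */
        xor_into(parity + off, enc_buf + off, n);     /* "second data buffer" */
    }
}
```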
Various embodiments may be implemented on DMA engines/controllers in a variety of computer systems and electronic devices having DMA engines/controllers. If desired, a computer system and/or electronic device may include multiple DMA engines/controllers when multiple DMA channels are desired. Also, some DMA engines/controllers may provide multiple DMA channels, so that including multiple DMA engines/controllers in a system results in a multiplier effect where the number of available DMA channels is the product of the number of DMA engines/controllers and the number of DMA channels available on each DMA engine/controller (e.g., four DMA engines/controllers that each provide eight DMA channels yield thirty-two available DMA channels). DMA controllers for an embodiment may be implemented as separate systems that are incorporated into a computer system and/or electronic device as a separate dedicated computer “card,” a separate dedicated chip, or a separate dedicated electronic device. Often, however, the DMA controller is incorporated as a subsystem, or engine, of a larger multi-function chip, card, or circuit integrated into a computer system or electronic device. Thus, DMA controllers for an embodiment may be implemented as DMA engine subsystems included within a larger multi-function chip, card, or circuit. Typically, when a DMA controller is included as a subsystem of a larger multi-function chip, card, or circuit, the DMA controller is called a DMA engine. Hence, throughout this document, the terms DMA controller and DMA engine are used interchangeably, such that a reference to a DMA controller is also a reference to a DMA engine and vice versa.
For an embodiment, the data source(s) 102-104 may be any data source 102-104 compatible with typical DMA controllers/engines 106. An embodiment may gather/read data from a single data source 102 or from a plurality of sources 102-104. For instance, source data from a single memory area may be read from external memory (i.e., memory external to the DMA controller 106) and transferred to various data destinations 116-118. Likewise, source data from multiple memory areas may be read from external memory and transferred to various data destinations 116-118 in one data transfer operation.
Data modification engines may be created to perform any data modifications desired for data being transferred via a DMA controller/engine 106. For example, some data modification engines may include, but are not limited to: hash, HMAC, hash/HMAC combined, fill pattern, Linear Feedback Shift Register (LFSR), End-to-End Data Protection (EEDP) check/add/update/remove, exclusive OR (XOR), encryption, and decryption. Some types of supported hashing algorithms include the standard Secure Hash Algorithm (SHA)-224, SHA-256, SHA-384, and SHA-512 algorithms, which may be performed individually or concurrently by a hash engine. The input data for a data manipulation engine 110-112 may be designated to come from another data manipulation engine 110-112 as an intermediate data result, permitting multiple data manipulations to be combined into a single result. The intermediate result may also be sent to one of the destination locations 116-118 if desired, as encoded in the CDB 120; however, the intermediate data result does not necessarily need to be sent to a data destination 116-118. When working on data coming from another data manipulation engine, the second data manipulation engine may operate concurrently with the first data manipulation engine, but the second data manipulation engine may start slightly behind the first so that the first data manipulation engine can begin streaming intermediate result data before the second data manipulation engine starts its calculations. For various embodiments, concurrent operation of data manipulation engines 110-112 may also occur as different data manipulation engines 110-112 perform operations on the same data (intermediate results and/or originally received data) at the same time in parallel. Further, various embodiments may deliver both the original data and the modified data results to the data destinations 116-118.
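One way to picture this chaining in software, purely as an illustrative model with hypothetical names, is to let each engine's input be selected as either the fill subsystem's source stream or another engine's intermediate output:

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_ENGINES 3

/* Where an engine takes its input from: the fill subsystem's stream of
 * source data, or the intermediate output of another engine. */
enum input_select { FROM_FILL, FROM_ENGINE };

struct engine_cfg {
    enum input_select input;
    int               upstream;    /* engine index when input == FROM_ENGINE */
    void (*op)(const uint8_t *in, uint8_t *out, size_t n);
};

/* Run one chunk through a chain of engines. In hardware the stages overlap
 * on the stream (the downstream engine starts slightly behind the upstream
 * one); this loop only shows the data routing, not the concurrency. */
void route_chunk(const struct engine_cfg cfg[NUM_ENGINES],
                 const uint8_t *fill_data,
                 uint8_t scratch[NUM_ENGINES][512],
                 size_t n)
{
    for (int e = 0; e < NUM_ENGINES; e++) {
        const uint8_t *in = (cfg[e].input == FROM_FILL)
                          ? fill_data
                          : scratch[cfg[e].upstream];  /* intermediate result  */
        cfg[e].op(in, scratch[e], n);                  /* result may be drained */
    }
}
```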
Various embodiments may implement the local source data storage 108 and the local modify data storage 114 within the DMA controller 106 as one or more local electronic memory circuits. Electronic memory may also be called computer readable memory, even though the computer readable (i.e., electronic) memory may be included in electronic devices that require memory storage but would not typically be considered computers. A typical implementation of the electronic memory would be to provide a set of electronic Random Access Memory (RAM) that may be partitioned by the DMA controller 106 into source data storage 108 and modify data storage 114. The electronic memory may be divided into fixed partitions for the source data storage 108 and the modify data storage 114, but the DMA controller may also dynamically allocate the electronic RAM as needed for the source data storage 108 and the modify data storage 114 so that the entire electronic memory may be utilized more efficiently. Further, the modify data storage 114 may be subdivided into multiple segments in order to store data from multiple data modify engines 110-112.
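A minimal sketch of such a dynamic split, assuming a single local RAM array and hypothetical bookkeeping routines, might look like the following; fixed partitions would simply pin the split point in place.

```c
#include <stddef.h>
#include <stdint.h>

#define LOCAL_RAM_BYTES (64u * 1024u)     /* illustrative local RAM size */

/* One block of local RAM shared between the two uses: source-data buffers
 * are handed out from the bottom and modify-data buffers from the top, so
 * the split point moves with demand instead of being fixed. */
static uint8_t local_ram[LOCAL_RAM_BYTES];
static size_t  src_top  = 0;                  /* grows upward   */
static size_t  mod_base = LOCAL_RAM_BYTES;    /* grows downward */

void *alloc_source_buffer(size_t n)
{
    if (src_top + n > mod_base)
        return NULL;                          /* regions would collide */
    void *p = &local_ram[src_top];
    src_top += n;
    return p;
}

void *alloc_modify_buffer(size_t n)
{
    if (mod_base < n || mod_base - n < src_top)
        return NULL;
    mod_base -= n;
    return &local_ram[mod_base];              /* e.g., one segment per modify engine */
}

void reset_local_ram(void)                    /* reclaim both regions between CDBs */
{
    src_top  = 0;
    mod_base = LOCAL_RAM_BYTES;
}
```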
Data destinations 116-118 for an embodiment may be any data destination 116-118 compatible with typical DMA controllers/engines 106. Various embodiments may write data and/or modified data results to multiple data destinations 116-118 or to a single data destination 116. Individual data results may be sent to all, or only a subgroup, of the available data destinations 116-118. That is, one result may be sent to all of the data destinations 116-118 while other results may be sent only to a single destination 116 or to a subset of destinations 116-118. Typical data destinations 116-118 may include, but are not limited to: disk drives, computer peripherals, a separate external memory segment, and/or other external devices.
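A destination bitmask is one simple way to express this routing; the encoding below is an illustrative convention, not the CDB's actual format.

```c
#include <stdint.h>
#include <stdio.h>

/* Bit i set means "send this result to destination i". */
#define DEST(i) (1u << (i))

struct result_route {
    const char *name;
    uint32_t    dest_mask;
};

int main(void)
{
    struct result_route routes[] = {
        { "original data",  DEST(0) | DEST(1) | DEST(2) },  /* all destinations   */
        { "XOR parity",     DEST(2) },                      /* single destination */
        { "encrypted data", DEST(0) | DEST(2) },            /* a subset           */
    };

    for (unsigned r = 0; r < sizeof routes / sizeof routes[0]; r++)
        for (unsigned d = 0; d < 3; d++)
            if (routes[r].dest_mask & DEST(d))
                printf("%s -> destination %u\n", routes[r].name, d);
    return 0;
}
```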
For the embodiment 500 shown in
In an example operation of a multi source move for the data modification engines of the embodiment shown in
An embodiment implementing the configuration 600 shown in
For the embodiment 700 shown in
In the ROC embodiment 802 shown in
Various embodiments may provide the control and management functions detailed herein via an application operating on a computer system, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), other programmable logic circuits, or other electronic devices. Embodiments may be provided as a computer program product which may include a computer-readable, or machine-readable, medium having stored thereon instructions which may be used to program/operate a computer (or other electronic devices) or computer system to perform a process or processes in accordance with the present invention. The computer-readable medium may include, but is not limited to, hard disk drives, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), Digital Versatile Disc ROMS (DVD-ROMs), Universal Serial Bus (USB) memory sticks, magneto-optical disks, ROMs, random access memories (RAMs), Erasable Programmable ROMs (EPROMs), Electrically Erasable Programmable ROMs (EEPROMs), magnetic optical cards, flash memory, or other types of media/machine-readable medium suitable for storing electronic instructions. The computer program instructions may reside and operate on a single computer/electronic device/electronic circuit or various portions may be spread over multiple computers/devices/electronic circuits that comprise a computer system. Moreover, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection, including both wired/cabled and wireless connections).
The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiment was chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention except insofar as limited by the prior art.
Inventors: Olson, David M.; Piccirillo, Gary