A resource pool managing system and a signal processing method are provided in embodiments of the present disclosure. On the basis of the resource pool, all filters on the links share one set of operation resources and caching resources. The embodiments can support mixed-mode application scenarios with unequal carrier rates, as well as application scenarios with unequal carrier filter orders. Each stage of filters in the mixed-mode system shares one set of multiply-adding and caching resources, so that resource dispatching is unified in one resource pool and resource utilization is maximized. The embodiments also support parameterized configuration of the forward and backward stages of the links, the link parameters, the carrier rates, and so on.

Patent
   8612686
Priority
Jan 21 2009
Filed
Jan 06 2010
Issued
Dec 17 2013
Expiry
Aug 06 2032
Extension
943 days
Entity
   Large
8. A signal processing method, performed by a resource pool managing system comprising a node caching module with multiple node caching channels, a mixed-mode caching module with multiple multiplier channels for caching a data array to be processed by a resource pool module, the resource pool module, and a controlling module, the method comprising:
receiving by the node caching module, data processed by the resource pool module;
caching by the node caching module, the data in a node caching channel according to a node write enable signal sent by the controlling module;
sending by the node caching module, a storage status of the node caching channels to the controlling module;
receiving by the controlling module, the storage status of the node caching channels;
sending by the controlling module, a mapping selecting signal according to the storage status of the node caching channels;
obtaining by the node caching module, the data cached in the node caching channel according to the mapping selecting signal from the controlling module;
caching the data by the mixed-mode caching module, in the data array according to a write address signal sent by the controlling module;
obtaining data in a column of the data array according to a read address signal sent by the controlling module; and
performing filtering operations on the data in the column of the data array according to selecting signals sent by the controlling module.
1. A resource pool managing system, comprising:
a node caching module, a mixed-mode caching module, a resource pool module, and a controlling module; wherein:
the node caching module comprising multiple node caching channels for caching data processed by the resource pool module, wherein the node caching module is configured to:
send a storage status of the multiple node caching channels to the controlling module,
cache the data in one of the multiple node caching channels according to a node write enable signal sent by the controlling module; and
obtain the data cached in one of the multiple node caching channels according to a mapping selecting signal from the controlling module;
the mixed-mode caching module comprising multiple multiplier channels for caching a data array to be processed by the resource pool module, wherein the mixed-mode caching module is configured to:
cache the data obtained by the node caching module in the data array according to a write address signal sent by the controlling module, and
obtain the data in a column of the data array according to a read address signal sent by the controlling module;
the resource pool module is configured to perform filtering operations on the data in the column of the data array according to selecting signals sent by the controlling module; and
the controlling module is configured to:
control the node caching module by sending the node write enable signal, and sending the mapping selecting signal according to the storage status of the multiple node caching channels,
control the mixed-mode caching module by sending the write address signal and the read address signal, and
control the resource pool module by sending the selecting signals.
2. The resource pool managing system according to claim 1, wherein the resource pool module comprises:
a multiply-adding operation array submodule and an output logic submodule which are connected in sequence;
the multiply-adding operation array submodule which comprises a multiplier array and an adder array, adapted to perform filtering operations according to a resource pool selecting signal and a resource pool cache selecting signal sent by the controlling module; and
the output logic submodule which comprises an output register group and a multilink selector, adapted to map and output the node intermediate data to the node caching channel in the node caching module according to an output selecting signal sent by the controlling module, wherein the node intermediate data is obtained by performing the filtering operations.
3. The resource pool managing system according to claim 2, wherein the multiplier channel is implemented by using a Dual-Access Random Access Memory, and a number of the Dual-Access Random Access Memories is the same as a number of multipliers in the multiplier array.
4. The resource pool managing system according to claim 2, wherein the node caching channel is implemented by using a Single-Access Random Access Memory.
5. The resource pool managing system according to claim 2, wherein the node caching module comprises: a first counter, adapted to generate a read address under the control of the mapping selecting signal; and a second counter, adapted to generate a write address under the control of the node write enable signal; wherein the first counter and the second counter are further adapted to generate the storage status, and wherein the storage status comprises congestion information indicating a congestion degree of the node caching channels.
6. The resource pool managing system according to claim 5, wherein the controlling module is further adapted to: generate the mapping selecting signal according to the congestion information and priority of the node caching channels.
7. The resource pool managing system according to claim 1, further comprising: a processing module, disposed between the output end of the resource pool module and the input end of the node caching module, adapted to perform secondary processing on the data processed by the resource pool module.
9. The signal processing method according to claim 8, wherein the storage status comprises congestion information indicating a congestion degree of the node caching channels.
10. The signal processing method according to claim 8, wherein the controlling module sends the read address signal to the mixed-mode caching module upon receiving a status signal from the mixed-mode caching module, wherein the status signal indicates that the data in the data array is ready to be output.
11. The signal processing method according to claim 8, wherein the mapping selecting signal is sent according to the congestion information and a priority of the node caching channels.

This application claims priority to Chinese Patent Application No. 200910001996.7, filed on Jan. 21, 2009, which is hereby incorporated by reference in its entirety.

This disclosure relates to the field of communication technology, and in particular, to a resource pool managing system and a signal processing method.

With the rapid development of wireless communication technology, the continuous evolution of wireless protocols has highlighted the importance of mixed-mode base stations in the future market. Wireless networks are developing from 2G to 3G, and the Global System for Mobile Communications (GSM) network needs to transition smoothly to a 3G network; base stations are therefore required to support GSM systems from the very beginning. In addition, while networks are being switched, base stations must allow carriers of both GSM and the Universal Mobile Telecommunications System (UMTS) to coexist in an operator's frequency band, and must retain mixed-mode capability across systems until the switch to UMTS is complete. The continuous evolution of 3G protocols also requires wireless base stations to mix modes across standards; for example, Wideband Code Division Multiple Access (WCDMA) base stations need to evolve toward Long Term Evolution (LTE) as the protocols evolve. Base stations may also need to switch between different standards; for example, CDMA2000 base stations need to switch smoothly to WCDMA or upgrade directly to LTE. FIG. 1 shows the processing of intermediate frequency (IF) signals. As shown in FIG. 1, most prior-art IF signal processing chips support a signal processing system in a single communication mode only, and do not have the ability to support multiple bandwidth carrier signals simultaneously.

The prior art, which supports an IF signal processing system in a single communication mode only, has the following shortcomings:

A resource pool managing system is provided in an embodiment of the present disclosure. The system includes:

A signal processing method is provided in an embodiment of the present disclosure. The method comprises:

A resource pool managing system and a signal processing method are provided in the embodiments of the present disclosure. On the basis of the resource pool, all filters on the links share one set of operation resources and caching resources. The embodiments can support application scenarios with unequal carrier rates as well as application scenarios with unequal carrier filter orders, and filter resources can be distributed as needed. Each stage of filters in the system shares one set of multiply-adding and caching resources, so that resource dispatching is unified in one resource pool and resource utilization is maximized. This makes full use of resources and improves system extensibility.

FIG. 1 shows a schematic of processing IF signal in the prior art;

FIG. 2 shows a structure of a resource pool managing system in embodiment 1 of the present disclosure;

FIG. 3 shows a structure of a resource pool managing system in embodiment 2 of the present disclosure;

FIG. 4 shows a structure of a resource pool managing system in embodiment 3 of the present disclosure;

FIG. 5 shows a structure of a mode-mixing channel based on a resource pool in an embodiment of the present disclosure;

FIG. 6 shows a channel priority judging circuit of a mode-mixing channel in an embodiment of the present disclosure; and

FIG. 7 shows a flowchart of a signal processing method in an embodiment of the present disclosure.

The technical solution of the embodiments of the present disclosure will be further described with reference to the accompanying drawings and exemplary embodiments.

With the continuous development of wireless communication technology, mixed-mode base stations, as important network equipment, impose more requirements on the design of the multi-carrier filters in the IF channel. In IF signal processing, the main resource consumption comes from the multiply-adding operation array and the caching resources, so all filters (and other signal processors) on a link are made to share one set of operation resources and caching resources. That is, all operation resources and caching resources are included in one big resource pool, and the logic automatically distributes the resources according to the priority configuration and the channel congestion. FIG. 2 shows a structure of a resource pool managing system in embodiment 1 of the present disclosure. As shown in FIG. 2, the resource pool managing system comprises BUFFER, RAM, and CALC functional modules, which help flexibly configure the bandwidth, order, and carrier number of the IF channel based on the resource pool concept of an embodiment of the present disclosure. Here, a node represents a filter on a carrier; an external node represents the link input and output; an internal node represents the interface connection among filters; BUFFER represents the cache dispatching after the node input; RAM caches the data and coefficients of the finite impulse response (FIR) filters; and CALC is a multiplier-adder array. In each clock cycle, the controlling module sends a to-be-computed cell into the CALC for operation and then sends the result out. The embodiments of the present disclosure are described on the basis of an Application Specific Integrated Circuit (ASIC) design for processing mixed-mode IF signals, to introduce how to process mixed-mode IF signals by using the resource pool design method.

FIG. 3 shows a structure of a resource pool managing system in embodiment 2 of the present disclosure. As shown in FIG. 3, the system comprises a node caching module 1, a mixed-mode caching module 2, a resource pool module 3, and a controlling module 4, wherein the controlling module 4 is adapted to control the node caching module 1, the mixed-mode caching module 2, and the resource pool module 3. The node caching module 1 comprises multiple node caching channels, which are adapted to cache the node input data and the node intermediate data. The node input data is the data input to the node caching module 1 directly from outside. The node intermediate data arises when data must be processed through filters in multiple stages: after one stage is processed by the resource pool module 3, the data is re-cached in the node caching module 1 for the next round of processing, output from the node caching module 1 when its turn in the polling comes, matched with the data in the mixed-mode caching module 2, and re-input into the resource pool module 3 for the next stage of processing. If all stages are completed, the data is output from the resource pool module 3; otherwise, the cycle is repeated. The intermediate data produced during this processing is called the node intermediate data. The node caching module 1 may also monitor the counters of each node caching channel, send the storage status of the node caching channels to the controlling module 4, and thereby provide reference information for the controlling module 4 to read and dispatch the data cached in the node caching channels. The controlling module 4 generates the mapping selecting signal from the node caching module 1 to the mixed-mode caching module 2 according to this reference information, and indicates to the node caching module 1 which data is to be read.
The node caching module 1 obtains the data cached in the relevant node caching channel according to the mapping selecting signal sent by the controlling module 4. When the resource pool module 3 outputs node intermediate data, the controlling module 4 sends the node write enable signal to the node caching module 1, instructing it to cache the node intermediate data output by the resource pool module 3 in the relevant node caching channel. The node caching module 1 caches the node intermediate data in the relevant node caching channel according to the node write enable signal.
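A node caching channel as described above behaves like a FIFO whose fill level doubles as the storage status reported to the controlling module. The following is an illustrative Python sketch of that behavior, not the patent's hardware; the names `NodeCacheChannel`, `write`, `read`, and `congestion` are my own assumptions.

```python
# Hypothetical model of one node caching channel. A write models the node
# write enable signal; a read models the mapping selecting signal; the
# fill level models the congestion info derived from the two pointers.
from collections import deque


class NodeCacheChannel:
    def __init__(self, depth):
        self.depth = depth          # RAM depth, set by node bandwidth
        self.buf = deque()

    def write(self, sample):
        # Triggered by the node write enable signal from the controlling module.
        if len(self.buf) >= self.depth:
            raise OverflowError("node caching channel full")
        self.buf.append(sample)

    def read(self):
        # Triggered by the mapping selecting signal from the controlling module.
        return self.buf.popleft()

    def congestion(self):
        # Storage status: difference between the write and read pointers.
        return len(self.buf)
```

In hardware the two pointers are separate counters, but the observable behavior (FIFO order plus a congestion count) is the same.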

The mixed-mode caching module 2 is adapted to cache and dispatch a data array involved in the processing of the resource pool module. The mixed-mode caching module 2 comprises multiple multiplier channels. Together with the mapping selecting signal, the controlling module 4 sends a write address signal to the mixed-mode caching module 2. The mixed-mode caching module 2 caches the data of the node caching channel selected by the controlling module 4 in the relevant multiplier channel of the data array according to the write address signal. When the data filling the data array meets the requirements for outputting, for example, when one column of data is filled up, the mixed-mode caching module 2 sends a full-node-data indication to the controlling module 4. Upon receipt of the indication, the controlling module 4 sends a read address signal to the mixed-mode caching module 2. The mixed-mode caching module 2 obtains the data cached in the relevant column of the data array according to the read address signal and gets ready to output the data to the resource pool module 3 for filtering.
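The column-fill behavior above can be sketched as a small array model: one row per multiplier channel, with a column released only once every row of that column has been written. This is a hedged Python illustration; the class and method names are assumptions, not terms from the patent.

```python
# Illustrative model of the mixed-mode caching module's data array.
class MixedModeCache:
    def __init__(self, n_channels, n_columns):
        # One row per multiplier channel.
        self.array = [[None] * n_columns for _ in range(n_channels)]
        self.filled = [0] * n_columns   # samples written per column
        self.n_channels = n_channels

    def write(self, channel, column, sample):
        # The write address signal selects a (channel, column) cell.
        self.array[channel][column] = sample
        self.filled[column] += 1

    def column_full(self, column):
        # Models the full-node-data ("node_full") indication.
        return self.filled[column] == self.n_channels

    def read_column(self, column):
        # The read address signal hands one full column to the resource pool.
        return [self.array[ch][column] for ch in range(self.n_channels)]
```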

The resource pool module 3 is adapted to perform filtering operations on the data cached in the relevant column of the data array output by the mixed-mode caching module 2, and to output the node intermediate data or result data obtained by the filtering operations according to a resource pool selecting signal, a resource pool cache selecting signal, and an output selecting signal sent by the controlling module 4. When the controlling module 4 sends the read address signal to the mixed-mode caching module 2, it can also send the resource pool selecting signal, the resource pool cache selecting signal, and the output selecting signal to the resource pool module 3. The array data output by the mixed-mode caching module 2 is reorganized under the control of the resource pool selecting signal and routed to the corresponding multipliers in the resource pool module 3. After the multiplier array processing, the addition chain is reorganized under the control of the resource pool cache selecting signal. The resource pool module 3 outputs the data processed by the addition array and multiplication array under the control of the output selecting signal: if a next stage of processing is needed, the data is output to the node caching module 1; if it is the final result, the data is output directly.
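Functionally, the multiply-add pass over one column is a dot product of the column data with filter coefficients, followed by routing of the result either back to the node cache (intermediate stage) or to the link output (final stage). The sketch below is a minimal software analogue; the boolean `is_final_stage` stands in for the output selecting signal, and none of these names appear in the patent.

```python
# Hedged sketch of one resource-pool filtering pass over a column.
def filter_column(column, coefficients, is_final_stage, node_cache, output):
    # Multiplier array + adder chain collapse to a dot product here.
    acc = sum(d * c for d, c in zip(column, coefficients))
    if is_final_stage:
        output.append(acc)       # result data: leaves the pool directly
    else:
        node_cache.append(acc)   # node intermediate data: re-enters the pool
    return acc
```

For example, `filter_column([1, 2, 3], [4, 5, 6], True, cache, out)` accumulates 1*4 + 2*5 + 3*6 = 32 into the output list.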

A resource pool managing system is provided in an embodiment of the present disclosure. On the basis of a resource pool, all filters (and other signal processors) on the link share one set of operation resources and caching resources. The embodiment can support mixed-mode application scenarios with unequal carrier rates (bandwidths) as well as application scenarios with unequal carrier filter orders, and filter resources can be distributed as needed. Each stage of filters in the mixed-mode system shares one set of multiply-adding and caching resources, so that resource dispatching is unified in one resource pool and resource utilization is maximized. The embodiment also supports parameterized configuration of the forward and backward stages of the links, the link parameters, the carrier rates, and so on; the filter structures are highly parameterized. The embodiment thus makes full use of resources and improves system extensibility.

On the basis of the foregoing embodiments, the resource pool module 3 comprises a multiply-adding operation array submodule and an output logic submodule which are connected in sequence. The multiply-adding operation array submodule comprises a multiplier array and an adder array, and is adapted to perform filtering operations according to the resource pool selecting signal and the resource pool cache selecting signal sent by the controlling module 4. The output logic submodule comprises an output register group and a multilink selector, and is adapted to map and output the node intermediate data obtained by the filtering operations to the node caching channel in the node caching module 1 according to the output selecting signal sent by the controlling module 4. Two counters may also be set up in the node caching module 1: the first counter is adapted to generate the read address under the control of the mapping selecting signal, and the second counter is adapted to generate the write address under the control of the node write enable signal sent by the controlling module 4. The node caching module 1 sends the congestion information of all node caching channels, generated by the two counters, to the controlling module 4 as dispatching reference information. The controlling module 4 may choose to dispatch the node data with the highest congestion, according to the congestion information of all node caching channels sent by the node caching module 1 and the preset priority of each node caching channel. If the congestion of two nodes is the same, the controlling module 4 chooses the node data with the higher priority, and generates the mapping selecting signal to instruct the node caching module 1 to output the data of the relevant node to the mixed-mode caching module 2. In this embodiment, node data may be filtered through the resource pool module 3.

FIG. 4 shows a structure of a resource pool managing system in embodiment 3 of the present disclosure. As shown in FIG. 4, if some processing is complicated to realize in the resource pool module 3, a processing module 5 may be added between the output end of the resource pool module 3 and the input end of the node caching module 1. The processing module 5 is adapted to perform secondary processing on the node intermediate data output by the resource pool module, e.g., phase equalization.

FIG. 5 shows a structure of a mode-mixing channel based on a resource pool in an embodiment of the present disclosure. As shown in FIG. 5, in the system structure, the node caching module 1 comprises multiple node caching channels adapted to cache the input node data or the intermediate operation data of other nodes on the link, implemented by using a Single-Access RAM circuit. That is, the same node data is not read and written at the same time. The number of Single-Access RAMs is determined by the number of external and internal nodes, and the depth of the Single-Access RAM is determined by the maximum bandwidth and data rate of the node; that is, the higher the data rate, the greater the caching depth. The node write enable signal of the node caching channel is generated by the controlling module 4, which may generate the node write enable signal of the relevant node caching channel according to the status of "sel_out" so as to cache the operation data of the relevant node into the relevant node caching channel. The node read enable signal of the node caching channel is also generated by the controlling module 4. The read and write addresses are generated by counters, both of which are controlled by the read and write enable signals and which generate and return the full or empty flags and the congestion information (the difference between the read and write addresses) to the controlling module 4. According to the "node_vol" signal sent by the node caching module 1 and the software-configured priority information of each node, the controlling module 4 generates the read enable signal "sel_nd2mul", namely, the mapping selecting signal from the node caching module 1 to the mixed-mode caching module 2.

The mixed-mode caching module 2 is adapted to cache and dispatch the data array involved in the resource pool operation, implemented by using Dual-Access RAM. That is, reading and writing may both occur while each node's data is dispatched, but they do not use the same RAM address. The number of Dual-Access RAMs is determined by the number of multipliers in the resource pool module 3, and the depth of the Dual-Access RAM is determined by the number of nodes and the maximum instantaneous bandwidth of the resource pool; that is, more nodes and greater instantaneous bandwidth require more caching. The write enable, write address, read address, and read enable signals of the mixed-mode caching module 2 are generated by the controlling module. The controlling module 4 generates the write address signal "wr_mult" of the relevant mixed-mode caching module 2 according to the current address (a two-dimensional pointer) of the mixed-mode caching module 2. The controlling module 4 generates the read address signal "rd_mult" of the relevant multiplier channel according to the current status of the mixed-mode caching module 2, for example, the full-column indication (the "node_full" signal) and the priority configuration.

The resource pool module 3 is adapted to perform the filtering operation on the node data and to output the results. The resource pool module 3 comprises a multiplier array, an adder array, an output register group, and a multilink selector. The multiply-adding array is adapted to perform the main filtering operation, and the output register group and multilink selector are adapted to map and output the output data to the node caching channels. The controlling module 4 dynamically generates the resource pool selecting signal "sel_mul2pol" according to the software-configured node priority and the full indication signal of a column of the mixed-mode caching module 2 (when the same address of the different multiplier channels has been written with data, the full-column indication is reported). The controlling module 4 generates the resource pool cache selecting signal "sel_pol" according to the current operation node indicated by "sel_mul2pol". The controlling module 4 caches the operation output data of the resource pool module 3 into the node caching module 1 (for an intermediate operation node) or outputs it directly (for a link output node).

The above-mentioned mapping selecting signal "sel_nd2mul" from the node caching module to the mixed-mode caching module needs to determine the priority according to the current congestion. Take 32 nodes as an example to describe the logic that generates the priority: pairwise comparison is the basic unit, and the node number with the highest congestion is obtained after five levels of comparison. FIG. 6 shows a channel priority judging circuit of a mode-mixing channel in an embodiment of the present disclosure. As shown in FIG. 6, the judgment and selection are made according to how empty or full each node caching channel NODE_RAM is: first, a 5-bit volume indication v1-v32 is generated from the read and write pointers of each NODE_RAM, and then the indications are compared pairwise to obtain the node number "sel" with the largest v.
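The five-level pairwise comparison can be sketched as a tournament over the 32 volume indications: each level halves the number of contenders, so log2(32) = 5 levels yield the winning node number. This is an illustrative software analogue of the comparator tree in FIG. 6; the tie rule (lower node number wins) is an assumption, since the source does not state it.

```python
# Sketch of the five-level pairwise comparison circuit for 32 nodes.
def tournament_select(v):
    # v: volume indications (read/write pointer differences), one per node.
    # Assumes len(v) is a power of two, as with 32 nodes.
    contenders = list(range(len(v)))   # node numbers still in the running
    while len(contenders) > 1:         # 5 iterations when len(v) == 32
        nxt = []
        for i in range(0, len(contenders), 2):
            a, b = contenders[i], contenders[i + 1]
            # Keep the node with the larger v; ties favour the lower number.
            nxt.append(a if v[a] >= v[b] else b)
        contenders = nxt
    return contenders[0]               # "sel": node with the biggest v
```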

In addition, some modules in the structure may complicate the control and logic design, because their operations are more specialized than those of traditional FIR filters or consume only few operation resources; such modules may be connected to the operation link as separate IP blocks to simplify the design of the resource pool. The Single- or Dual-Access RAMs in the above embodiments may be replaced by register resources, which require more resources but are easier to control. The operations of non-FIR filters may also be performed in the resource pool, though this is more complicated. The selecting signal of the NODE_RAM may also be determined according to the congestion of the NODE_RAM and the status of the MULT_RAM.

FIG. 7 shows a flowchart of a signal processing method in an embodiment of the present disclosure. As shown in FIG. 7, the method comprises:

Step 100: Multiple node caching channels in the node caching module receive and cache the node input data and the node intermediate data, and read the data cached in the relevant node caching channel according to the mapping selecting signal sent by the controlling module.

The relevant node caching channel of the node caching module caches the input data of the external nodes or the intermediate data of the internal nodes, and waits for dispatching by the controlling module for the further filtering operation. The controlling module generates the mapping selecting signal according to the congestion of each node caching channel and the software-configured priority, and selects the data in the node caching channel currently with the highest congestion and priority for further processing.
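The dispatch rule described here (highest congestion first, preconfigured priority as tie-break) can be sketched as a small selection function. The convention that a larger priority value wins ties is my assumption; the source only says the higher-priority node is chosen.

```python
# Hedged sketch of the controlling module's channel dispatch rule.
def select_channel(congestion, priority):
    # congestion[i]: fill level of node caching channel i
    # priority[i]:  software-configured priority (larger = higher, assumed)
    best = 0
    for ch in range(1, len(congestion)):
        if (congestion[ch] > congestion[best] or
                (congestion[ch] == congestion[best] and
                 priority[ch] > priority[best])):
            best = ch
    return best  # channel index encoded into the mapping selecting signal
```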

Step 101: The mixed-mode caching module caches the received data read from the node caching module in the relevant multiplier channel of the data array involved in the processing of the resource pool module according to the write address signal sent by the controlling module.

After receiving the data dispatched by the controlling module, the mixed-mode caching module caches the data at the proper position of the relevant data array according to the rules and waits for filtering. The mixed-mode caching module receives the data output by the node caching module, caches it into the relevant multiplier channel, and sends the status signal of the data array to the controlling module when the data array is full. According to the status signal of the data array, the controlling module generates the read address signal, returns it to the mixed-mode caching module, and indicates which data the mixed-mode caching module is to output to the resource pool module for filtering.

Step 102: According to the resource pool selecting signal and the resource pool cache selecting signal sent by the controlling module, the resource pool module obtains the data cached in the relevant column of the data array obtained by the mixed-mode caching module according to the read address signal sent by the controlling module, performs filtering operations on the data to obtain the node intermediate data or result data, and sends the node intermediate data to the node caching module according to the output selecting signal sent by the controlling module.

The controlling module sends the resource pool selecting signal and the resource pool cache selecting signal to the resource pool module, and the to-be-processed data is reconstructed according to the filtering rules. The resource pool module performs the filtering operations: specifically, it obtains data according to the resource pool selecting signal sent by the controlling module and performs filtering operations on the data according to the resource pool cache selecting signal sent by the controlling module. After the operation finishes, the result is obtained. If the result is the final filtering result, it is output directly; if the result belongs to an intermediate stage of processing, it is sent back to the node caching module. The specific caching location is controlled by the output selecting signal sent by the controlling module.

In the method for processing mixed-mode IF signals provided in an embodiment of the present disclosure, if some processing is complicated to realize in the resource pool module, a processing module may be added between the output end of the resource pool module and the input end of the node caching module. The resource pool module sends the node intermediate data to the processing module according to the output selecting signal sent by the controlling module. The processing module performs secondary processing on the node intermediate data output by the resource pool module, and sends the secondarily processed node intermediate data to the node caching module.
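The optional secondary-processing stage described above amounts to inserting a transform between the resource pool output and the node cache input. A minimal sketch, under the assumption that the secondary step can be modeled as a plain function (the `secondary` callable is a hypothetical stand-in for e.g. phase equalization):

```python
# Sketch of routing node intermediate data through an optional
# secondary-processing module before re-caching it.
def route_intermediate(sample, node_cache, secondary=None):
    if secondary is not None:
        sample = secondary(sample)   # secondary processing (e.g. equalization)
    node_cache.append(sample)        # re-cached for the next filtering stage
```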

A method for processing mixed-mode IF signals is provided in an embodiment of the present disclosure. On the basis of the resource pool, all filters on the link share one set of operation resources and caching resources. The embodiment can support mixed-mode application scenarios with unequal carrier rates as well as application scenarios with unequal carrier filter orders, and filter resources can be distributed as needed. Each stage of filters in the mixed-mode system shares one set of multiply-adding and caching resources, so that resource dispatching is unified in one resource pool and resource utilization is maximized. The embodiment also supports parameterized configuration of the forward and backward stages of the links, the link parameters, the carrier rates, and so on; the filter structures are highly parameterized. The embodiment thus makes full use of resources and improves system extensibility.

It is understandable to those skilled in the art that all or part of the steps in the preceding embodiments may be performed through hardware instructed by a program. The program may be stored in a computer-readable storage medium such as ROM, RAM, magnetic disk, and compact disk. When being executed, the program performs those steps in preceding embodiments.

It should be noted that although the disclosure is described through the above-mentioned exemplary embodiments, the disclosure is not limited to such embodiments. Those skilled in the art can make various modifications and variations to the disclosure without departing from the spirit and scope of the disclosure. The disclosure is intended to cover the modifications and variations provided that they fall in the scope of protection defined by the following claims or their equivalents.

Sheng, Lanping

Patent Priority Assignee Title
4467414, Aug 22 1980 Nippon Electric Co., Ltd. Cashe memory arrangement comprising a cashe buffer in combination with a pair of cache memories
4816993, Dec 24 1984 Hitachi, Ltd. Parallel processing computer including interconnected operation units
5001665, Jun 26 1986 Motorola, Inc. Addressing technique for providing read, modify and write operations in a single data processing cycle with serpentine configured RAMs
5655090, Mar 13 1992 International Business Machines Corporation Externally controlled DSP with input/output FIFOs operating asynchronously and independently of a system environment
5829028, May 06 1996 SAMSUNG ELECTRONICS CO , LTD Data cache configured to store data in a use-once manner
5954811, Jan 25 1996 Analog Devices, Inc. Digital signal processor architecture
6002882, Nov 03 1997 Analog Devices, Inc. Bidirectional communication port for digital signal processor
6266717, Apr 18 1997 MOTOROLA SOLUTIONS, INC System for controlling data exchange between a host device and a processor
6813734, Jan 26 2001 Exar Corporation Method and apparatus for data alignment
7761688, Dec 03 2003 CEREMORPHIC, INC Multiple thread in-order issue in-order completion DSP and micro-controller
20020059509,
20040093465,
20050058059,
20050182806,
20060155958,
20080229075,
20080244220,
20080288728,
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Dec 29 2009 | SHENG, LANPING | HUAWEI TECHNOLOGIES CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 023742/0124
Jan 06 2010 | Huawei Technologies Co., Ltd. (assignment on the face of the patent)
Apr 12 2021 | HUAWEI TECHNOLOGIES CO., LTD. | HONOR DEVICE CO., LTD. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 055919/0344
Date Maintenance Fee Events
Jun 01 2017M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Jun 02 2021M1552: Payment of Maintenance Fee, 8th Year, Large Entity.


Date Maintenance Schedule
Dec 17 2016 | 4 years fee payment window open
Jun 17 2017 | 6 months grace period start (w surcharge)
Dec 17 2017 | patent expiry (for year 4)
Dec 17 2019 | 2 years to revive unintentionally abandoned end (for year 4)
Dec 17 2020 | 8 years fee payment window open
Jun 17 2021 | 6 months grace period start (w surcharge)
Dec 17 2021 | patent expiry (for year 8)
Dec 17 2023 | 2 years to revive unintentionally abandoned end (for year 8)
Dec 17 2024 | 12 years fee payment window open
Jun 17 2025 | 6 months grace period start (w surcharge)
Dec 17 2025 | patent expiry (for year 12)
Dec 17 2027 | 2 years to revive unintentionally abandoned end (for year 12)