A resource pool managing system and a signal processing method are provided in embodiments of the present disclosure. On the basis of the resource pool, all filters on the links share one set of operation resources and cache resources. The embodiments can support mixed-mode application scenarios with unequal carrier rates, as well as scenarios with unequal carrier filter orders. Each stage of filters in the mixed-mode system shares one set of multiply-add and cache resources, so that all resources are dispatched uniformly in one resource pool and resource utilization is maximized. Parameterized configuration of the link forward-backward stages, link parameters, carrier rates, and so on is also supported.
8. A signal processing method, performed by a resource pool managing system comprising a node caching module with multiple node caching channels, a mixed-mode caching module with multiple multiplier channels for caching a data array to be processed by a resource pool module, the resource pool module, and a controlling module, the method comprising:
receiving, by the node caching module, data processed by the resource pool module;
caching, by the node caching module, the data in a node caching channel according to a node write enable signal sent by the controlling module;
sending, by the node caching module, a storage status of the node caching channels to the controlling module;
receiving, by the controlling module, the storage status of the node caching channels;
sending, by the controlling module, a mapping selecting signal according to the storage status of the node caching channels;
obtaining, by the node caching module, the data cached in the node caching channel according to the mapping selecting signal from the controlling module;
caching, by the mixed-mode caching module, the data in the data array according to a write address signal sent by the controlling module;
obtaining data in a column of the data array according to a read address signal sent by the controlling module; and
performing filtering operations on the data in the column of the data array according to selecting signals sent by the controlling module.
1. A resource pool managing system, comprising:
a node caching module, a mixed-mode caching module, a resource pool module, and a controlling module; wherein:
the node caching module comprising multiple node caching channels for caching data processed by the resource pool module, wherein the node caching module is configured to:
send a storage status of the multiple node caching channels to the controlling module,
cache the data in one of the multiple node caching channels according to a node write enable signal sent by the controlling module, and
obtain the data cached in one of the multiple node caching channels according to a mapping selecting signal from the controlling module;
the mixed-mode caching module comprising multiple multiplier channels for caching a data array to be processed by the resource pool module, wherein the mixed-mode caching module is configured to:
cache the data obtained by the node caching module in the data array according to a write address signal sent by the controlling module, and
obtain the data in a column of the data array according to a read address signal sent by the controlling module;
the resource pool module is configured to perform filtering operations on the data in the column of the data array according to selecting signals sent by the controlling module; and
the controlling module is configured to:
control the node caching module by sending the node write enable signal, and sending the mapping selecting signal according to the storage status of the multiple node caching channels,
control the mixed-mode caching module by sending the write address signal and the read address signal, and
control the resource pool module by sending the selecting signals.
2. The resource pool managing system according to
a multiply-adding operation array submodule and an output logic submodule which are connected in sequence;
the multiply-adding operation array submodule which comprises a multiplier array and an adder array, adapted to perform filtering operations according to a resource pool selecting signal and a resource pool cache selecting signal sent by the controlling module; and
the output logic submodule which comprises an output register group and a multilink selector, adapted to map and output the node intermediate data to the node caching channel in the node caching module according to an output selecting signal sent by the controlling module, wherein the node intermediate data is obtained by performing the filtering operations.
3. The resource pool managing system according to
4. The resource pool managing system according to
5. The resource pool managing system according to
6. The resource pool managing system according to
7. The resource pool managing system according to
9. The signal processing method according to
10. The signal processing method according to
11. The signal processing method according to
This application claims priority to Chinese Patent Application No. 200910001996.7, filed on Jan. 21, 2009, which is hereby incorporated by reference in its entirety.
This disclosure relates to the field of communication technology, and in particular, to a resource pool managing system and a signal processing method.
With the rapid development of wireless communication technology, the continuous evolution of wireless protocols has highlighted the importance of mixed-mode base stations in the future market, mainly as follows. Wireless networks are developing from 2G to 3G, and Global System for Mobile Communications (GSM) networks need to transition smoothly to 3G networks, so base stations are required to support GSM systems from the very beginning. In addition, while networks are being switched, the base stations must allow carriers from GSM and the Universal Mobile Telecommunications System (UMTS) to coexist in an operator's frequency band, and must retain the capability of mixing modes across different systems until the switch to UMTS is complete. The continuous evolution of 3G protocols also requires wireless base stations to mix modes across different systems; for example, base stations for Wideband Code Division Multiple Access (WCDMA) need to evolve toward Long Term Evolution (LTE) as the protocols evolve. Base stations may also need to switch between different standards; for example, CDMA2000 base stations need to switch smoothly to WCDMA or upgrade directly to LTE.
The prior art, which supports IF signal processing systems only in a single communication mode, has the following shortcomings:
A resource pool managing system is provided in an embodiment of the present disclosure. The system includes:
A signal processing method is provided in an embodiment of the present disclosure. The method comprises:
A resource pool managing system and a signal processing method are provided in the embodiments of the present disclosure. On the basis of the resource pool, all filters on the links share one set of operation resources and cache resources. The embodiments can support application scenarios with unequal carrier rates as well as scenarios with unequal carrier filter orders. Filter resources can be distributed according to needs. Each stage of filters in the system shares one set of multiply-add and cache resources, so that all resources are dispatched uniformly in one resource pool and resource utilization is maximized. In addition, this makes full use of resources and improves the extensibility of the system.
The technical solution of the embodiments of the present disclosure will be further described with reference to the accompanying drawings and exemplary embodiments.
With the continuous development of wireless communication technology, mixed-mode base stations, as important network equipment, impose more requirements on the design of multi-carrier filters in the IF channel. In the processing of IF signals, the main resource consumption comes from the multiply-add operation array and the cache resources, so all filters (and other signal processors) on a link are designed to share one set of operation resources and cache resources. That is, all operation resources and cache resources are placed in one large resource pool, and in the logic the resources are automatically distributed according to the priority configuration and the channel congestion.
The mixed-mode caching module 2 is adapted to cache and dispatch the data array involved in the processing of the resource pool module 3. The mixed-mode caching module 2 comprises multiple multiplier channels. The controlling module 4 sends the mapping selecting signal and, at the same time, a write address signal to the mixed-mode caching module 2. The mixed-mode caching module 2 caches the data of the node caching channel selected by the controlling module 4 in the relevant multiplier channel of the data array according to the write address signal. When the data filling the data array meets the requirements for output, for example, when one column of the array has been filled, the mixed-mode caching module 2 sends a full-node-data indication to the controlling module 4. Upon receipt of the indication, the controlling module 4 sends a read address signal to the mixed-mode caching module 2. The mixed-mode caching module 2 obtains the data cached in the relevant column of the data array according to the read address signal and gets ready to output the data to the resource pool module 3 for filtering.
The resource pool module 3 is adapted to perform filtering operations on the data cached in the relevant column of the data array output by the mixed-mode caching module 2, and to output the node intermediate data or result data obtained by the filtering operations according to a resource pool selecting signal, a resource pool cache selecting signal, and an output selecting signal sent by the controlling module 4. While the controlling module 4 sends the read address signal to the mixed-mode caching module 2, it can also send the resource pool selecting signal, the resource pool cache selecting signal, and the output selecting signal to the resource pool module 3. The array data output by the mixed-mode caching module 2 is reorganized under the control of the resource pool selecting signal and mapped to the multipliers in the resource pool module 3. After the processing of the multiplier array, the addition chain is reorganized under the control of the resource pool cache selecting signal. The resource pool module 3 outputs the data processed by the multiplication and addition arrays under the control of the output selecting signal: if a next stage of processing is needed, the data is output to the node caching module 1; if it is the final result, the data is output directly.
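The multiply-add path described above amounts to a dot product over one column of the data array: the multiplier array forms the products and the adder chain accumulates them. A minimal software sketch follows; the function name and signature are illustrative assumptions, not part of the disclosure.

```python
def filter_column(column, coeffs):
    """One output sample of an FIR filter stage in the resource pool.

    column -- data samples read from one column of the mixed-mode cache
    coeffs -- filter coefficients routed to the multipliers by the
              resource pool selecting signal (illustrative)
    """
    assert len(column) == len(coeffs)
    # Multiplier array: element-wise products.
    products = [x * c for x, c in zip(column, coeffs)]
    # Adder chain: accumulate the products into one output sample.
    return sum(products)

# Example: a 4-tap moving-average stage.
sample = filter_column([1.0, 2.0, 3.0, 4.0], [0.25, 0.25, 0.25, 0.25])
```

In hardware the products are formed in parallel and the accumulation is a reorganized adder chain; the sequential loop here only models the arithmetic, not the timing.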
A resource pool managing system is provided in an embodiment of the present disclosure. On the basis of a resource pool, all filters (and other signal processors) on the link share one set of operation resources and cache resources. The embodiment can support mixed-mode application scenarios with unequal carrier rates (bandwidths) as well as scenarios with unequal carrier filter orders. Filter resources can be distributed according to needs. Each stage of filters in the mixed-mode system shares one set of multiply-add and cache resources, so that all resources are dispatched uniformly in one resource pool and resource utilization is maximized. Parameterized configuration of the link forward-backward stages, link parameters, carrier rates, and so on is supported, and the filter structures are highly parameterized. The embodiment thus makes full use of resources and improves the extensibility of the system.
On the basis of the foregoing embodiments, the resource pool module 3 comprises a multiply-adding operation array submodule and an output logic submodule which are connected in sequence. The multiply-adding operation array submodule comprises a multiplier array and an adder array, and is adapted to perform filtering operations according to the resource pool selecting signal and the resource pool cache selecting signal sent by the controlling module 4. The output logic submodule comprises an output register group and a multilink selector, and is adapted to map and output the node intermediate data obtained by the filtering operations to the node caching channel in the node caching module 1 according to the output selecting signal sent by the controlling module 4. Two counters may also be set up in the node caching module 1: the first counter is adapted to generate the read address under the control of the mapping selecting signal, and the second counter is adapted to generate the write address under the control of the node write enable signal sent by the controlling module 4. The node caching module 1 sends the congestion information of all node caching channels, generated by the two counters, to the controlling module 4 as dispatching reference information. The controlling module 4 may choose to dispatch the node data with the highest congestion according to the congestion information of all node caching channels sent by the node caching module 1 and the preset priority of each node caching channel. If the congestion of two nodes is the same, the controlling module 4 chooses the node data with the higher priority, and generates the mapping selecting signal to instruct the node caching module 1 to output the data of the relevant node number to the mixed-mode caching module 2. In this embodiment, node data may be filtered through the resource pool module 3.
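The dispatching rule above, highest congestion first with the configured priority as the tie-breaker, can be sketched as follows. All names are illustrative assumptions; a larger priority number is taken to mean higher priority.

```python
def select_node(congestion, priority):
    """Pick the node caching channel to dispatch next.

    congestion -- per-channel fill levels from the two-counter logic
    priority   -- per-channel software-configured priorities
                  (larger number = higher priority, by assumption)
    Returns the index of the channel with the highest congestion;
    on equal congestion, the channel with the higher priority wins.
    """
    best = 0
    for i in range(1, len(congestion)):
        if (congestion[i] > congestion[best] or
                (congestion[i] == congestion[best] and
                 priority[i] > priority[best])):
            best = i
    return best
```

The controlling module would then encode the returned index as the mapping selecting signal sent to the node caching module.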
The mixed-mode caching module 2 is adapted to cache and dispatch the data array involved in the resource pool operation, and is implemented using Dual-Access RAM. That is, reading and writing may both occur while each node's data is dispatched, but they never use the same RAM address. The number of Dual-Access RAMs is determined by the number of multipliers in the resource pool module 3, and the depth of each Dual-Access RAM is determined by the number of nodes and the maximum instantaneous bandwidth of the resource pool: the more nodes and the larger the instantaneous bandwidth, the more cache is needed. The write enable, write address, read address, and read enable of the mixed-mode caching module 2 are generated by the controlling module. The controlling module 4 generates the write address signal "wr_mult" of the relevant mixed-mode caching module 2 according to the current address (a two-dimensional pointer) of the mixed-mode caching module 2. The controlling module 4 generates the read address signal "rd_mult" of the relevant multiplier channel according to the current status of the mixed-mode caching module 2, for example, the full-column indication (the "node_full" signal) and the priority configuration.
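The sizing relationship can be written out as a small rule of thumb. The disclosure only states the proportionalities (one RAM per multiplier; depth grows with node count and instantaneous bandwidth), so the exact product below is an assumption for illustration, not a formula from the text.

```python
def mixed_mode_cache_size(num_multipliers, num_nodes, max_inst_bandwidth):
    """Illustrative sizing of the mixed-mode cache (assumed formula).

    num_multipliers    -- multipliers in the resource pool module
    num_nodes          -- node caching channels sharing the pool
    max_inst_bandwidth -- samples buffered per node at peak load (assumed unit)
    Returns (number of Dual-Access RAMs, depth per RAM).
    """
    num_rams = num_multipliers              # one RAM per multiplier channel
    depth = num_nodes * max_inst_bandwidth  # grows with nodes and bandwidth
    return num_rams, depth
```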
The resource pool module 3 is adapted to perform the filtering operation on the node data and to output the result. The resource pool module 3 comprises a multiplier array, an adder array, an output register group, and a multilink selector. The multiply-add array is adapted to perform the main filtering operation, and the output register group and the multilink selector are adapted to map and output the output data to the node caching channels. The controlling module 4 dynamically generates the resource pool selecting signal "sel_mul2pol" according to the node priority of the software configuration and the full indication signal of a column of the mixed-mode caching module 2 (when the same address of every multiplier channel has been written with data, the full-column indication is reported). The controlling module 4 generates the resource pool cache selecting signal "sel_pol" according to the current operation node indicated by "sel_mul2pol". The controlling module 4 caches the operation output data of the resource pool module 3 into the node caching module 1 for the relevant node (an intermediate operation node), or outputs it directly (a link output node).
The above-mentioned mapping selecting signal "sel_nd2mul" from the node caching module to the mixed-mode caching module needs to determine the priority according to the current congestion. Take 32 nodes as an example to describe the logic that generates the priority: a pairwise comparison is the basic unit, and the node number with the highest current congestion is obtained after five levels of comparison (32 to 16 to 8 to 4 to 2 to 1).
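The five-level comparison can be modeled as a comparator tree, as a sketch. Ties here fall to the lower node number; in the disclosure, equal congestion is resolved by the configured priority instead, which this toy model omits.

```python
def max_congestion_node(congestion):
    """Five-level pairwise comparison tree for 32 nodes.

    Each level halves the candidate set (32 -> 16 -> 8 -> 4 -> 2 -> 1),
    mirroring a hardware comparator tree. Returns the node number with
    the highest congestion value.
    """
    assert len(congestion) == 32
    candidates = list(range(32))
    while len(candidates) > 1:      # five iterations for 32 inputs
        survivors = []
        for a, b in zip(candidates[::2], candidates[1::2]):
            # One pairwise comparison: the more congested node survives.
            survivors.append(a if congestion[a] >= congestion[b] else b)
        candidates = survivors
    return candidates[0]
```

A hardware realization would evaluate each level in one stage of combinational logic, so the winner is available after five comparator delays rather than 31 sequential comparisons.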
In addition, some modules in the structure may complicate the control and logic design, because their operations are more specialized than those of traditional FIR filters, or because they take only a few resources to operate; such modules may be connected in series to the operation link as IP blocks to simplify the design of the resource pool. The Single- or Dual-Access RAMs in the above embodiments may be replaced by register resources; more resources are then needed, but control is easier. The operations of non-FIR filters may also be finished in the resource pool, but this is more complicated. The selecting signal of NODE_RAM may also be determined according to the congestion of NODE_RAM and the status of MULT_RAM.
Step 100: Multiple node caching channels in the node caching module receive and cache the node input data and the node intermediate data, and the node caching module reads the data cached in the relevant node caching channel according to the mapping selecting signal sent by the controlling module.
The relevant node caching channel of the node caching module caches the input data of external nodes or the intermediate data of internal nodes, and waits for the controlling module to dispatch it for further filtering. According to the congestion of each node caching channel and the priority set by the software configuration, the controlling module generates the mapping selecting signal and selects the data in the node caching channel with the highest current congestion and priority for further processing.
Step 101: The mixed-mode caching module caches the received data read from the node caching module in the relevant multiplier channel of the data array involved in the processing of the resource pool module according to the write address signal sent by the controlling module.
After receiving the data dispatched by the controlling module, the mixed-mode caching module caches the data at the proper position of the relevant data array according to the configured rule and waits for filtering. The mixed-mode caching module receives the data output by the node caching module, caches it in the relevant multiplier channel, and sends a status signal of the data array to the controlling module when the data array is full. According to the status signal of the data array, the controlling module generates the read address signal and instructs the mixed-mode caching module to output the data to the resource pool module for filtering.
Step 102: According to the read address signal sent by the controlling module, the mixed-mode caching module provides the data cached in the relevant column of the data array; according to the resource pool selecting signal and the resource pool cache selecting signal sent by the controlling module, the resource pool module performs filtering operations on the data to obtain the node intermediate data or result data, and sends the node intermediate data to the node caching module according to the output selecting signal sent by the controlling module.
The controlling module sends the resource pool selecting signal and the resource pool cache selecting signal to the resource pool module, which reconstructs the to-be-processed data according to the filtering rule. The resource pool module performs the filtering operations: it obtains the data according to the resource pool selecting signal sent by the controlling module, and filters the data according to the resource pool cache selecting signal sent by the controlling module. When the operation finishes, a result is obtained. If the result is the final filtering result, it is output directly; if the result belongs to an intermediate stage of processing, it is sent back to the node caching module. The specific caching location is controlled by the output selecting signal sent by the controlling module.
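Steps 100 through 102 can be put together in a toy software model of one processing pass: dispatch the most congested node channel, fill one column, filter it, and either emit the result (a link output node) or write it back as node intermediate data for the next stage. Every name here is an illustrative assumption; in particular, mapping "next stage" to node number plus one is a simplification of the configurable link structure.

```python
from collections import deque

class ToyResourcePool:
    """Toy model of the node cache / mixed-mode cache / resource pool loop."""

    def __init__(self, coeffs_per_node, final_nodes):
        self.node_cache = {n: deque() for n in coeffs_per_node}
        self.coeffs = coeffs_per_node   # per-node filter taps
        self.final_nodes = final_nodes  # nodes whose output leaves the link
        self.results = []

    def push(self, node, sample):
        """Cache one sample in a node caching channel."""
        self.node_cache[node].append(sample)

    def step(self):
        # Step 100: dispatch the most congested node caching channel.
        node = max(self.node_cache, key=lambda n: len(self.node_cache[n]))
        taps = len(self.coeffs[node])
        if len(self.node_cache[node]) < taps:
            return None                 # not enough data to fill a column
        # Step 101: fill one column of the mixed-mode cache.
        column = [self.node_cache[node].popleft() for _ in range(taps)]
        # Step 102: multiply-add, then route the result.
        out = sum(x * c for x, c in zip(column, self.coeffs[node]))
        if node in self.final_nodes:
            self.results.append(out)    # link output node: emit directly
        else:
            self.push(node + 1, out)    # intermediate node: next stage's channel
        return out
```

For example, a two-stage link with a 2-tap averaging stage at node 0 feeding a pass-through stage at node 1 would first filter node 0's samples and then, on the next step, dispatch node 1 and emit the final result.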
In the method for processing mixed-mode IF signals provided in an embodiment of the present disclosure, if some handling processes are difficult to realize in the resource pool module, a processing module may be added between the output end of the resource pool module and the input end of the node caching module. The resource pool module sends the node intermediate data to the processing module according to the output selecting signal sent by the controlling module. The processing module performs secondary processing on the node intermediate data output by the resource pool module and sends the processed node intermediate data to the node caching module.
A method for processing mixed-mode IF signals is provided in an embodiment of the present disclosure. On the basis of the resource pool, all filters on the link share one set of operation resources and cache resources. The embodiment can support mixed-mode application scenarios with unequal carrier rates as well as scenarios with unequal carrier filter orders. Filter resources can be distributed according to needs. Each stage of filters in the mixed-mode system shares one set of multiply-add and cache resources, so that all resources are dispatched uniformly in one resource pool and resource utilization is maximized. Parameterized configuration of the link forward-backward stages, link parameters, carrier rates, and so on is supported, and the filter structures are highly parameterized. The embodiment thus makes full use of resources and improves the extensibility of the system.
It is understandable to those skilled in the art that all or part of the steps in the preceding embodiments may be performed by hardware instructed by a program. The program may be stored in a computer-readable storage medium such as a ROM, a RAM, a magnetic disk, or a compact disc. When executed, the program performs the steps in the preceding embodiments.
It should be noted that although the disclosure is described through the above-mentioned exemplary embodiments, the disclosure is not limited to such embodiments. Those skilled in the art can make various modifications and variations to the disclosure without departing from the spirit and scope of the disclosure. The disclosure is intended to cover the modifications and variations provided that they fall in the scope of protection defined by the following claims or their equivalents.
Assignment: assigned by Lanping Sheng to Huawei Technologies Co., Ltd. (executed Dec. 29, 2009; assignment on the face of the patent, Jan. 6, 2010); assigned by Huawei Technologies Co., Ltd. to Honor Device Co., Ltd. (executed Apr. 12, 2021).