A method and apparatus for statistical compilation is presented. The statistical compilation circuit includes a multi-bank memory that stores a plurality of statistics, where a statistic component portion for each statistic is stored in each of the plurality of banks in the multi-bank memory. An arbitration block is operably coupled to receive at least one statistical update stream. Each statistical update stream includes a plurality of statistical updates, where each statistical update includes a statistic identifier and an update operand. The arbitration block schedules received statistical updates to produce a scheduled update stream. A control block operably coupled to the arbitration block and the multi-bank memory executes the updates included in the scheduled update stream. The control block retrieves the current value of one of the statistic component portions from one of the memory banks and combines the current value with the update operand of a corresponding statistical update. The resulting updated component value is then stored back into the memory bank from which it was read. When a processing block that is operably coupled to the control block issues a statistic request corresponding to a particular requested statistic, the control block fetches each of the component portions from each of the memory banks corresponding to that particular statistic and combines the component portions to produce a total statistic value.
1. A statistical compilation circuit comprising:
a multi-bank memory, wherein the multi-bank memory stores a plurality of statistics, wherein a statistic component portion for each statistic is stored in each of a plurality of banks of the multi-bank memory;
an arbitration block operably coupled to receive at least one statistical update stream, wherein the at least one statistical update stream includes a plurality of statistical updates, wherein a statistical update includes a statistic identifier and an update operand, wherein the arbitration block schedules received statistical updates to produce a scheduled update stream;
a control block operably coupled to the arbitration block and the multi-bank memory, wherein the control block compiles a set of read operations based on a portion of the scheduled update stream, wherein each read operation corresponds to a scheduled update in the portion of the scheduled update stream, wherein a read operation for a particular scheduled update retrieves a component value of one of the statistic component portions from one of the plurality of banks of the multi-bank memory for a statistic corresponding to the particular scheduled update, wherein the control block combines the operand for the particular scheduled update with the component value to produce an updated component value, wherein the control block compiles a set of write operations corresponding to the set of read operations, wherein the set of write operations overwrite component values fetched by the set of read operations with corresponding updated component values produced through combination operations; and
a processing block operably coupled to the control block, wherein the processing block issues statistic requests corresponding to a requested statistic to the control block, wherein the control block retrieves component values for each component portion of the requested statistic from the multi-bank memory, wherein the control block combines the component values that are retrieved to produce a total statistic value for the requested statistic.
3.-20. Dependent claims, each beginning "The circuit of" and further limiting a preceding claim.
The invention relates generally to statistical compilation, and more particularly to statistical compilation in a communications network.
In data communication systems or other data processing systems that must maintain a large number of statistics, maintaining those statistics can become a complicated task that consumes a relatively large amount of the system's available resources. For example, in a data communication system a number of parameters relating to data traffic must be maintained for billing purposes, network maintenance, and the like. Each of these statistics may be stored in a memory structure such that the statistics can periodically be retrieved and analyzed to generate billing information, perform network utilization studies, etc. Each time a particular statistic needs to be updated in the memory, the current value stored in the memory must be read, the modification made, and the resulting value stored back into the memory.
Having to perform these statistical updates can consume a significant amount of the available resources of the data path processors within the communication network. This can reduce the efficiency with which the data path processors perform the other functions they are designed for, such as directing data traffic through the network. The degradation in efficiency becomes increasingly significant as traffic speeds and the number of statistics maintained increase.
Therefore, a need exists for a method and apparatus for statistical compilation that reduces the resources required of the functional components in the system, such as the data path processors in a communication network.
Generally, the present invention provides a method and apparatus for statistical compilation. The statistical compilation circuit includes a multi-bank memory that stores a plurality of statistics, where a statistic component portion for each statistic is stored in each of the plurality of banks in the multi-bank memory. An arbitration block is operably coupled to receive at least one statistical update stream. Each statistical update stream includes a plurality of statistical updates, where each statistical update includes a statistic identifier and an update operand. The arbitration block schedules received statistical updates to produce a scheduled update stream. A control block operably coupled to the arbitration block and the multi-bank memory executes the updates included in the scheduled update stream. The control block retrieves the current value of one of the statistic component portions from one of the memory banks and combines the current value with the update operand of a corresponding statistical update. The resulting updated component value is then stored back into the memory bank from which it was read. When a processing block that is operably coupled to the control block issues a statistic request corresponding to a particular requested statistic, the control block fetches each of the component portions from each of the memory banks corresponding to that particular statistic and combines the component portions to produce a total statistic value.
The invention can be better understood with reference to the figures and the detailed description that follows.
In order to minimize the processing resources consumed through statistic update operations by the data path processors 10-16, the format of each statistical update is standardized. Each statistical update includes a statistic identifier, which identifies the particular statistic to be updated, and an update operand, which represents the change in the particular statistic. For example, one statistical update may correspond to the billing statistic for a particular user on a data communications network. In such case, one of the data path processors 10-16 would issue a statistical update that includes a statistic identifier that indicates the billing statistic for the particular user is to be updated. The update operand included in the statistical update would indicate the change to that particular statistic, which in the example may be to increment the billing statistic by a certain amount.
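As an informal illustration only (not part of the claimed circuit), a statistical update in this standardized format can be modeled as a record carrying just the two fields described above; the Python names below are illustrative assumptions rather than terms used by the invention.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StatUpdate:
    """One entry in a statistical update stream."""
    stat_id: int   # statistic identifier: which statistic is to be updated
    operand: int   # update operand: the change to apply to that statistic


# Example: increment the billing statistic for a particular user by 1500 units.
billing_update = StatUpdate(stat_id=7, operand=1500)
```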
The statistic update processor 20 receives the statistical updates in the form of a data stream from each of the data path processors 10-16. As such, the statistic update processor 20 can off-load from the data path processors 10-16 all of the read, modify, and write operations required to update statistics. Each of the data path processors 10-16 merely issues a single command that contains all of the information the statistic update processor 20 requires to maintain the various statistics for the network.
Because the statistic update processor 20 is typically required to process a large number of statistical updates, the memory used to store the statistics is preferably a multi-bank memory 30. Utilizing a multi-bank memory allows a component portion for each statistic to be maintained in each of the banks within the memory. This allows a number of update operations to be performed in quick succession, while ensuring that multiple updates to a single statistic do not interfere with each other. In prior art attempts that stored a single statistic value within a conventional memory structure, the speed with which the statistics could be updated was limited. Distributing multiple component values across the banks of the multi-bank memory 30 allows for much more efficient memory accesses. With the multiple banks, sequential updates to the same statistic pose no problem, and the memory can effectively be operated at a much higher rate of speed. Additional efficiencies are achieved by sequencing multiple statistic updates such that the down time associated with switching between reading and writing operations to the multi-bank memory 30 is reduced. This increases the bandwidth available for updating statistics and is discussed in additional detail below.
When the processing block 40 issues a statistic request to the statistic update processor 20, all of the component values for the statistic are read from the multi-bank memory 30 and combined to produce a total statistic value that is provided in response to the statistic request. Thus, although multiple component values are contained within the multi-bank memory 30, a single value is returned to any entity requesting the current value of a statistic.
Each of the banks 132-138 stores a component value corresponding to a particular statistic 142-148, respectively. Therefore, when a statistical update is to be performed, the statistic update processor 20 can retrieve the current value stored in any of the components 142-148 in order to perform the update. Once the statistic update processor 20 has made the modification to the component value, it is stored back in the appropriate memory bank. When a statistic request is received by the statistic update processor 20, all of the statistic component values 142-148 are read and combined together to produce the total statistic value for the requested statistic.
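The following Python sketch models this behavior loosely: a component portion of each statistic is kept in every bank, an update performs a read-modify-write of a single component portion, and a statistic request sums all of the component portions. The rotation of successive updates across banks and all of the names used are illustrative assumptions, not details taken from the circuit itself.

```python
class MultiBankStatMemory:
    """Behavioral sketch of the multi-bank statistic store (not the actual circuit)."""

    def __init__(self, num_stats: int, num_banks: int = 4):
        # One component portion per statistic in every bank, all starting at zero.
        self.banks = [[0] * num_stats for _ in range(num_banks)]
        # Which bank the next update for each statistic will use (an assumed policy).
        self.next_bank = [0] * num_stats

    def apply_update(self, stat_id: int, operand: int) -> None:
        bank = self.next_bank[stat_id]
        # Read-modify-write of a single component portion in one bank.
        self.banks[bank][stat_id] += operand
        # Rotate so back-to-back updates to the same statistic hit different banks.
        self.next_bank[stat_id] = (bank + 1) % len(self.banks)

    def total(self, stat_id: int) -> int:
        # A statistic request fetches every component portion and combines them.
        return sum(bank[stat_id] for bank in self.banks)
```

For example, two successive updates of 1500 and 300 to statistic 7 would land in different banks under this model, and a subsequent request for statistic 7 would return the combined total of 1800.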
An arbiter 182 within the arbitration block 180 controls the sequential execution of the received statistical updates. Thus, the arbiter 182 receives notification of pending statistical updates from the update buffer 160 and prioritizes the statistical updates via the selection block 175 to produce a scheduled update stream. The scheduled update stream may be buffered by a scheduled update stream buffer 192 that is coupled to the selection block 175. Buffering of the various updates and requests helps to improve the overall throughput of the statistical compilation circuit.
The arbitration performed by the arbiter 182 may be based on a round-robin scheme, a weighted fair queuing technique, or some other prioritization scheme. The weighted fair queuing technique may schedule the updates based on the priority level of each statistical update stream, the loading level of each statistical update stream buffer, or some combination of these two factors. In another embodiment, the arbiter 182 includes a receipt sequence priority encoder such that statistic updates are performed in temporal order based on order of receipt. Such an embodiment is described in more detail with reference to FIG. 4.
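As a minimal sketch of the round-robin case only (the function and variable names are assumptions, and weighted fair queuing would replace the selection rule), pending updates from each stream buffer could be drained one at a time per pass:

```python
from collections import deque


def round_robin_schedule(stream_buffers: list[deque]) -> list:
    """Drain per-stream update buffers into a single scheduled update stream,
    taking at most one pending update from each stream per pass."""
    scheduled = []
    while any(stream_buffers):          # loop until every buffer is empty
        for buf in stream_buffers:
            if buf:
                scheduled.append(buf.popleft())
    return scheduled
```

A weighted fair queuing variant would replace the one-update-per-pass rule with a per-stream quota derived from stream priority or buffer loading.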
The control block 190 compiles a set of read operations based on the scheduled update stream, where each read operation corresponds to a scheduled update. Each read operation retrieves a component value of one of the statistic component portions for the particular statistic from one of the plurality of banks of the multi-bank memory 30. The control block 190 then combines the operand for the particular scheduled update with the component value that has been retrieved to produce an updated component value. The control block 190 preferably includes an adder that combines the operand for the particular scheduled update with the component value to produce the updated component value.
The control block 190 also compiles a set of write operations corresponding to the set of read operations such that the write operations overwrite the component values fetched by the set of read operations. The write operations store the updated component values produced through the combination operations. The control block 190 can include additional circuitry in order to optimize the execution of the read, write, and combination operations. These potential enhancements are described in more detail below.
The control block 190 receives statistic requests from the processing block 40 via the arbitration circuitry such that statistic requests are serviced in accordance with the arbitration scheme. As described earlier, the control block 190 will fetch the required component values that are combined to provide the total statistic value returned in response to the statistic request from the processing block 40.
In order to enable the processing block 40 to operate more efficiently, a results buffer 196 may be included in the circuit such that multiple statistic requests can be serviced between accesses to the statistical compilation circuitry by the processing block 40. The processing block 40 can issue a plurality of statistic requests that are buffered in the statistic request buffer 170, and then perform other functions before reading the total statistic values for each of the requests from the results buffer 196.
In one embodiment, the statistical compilation circuit includes a plurality of multi-bank memories, such as the multi-bank memories 30 and 35 described below.
Each of the multi-bank memory structures 30 and 35 may have an associated memory controller 150 and 152, respectively. Each memory controller is operably coupled to the control block 190 and to a corresponding one of the multi-bank memories. The memory controllers 150 and 152 allow multiple memory operations to take place concurrently. In other words, the memory controllers 150 and 152 off-load the actual interaction with the multi-bank memories 30 and 35 from the control block 190.
In the example illustrated, four statistical update streams 156-159 have the potential to provide a statistical update during any particular statistical update receipt interval. The statistical updates are stored in the statistical update buffers 162-168. The selection block 175 is controlled by the arbiter 182, which orders the statistical updates to produce the scheduled update stream 222.
In the example illustrated, each statistical update stream 156-159 has a corresponding bit for each statistical update receipt interval in the first-in/first-service buffer 210. Thus, the right-most column illustrates the oldest statistical update receipt interval currently stored in the first-in/first-service buffer 210. Assuming that the positioning of the bits within the column corresponds to the positioning of the streams in the diagram, the top-most bit location would correspond to the statistical update stream 156. The values illustrated in the right-most column show, in one embodiment, that the only statistical update stream that received a statistical update during that time interval was statistical update stream 158. This is signified by a bit that is set within the column. Similarly, the column directly to the left of the right-most column indicates that statistical updates were received in statistical update streams 158 and 159 during that particular time interval. The subsequent time interval shows that a statistical update was received for statistical update stream 158, and the following interval indicates that statistical updates were received on streams 156 and 158.
The arbiter 182 can interpret the bit patterns included in the first-in/first-serviced buffer 210 to determine the temporal ordering of the statistical update requests in terms of their order of receipt. As such, the arbiter 182 can select amongst pending statistical updates stored in the buffers 162-168 to produce a scheduled update stream 222 that orders the statistical updates according to their time of receipt. It should be noted that the first-in/first-service buffer 210 may use other encoding schemes to store the temporal order of the receipt of statistical updates. For example, in another embodiment, time intervals may not be addressed but rather when a particular statistical update is received, an encoding corresponding to its particular update stream is included in a first-in/first-out buffer. Thus, the arbiter could simply examine the next value in the first-in/first-out buffer 210 to determine the next statistical update to include in the scheduled update stream 222.
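To make the bit-pattern interpretation concrete, the sketch below replays per-interval receipt bits, oldest interval first, to emit pending updates in order of receipt. It is a loose model only; in particular, the order chosen between two streams that both received an update in the same interval is an assumption, as are all of the names used.

```python
from collections import deque


def receipt_order(interval_bitmaps: list[list[bool]],
                  stream_buffers: list[deque]) -> list:
    """Replay per-interval receipt bits (oldest interval first) to emit
    pending updates in the order in which they were received."""
    scheduled = []
    for bits in interval_bitmaps:                      # one entry per receipt interval
        for stream_idx, received in enumerate(bits):   # one bit per update stream
            if received and stream_buffers[stream_idx]:
                scheduled.append(stream_buffers[stream_idx].popleft())
    return scheduled
```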
In order to increase the efficiency of memory usage with respect to the multi-bank memory 30, multiple statistic updates may be sequenced such that the down time associated with switching between reading and writing operations is reduced. For example, 16 statistic component read operations may occur sequentially, followed by 16 write operations that update stored component values. As such, the control block 190 queues up 16 statistic update operations and performs the component reads for those operations in sequence. In order to avoid retrieving a component value for a particular statistic multiple times, a set of circuitry such as that described below may be used.
The circuitry includes a content addressable memory (CAM) 240 and a first-in/first-out buffer (FIFO) 250 that together ensure that only one update is pending for any particular statistic within a queued set of memory operations.
When a statistic update is received and is to be queued so that it will be executed in the following set of memory operations, the CAM 240 is first examined to determine whether an update is already pending for that particular statistic. If the CAM 240 does not store an indication that the particular statistic already has a statistical update pending, the statistic 242 is added to the CAM 240. At the same time, the statistical update, which includes the statistic identifier 252 and the update operand 254, is added to the FIFO 250. If a subsequent statistical update corresponding to the same statistic is received, the presence of the statistic 242 within the CAM 240 is detected. At this point, the update operand 254 for that particular statistic update is combined with the update operand of the newly received statistic update to produce a combined update operand. The combined update operand is stored back in the FIFO 250 at the location corresponding to the update operand 254. Thus, when the statistic updates stored within the FIFO 250 are executed, a single statistic update will be performed in which both received update operands are combined with the currently stored component value.
Including this circuitry, or circuitry providing equivalent functionality, ensures that the component value for a particular statistic is not fetched more than once within a single set of queued memory operations.
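A loose software analogue of this queue-and-merge behavior is sketched below, with a plain dictionary standing in for the CAM/FIFO pair and callables standing in for the bank read and write ports; the batch size of 16 from the example above would simply bound the length of the pending list. All names and signatures here are illustrative assumptions.

```python
from collections import defaultdict


def execute_batch(pending: list[tuple[int, int]],
                  read_component, write_component) -> None:
    """Merge queued (stat_id, operand) updates, then issue all component reads
    followed by all component writes, mirroring the read-run/write-run sequencing."""
    # CAM/FIFO analogue: combine operands of updates that target the same statistic.
    merged: dict[int, int] = {}
    for stat_id, operand in pending:
        merged[stat_id] = merged.get(stat_id, 0) + operand

    # Phase 1: component reads, performed back to back.
    current = {stat_id: read_component(stat_id) for stat_id in merged}

    # Phase 2: component writes, performed back to back.
    for stat_id, operand in merged.items():
        write_component(stat_id, current[stat_id] + operand)


if __name__ == "__main__":
    store = defaultdict(int)   # stand-in for one bank of stored component values
    execute_batch([(7, 100), (7, 50), (3, 9)],
                  read_component=lambda sid: store[sid],
                  write_component=store.__setitem__)
    assert store[7] == 150 and store[3] == 9
```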
The statistical compilation circuitry discussed thus far may be used in any system that requires the maintenance of a number of statistics that may be modified by a number of separate entities. In one embodiment, the plurality of statistics stored within the multi-bank memory 30 includes statistics corresponding to packet traffic or cell traffic in a communication system. These statistics may further correspond to network performance statistics, billing statistics, class of service traffic statistics, discard statistics, or statistics concerning traffic flow along particular paths within the network.
The memory 264 may be a single memory device or a plurality of memory devices. Such a memory device may be a read only memory device, random access memory device, floppy disk, hard drive memory and/or any device that stores digital information. Note that when the processing module 262 has one or more of its functions performed by a state machine and/or logic circuitry, the memory containing the corresponding operational instructions is embedded within the state machine and/or logic circuitry. The memory 264 stores programming and/or operating instructions that, when executed, cause the processing module 262 to perform at least a portion of the steps of the method illustrated in FIG. 7. Note that the statistics processor 260 may implement some of the functions of the method through software stored in the memory 264, whereas other portions of the method may be implemented using hardware, or circuitry included within the statistics processor 260. Thus, in some embodiments, a mix of hardware and software may be used to perform the method of FIG. 7.
At step 304, the statistical updates are prioritized to produce prioritized statistical updates. Preferably, the prioritization occurring at step 304 is performed by an arbiter that may perform the prioritization based on a round-robin scheme, a weighted fair queuing scheme, or a receipt sequenced priority scheme as described with respect to FIG. 4.
At step 306, the prioritized statistical updates are executed. Execution of a prioritized statistical update modifies one of the statistic component portions stored in the multi-bank memory to reflect the modification to the statistic. Preferably, each statistical update includes a statistic identifier and an update operand. More preferably, the update operand includes a value that is added to the currently stored value of the statistic component retrieved from memory. Execution of the prioritized statistical updates may include queuing a number of statistic updates such that the memory operations associated with their execution can be performed more efficiently. As was described above, queued updates that target the same statistic may also be combined into a single update before execution.
Execution of a particular prioritized statistical update includes reading a component value corresponding to one of the statistic component portions from the multi-bank memory, combining the operand of the statistical update with the component value to produce an updated component value, and finally storing the updated component value in the multi-bank memory.
At step 308, a statistic request is received corresponding to one of the statistics maintained within the multi-bank memory. At step 310, each of the component values for the particular statistic is retrieved from the multi-bank memory. Note that a particular component portion of a statistic may overflow within the multi-bank memory; as such, an overflow indication may be stored within the system so that any overflow that occurred while performing statistical updates is known. Finally, at step 312, the components, together with any overflow indications, are combined to produce a total statistic value that is provided in response to the statistic request.
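Steps 310 and 312 might be modeled as in the sketch below. The assumption that each stored component is a fixed-width counter and that each overflow indication represents one wrap of that counter is an illustration only, not a detail stated above, as are the names and the 32-bit width.

```python
COMPONENT_BITS = 32   # assumed width of each stored component portion


def total_statistic(component_values: list[int], overflow_count: int = 0) -> int:
    """Combine per-bank component values and any recorded overflows into a
    single total statistic value (steps 310-312)."""
    total = sum(component_values)
    # Each recorded overflow is treated as one wrap of a fixed-width component.
    total += overflow_count * (1 << COMPONENT_BITS)
    return total
```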
It should be noted that the method illustrated in FIG. 7 may be performed using dedicated circuitry, using software executed by the processing module 262, or using a combination of the two, as was described with respect to the statistics processor 260.
By providing the statistic update circuitry and methods described herein, statistic maintenance functions in communication systems and other statistic-intensive systems can be off-loaded from the processing entities that perform the functional tasks within the network or other system. This off-loading enables the functional entities to perform their tasks more efficiently. It should be understood that the implementation of variations and modifications of the invention in its various aspects should be apparent to those of ordinary skill in the art, and that the invention is not limited to the specific embodiments described. It is therefore contemplated to cover by the present invention any and all modifications, variations, or equivalents that fall within the spirit and scope of the basic underlying principles disclosed and claimed herein.
Hanes, Gordon G. G., Darwin, Martin, Tong, Mainz