A circular buffer storing packets for processing by one or more network processors employs an empty buffer address register identifying where a next received packet should be stored, a next packet address register identifying the next packet to be processed, and a packet-processing address register within each network processor identifying the packet being processed by that network processor. The n-bit addresses to the buffer are mapped or masked from/to the m-bit packet-processing address registers by software, allowing the buffer size to be fully scalable. A dedicated packet retrieval instruction supported by the network processor(s) retrieves a new packet for processing using the next packet address register and copies that address into the associated packet-processing address register for use in subsequent accesses. Buffer management is thus independent of the network processor architecture.
16. A method of operating a packet storage subsystem comprising:
storing received packets within a buffer containing a plurality of entries each uniquely addressed by an n-bit address;
mapping or masking an m-bit address value within a packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit to produce an n-bit address for a buffer entry containing the packet being processed, wherein the buffer is a circular buffer, the method further comprising:
storing an address for a buffer entry into which a next packet received for processing should be stored within an empty buffer address register; and
storing an address for a buffer entry containing a next packet to be processed within a next packet address register shared by all processing units having access to the buffer,
wherein m is different from n.
19. A method of operating a packet storage subsystem comprising:
storing received packets within a circular buffer containing a plurality of entries each uniquely addressed by an n-bit address;
mapping or masking an m-bit address value within a packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit to produce an n-bit address for a buffer entry containing the packet being processed;
selectively employing, within each processing unit having access to the buffer, any buffer allocation and usage algorithm, independent of buffer management used by the buffer;
storing an address for a buffer entry into which a next packet received for processing should be stored within an empty buffer address register; and
storing an address for a buffer entry containing a next packet to be processed within a next packet address register shared by all processing units having access to the buffer,
wherein m is different from n.
13. A method of operating a packet storage subsystem comprising:
storing received packets within a circular buffer containing a plurality of entries each uniquely addressed by an n-bit address;
mapping or masking an m-bit address value within a packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit to produce an n-bit address for a buffer entry containing the packet being processed;
storing an address for a buffer entry into which a next packet received for processing should be stored within an empty buffer address register;
storing an address for a buffer entry containing a next packet to be processed within a next packet address register shared by all processing units having access to the buffer,
wherein m is different from n; and
accessing a buffer entry containing a packet to be processed for a first time utilizing a dedicated packet retrieval instruction employing an address value within the next packet address register.
14. A method of operating a packet storage subsystem comprising:
storing received packets within a circular buffer containing a plurality of entries each uniquely addressed by an n-bit address;
mapping or masking an m-bit address value within a packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit to produce an n-bit address for a buffer entry containing the packet being processed;
mapping or masking m-bit address values within each packet processing address register associated with one of a plurality of processing units sharing access to the buffer and storing an address for a packet being processed by that processing unit to produce an n-bit address for a buffer entry containing the packet being processed;
storing an address for a buffer entry into which a next packet received for processing should be stored within an empty buffer address register; and
storing an address for a buffer entry containing a next packet to be processed within a next packet address register shared by all processing units having access to the buffer,
wherein m is different from n.
3. A packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of a plurality of processing units sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective processing unit to store an address for a packet being processed by that processing unit, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all processing units having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored.
1. A packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of a plurality of processing units sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective processing unit to store an address for a packet being processed by that processing unit, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all processing units having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored, and
wherein m is different from n.
6. A packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a processing unit to store an address for a packet being processed by the processing unit, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of a plurality of processing units sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective processing unit to store an address for a packet being processed by that processing unit, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each processing unit having access to the buffer may selectively employ any buffer allocation and usage algorithm, independent of buffer management used by the buffer, and
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all processing units having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored.
9. A router comprising:
one or more network processors;
a packet storage subsystem shared by the one or more network processors via an interconnect, the packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a network processor within the one or more network processors to store an address for a packet being processed by the network processor, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of the one or more network processors sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective network processor to store an address for a packet being processed by that network processor, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all of the one or more network processors having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored.
7. A router comprising:
one or more network processors;
a packet storage subsystem shared by the one or more network processors via an interconnect, the packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a network processor within the one or more network processors to store an address for a packet being processed by the network processor, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of the one or more network processors sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective network processor to store an address for a packet being processed by that network processor, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all processing units having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored, and
wherein m is different from n.
12. A router comprising:
one or more network processors;
a packet storage subsystem shared by the one or more network processors via an interconnect, the packet storage subsystem comprising:
a buffer containing a plurality of entries each uniquely addressed by an n-bit address; and
an m-bit packet-processing address register employed by a network processor within the one or more network processors to store an address for a packet being processed by the network processor, wherein an m-bit address value within the packet-processing address register is mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of the one or more network processors sharing access to the buffer has an associated m-bit packet-processing address register employed by the respective network processor to store an address for a packet being processed by that network processor, wherein m-bit address values within each packet-processing address register are mapped or masked to produce an n-bit address for a buffer entry containing the packet being processed,
wherein each of the one or more network processors having access to the buffer may selectively employ any buffer allocation and usage algorithm, independent of buffer management used by the buffer,
wherein the buffer is a circular buffer, the packet storage subsystem further comprising:
an empty buffer address register containing an address for a buffer entry into which a next packet received for processing should be stored; and
a next packet address register shared by all processing units having access to the buffer and containing an address for a buffer entry in which a next packet to be processed is stored.
2. The packet storage subsystem according to
4. The packet storage subsystem according to
5. The packet storage subsystem according to
8. The router according to
10. The router according to
11. The router according to
15. The method according to
mapping or masking the m-bit address value within the packet-processing address register to produce the n-bit address by software executing within an associated processing unit.
17. The method according to
accessing a buffer entry containing a packet to be processed for a first time employing an address value within the next packet address register;
storing the address value from the next packet address register within a packet-processing address register associated with a processing unit that will process the packet; and
subsequently accessing the buffer entry using the address value stored in the associated packet-processing address register until processing of the packet within the buffer entry is complete.
18. The method according to
accessing the buffer entry containing the packet to be processed for the first time utilizing a dedicated packet retrieval instruction employing the address value from the next packet address register.
This application claims priority to U.S. provisional application no. 60/345,107 filed Dec. 31, 2001, the content of which is incorporated herein by reference.
The present invention is directed, in general, to packet buffer management and, more specifically, to a scalable structure and operation for packet buffer management within a network processor.
Network processors are dedicated processors that act as programmable packet forwarding engines for network routers. Since these processors have to interact with the packet buffers for packet forwarding or routing, the design of network processors is typically closely tied to the design of the overall routing system. Normally packet buffer management is closely related to the system architecture and is designed to suit the packet buffer size, packet queuing algorithms, and buffer allocation and deallocation techniques employed, as well as any other specific issues relating to packet processing.
To achieve contemporary wire-speed packet processing and efficient use of available memory resources, both an effective technique for managing packet buffers in networking environments and buffer management that scales to varying system requirements are essential.
There is, therefore, a need in the art for a system independent and scalable packet buffer management architecture for network processors.
To address the above-discussed deficiencies of the prior art, it is a primary object of the present invention to provide, for use in a router, a circular buffer storing packets for processing by one or more network processors and employing an empty buffer address register identifying the buffer entry into which a next received packet should be stored, a next packet address register identifying the buffer entry containing the next packet to be processed, and a packet-processing address register within each network processor identifying the buffer entry containing the packet being processed by that network processor. The empty buffer address register and the next packet address register are incremented when a new packet is received and stored or when processing of a packet by a network processor is initiated, respectively. The n-bit addresses to the buffer entries are mapped or masked from/to the m-bit packet-processing address register(s) within the network processor(s) by software, allowing the buffer size to be fully scalable. A dedicated packet retrieval instruction supported by the network processor(s) accesses a new packet for processing using the next packet address register, and copies the content of the next packet address register into the packet-processing address register within the respective network processor for use in subsequent accesses by that network processor. Upon completion of packet processing, the network processor invalidates the content of the associated packet-processing address register and signals the circular buffer, which marks the respective buffer entry as empty for re-use. Buffer management is thus independent of the network processor architecture.
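For illustration only, the lifecycle described above can be modeled with a short C sketch. The entry count, the header size, and the names circ_buf, enqueue_packet, begin_processing, and finish_processing are assumptions introduced for this example; they do not appear in the disclosure.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define BUF_ENTRIES 256u         /* assumed power-of-two number of buffer entries */
#define ENTRY_WORDS 16u          /* assumed entry size in 32-bit words */

struct circ_buf {
    uint32_t entry[BUF_ENTRIES][ENTRY_WORDS];
    bool     full[BUF_ENTRIES];  /* entry holds a packet awaiting or under processing */
    uint32_t ebr;                /* empty buffer address register */
    uint32_t npr;                /* next packet address register (shared) */
};

/* Store a newly received packet header at the entry named by EBR, then advance EBR. */
static bool enqueue_packet(struct circ_buf *b, const uint32_t *hdr)
{
    if (b->full[b->ebr])
        return false;                          /* buffer full */
    memcpy(b->entry[b->ebr], hdr, sizeof b->entry[b->ebr]);
    b->full[b->ebr] = true;
    b->ebr = (b->ebr + 1) % BUF_ENTRIES;       /* wrap: circular buffer */
    return true;
}

/* Effect of initiating processing: hand the caller the NPR value (copied into its
 * packet-processing address register), then advance NPR.  Checks such as
 * NPR catching up with EBR are omitted in this sketch. */
static uint32_t begin_processing(struct circ_buf *b)
{
    uint32_t ppr = b->npr;
    b->npr = (b->npr + 1) % BUF_ENTRIES;
    return ppr;
}

/* On completion the processor invalidates its register and the entry is marked empty. */
static void finish_processing(struct circ_buf *b, uint32_t ppr)
{
    b->full[ppr] = false;                      /* entry available for re-use */
}
```

In this sketch the shared NPR simply advances when processing begins, while completion is signaled per entry, mirroring the description of the buffer marking a completed entry empty for re-use.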
The foregoing has outlined rather broadly the features and technical advantages of the present invention so that those skilled in the art may better understand the detailed description of the invention that follows. Additional features and advantages of the invention will be described hereinafter that form the subject of the claims of the invention. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the invention in its broadest form.
Before undertaking the DETAILED DESCRIPTION OF THE INVENTION below, it may be advantageous to set forth definitions of certain words or phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term “controller” means any device, system or part thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which:
FIGS. 1 through 3A-3C, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the present invention may be implemented in any suitably arranged device.
The principal problem addressed by the present invention is the independence of the packet processing architecture at the network processor level from the packet buffer management architecture at the router design level. The present invention allows such independence both in terms of functionality and buffer size scalability, as described in further detail below.
Three address pointer registers are employed: a single empty buffer register EBR containing an address pointing to the next available location in the circular buffer 102 and used to write a new packet header to the circular buffer; a single next packet register NPR containing an address for a location in the circular buffer 102 for reading a packet header; and a packet pointer register PPR within each packet-processing unit 101 containing an address for the packet being processed by the respective processing unit.
The address pointer registers EBR and NPR are of a size n≦32 bits and are thus scalable with the size of the circular buffer 102 (the number of buffer entries) designed at the network processor level. The packet buffer addresses stored within the address pointer registers PPR within each processing unit are always 32-bit values in the exemplary embodiment, and are thus independent of the network processor level configuration. The instruction set architecture (ISA) for each processing unit 101 includes an instruction get_packet for retrieving packet header information from the circular buffer 102 as described in greater detail below.
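A minimal sketch of the software mapping or masking between the 32-bit values held in a PPR and the n-bit addresses used by the circular buffer is given below. The power-of-two entry count, the entry size, and the buffer base address are assumptions made for the example; the disclosure only requires that software perform such a mapping.

```c
#include <stdint.h>

/* Router-level configuration: 2^n buffer entries, n <= 32. */
#define LOG2_ENTRIES 10u                 /* assumed: 1024-entry buffer (n = 10) */
#define ENTRY_BYTES  64u                 /* assumed: bytes per buffer entry */
#define BUF_BASE     0x40000000u         /* assumed: buffer base in the 32-bit address map */

/* Mask a 32-bit PPR value down to the n-bit entry address used by the buffer. */
static inline uint32_t ppr_to_buf_addr(uint32_t ppr)
{
    return ((ppr - BUF_BASE) / ENTRY_BYTES) & ((1u << LOG2_ENTRIES) - 1u);
}

/* Map an n-bit entry address (e.g. the NPR contents) up to a full 32-bit PPR value. */
static inline uint32_t buf_addr_to_ppr(uint32_t entry)
{
    return BUF_BASE + (entry & ((1u << LOG2_ENTRIES) - 1u)) * ENTRY_BYTES;
}
```

Because only LOG2_ENTRIES and ENTRY_BYTES change with the router-level buffer configuration, the processing-unit software that manipulates 32-bit addresses is unaffected by the buffer size chosen.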
As illustrated in
As a result of executing the get_packet instruction, the processing unit receives the appropriate packet header from the circular buffer 102 together with a copy of the contents of address pointer register NPR (prior to incrementing). The first four words of the received packet header, as well as the address received from the address pointer register NPR, are stored internally by the receiving processing unit. The value within the address pointer register NPR is also incremented to point to the next location within the circular buffer 102.
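Modeled in C, the architectural effect of get_packet might look as follows. The structure names are illustrative; the four-word header copy and the NPR increment follow the description above, and for simplicity the PPR here holds the raw entry index rather than a mapped 32-bit address.

```c
#include <stdint.h>

#define NUM_ENTRIES 1024u                /* assumed circular-buffer depth */

struct buf_entry { uint32_t hdr[4]; /* ... remainder of entry ... */ };

struct circ_buf_regs {
    struct buf_entry *entries;           /* circular buffer 102 */
    uint32_t npr;                        /* shared next packet register */
};

struct proc_unit {
    uint32_t ppr;                        /* packet pointer register */
    uint32_t hdr[4];                     /* first four header words, held internally */
};

/* Architectural effect of get_packet: deliver the header at NPR plus a copy of
 * NPR itself to the requesting processing unit, then advance NPR. */
static void get_packet(struct circ_buf_regs *b, struct proc_unit *pu)
{
    uint32_t entry = b->npr;                         /* NPR value prior to incrementing */
    for (int w = 0; w < 4; w++)
        pu->hdr[w] = b->entries[entry].hdr[w];       /* first four words stored internally */
    pu->ppr = entry;                                 /* NPR copy kept for subsequent accesses */
    b->npr = (b->npr + 1) % NUM_ENTRIES;             /* point to the next location */
}
```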
As illustrated in
The present invention stores circular buffer addresses for packets within a processing unit and includes dedicated instructions to access packet buffer entries using the internally stored 32-bit address values. The design of the processing unit architecture is thus independent of packet buffer management at the processing system (router) level. The router may use any suitable buffer allocation and usage technique depending on the overall router design.
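A hypothetical processing loop illustrates this independence. Here get_packet and done_packet stand in for the dedicated retrieval instruction and the completion signal, and forward is a placeholder for the router's own packet handling; nothing in the loop depends on the buffer depth or allocation policy chosen at the router level.

```c
#include <stdint.h>

/* Hypothetical processing-unit software.  The 32-bit address returned by
 * get_packet is used for all subsequent accesses to the buffer entry; the
 * buffer size and allocation policy are invisible to this code. */
extern uint32_t get_packet(uint32_t hdr[4]);     /* dedicated retrieval instruction */
extern void     done_packet(uint32_t ppr);       /* signals the buffer to free the entry */
extern void     forward(const uint32_t hdr[4], uint32_t ppr);

void packet_loop(void)
{
    uint32_t hdr[4];
    for (;;) {
        uint32_t ppr = get_packet(hdr);          /* header words + buffer address in one step */
        forward(hdr, ppr);                       /* classification, routing, etc. */
        done_packet(ppr);                        /* PPR invalidated, entry marked empty */
    }
}
```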
In addition, the circular buffer size is completely scalable at the network processor level and may be selected for a particular implementation based on router size and traffic volume handled. The processing unit(s) (network processors) within the router simply treat the buffer addresses as 32-bit numbers through software, which may be easily mapped or masked to specific-size buffer addresses employed by the circular buffer. Overall this technique permits optimized design of the processing units (network processor) with sufficient programmability and efficiency of packet processing, and with configurability and scalability at the router level.
Although the present invention has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, enhancements, nuances, gradations, lesser forms, alterations, revisions, improvements and knock-offs of the invention disclosed herein may be made without departing from the spirit and scope of the invention in its broadest form.
Karim, Faraydon O., Chandra, Ramesh, Stramm, Bernd H.