A method of dynamically allocating buffers involves receiving a packet onto an ingress circuit. The ingress circuit includes a memory that stores a free buffer list and an allocated buffer list. Packet data of the packet is stored into a buffer. The buffer is associated with a buffer identification (ID). The buffer ID is moved from the free buffer list to the allocated buffer list once the packet data is stored in the buffer. The buffer ID is used to read the packet data from the buffer and into an egress circuit, and is stored in a de-allocate buffer list in the egress circuit. A send buffer IDs command is received from a processor onto the egress circuit and instructs the egress circuit to send the buffer ID to the ingress circuit such that the buffer ID is pushed onto the free buffer list.
13. A network device, comprising:
an ingress circuit of an integrated circuit including a first memory unit that stores a free buffer list and an allocated buffer list;
a buffer that stores a packet, wherein at least part of the packet is stored in a third memory unit, wherein the buffer has a buffer identification (id); and
an egress circuit on the integrated circuit that uses the buffer id to read the packet via a first bus, wherein the egress circuit includes a second memory unit that stores a de-allocate buffer list, wherein the ingress circuit and the egress circuit are both hardware circuits, wherein the egress circuit outputs an event packet via a second bus when a threshold condition is met, wherein the ingress circuit and the egress circuit are coupled to the first bus, and wherein the egress circuit outputs a buffer id via the first bus in response to receiving a send buffer ids command and as a result the buffer id is put on the free buffer list in the first memory unit in the ingress circuit, wherein the third memory unit is not a part of the ingress circuit and is not a part of the egress circuit.
19. An integrated circuit, wherein the integrated circuit is adapted for coupling to an external memory, the integrated circuit comprising:
an ingress circuit coupled to a bus that maintains a free buffer list and an allocated buffer list stored in a first memory located in the ingress circuit;
an egress circuit coupled to the bus that maintains a de-allocate buffer list stored in a second memory located in the egress circuit, wherein the buffer lists store buffer identifications (buffer ids), wherein the egress circuit generates an indication that at least a threshold number of buffer ids are present in the de-allocate buffer list;
a processor that processes packets received onto the integrated circuit, wherein the packets are received onto the integrated circuit via the ingress circuit and then at least part of the packets are stored by the integrated circuit into the external memory, wherein the processor, the ingress circuit, and the egress circuit are each separate physical circuits; and
means for communicating the indication to the processor from the egress circuit, wherein in response to the indication being communicated to the processor the processor causes a buffer id to be removed from the de-allocate buffer list of the egress circuit and to be stored onto the free buffer list of the ingress circuit.
1. A method of dynamically allocating buffers as packets are received and processed through a network device, comprising:
(a) receiving a packet onto an ingress circuit of the network device, wherein the ingress circuit includes a first memory that stores a free buffer list, and an allocated buffer list, and wherein the ingress circuit is coupled to a bus;
(b) storing packet data of the packet into a buffer, wherein the buffer is in a second memory and has a buffer identification (id), and moving the buffer id from the free buffer list to the allocated buffer list;
(c) using the buffer id to read the packet data via the bus from the buffer in the second memory and into an egress circuit, and storing the buffer id in a de-allocate buffer list stored in a third memory located in the egress circuit, wherein the egress circuit is coupled to the bus, and wherein the ingress circuit and the egress circuit are located on an integrated circuit; and
(d) receiving a send buffer ids command via the bus from a processor onto the egress circuit, wherein the send buffer ids command instructs the egress circuit to send the buffer id from the egress circuit to the ingress circuit via the bus such that the buffer id is pushed onto the free buffer list stored in the first memory located in the ingress circuit, wherein the ingress circuit is a hardware circuit, wherein the egress circuit is a hardware circuit, wherein the second memory is not part of the ingress circuit or the egress circuit, and wherein the first, second and third memories are separate hardware memory units located in different locations on the network device.
2. The method of
3. The method of
(e) determining on the egress circuit that a threshold condition has been satisfied and as a result outputting an event packet onto the event bus.
4. The method of
5. The method of
6. The method of
(f) receiving the event packet on the processor, wherein the processor responds to the receiving of the event packet by sending the send buffer ids command of (d) to the egress circuit.
7. The method of
(b1) storing a first portion of the packet in the internal memory unit; and
(b2) storing a second portion of the packet in the external memory unit.
8. The method of
9. The method of
10. The method of
11. The method of
(e) removing the buffer id from the de-allocate buffer list in the egress circuit after responding to the command of (d).
12. The method of
(e) sending a complete command to the third island from the second island, wherein the complete command is sent in response to the send buffer ids command, and wherein the complete command is an instruction to de-allocate a packet number.
14. The network device of
15. The network device of
16. The network device of
wherein the third memory unit is an external memory unit that is external to the integrated circuit.
17. The network device of
an external memory unit that is not located on any of the islands, wherein a first portion of the packet is stored in the third memory unit, and wherein a second portion of the packet is stored in the external memory unit.
18. The network device of
20. The integrated circuit of
This application claims the benefit under 35 U.S.C. § 119 from provisional U.S. patent application Ser. No. 62/073,865, entitled “A METHOD OF DYNAMICALLY ALLOCATING BUFFERS FOR PACKET DATA RECEIVED ONTO A NETWORKING DEVICE”, filed Oct. 31, 2014. The above-listed provisional application is incorporated by reference.
The described embodiments relate generally to managing memory allocation and more specifically to dynamically managing memory allocation for packet data processing within a network flow processor.
In a first novel aspect, a packet is received onto a network device. The network device includes an ingress circuit, a buffer, and an egress circuit. The ingress circuit includes a memory that stores a free buffer list and an allocated buffer list. Packet data of the packet is stored into a buffer. The buffer is associated with a buffer identification (ID). The buffer ID is moved from the free buffer list to the allocated buffer list once the packet data is stored in the buffer. The buffer ID is used to read the packet data from the buffer and into an egress circuit and is stored in a de-allocate buffer list in the egress circuit. A send buffer IDs command is received from a processor onto the egress circuit and instructs the egress circuit to send the buffer ID to the ingress circuit such that the buffer ID is pushed onto the free buffer list in the ingress circuit.
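The buffer ID lifecycle summarized above can be modeled in a few lines of software. The following C sketch is purely illustrative: the list depth, the LIFO behavior, and all names are assumptions made for the example and do not describe the actual hardware structures of the ingress and egress circuits.

#include <stdint.h>
#include <stdio.h>

#define NUM_BUFFERS 8  /* illustrative list depth, not a hardware parameter */

/* A simple LIFO list of buffer IDs (a sketch of a buffer list). */
typedef struct {
    uint32_t ids[NUM_BUFFERS];
    int count;
} buffer_list_t;

static void push_id(buffer_list_t *list, uint32_t id) { list->ids[list->count++] = id; }
static uint32_t pop_id(buffer_list_t *list) { return list->ids[--list->count]; }

int main(void) {
    buffer_list_t free_list = {{0}, 0};        /* in the ingress circuit */
    buffer_list_t allocated_list = {{0}, 0};   /* in the ingress circuit */
    buffer_list_t deallocate_list = {{0}, 0};  /* in the egress circuit  */

    /* Initially every buffer ID is on the free buffer list. */
    for (uint32_t i = 0; i < NUM_BUFFERS; i++)
        push_id(&free_list, i);

    /* A packet arrives: packet data is stored into a buffer, and the
       buffer ID moves from the free list to the allocated list. */
    uint32_t id = pop_id(&free_list);
    push_id(&allocated_list, id);

    /* The egress circuit uses the buffer ID to read the packet data,
       then records the ID on its de-allocate buffer list. */
    (void)pop_id(&allocated_list);
    push_id(&deallocate_list, id);

    /* A "send buffer IDs" command from the processor: the ID is sent
       back to the ingress circuit and pushed onto the free list. */
    push_id(&free_list, pop_id(&deallocate_list));

    printf("free=%d allocated=%d de-allocate=%d\n",
           free_list.count, allocated_list.count, deallocate_list.count);
    return 0;
}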
In a second novel aspect, a network device includes an ingress circuit, a buffer and an egress circuit. The ingress circuit includes a first memory unit that stores a free buffer list and an allocated buffer list for a packet. The buffer stores the packet. The buffer is associated with a buffer identification (ID). The egress circuit uses the buffer ID to read the packet from the buffer. The egress circuit includes a second memory unit that stores a de-allocate buffer list. When a threshold condition is met, the egress circuit outputs an event packet. The egress circuit outputs a buffer ID in response to receiving a send buffer IDs command.
In a third novel aspect, a network device includes an ingress circuit, an egress circuit and a processor. The ingress circuit maintains a free buffer list and an allocated buffer list. The egress circuit maintains a de-allocate buffer list. The buffer lists store buffer identifications (buffer IDs). The processor processes packets received onto the network device. The network device further includes means for generating an indication that at least a threshold number of buffer IDs are present on the egress circuit, and means for communicating the indication to the processor from the egress circuit.
In one example, the means for generating is an event bus that traverses the egress circuit. In another example, the processor can monitor event packets communicated along the event bus.
In one example, the means for communicating is a Command/Push/Pull (CPP) bus.
Further details and embodiments and techniques are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.
The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
Reference will now be made in detail to background examples and some embodiments of the invention, examples of which are illustrated in the accompanying drawings. In the description and claims below, relational terms such as “horizontal”, “vertical”, “lateral”, “top”, “upper”, “bottom”, “lower”, “right”, “left”, “over” and “under” may be used to describe relative orientations between different parts of a structure being described, and it is to be understood that the overall structure being described can actually be oriented in any way in three-dimensional space.
Line card 4 includes a first optical transceiver 10, a first PHY integrated circuit 11, an Island-Based Network Flow Processor (IB-NFP) integrated circuit 12, a configuration Programmable Read Only Memory (PROM) 13, an external memory such as Dynamic Random Access Memory (DRAM) 40-41, a second PHY integrated circuit 15, and a second optical transceiver 16. Packet data received from the network via optical cable 7 is converted into electrical signals by optical transceiver 10. PHY integrated circuit 11 receives the packet data in electrical form from optical transceiver 10 via connections 17 and forwards the packet data to the IB-NFP integrated circuit 12 via SerDes connections 18. In one example, the flow of packets into the IB-NFP integrated circuit from optical cable 7 is 100 Gbps traffic. A set of four SerDes circuits 19-22 within the IB-NFP integrated circuit 12 receives the packet data in serialized form from SerDes connections 18, deserializes the packet data, and outputs packet data in deserialized form to digital circuitry within IB-NFP integrated circuit 12.
Similarly, IB-NFP integrated circuit 12 may output 100 Gbps packet traffic to optical cable 8. The set of four SerDes circuits 19-22 within the IB-NFP integrated circuit 12 receives the packet data in deserialized form from digital circuitry within integrated circuit 12. The four SerDes circuits 19-22 output the packet data in serialized form onto SerDes connections 23. PHY 15 receives the serialized form packet data from SerDes connections 23 and supplies the packet data via connections 24 to optical transceiver 16. Optical transceiver 16 converts the packet data into optical form and drives the optical signals through optical cable 8. Accordingly, the same set of four duplex SerDes circuits 19-22 within the IB-NFP integrated circuit 12 communicates packet data both into and out of the IB-NFP integrated circuit 12.
IB-NFP integrated circuit 12 can also output packet data to switch fabric 9. Another set of four duplex SerDes circuits 25-28 within IB-NFP integrated circuit 12 receives the packet data in deserialized form, and serializes the packet data, and supplies the packet data in serialized form to switch fabric 9 via SerDes connections 29. Packet data from switch fabric 9 in serialized form can pass from the switch fabric via SerDes connections 30 into the IB-NFP integrated circuit 12 and to the set of four SerDes circuits 25-28. SerDes circuits 25-28 convert the packet data from serialized form into deserialized form for subsequent processing by digital circuitry within the IB-NFP integrated circuit 12.
Management card 3 includes a CPU (Central Processing Unit) 31. CPU 31 handles router management functions including the configuring of the IB-NFP integrated circuits on the various line cards 4-6. CPU 31 communicates with the IB-NFP integrated circuits via dedicated PCIE connections. CPU 31 includes a PCIE SerDes circuit 32. IB-NFP integrated circuit 12 also includes a PCIE SerDes 33. The configuration information passes from CPU 31 to IB-NFP integrated circuit 12 via SerDes circuit 32, SerDes connections 34 on the backplane, and the PCIE SerDes circuit 33 within the IB-NFP integrated circuit 12. External configuration PROM (Programmable Read Only Memory) integrated circuit 13 stores other types of configuration information such as information that configures various lookup tables on the IB-NFP integrated circuit. This configuration information 35 is loaded into the IB-NFP integrated circuit 12 upon power up. As is explained in further detail below, IB-NFP integrated circuit 12 can store various types of information including buffered packet data in external DRAM integrated circuits 40-41.
For each packet, the functional circuitry of ingress NBI island 72 examines fields in the header portion to determine what storage strategy to use to place the packet into memory. In one example, the NBI island examines the header portion and from that determines whether the packet is an exception packet or whether the packet is a fast-path packet. If the packet is an exception packet then the NBI island determines a first storage strategy to be used to store the packet so that relatively involved exception processing can be performed efficiently, whereas if the packet is a fast-path packet then the NBI island determines a second storage strategy to be used to store the packet for more efficient transmission of the packet from the IB-NFP.
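The storage-strategy decision can be pictured with a small C sketch. The header test used here (an ethertype check) and the enum names are assumptions chosen only to illustrate the idea of examining header fields and selecting one of two strategies; they are not the actual preclassifier rules of the NBI island.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical names for the two storage strategies described above. */
typedef enum { STRATEGY_EXCEPTION, STRATEGY_FAST_PATH } storage_strategy_t;

/* A sketch of the decision only: examine header fields and pick a strategy.
   The field tested here (an illustrative ethertype check) is an assumption,
   not the actual preclassification rule. */
storage_strategy_t select_storage_strategy(const uint8_t *header, int header_len) {
    bool is_exception = false;
    if (header_len >= 14) {
        uint16_t ethertype = (uint16_t)((header[12] << 8) | header[13]);
        /* Treat anything that is not plain IPv4 as an exception packet
           in this toy example. */
        is_exception = (ethertype != 0x0800);
    }
    return is_exception ? STRATEGY_EXCEPTION : STRATEGY_FAST_PATH;
}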
In the operational example of
Half island 68 is an interface island through which all information passing into, and out of, SRAM MU block 78 passes. The functional circuitry within half island 68 serves as the interface and control circuitry for the SRAM within block 78. For simplicity purposes in the discussion below, both half island 68 and MU block 78 may be referred to together as the MU island, although it is to be understood that MU block 78 is actually not an island as the term is used here but rather is a block. In one example, MU block 78 is an amount of so-called "IP" that is designed and supplied commercially by a commercial entity other than the commercial entity that designs and lays out the IB-NFP integrated circuit. The area occupied by block 78 is a keep out area for the designer of the IB-NFP in that substantially all the wiring and all the transistors in block 78 are laid out by the memory compiler and are part of the SRAM. Accordingly, the mesh buses and associated crossbar switches of the configurable mesh data bus, the mesh control bus, and the mesh event bus do not pass into the area of block 78. No transistors of the mesh buses are present in block 78. There is an interface portion of the SRAM circuitry of block 78 that is connected by short direct metal connections to circuitry in half island 68. The data bus, control bus, and event bus structures pass into and over the half island 68, and through the half island couple to the interface circuitry in block 78. Accordingly, the payload portion of the incoming fast-path packet is communicated from NBI island 72, across the configurable mesh data bus to SRAM control island 68, and from control island 68, to the interface circuitry in block 78, and to the internal SRAM circuitry of block 78. The internal SRAM of block 78 stores the payloads so that they can be accessed for flow determination by the ME island.
In addition, a preclassifier in the ingress NBI island determines that the payload portions for others of the packets should be stored in external DRAM 40 and 41. For example, the payload portions for exception packets are stored in external DRAM 40 and 41. Interface island 70, IP block 79, and DDR PHY I/O blocks 46 and 47 serve as the interface and control for external DRAM integrated circuits 40 and 41. The payload portions of the exception packets are therefore communicated across the configurable mesh data bus from NBI island 72, to interface and control island 70, to external MU SRAM block 79, to 32-bit DDR PHY I/O blocks 46 and 47, and to external DRAM integrated circuits 40 and 41. At this point in the operational example, the packet header portions and their associated payload portions are stored in different places. The payload portions of fast-path packets are stored in internal SRAM in MU block 78, whereas the payload portions of exception packets are stored in external DRAMs 40 and 41.
ME island 66 informs second NBI island 63 where the packet headers and the packet payloads can be found and provides the second NBI island 63 with an egress packet descriptor for each packet. The egress packet descriptor indicates a queuing strategy to be used on the packet. Second NBI island 63 uses the egress packet descriptor to read the packet headers and any header modification from ME island 66 and to read the packet payloads from either internal SRAM 78 or external DRAMs 40 and 41. Second NBI island 63 places packet descriptors for packets to be output into the correct order. For each packet that is then scheduled to be transmitted, the second NBI island uses the packet descriptor to read the header portion and any header modification and the payload portion and to assemble the packet to be transmitted. Note that the header modification is not actually part of the egress packet descriptor, but rather it is stored with the packet header by the ME when the packet is presented to the NBI. The second NBI island then performs any indicated packet modification on the packet. The resulting modified packet then passes from second NBI island 63 and to egress MAC island 64.
Egress MAC island 64 buffers the packets, and converts them into symbols. The symbols are then delivered by conductors from the MAC island 64 to the four SerDes I/O blocks 25-28. From SerDes I/O blocks 25-28, the 100 Gbps outgoing packet flow passes out of the IB-NFP integrated circuit 12 and across SerDes connections 34 (see
As packets are loaded into SRAM, a statistics block 306 counts the number of packets that meet certain criteria. Various sub-circuits of the ingress MAC island are configurable. The input conductors 307 labeled CB couple certain portions of the MAC island to the control bus tree so that these portions receive configuration information from the root of the control bus tree. SRAM block 305 includes error detection and correction circuitry (ECC) 308. Error information detected and collected by ECC block 308 and statistics block 306 is reported through the local event bus and global event chain back to the ARM island 51. Ingress MAC island 71 is part of one of the local event rings. Event packets are circulated into the MAC island via conductors 309 and are circulated out of the MAC island via conductors 310. Packets that are buffered in SRAM 305 are then output from the MAC island to the ingress NBI island 72 in the form of one or more 256 byte minipackets 311 communicated across dedicated connections 312. Statistics information 313 is also communicated to the ingress NBI island 72 via dedicated connections 314.
The packet is buffered in SRAM 322. A buffer pool is a set of targets in ME islands where header portions can be placed. A buffer list is a list of memory addresses where payload portions can be placed. DMA engine 323 can read the packet out of SRAM via conductors 324, then use the buffer pools to determine a destination to which the packet header is to be DMA transferred, and use the buffer lists to determine a destination to which the packet payload is to be DMA transferred. The DMA transfers occur across the configurable mesh data bus. In the case of the exception packet of this example, the preclassification user metadata and buffer pool number indicate to the DMA engine that the packet is an exception packet, and this causes a first buffer pool and a first buffer list to be used, whereas in the case of the fast-path packet the preclassification user metadata and buffer pool number indicate to the DMA engine that the packet is a fast-path packet, and this causes a second buffer pool and a second buffer list to be used.
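As a rough illustration of how the preclassification result might steer the DMA transfers, the following C sketch pairs a buffer pool (for header portions) with a buffer list (for payload portions) based on a buffer pool number. The pool numbers, list numbers, and structure names are assumptions for the example only.

#include <stdint.h>

/* Illustrative pairing of a buffer pool (where header portions go) with a
   buffer list (where payload portions go). The numbers are assumptions. */
typedef struct {
    uint32_t header_pool;   /* selects a CTM target for the header portion   */
    uint32_t payload_list;  /* selects a buffer list for the payload portion */
} dma_destination_t;

#define POOL_EXCEPTION 0u
#define POOL_FAST_PATH 1u

dma_destination_t select_dma_destination(uint32_t buffer_pool_number) {
    dma_destination_t dest;
    if (buffer_pool_number == POOL_EXCEPTION) {
        dest.header_pool  = POOL_EXCEPTION; /* first buffer pool  */
        dest.payload_list = 0u;             /* first buffer list  */
    } else {
        dest.header_pool  = POOL_FAST_PATH; /* second buffer pool */
        dest.payload_list = 1u;             /* second buffer list */
    }
    return dest;
}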
The ingress NBI island maintains and stores a number of buffer lists 392. One of the buffer lists is a free buffer list. Packet data is received from an ingress MAC island into the ingress NBI island and is stored in SRAM 322. Individual portions of the packet data are stored in buffers in main memory. Each buffer has an associated buffer ID. The packet data of a packet may occupy multiple buffers, where the buffer IDs for the buffers are in one of the buffer lists. The DMA 323 causes the portions of packet data to be written into their corresponding buffers in external memory. The DMA 323 uses the CPP bus to do this. The DMA 323 also sends an ingress packet descriptor to a CTM in the ME island.
Block 326 is data bus interface circuitry through which the configurable mesh data bus is accessed. Arrow 325 represents packets that are DMA transferred out of the NBI island 72 by DMA engine 323. Each packet is output with a corresponding ingress packet descriptor.
The programs stored in the instruction stores that are executable by the picoengines can be changed multiple times a second as the router operates. Configuration block 327 receives configuration information from the control bus CB tree via connections 328 and supplies the configuration information to various ones of the sub-circuits of NBI island 72 that are configurable. Error detection and correction (ECC) circuitry 329 collects error information such as errors detected in the contents of the instruction stores. ECC circuitry 329 and ECC circuitry 330 are coupled via connections 331 and 332 and other internal island connections not shown to be part of the local event ring of which the ingress MAC island 72 is a part. A detailed description of the event ring is provided in U.S. patent application Ser. No. 13/399,678, entitled “LOCAL EVENT RING IN AN ISLAND-BASED NETWORK FLOW PROCESSOR”, filed Feb. 17, 2012, now U.S. Pat. No. 9,619,418, by Gavin J. Stark (the subject matter of which is incorporated herein by reference).
Each egress packet descriptor includes: 1) an address indicating where and in which ME island the header portion is found, 2) an address indicating where and in which MU island the payload portion is found, 3) how long the packet is, 4) sequence number of the packet in the flow, 5) an indication of which queue the packet belongs to (result of the packet policy), 6) an indication of where the packet is to be sent (a result of the packet policy), 7) user metadata indicating what kind of packet it is, 8) packet sequencer identification to be used by the reorder block in determining in-order packet transmissions, 9) a drop precedence value that indicates a variable drop probability for an instantaneous queue depth range, 10) a split indicator that indicates if a packet is split between CTM and main memory or stored in a single memory location, and 11) a priority indicator that indicates if the packet associated with the packet descriptor is a high priority packet or a low priority packet.
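For illustration, the eleven fields enumerated above could be collected into a structure along the lines of the following C sketch. The field names and widths are assumptions made for readability and are not the actual bit-level layout of the egress packet descriptor.

#include <stdbool.h>
#include <stdint.h>

/* A hypothetical, non-authoritative model of an egress packet descriptor.
   Field names and widths are illustrative assumptions only. */
typedef struct {
    uint32_t header_addr;      /* 1) where (and in which ME island) the header portion is  */
    uint32_t payload_addr;     /* 2) where (and in which MU island) the payload portion is */
    uint16_t packet_length;    /* 3) how long the packet is                                */
    uint16_t sequence_number;  /* 4) sequence number of the packet in the flow             */
    uint8_t  queue_number;     /* 5) which queue the packet belongs to (packet policy)     */
    uint8_t  destination;      /* 6) where the packet is to be sent (packet policy)        */
    uint8_t  user_metadata;    /* 7) what kind of packet it is                             */
    uint8_t  sequencer_id;     /* 8) used by the reorder block for in-order transmission   */
    uint8_t  drop_precedence;  /* 9) variable drop probability for a queue depth range     */
    bool     split;            /* 10) split between CTM and main memory, or single memory  */
    bool     high_priority;    /* 11) high priority or low priority packet                 */
} egress_packet_descriptor_t;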
Regarding the split indicator, the split indicator allows different portions of a packet to be stored in different memory locations. In one example, the split indicator may be used to store a first part of a packet in the cluster target memory and a second portion of the packet in main memory located in the MU island. The cluster target memory can be quickly accessed by microengines in the ME island, while access to main memory in the MU island takes additional time. One beneficial use of this dynamic splitting and storing of a packet is to split the header portion of the packet from the payload portion of the packet. The header portion of the packet is stored in the cluster target memory and the payload portion of the packet is stored in main memory. This split allows the microengine fast access to the header portion of the packet, which is necessary to determine packet routing, while many times the microengine does not need to access the payload portion of the packet at all to determine packet routing. In this fashion, the quickly accessible cluster target memory is only used to store packet information that is actually used by the microengines, and storage space in the cluster target memory is not wasted by storing packet information that is not used by the microengines. In the event that the portion of the packet that is necessary for processing changes, the microengines can communicate the change to the ingress NBI via the CPP bus and instruct the ingress NBI to change which portion of the packet is stored in the cluster target memory and which portion of the packet is stored in the main memory. This dynamic control ensures that the fast access cluster target memory is always utilized most efficiently.
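The split storage described above can be sketched in C as follows. The memories, their sizes, and the explicit split offset are stand-ins; in the described system the split is performed by the ingress NBI hardware under microengine control, not by a memcpy, and bounds checks are omitted for brevity.

#include <stdint.h>
#include <string.h>

/* Stand-in memories: a small, fast cluster target memory and a larger,
   slower main memory. Sizes are arbitrary illustration values. */
static uint8_t cluster_target_memory[2048];
static uint8_t main_memory[65536];

/* Split a packet so that only the portion the microengines actually use
   (assumed here to be the header) occupies the fast memory. The split
   offset would in practice be set by ingress NBI configuration. */
void store_packet_split(const uint8_t *packet, size_t len, size_t split_offset) {
    size_t header_len = (split_offset < len) ? split_offset : len;
    memcpy(cluster_target_memory, packet, header_len);           /* header portion  */
    memcpy(main_memory, packet + header_len, len - header_len);  /* payload portion */
}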
Regarding the priority indicator, one example of a high priority packet is a control plane packet. Another example of a high priority packet is a maintenance packet. In contrast, one example of a low priority packet is an HTTP packet.
Memory errors and other events detected in the ME island are reported via a local event ring and the global event chain back to the ARM island 51. A local event ring is made to snake through the ME island for this purpose. Event packets from the local event chain are received via connections 339 and event packets are supplied out to the local event chain via connections 340. The CB island bridge 341, the cluster local scratch 342, and CTM 333 can be configured and are therefore coupled to the control bus CB via connections 343A so that they can receive configuration information from the control bus CB.
A microengine within the ME island can use data bus commands to interact with a target, regardless of whether the target is located locally on the same ME island as the microengine or whether the target is located remotely in another island, using the same configurable data bus communications. If the target is local within the ME island, then the microengine uses data bus commands and operations as described above as if the memory were outside the island in another island, except that bus transaction values do not have a final destination value. The bus transaction values do not leave the ME island and therefore do not need the final destination information. If, on the other hand, the target is not local within the ME island then intelligence 343 within the DB island bridge adds the final destination value before the bus transaction value is sent out onto the configurable mesh data bus. From the perspective of the microengine master, the interaction with the target has the same protocol and command and data format regardless of whether the target is local or remote.
In the present operational example, a microengine in the ME island 66 issues a lookup command across the configurable mesh data bus to have lookup hardware engine 350 examine tables in SRAM 351 for the presence of given data. The data to be looked for in this case is a particular MPLS label. The lookup command as received onto the MU island is a lookup command, so the data bus interface 352 presents the lookup command to the lookup engine. The lookup command includes a table descriptor of what part of memory to look in. The lookup command also contains a pull-id reference indicating what to look for (the MPLS label in this case). The data to look for is actually stored in transfer registers of the originating microengine. The lookup engine 350 therefore issues a pull-id out onto the configurable mesh data bus to request the data back from the originating microengine. The microengine returns the requested data (the MPLS label to look for) corresponding to the reference id. The lookup engine now has the lookup command, the table descriptor, and the MPLS label that it is to look for. In the illustration there are three tables 353-355. A table descriptor identifies one such table by indicating the starting address of the table in SRAM 351, and how large the table is. If the lookup operation is successful in that the lookup hardware engine 350 finds the MPLS label in the table identified by the table descriptor, then the lookup hardware engine 350 returns a predetermined value "Packet Policy" 356 back to the requesting microengine. A packet policy is a code that indicates: 1) a header modification to be done, and 2) a queuing strategy to use. Lookup engine 350 returns the packet policy 356 to the originating microengine by pushing the data (the packet policy) via the push interface of the configurable mesh data bus.
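A toy software model of the lookup step may help: a table descriptor identifies a table, the pulled MPLS label is searched for, and a packet policy value is returned on success. The structures, the linear search, and the zero "not found" value are assumptions made for this sketch; the real lookup is performed by the hardware lookup engine 350.

#include <stdint.h>

/* Toy model of a lookup table in SRAM: a table descriptor gives the
   starting location and the number of entries. Names are illustrative. */
typedef struct { uint32_t label; uint32_t packet_policy; } table_entry_t;
typedef struct { const table_entry_t *base; uint32_t num_entries; } table_descriptor_t;

/* Search the identified table for the MPLS label pulled from the
   originating microengine; return its packet policy, or 0 if not found. */
uint32_t lookup_packet_policy(table_descriptor_t td, uint32_t mpls_label) {
    for (uint32_t i = 0; i < td.num_entries; i++) {
        if (td.base[i].label == mpls_label)
            return td.base[i].packet_policy;
    }
    return 0; /* assumed "no policy" value for an unsuccessful lookup */
}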
Various parts of the MU island are configurable by changing the contents of registers and memory via the control bus CB and connections 357 and control status registers 362. Errors detected on the MU island by circuits 360 and 361 are reported into a local event ring. Event packets from the local event ring are received via input connections 358 and the MU island outputs event packets to the local event ring via output connections 359. Various sub-circuits of the MU island are configurable.
The ingress NBI island 72 maintains and stores a number of buffer lists 392. One of the buffer lists is a free buffer list and a second buffer list is an allocated buffer list. Packet data is received from an ingress MAC island into the ingress NBI island and is stored in SRAM 322. Individual portions of the packet data are stored in buffers in main memory. Each buffer has an associated buffer ID. The packet data of a packet may occupy multiple buffers, where the buffer IDs for the buffers are in one of the buffer lists. The DMA 323 causes the portions of packet data to be written into their corresponding buffers in external memory. The DMA 323 uses the CPP bus to do this. The DMA 323 also sends an ingress packet descriptor to a CTM in the ME island. The ingress packet descriptor includes a PPI number that is associated with the header portion of the packet as stored in the CTM. The ingress packet descriptor also includes a buffer list identifier that identifies the buffer list of buffer IDs (that store the packet payload in main memory). The ingress packet descriptor is converted into an egress packet descriptor 396 and is loaded into the queue SRAM 367 of the egress NBI island. There are lists of such egress packet descriptors stored in the queue SRAM 367. When the packet is scheduled to be output from the IB-NFP, then the egress packet descriptor 396 for the packet is sent to the DMA 363. The DMA engine 363 uses the buffer list identifier to obtain the buffer IDs of the list from the ingress NBI island, and then uses the buffer IDs to read the associated packet data 397 from the indicated buffers. The DMA engine 363 also uses the PPI number of the packet to have the packet engine of the ME island return the header portion of the packet. The DMA engine 363 combines the header portion of the packet with data portions from the buffers and supplies the packet in sections via FIFO 365 to the packet modifier. The buffers that stored the packet data for the packet, at this point, are no longer used, so their buffer IDs are recorded in a buffer descriptor memory 404 in the DMA engine 363. There may be multiple such lists of buffer IDs in the buffer descriptor memory 404 in the DMA engine 363. When the number of buffer IDs in this memory reaches a predetermined threshold condition, an event packet is generated. To prevent stalling of the egress NBI island, an event packet can also be generated when the de-allocate buffer list is full. This event generation methodology causes microengines to clear out buffer IDs from the de-allocate buffer list whenever the de-allocate buffer list is full, regardless of how many buffer IDs for any one microengine are stored in the de-allocate buffer list.
In one general example, the threshold condition is a multiple of the CPP bus data payload to improve bus utilization. In a more specific example, two buffer IDs fit in a single 64-bit DSF/CPP push bus cycle enabling reads of up to 32 buffer descriptors in a single CPP burst. For example, a CPP read command size of 16 results in 32 buffer IDs being read from the selected queue and pushed to its destination.
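The relationship in the specific example above (two buffer IDs per 64-bit push cycle, so a read command size of 16 yields 32 buffer IDs) can be checked with trivial arithmetic, as in this C sketch; it assumes the size field counts 64-bit bus cycles.

#include <stdio.h>

/* Two buffer IDs fit in one 64-bit DSF/CPP push bus cycle, so the number
   of buffer IDs returned is twice the read command size (assuming the
   size field counts 64-bit cycles, as in the example above). */
int main(void) {
    int ids_per_cycle = 2;
    int read_command_size = 16;
    printf("buffer IDs read per burst: %d\n", ids_per_cycle * read_command_size); /* 32 */
    return 0;
}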
This event packet 399 is output from the egress NBI island onto the event bus. The event packet 399 is communicated through the event bus to the ME island. A microengine is alerted, and in response sends a CPP command 398 to the DMA engine 363 of the egress NBI island. This CPP command 398 is an instruction to the DMA engine to send a number of the buffer IDs 405 (recorded in the buffer descriptor memory of the DMA engine 363) across the CPP bus to the DMA engine 323 in the ingress NBI island 72. This command is also referred to as a "send buffer IDs command". These buffer IDs 405 are then pushed onto one of the free buffer lists in ingress NBI island 72. The DMA engine 363 also sends a complete command to the packet engine in the CTM of the ME island, instructing the packet engine to de-allocate the PPI number. In this way, the buffer IDs are allocated and de-allocated (or "freed"), and the PPI number is allocated and de-allocated. The amount of buffer space usable by a microengine is dynamically allocated, and is not fixed, but rather can increase and decrease over time as packets flow through the IB-NFP.
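The overall de-allocation flow of this paragraph can be simulated with the following C sketch: when the de-allocate list reaches a threshold an event is signaled, a microengine then issues the send buffer IDs command, and the buffer IDs are returned to a free list. The threshold value, list depths, and function names are illustrative assumptions and do not describe the actual DMA engine 363 or CPP bus transactions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LIST_DEPTH 64
#define EVENT_THRESHOLD 4   /* illustrative threshold, not the hardware value */

typedef struct { uint32_t ids[LIST_DEPTH]; int count; } id_list_t;

static id_list_t free_list;        /* models a free buffer list in the ingress NBI island  */
static id_list_t deallocate_list;  /* models the buffer descriptor memory in the egress DMA */

/* Egress side: record a no-longer-needed buffer ID and report whether the
   threshold condition is now met (the point at which an event packet
   would be generated onto the event bus). */
static bool record_deallocated_id(uint32_t id) {
    deallocate_list.ids[deallocate_list.count++] = id;
    return deallocate_list.count >= EVENT_THRESHOLD;
}

/* Microengine side: the "send buffer IDs" command causes the recorded IDs
   to be pushed back onto a free buffer list in the ingress NBI island;
   a complete command (not modeled here) then de-allocates the PPI number. */
static void send_buffer_ids_command(void) {
    while (deallocate_list.count > 0)
        free_list.ids[free_list.count++] = deallocate_list.ids[--deallocate_list.count];
}

int main(void) {
    for (uint32_t id = 100; id < 106; id++) {
        if (record_deallocated_id(id)) {
            printf("event packet: %d buffer IDs ready to be freed\n", deallocate_list.count);
            send_buffer_ids_command();
        }
    }
    printf("free list now holds %d buffer IDs\n", free_list.count);
    return 0;
}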
Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.