A method for parallel processing implemented by a first core in a network unit, comprising: locking an ingress queue if the ingress queue is not locked by another core; searching for an unlocked task queue from a first default subset of a plurality of task queues when the ingress queue is locked by another core, wherein the first default subset is different from a second default subset of the plurality of task queues from which a second core begins a search for an unlocked task queue; and searching a remainder of the plurality of task queues for an unlocked task queue when all of the first default subset of task queues are locked and the ingress queue is locked.
1. A method for parallel processing implemented in a network unit, comprising:
employing a first core to lock an ingress queue when the ingress queue is not locked by another core, encapsulate packets from the ingress queue into tasks for scheduling and push each task to one of a plurality of task queues based on one of a priority of the task, a processing flow of the task, or an order type of the task when the ingress queue is locked by the first core, and unlock the ingress queue after the pushing of the tasks;
employing the first core to search a first subset of the plurality of task queues for a first unlocked task queue when the ingress queue is locked by another core, lock the first unlocked task queue and place a first task from the first unlocked task queue into an order queue when the first core finds the first unlocked task queue, and unlock the first unlocked task queue after placing the task;
employing the first core to search a first remaining subset of the task queues for a second unlocked task queue when all of the first subset of the task queues are locked and the ingress queue is locked, lock the second unlocked task queue and place a second task from the second unlocked task queue into the order queue when the first core finds the second unlocked task queue, and unlock the second unlocked task queue after placing the second task, wherein the first remaining subset of the task queues comprises members of the task queues outside of the first subset of the task queues; and
processing an order task that is at a head of the order queue.
8. In a network unit, a computer program product executable by a multi-core processor, the computer program product comprising computer executable instructions stored on a non-transitory computer readable medium that when executed by the multi-core processor cause the network unit to perform the following:
employing a first core to lock an ingress queue when the ingress queue is not locked by another core; encapsulate packets from the ingress queue into tasks for scheduling and push each task to one of a plurality of task queues based on one of a priority associated with the task, a processing flow associated with the task, or an order type associated with the task when the ingress queue is locked by the first core; and unlock the ingress queue after pushing of the tasks;
employing the first core to search for a first unlocked task queue from a first default subset of a plurality of task queues when the ingress queue is locked by another core, lock the first unlocked task queue and place a first task from the first unlocked task queue into an order queue when the first core finds the first unlocked task queue, and unlock the first unlocked task queue after placing the first task, wherein the first default subset is different from a second default subset of the task queues from which a second core begins a search for a second unlocked task queue;
employing the first core to search a first remaining subset of the task queues for a second unlocked task queue when all of the first default subset of the task queues are locked by other cores and the ingress queue is locked by another core, lock the second unlocked task queue and place a second task from the second unlocked task queue into the order queue when the first core finds the second unlocked task queue, and unlock the second unlocked task queue after placing the second task, wherein the first remaining subset of the task queues comprises members of the task queues outside of the first default subset of the task queues; and
processing an order task that is at a head of the order queue.
2. The method of
3. The method of
6. The method of
7. The method of claim 1, further comprising:
employing a second core to search a second subset of the task queues for a third unlocked task queue when the ingress queue is locked by another core, wherein the first subset of the task queues differs from the second subset of the task queues; and
employing the second core to search a second remaining subset of the task queues for the third unlocked task queue when all of the second subset of the task queues are locked by other cores and the ingress queue is locked by another core, wherein the second remaining subset of the task queues comprises members of the task queues outside of the second subset of the task queues.
9. The computer program product of
A multi-core processor is a single computing component (e.g., a central processing unit (CPU)) with two or more independently acting processing units or “cores.” The cores are the components that read and execute software instructions, such as add, move data, etc. Multi-core processors can run or execute multiple instructions at the same time (i.e., parallel processing), thereby increasing the overall speed for applications executing on a computer. The multiple cores are typically integrated onto a single integrated circuit. Multi-core processors have been common in servers, desktop computers, and laptops for some time, but have only recently been adopted in routers, switches, and other network nodes responsible for routing data packets across the Internet. However, the requirements and objectives of routers and similar devices differ from those of servers and the like, and they present additional challenges for parallel processing.
In one embodiment, the disclosure includes a method for parallel processing implemented by a first core in a network unit, comprising locking an ingress queue if the ingress queue is not locked by another core, searching for an unlocked task queue from a first default subset of a plurality of task queues when the ingress queue is locked by another core, wherein the first default subset is different from a second default subset of the plurality of task queues from which a second core begins a search for an unlocked task queue, and searching a remainder of the plurality of task queues for an unlocked task queue when all of the first default subset of task queues are locked and the ingress queue is locked.
In another embodiment, the disclosure includes a network unit for parallel processing, comprising a plurality of cores, a memory coupled to the plurality of cores, wherein the memory comprises a plurality of ingress queues, a plurality of task queues, and an order queue, wherein all the cores are allowed to access any of the ingress queues, the task queues, and the order queue, wherein the cores are configured such that when one or a subset of the cores locks the ingress queues, the other cores search for an unlocked task queue, wherein each core is associated with a subset of the task queues from which the search for the unlocked task queue begins, and wherein the subsets for at least two of the cores are different.
In another embodiment, the disclosure includes, in a network unit, a computer program product executable by a multi-core processor, the computer program product comprising computer executable instructions stored on a non-transitory computer readable medium that when executed by the multi-core processor cause the network unit to perform the following: lock with a first core an ingress queue if the ingress queue is not locked by another core, search with the first core for an unlocked task queue from a first default subset of a plurality of task queues when the ingress queue is locked by another core, wherein the first default subset is different from a second default subset of the plurality of task queues from which a second core begins a search for an unlocked task queue, and search with the first core a remainder of the plurality of task queues for an unlocked task queue when all of the first default subset of task queues are locked and the ingress queue is locked.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Modern CPUs may have between two and thirty-two cores, and the industry is continually pushing for more cores on one CPU. With the industry adoption of multi-core CPUs, parallel processing is common in the server/computation market and is becoming common in routers/switches for packet scheduling and processing. However, due to the potentially large amount of memory sharing and cache-line issues, existing packet schedulers either use a centralized model, are designed for a small number of cores (limiting performance), or rely on hardware. Many existing parallel processing approaches focus on either a centralized model or a distributed model. Generally, the distributed model may scale with the number of processing units (e.g., CPUs/cores). However, order should be preserved for tasks, e.g., for packet or transaction processing. Due to Amdahl's law, enforcement of the order of task execution for parallel processing may be quite challenging, especially in the distributed model. Most, if not all, existing solutions instead cause serialized execution, which may result in low performance.
Disclosed are methods, apparatuses, and systems to substantially evenly distribute tasks among processing units while preserving task order and providing high performance (e.g., 15 million packets per second (Mpps)). The disclosed systems provide rules that instruct the processors on how to select a function to perform in order to avoid contention for resources between the cores and to substantially maximize the use of all of the cores in a data processing system. All of the cores may lock and poll tasks from an ingress queue, but only one of the cores may lock the ingress queue(s) at a given time. When the ingress queue is locked by one core, the other cores may transition to locking and polling one of a plurality of task queues. Any of the cores may lock and poll any of the task queues. However, each core may be assigned a different subset of the task queues at which to begin a search for an unlocked task queue. This may reduce contention between the cores. Although contention may still occur, it may occur at a much lower and more controllable rate than under other methods. If all of the task queues in a core's designated subset of task queues are locked by other cores, the core may attempt to find an unlocked task queue from the remaining task queues, thereby ensuring that a core does not remain idle and providing substantial optimization of processing resources. If all of the task queues are locked, which may be very rare, if even possible, a core may repeat the same ingress queue/task queue check again.
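As a concrete illustration, the shared state implied by this scheme might be laid out as follows. This is a minimal sketch in C with POSIX threads; every name and size (ingress_q, task_q, NUM_TASKQ, SUBSET_SIZE, and so on) is an assumption chosen for illustration, not taken from the disclosure.

/* Minimal sketch of the shared state implied by the disclosure; all
 * names and sizes are illustrative assumptions.                   */
#include <pthread.h>
#include <stdbool.h>

#define NUM_CORES    8    /* cores sharing the scheduler           */
#define NUM_TASKQ   16    /* m task queues, shared by all cores    */
#define SUBSET_SIZE  4    /* size of each core's default subset    */

struct task {
    void *pkt;            /* encapsulated packet                   */
    bool  done;           /* processing finished, task may retire  */
};

struct queue {
    pthread_mutex_t lock; /* arbitrated with pthread_mutex_trylock */
    void           *items[1024];
    unsigned        head, tail;
};

/* Each mutex is initialized at startup with pthread_mutex_init(). */
struct queue ingress_q;            /* single shared ingress queue  */
struct queue task_q[NUM_TASKQ];    /* any core may lock any queue  */
struct queue order_q;              /* preserves global task order  */

/* Pseudo-affinity: each core begins its search at a different
 * default subset of the task queues.                              */
static inline unsigned default_start(unsigned core_id)
{
    return (core_id * SUBSET_SIZE) % NUM_TASKQ;
}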
The packet scheduler 124 may provide packets to the cores 122 for parsing, lookup, feature processing, and sending to an egress path. The packet scheduler 124 may provide pre-processed packets to the cores 122. In an embodiment, the packet scheduler may provide the packets to the cores 122 when the cores 122 are available. The packet scheduler 124 may be composed of one or all of the available cores 122 and may comprise a scheduler function that may be executed by one or more of the available cores 122. Each core executing the scheduler function may determine whether to poll the ingress queues 126, whether to pull a task from one of the task queues 127 and place the task into the order queues 128 to process the task, or whether to enforce ordering by removing the task that is at the head of the order queue 128. Each core 122 may specify a subset of the task queues 127 from which to begin a search for an unlocked task queue. If contention between the cores 122 becomes an issue or exceeds a threshold value, the members of each subset of the task queues 127 and/or the assignment of the subsets of the task queues 127 to the cores 122 may be dynamically changed in order to reduce contention between the cores 122. For example, the threshold value may be a maximum time that a core 122 may be idle, or a maximum number of attempts that a core 122 may fail to acquire a lock on the task queues 127 because the task queues 127 are locked by another core 122.
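One plausible way to realize this dynamic reassignment is to count failed trylock attempts per core and rotate that core's default subset once a threshold is crossed. The sketch below builds on the layout above; MAX_FAILED_TRIES, subset_offset, failed_locks, and the rotation policy are all assumed, not specified by the disclosure.

/* Hypothetical contention control: rotate a core's default subset
 * after too many failed trylock attempts.                         */
#define MAX_FAILED_TRIES 64

static unsigned subset_offset[NUM_CORES]; /* per-core start shift  */
static unsigned failed_locks[NUM_CORES];  /* failed trylock count  */

static void maybe_reassign_subset(unsigned core_id)
{
    if (failed_locks[core_id] > MAX_FAILED_TRIES) {
        /* Slide this core's default subset to a neighboring group
         * of task queues to reduce collisions with other cores.   */
        subset_offset[core_id] =
            (subset_offset[core_id] + SUBSET_SIZE) % NUM_TASKQ;
        failed_locks[core_id] = 0;
    }
}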
In contrast to many existing packet scheduling systems, a group of cores 206 may all poll tasks. To reduce the contention among cores, only one core 206 (e.g., core “a” from the subset of cores “i”) from the available cores 206 may lock the ingress queue 202 and actively poll tasks during a certain time. Other cores 206 (e.g., the subset of cores “j”) that cannot lock the ingress queue 202 because it is locked by core “a” 206 may transition to operating on one of the task queues 204. To reduce the lock cost (e.g., 300 CPU cycles), core “a” 206 may poll n tasks at a time. Thus, the amortized lock cost for each task may be about time/n. Core “a” 206 may push each task polled from the ingress queue 202 into an individual TaskQ[1-m] in the task queues 204.
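The amortization argument can be made concrete: if acquiring the ingress lock costs roughly L cycles (the text suggests about 300) and core “a” drains n tasks per acquisition, the per-task lock overhead is roughly L/n. A sketch under the same assumptions as above; dequeue, encapsulate, select_queue, and push_task are hypothetical helpers a real implementation would supply.

/* Hypothetical helpers.                                           */
void        *dequeue(struct queue *q);      /* pop one packet      */
struct task *encapsulate(void *pkt);        /* wrap for scheduling */
unsigned     select_queue(struct task *t);  /* by priority, flow,
                                             * or order type       */
void         push_task(struct queue *q, struct task *t);

#define BATCH_N 32   /* n tasks polled per lock acquisition        */

/* Returns true if this core won the ingress lock.  The ~300-cycle
 * lock cost is paid once per batch, so per task it is roughly
 * cost/BATCH_N.                                                   */
static bool poll_ingress_batch(void)
{
    if (pthread_mutex_trylock(&ingress_q.lock) != 0)
        return false;              /* another core holds the lock  */

    for (int i = 0; i < BATCH_N && ingress_q.head != ingress_q.tail; i++) {
        struct task *t = encapsulate(dequeue(&ingress_q));
        /* Push to a task queue chosen by priority, processing
         * flow, or order type, as the claims describe.            */
        push_task(&task_q[select_queue(t)], t);
    }
    pthread_mutex_unlock(&ingress_q.lock);
    return true;
}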
At substantially the same time that core “a” 206 is polling the tasks in the ingress queue 202, all other cores 206 (e.g., the subset of cores “i” to cores “j” excluding core “a”) may poll tasks from any of the TaskQ[1-m] in the task queues 204 for task handling. However, if multiple cores 206 try to poll from one TaskQ at substantially the same time, high contention between the cores 206 may be created. Therefore, in an embodiment, a pseudo-affinity between the cores 206 and the task queues 204 may be created. Thus, each core 206 may start from a different subset of the TaskQ (e.g., core “1” from TaskQ[1-4], core “2” from TaskQ[5-8], etc.) based on the task queue priority, and may lock one of the task queues 204 for moving the task to the order queue 208. One difference between the disclosed pseudo-affinity and true affinity is that in pseudo-affinity, each of the task queues 204 may have multiple consumers (e.g., core 1 and core 2). Furthermore, if the default subset of task queues 204 is locked, the core 206 may move to any other available task queue 204. Thus, core contention may still occur, but at a much lower and controllable rate.
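A sketch of the pseudo-affinity search follows: the core scans its own default subset first and then wraps around to the remaining queues, so any core can still consume from any queue, which is the defining difference from true affinity. The function reuses the hypothetical state defined in the earlier sketches.

/* Scan all task queues starting at this core's (possibly shifted)
 * default subset; the wraparound implements the fallback to any
 * other available task queue described above.                     */
static struct queue *find_unlocked_taskq(unsigned core_id)
{
    unsigned start =
        (default_start(core_id) + subset_offset[core_id]) % NUM_TASKQ;

    for (unsigned i = 0; i < NUM_TASKQ; i++) {
        struct queue *q = &task_q[(start + i) % NUM_TASKQ];
        if (pthread_mutex_trylock(&q->lock) == 0)
            return q;              /* caller must unlock           */
        failed_locks[core_id]++;   /* feeds the contention metric  */
    }
    return NULL;                   /* all m task queues were locked */
}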
Each core 206 may move a task retrieved from the task queues 204 into the order queue 208. The order queue 208 may be a lockable or lockless queue. Each task in the order queue 208 may be handled within the order queue 208 (e.g., packet lookup, modification, forwarding, etc.). Tasks may exit from the order queue 208 when the task is at the head of the order queue 208.
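The head-of-queue exit rule is what preserves global task order: a task may be worked on inside the order queue at any time, but it may exit only when it reaches the head. A minimal sketch under the same assumptions; peek_head, pop_head, and transmit are hypothetical helpers.

/* Hypothetical order-queue helpers.                               */
struct task *peek_head(struct queue *q);
void         pop_head(struct queue *q);
void         transmit(struct task *t);        /* e.g., egress path */

/* Only the task at the head of the order queue may exit, no matter
 * which core finished processing it; this preserves task order.   */
static void retire_order_head(void)
{
    if (pthread_mutex_trylock(&order_q.lock) != 0)
        return;                    /* another core is retiring     */
    struct task *t = peek_head(&order_q);
    if (t != NULL && t->done) {
        pop_head(&order_q);        /* exits strictly in order      */
        transmit(t);
    }
    pthread_mutex_unlock(&order_q.lock);
}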
If, at block 304, the ingress queues are locked by another core, the method 300 may proceed to block 312, where the core may attempt to lock a task queue in a specified subset of the task queues. At block 314, the core may determine whether all of the task queues in the subset of task queues are locked. If, at block 314, any of the task queues in the subset of task queues are not locked, the method 300 may proceed to block 316, where the core may acquire a lock on the first unlocked task queue it encounters in the subset of task queues. At block 318, the core may put the task from the task queue into the order queue and then release the lock on the task queue, after which the method 300 may end.
If, at block 314, all of the task queues in the subset of task queues are locked, the method 300 may proceed to block 320 where the core may look for the first unlocked task queue in the remainder of the task queues. At block 322, the core may determine whether all remaining task queues are locked. If, at block 322, the core determines that all remaining task queues are locked, the method 300 may proceed to block 302.
If, at block 322, not all of the remaining task queues are locked, then the method 300 may proceed to block 324 where the core may acquire a lock on the first unlocked task queue encountered by the core from the remaining task queues. The method 300 may proceed to block 318 where the core may put the task from the task queue into the order queue, after which, the method 300 may end. At block 326, a core may process the task that is at the head of the order queue.
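Putting blocks 302 through 326 together, one per-core iteration of method 300 might look like the following sketch, which builds on the hypothetical helpers above (pop_task and push_order are also assumed) and is not the patented implementation itself.

/* Hypothetical helpers for moving tasks between queues.           */
struct task *pop_task(struct queue *q);
void         push_order(struct queue *q, struct task *t);

/* One per-core scheduling iteration, mirroring blocks 302-326.    */
static void scheduler_iteration(unsigned core_id)
{
    /* Blocks 302-310: try to become the single ingress poller.    */
    if (!poll_ingress_batch()) {
        /* Blocks 312-324: ingress is taken, so consume from a task
         * queue, preferring this core's default subset and
         * wrapping to the remaining queues if that subset is
         * entirely locked.                                        */
        struct queue *q = find_unlocked_taskq(core_id);
        if (q != NULL) {
            struct task *t = pop_task(q);
            if (t != NULL)
                push_order(&order_q, t);   /* block 318            */
            pthread_mutex_unlock(&q->lock);
        } else {
            /* Every queue was locked (rare): optionally retune the
             * subsets, then retry from block 302 on the next call. */
            maybe_reassign_subset(core_id);
        }
    }
    /* Block 326: independently, retire the order-queue head.      */
    retire_order_head();
}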
Although described primarily with reference to switches and routers and other devices that route packets through a network, the methods, systems, and apparatuses of this disclosure are not limited to such devices, but may be implemented in any device with multiple processors and/or a processor with multiple cores, for example, a general-purpose computer system such as the one described below.
The secondary storage 404 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if RAM 408 is not large enough to hold all working data. Secondary storage 404 may be used to store programs that are loaded into RAM 408 when such programs are selected for execution. The ROM 406 is used to store instructions and perhaps data that are read during program execution. ROM 406 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage 404. The RAM 408 is used to store volatile data and perhaps to store instructions. Access to both ROM 406 and RAM 408 is typically faster than to secondary storage 404.
At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, R1, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=R1+k*(Ru−R1), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. The use of the term about means ±10% of the subsequent number, unless otherwise stated. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosure of all patents, patent applications, and publications cited in the disclosure are hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
Inventors: Dong Wang, Jun Xu, Guang Zhao