A method of sorting packets for transmission over a communication network. The packets are sorted into groups in accordance with predetermined criteria, wherein the number of groups is equal to at least three times the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate. Each group is assigned a first departure time in accordance with the predetermined criteria. Each packet of each group is assigned a second departure time, wherein the number of second departure times is equal to the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate. Each packet is transmitted over the communication network in accordance with the second departure time.

Patent
   RE42121
Priority
Jan 12 2000
Filed
May 11 2006
Issued
Feb 08 2011
Expiry
Jan 12 2020
Entity
Large
Status
Expired
2. A method comprising:
a network node assigning a departure parameter to a first packet, wherein the departure parameter includes a first value that corresponds to a first set of queues and a second value that corresponds to a second set of queues;
the network node selecting one of the first set of queues and the second set of queues to store the first packet responsive to the departure parameter;
the network node storing the first packet into a queue within the selected one of the first set of queues and the second set of queues; and
responsive to storing the first packet in one of the first set of queues, moving the first packet from the first set of queues to the second set of queues prior to transmitting the packet onto a communication network.
1. A method of sorting packets for transmission over a communication network including the steps of:
(a) sorting the packets into groups in accordance with predetermined criteria, wherein the number of groups is equal to at least three times the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate;
(b) assigning to each group a first departure time in accordance with said predetermined criteria;
(c) assigning to each packet of each group a second departure time, wherein the number of said second departure times is equal to the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate; and
(d) transmitting each said packet over said communication network in accordance with said second departure time.
29. A network node comprising a first set of queues and a second set of queues, wherein the network node is coupled to receive a first packet and is configured to assign a departure parameter to the first packet, wherein the departure parameter includes a first value relative to a first pointer to the first set of queues and a second value relative to the second set of queues, wherein the network node is configured to store the first packet into a queue within one of the first set of queues or the second set of queues, wherein the network node is configured to select the first set or the second set responsive to the departure parameter, and wherein, if the first packet is stored in one of the first set of queues, the network node is configured to move the first packet from the first set of queues to the second set of queues prior to transmitting the packet onto a communication network.
18. A method comprising:
a network node assigning a departure parameter to a first packet, the departure parameter including a first value that is relative to a first pointer to a first set of queues and further including a second value that is relative to a second pointer to a second set of queues;
the network node comparing the first value to the first pointer;
the network node storing the first packet in one of the first set of queues responsive to a first result of the comparing; and
the network node storing the first packet in one of the second set of queues responsive to a second result of the comparing;
wherein packets stored in the first set of queues are subsequently moved to the second set of queues responsive to the first pointer, and wherein packets stored in the second set of queues are subsequently moved to a departure queue responsive to the second pointer.
3. The method as recited in claim 2 further comprising the network node storing the first packet in a virtual channel queue corresponding to a virtual channel of the first packet, and wherein the departure parameter is assigned responsive to another packet from the virtual channel being transmitted onto the communication network.
4. The method as recited in claim 2 wherein the first value is relative to a first pointer to the first set of queues, wherein the first pointer indicates a given queue of the first set of queues.
5. The method as recited in claim 4 wherein the second value is relative to a second pointer indicating a given queue in the second set of queues from which a packet is being read.
6. The method as recited in claim 5 wherein at least one packet from the given queue is moving to the second set of queues at a time that the first pointer is indicating the given queue.
7. The method as recited in claim 2 further comprising the network node calculating the departure parameter dependent on a traffic contract corresponding to the first packet.
8. The method as recited in claim 2 further comprising the network node calculating the departure parameter dependent on a quality of service associated with the first packet.
9. The method as recited in claim 8 wherein the quality of service is constant bit rate service.
10. The method as recited in claim 8 wherein the quality of service is variable bit rate service.
11. The method as recited in claim 8 wherein the quality of service is available bit rate service.
12. The method as recited in claim 8 wherein the quality of service is guaranteed frame rate service.
13. The method as recited in claim 8 wherein the quality of service is unspecified bit rate service.
14. The method as recited in claim 2 wherein the communication network comprises cable media.
15. The method as recited in claim 2 wherein the communication network comprises digital subscriber line (DSL).
16. The method as recited in claim 2 wherein the first packet is a cell.
17. The method as recited in claim 2 wherein the first packet is an asynchronous transfer mode communication.
19. The method as recited in claim 18 wherein the first result comprises the first value equaling the first pointer.
20. The method as recited in claim 19 wherein the first result further comprises the first value equaling the first pointer plus one.
21. The method as recited in claim 20 wherein the second result comprises any result other than the first result.
22. The method as recited in claim 18 wherein the assigning is performed responsive to receiving the first packet.
23. The method as recited in claim 18 further comprising the network node storing the first packet in a virtual channel queue corresponding to a virtual channel of the first packet, and wherein the assigning is performed responsive to another packet from the virtual channel departing from the departure queue.
24. The method as recited in claim 18 wherein the second pointer changes values at a more rapid rate than the first pointer changes values.
25. The method as recited in claim 18 further comprising the network node moving at least one packet from one of the second set of queues to the departure queue and changing the value of the second pointer.
26. The method as recited in claim 25 further comprising the network node moving at least one packet from one of the first set of queues to one of the second set of queues and changing the value of the first pointer.
27. The method as recited in claim 18 wherein the one of the first set of queues into which the first packet is stored is determined responsive to the first value.
28. The method as recited in claim 18 wherein the one of the second set of queues into which the first packet is stored is determined responsive to the second value.
30. The network node as recited in claim 29 further comprising a set of virtual channel queues, wherein the network node is configured to store the first packet in a virtual channel queue corresponding to a virtual channel of the first packet, and wherein the network node is configured to assign the departure parameter responsive to transmitting another packet from the virtual channel on the communication network.
31. The network node as recited in claim 29 wherein the first pointer indicates a given queue of the first set of queues, wherein the network node is configured to move at least one packet from the given queue to a corresponding queue in the second set of queues.
32. The network node as recited in claim 29 wherein the network node is further configured to calculate the departure parameter dependent on a traffic contract corresponding to the first packet.
33. The network node as recited in claim 29 wherein the network node is further configured to calculate the departure parameter dependent on a quality of service associated with the first packet.
34. The network node as recited in claim 33 wherein the quality of service is constant bit rate service.
35. The network node as recited in claim 33 wherein the quality of service is variable bit rate service.
36. The network node as recited in claim 33 wherein the quality of service is available bit rate service.
37. The network node as recited in claim 33 wherein the quality of service is guaranteed frame rate service.
38. The network node as recited in claim 33 wherein the quality of service is unspecified bit rate service.
39. The network node as recited in claim 29 wherein the communication network comprises cable media.
40. The network node as recited in claim 29 wherein the communication network comprises digital subscriber line (DSL).
41. The network node as recited in claim 29 wherein the first packet is a cell.
42. The network node as recited in claim 29 wherein the first packet is an asynchronous transfer mode communication.

The present invention relates to the field of packet based communication systems and particularly switching technology.

Telephony, desktop video conferencing, video on demand, and other popular networking applications impose an increasing demand for bandwidth and simultaneous support of different types of service on the same communication network. To meet these demands, various high performance communication technologies are being deployed, including transmission over cable television lines using cable modems and DSL telephony services. One prominent data packet based technology is Asynchronous Transfer Mode (ATM).

ATM is designed to deal with the problem that some applications require very low latency, while other applications cannot tolerate loss of information but can support reasonable delays. Users also want a predictable and consistent level of quality when using a service. Quality of service (QoS) therefore becomes a key factor in the deployment of the next generation of networks. QoS differentiates services from one another by category. A service is represented to be a certain quality if it can consistently meet the same level of quality for a given set of measurable parameters. In traditional telephone systems, for example, QoS is measured in terms of delay to obtain dial tone, delay to set up the connection, trunk availability, quality of sound (e.g., noise, echo), and reliability of the connection. On the other hand, the Internet was designed as a “best effort” network, and did not originally intend to make any QoS commitments.

The various service categories supported by ATM are constant bit rate (CBR) service, variable bit rate (VBR) service, available bit rate (ABR) service, guaranteed frame rate (GFR) service, and the residual category, unspecified bit rate (UBR) service.

In supporting these various service categories, network providers are therefore faced with a set of conflicting requirements. In response to market demand, network providers have to maximize network efficiency while meeting the specific QoS needs of the applications. The networks must also be capable of sharing bandwidth fairly among users and ensuring that any given user traffic cannot affect the QoS of other users. In addition, the networks have to support permanent connections as well as switched connections, which have very different holding time and utilization characteristics. Permanent connections are not set up and torn down frequently, but the link bandwidth may not be utilized at all times. In contrast, switched connections are set up and torn down frequently and the link bandwidth is generally highly utilized during the lifetime of the connection. Because of the diversity in the link speeds, both in the access to the network and in the trunks, large speed mismatches need to be handled efficiently.

The inherent conflict created by the need to optimize bandwidth while ensuring different QoS can be resolved by using a combination of traffic control or traffic management techniques. A multiservice ATM network provides support for a wide variety of services with differing QoS requirements to be carried on the same switching nodes and links. Multiple services share the network resources (e.g., link bandwidth, buffer space, etc.) and may try to access a resource simultaneously. Resource contention arises because of this sharing, and buffering is required to temporarily store data packets. (In discussions of ATM technology a data packet is customarily referred to as a cell. Accordingly, this terminology will be adopted for purposes of this specification.)

The point at which this resource contention occurs is generally referred to as a “queuing” or “contention point”. Depending on the architecture, a switching node can be implemented with one or more queuing structures. A scheduling mechanism is implemented at each queuing structure to appropriately select the order in which cells should be served to meet the QoS objectives. A queuing structure and the corresponding scheduling algorithm attempt to achieve sometimes conflicting goals: (a) the flexibility to support a variety of service categories, and to easily evolve in support of new services; (b) the scalability to be simple enough to allow scaling up to a large number of connections while allowing cost effective implementation; (c) the efficiency to maximize the network link utilization; (d) the guarantee of QoS to provide low jitter and end-to-end delay bounds for real time traffic; and (e) fairness to allow fast and fair redistribution of bandwidth that becomes dynamically available.

A variety of architectures are used to achieve the appropriate degree of traffic shaping. The architecture most pertinent to the present invention is known as direct exact sorting. FIG. 1 illustrates the direct exact sorting architecture used in the prior art. The direct exact sorting architecture employs a plurality of data structures known as queues. Each queue is a data structure in which a plurality of cells are stored in memory in a sequential order. (The order is not physically sequential but is ordered by the software or the circuitry of the switch.) Because of the sequential order, there is a first cell and a last cell in each queue. The queues release cells on a first-in first-out basis; consequently, the first cell in sequence is referred to as the Head of Line (“HoL”) cell.

Referring to FIG. 1, this architecture employs a virtual connection queues stage 100 at the front end. Following the virtual connection queues stage 100, there is a timing queues stage 102. Following timing queues stage 102, a departure queue stage 104 is used to store the cells. When a cell arrives, it will be appended to a virtual connection queue based on its connection identifier, which is determined by its virtual channel identifier and virtual path identifier. According to this cell's traffic contract, its departure time will be calculated by the dual leaky bucket algorithm commonly known in the prior art. According to its assigned departure time, the cell is then appended to one of the timing queues shown in FIG. 2. Once real time pointer 214 points to a timing queue, all cells in the queue are appended to the departure queue 216, which for example would include cells 212, 210 and 208, where cells 212 all came from one timing queue and cells 210 all came from a second timing queue.

In the direct exact sorting architecture of the prior art, all cells with the same departure time are appended to the same timing queue, so the departure time of a cell becomes the sequence number indicating the timing queue to which the cell will be appended. This timing queue technology reduces the implementation complexity of the exact sorting of the time stamps. In particular, it avoids comparison and insertion operations which are time consuming.

In order to physically implement the direct exact sorting architecture of the prior art, the number of the timing queues cannot be infinite. As a result, it is necessary to reuse the timing queue sequence numbers corresponding to the time stamps. As shown in FIG. 2, there are M timing queues including timing queues 200, 202, 203, 204 and 206. These timing queues must deal with connections that have a variety of cell rates. If rate_max is the maximum cell rate and rate_min is the minimum cell rate, the real time pointer must satisfy two conflicting objectives. First, the real time pointer must be moving at a rate of at least rate_max in order to service the connection with the maximum cell rate. (The rate of the real time pointer is measured in terms of timing queues per second.) Second, the departure time of the cells with the minimum cell rate must not be so much further in the future (that is, 1/rate_min) than the current state of the real time pointer that there is no appropriate timing queue to which the cell may be appended. These two objectives will be satisfied if (a) M is greater than or equal to rate_max/rate_min and (b) the rate of the real time pointer is rate_max.
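The prior art mechanism described above can be sketched in a few lines of illustrative Python. The class name and structure are ours, not the patent's, and the model assumes the real time pointer advances one timing queue per tick at rate_max:

```python
from collections import deque

class DirectExactSorter:
    """Illustrative model of the prior-art direct exact sorting stage:
    M timing queues reused modulo M, drained by a real time pointer."""

    def __init__(self, rate_max, rate_min):
        # M must cover the furthest-future departure: M >= rate_max/rate_min
        self.M = -(-rate_max // rate_min)   # ceiling division
        self.timing_queues = [deque() for _ in range(self.M)]
        self.rt_pointer = 0                 # advances at rate_max (queues/second)
        self.departure_queue = deque()

    def append_cell(self, cell, departure_time):
        # The departure time serves directly as the timing-queue index,
        # avoiding the compare-and-insert operations of exact sorting.
        self.timing_queues[departure_time % self.M].append(cell)

    def tick(self):
        # Each tick of the real time pointer drains one whole timing queue
        # into the departure queue, then moves on.
        q = self.timing_queues[self.rt_pointer % self.M]
        while q:
            self.departure_queue.append(q.popleft())
        self.rt_pointer += 1
```

The model makes the deficiency discussed below concrete: the memory for the M timing queues grows linearly with the ratio of cell rates.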

The direct exact sorting architecture suffers from a severe deficiency in that connections vary over a wide range of rates. Consequently, the value M may be very large. In addition, the value of M increases linearly as the range of rates increases, which means that the complexity of implementing this architecture also grows linearly. This architecture is therefore not suitable for accommodating connections with a very wide range of rates.

One aspect of the invention is a method of sorting packets for transmission over a communication network including the steps of sorting the packets into groups in accordance with predetermined criteria, assigning to each group a first departure time in accordance with the predetermined criteria, assigning to each packet of each group a second departure time in accordance with the predetermined criteria and transmitting each packet over the communication network in accordance with the second departure time.

Another aspect of the invention is a method of sorting packets for transmission over a communication network including the steps of sorting the packets into groups in accordance with predetermined criteria, wherein the number of groups is equal to two times the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate, assigning to each group a first departure time in accordance with the predetermined criteria, assigning to each packet of each group a second departure time, wherein the number of said second departure times is equal to the square root of a fraction whose numerator is the maximum transmission rate and whose denominator is the minimum transmission rate, and transmitting each packet over the communication network in accordance with said second departure time.

The present invention is made more readily understandable by reference to the accompanying drawings in which:

FIG. 1 is a block diagram of the various stages used by prior art switches.

FIG. 2 is an illustration of certain details of the operation of the timing queues stage and the departure queue stage used by prior art switches.

FIG. 3 is a block diagram of the various stages used in a preferred embodiment of the present invention.

FIG. 4 is an illustration of certain details of the operation of the coarse pitch timing queues stage, the fine pitch timing queues stage and the departure queue.

FIG. 5 is a flow chart illustrating an algorithm for assigning a departure time to a cell in a preferred embodiment of the present invention. This operation is triggered by an arriving cell to a virtual connection queue.

FIG. 6 is a flow chart illustrating an algorithm for assigning cells to coarse pitch queues and fine pitch queues.

FIG. 7 is a flow chart illustrating an algorithm for assigning a departure time to a cell in a preferred embodiment of the present invention. This operation is triggered by the cell's departure from the departure queue.

Referring now to FIG. 3, a block diagram is shown of an improved traffic shaper that uses a technology that is an improvement on direct exact sorting as it is implemented in the prior art. As shown in FIG. 3, a first preferred embodiment of the present invention consists of four stages instead of the three-stage architecture of the prior art. Virtual connection queues stage 300 sorts the cells into queues based on each cell's virtual channel identifier and virtual path identifier or based on any other classification. A cell travels from virtual connection queues stage 300 to either (a) coarse pitch timing queues stage 302 or (b) fine pitch timing queues stage 304 depending on how soon it is scheduled to depart from the traffic shaper. The cells in coarse pitch timing queues stage 302 have to travel to fine pitch timing queues stage 304. From fine pitch timing queues stage 304 the cell travels to departure queue stage 306. (For purposes of this specification the virtual connection queues stage 300 is referred to as the “front end of the traffic shaper”, and coarse pitch timing queues stage 302, fine pitch timing queues stage 304 and departure queue stage 306 are collectively referred to as the “back end of the traffic shaper”.)

FIG. 4 illustrates the operation of the various queues in detail. The HoL cell is assigned a departure time by either the algorithm set forth in the flow chart in FIG. 5 or the flow chart in FIG. 7. The algorithm illustrated in FIG. 5 commences with the arrival of a cell with a particular connection identifier in step (500). In step (502) the traffic shaper determines if there is a cell with the same connection identifier in the back end of the traffic shaper. If there is such a cell then the incoming cell is buffered in step (506) by being appended to a virtual connection queue. If there is not such a cell, then the cell is assigned a departure time in step (504).

The algorithm illustrated in FIG. 7 commences with the departure of a cell with a specific connection identifier from the back end of the traffic shaper in step (700). In step (702), the traffic shaper then uses a standard generic cell rate algorithm to calculate the departure time for the next cell with the same connection identifier. The traffic shaper in step (704) then tests if there is a cell with the same virtual connection identifier in the front end of the shaper. If the virtual connection queue in the front end of the traffic shaper for cells with that connection identifier is not empty, then in step (708) the departure time is assigned to the HoL cell in that virtual connection queue. Otherwise, in step (706) the departure time is stored for a future incoming cell with the same virtual connection identifier.
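The two triggering paths of FIGS. 5 and 7 can be summarized in an illustrative Python sketch. The class and attribute names are ours, and the generic cell rate algorithm is reduced here to a fixed per-connection increment; the patent's actual calculation uses the dual leaky bucket traffic contract, which is not reproduced:

```python
from collections import deque

class ShaperFrontEnd:
    """Sketch of the FIG. 5 / FIG. 7 triggering logic (names assumed)."""

    def __init__(self):
        self.vc_queues = {}       # connection id -> VC queue (front end)
        self.in_back_end = set()  # connections with a cell in the back end
        self.stored_dt = {}       # DT kept for a future arrival (step 706)
        self.assigned = []        # (cell, DT) pairs handed to the back end
        self.increment = {}       # stand-in GCRA increment (1/rate) per connection
        self.last_dt = {}

    def _gcra_next(self, vc):
        # Stand-in for the generic cell rate algorithm (step 702).
        dt = self.last_dt.get(vc, 0) + self.increment.get(vc, 1)
        self.last_dt[vc] = dt
        return dt

    def on_arrival(self, vc, cell):
        # FIG. 5: triggered by a cell arriving at a VC queue (step 500).
        if vc in self.in_back_end:                           # step (502)
            self.vc_queues.setdefault(vc, deque()).append(cell)  # step (506)
        else:
            dt = self.stored_dt.pop(vc) if vc in self.stored_dt \
                 else self._gcra_next(vc)
            self.assigned.append((cell, dt))                 # step (504)
            self.in_back_end.add(vc)

    def on_departure(self, vc):
        # FIG. 7: triggered by a cell leaving the back end (step 700).
        self.in_back_end.discard(vc)
        dt = self._gcra_next(vc)                             # step (702)
        q = self.vc_queues.get(vc)
        if q:                                                # step (704)
            self.assigned.append((q.popleft(), dt))          # step (708)
            self.in_back_end.add(vc)
        else:
            self.stored_dt[vc] = dt                          # step (706)
```

The design point to note is that at most one cell per connection occupies the back end at a time, so the GCRA need only be evaluated once per arrival or departure.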

The departure time consists of two values: DT_ct and DT_ft. DT_ct refers to the departure time determined on a coarse scale and DT_ft refers to the departure time determined on a fine scale.

The HoL cell is appended to one of the fine pitch timing queues 410 through 415 based on the value of that cell's DT_ft if DT_ct is equal to RT_ct or if DT_ct is equal to RT_ct plus 1. Otherwise, the HoL cell is appended to one of the coarse pitch timing queues 400 through 408 based on the value of that HoL cell's DT_ct.

A coarse pitch real time pointer continuously runs. When the coarse pitch real time pointer points to a specific coarse pitch timing queue, the HoL cell in that queue is appended to one of the fine pitch timing queues based on the cell's DT_ft. The number of fine pitch timing queues is two times the number of coarse pitch timing queues, as shown in the algorithm illustrated in FIG. 6. At step (600) the algorithm determines if DT_ct is equal to RT_ct or if DT_ct is equal to RT_ct plus 1. If these conditions are not satisfied, then the cell is appended to CPTQ[DT_ct] at step (602). If these conditions, however, are satisfied, then at step (604) the specific fine pitch timing queue of fine pitch queues 606 and 608 to which the HoL cell is appended is determined by (a) the value of DT_ft of the cell and (b) whether DT_ct is odd or even.
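The routing decision of FIG. 6 can be sketched as follows. The exact layout of the 2K fine pitch queues (one half selected by the parity of DT_ct, the position within the half by DT_ft) is an assumption made for illustration; the patent specifies only that the queue is determined by DT_ft together with a parity test:

```python
from collections import deque

K = 4                                   # number of coarse pitch timing queues (example value)
CPTQ = [deque() for _ in range(K)]      # coarse pitch timing queues
FPTQ = [deque() for _ in range(2 * K)]  # fine pitch timing queues, two halves of K

def enqueue(cell, dt_ct, dt_ft, rt_ct):
    """FIG. 6 sketch: route a HoL cell to a coarse or fine pitch timing queue."""
    if dt_ct == rt_ct or dt_ct == rt_ct + 1:   # step (600): departing soon
        half = dt_ct % 2                       # step (604): assumed half selection
        FPTQ[half * K + dt_ft].append(cell)
    else:                                      # step (602): park in a coarse queue
        CPTQ[dt_ct % K].append(cell)
```

Under this layout, cells due in the current or next coarse slot bypass the coarse queues entirely and land directly in the fine pitch stage, which is what allows the coarse stage to run with only K queues.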

In a first preferred embodiment of the invention, the fine pitch real time pointer would be counting K times faster than the coarse pitch real time pointer. By maintaining that speed, the fine pitch real time pointer would never be waiting for cells to be transferred from the coarse pitch timing queues. In addition, while one half of the fine pitch timing queues is being served, the other half is appended with cells from the coarse pitch timing queues. These newly appended cells do not fall behind the fine pitch real time pointer.

The effective number of queues for sorting purposes is the number of coarse pitch timing queues, K, times one-half the number of fine pitch timing queues, which is also K. Accordingly, the effective number of timing queues is K*K. In order for this first preferred embodiment to function correctly:
K*K ≧ rate_max/rate_min, or
K² ≧ rate_max/rate_min, or
K ≧ √(rate_max/rate_min).
The total number of timing queues must therefore be at least 3K, or
3·√(rate_max/rate_min).
This result compares favorably with the prior art, where the number of timing queues is rate_max/rate_min.

Although the present invention has been described in terms of various embodiments, it is not intended that the invention be limited to these embodiments. Modification within the spirit of the invention will be apparent to those skilled in the art. For example, in another embodiment, the VC Queues Stage 300 can be eliminated at the expense of an increased number of timing queues. In this embodiment, an incoming cell is assigned a DT (DT_ct, DT_ft) based on the GCRA algorithm and appended to the appropriate CPTQ or FPTQ. The DT calculation is then triggered only by the arrival of a cell.

Uzun, Necdet

Patent Priority Assignee Title
5231633, Jul 11 1990 Motorola, Inc Method for prioritizing, selectively discarding, and multiplexing differing traffic type fast packets
5499238, Nov 06 1993 Electronics and Telecommunications Research Institute; Korea Telecommunication Authority Asynchronous transfer mode (ATM) multiplexing process device and method of the broadband integrated service digital network subscriber access apparatus
6104700, Aug 29 1997 ARISTA NETWORKS, INC Policy based quality of service
6167030, Mar 20 1997 Nokia Telecommunications, Oy Buffer-based traffic measurement system and method for nominal bit rate (NBR) service
6421342, Jul 02 1998 PARITY NETWORKS LLC Packet forwarding apparatus and method using pipelined node address processing
6507592, Jul 08 1999 Cisco Cable Products and Solutions A/S (AV); COCOM A S Apparatus and a method for two-way data communication
6563837, Feb 10 1998 Extreme Networks, Inc Method and apparatus for providing work-conserving properties in a non-blocking switch with limited speedup independent of switch size
6678277, Nov 09 1999 Hewlett Packard Enterprise Development LP Efficient means to provide back pressure without head of line blocking in a virtual output queued forwarding system
6687246, Aug 31 1999 Intel Corporation Scalable switching fabric
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Jan 11 2000 | UZUN, NECDET | New Jersey Institute of Technology | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 024718/0075
May 11 2006 | | New Jersey Institute of Technology | (assignment on the face of the patent) |
Date Maintenance Fee Events
Sep 23 2011 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Dec 18 2015 | REM: Maintenance Fee Reminder Mailed.
May 11 2016 | EXP: Patent Expired for Failure to Pay Maintenance Fees.


Date Maintenance Schedule
Feb 08 2014 | 4 years fee payment window open
Aug 08 2014 | 6 months grace period start (w surcharge)
Feb 08 2015 | patent expiry (for year 4)
Feb 08 2017 | 2 years to revive unintentionally abandoned end (for year 4)
Feb 08 2018 | 8 years fee payment window open
Aug 08 2018 | 6 months grace period start (w surcharge)
Feb 08 2019 | patent expiry (for year 8)
Feb 08 2021 | 2 years to revive unintentionally abandoned end (for year 8)
Feb 08 2022 | 12 years fee payment window open
Aug 08 2022 | 6 months grace period start (w surcharge)
Feb 08 2023 | patent expiry (for year 12)
Feb 08 2025 | 2 years to revive unintentionally abandoned end (for year 12)