A switching node 3 for providing admission control in a network comprises a number of separate priority queues 6 for receiving packets having different priority levels. Priority levels 1 to n provide guaranteed delay services, priority levels n+1 to m provide guaranteed loss services, and priority level Pm+1 provides best effort service. Measuring means 51 to 5m continually measure the average bit rate entering each priority level buffer P1 to Pm, except the lowest, Pm+1. When a request arrives with a kth priority level, the network capacities for priority levels l=k . . . Pm are calculated. For each choice of l, the measured traffic loads of levels l and higher are taken into account. These capacity requirements are necessary to guarantee the quality of service for all flows already admitted to the lower priority queues. These capacities are compared to the network capacity, and if there is sufficient capacity, the request is accepted. Otherwise, the request is rejected.
7. A switching node for an integrated services Packet-Switched Network having a plurality of priority levels based on service guarantees associated therewith, the switching node comprising:
means for allocating each incoming flow to a respective one of the priority levels, based on service guarantees required by each said incoming flow; admission control means for determining whether, if an incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; means for calculating capacities that are required for the incoming flow and all lower priority flows having guaranteed services; and means for admitting the incoming flow if there is sufficient network capacity to handle the sum of said capacities.
1. An admission control method for a switching node in an integrated services Packet-Switched Network having a plurality of priority levels based on service guarantees associated therewith, the method comprising the steps of:
allocating each incoming flow to a respective selected one of the plurality of priority levels, based on service guarantees required by each said incoming flow; determining whether, if an incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; and calculating capacities that are required for the incoming flow and all lower priority flows having guaranteed services, and admitting the incoming flow if there is sufficient network capacity to handle the sum of said capacities.
14. An integrated services Packet-Switched Network having a plurality of priority levels based on service guarantees associated therewith, the network comprising a plurality of switching nodes, wherein at least one switching node comprises:
means for allocating each incoming flow to a respective one of the priority levels, based on service guarantees required by each said incoming flow; admission control means for determining whether, if an incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; means for calculating capacities required for the incoming flow and all lower priority flows having guaranteed services; and means for admitting the incoming flow if there is sufficient network capacity to handle the sum of said capacities.
2. A method as claimed in
3. A method as claimed in
4. A method as claimed in
5. A method as claimed in
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
εd saturation probability (should be smaller than the loss values l1, l2, l3, . . . , e.g. 10⁻⁸)
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Mi measured average rate of priority level i
dk delay guarantee of priority level k.
6. A method as claimed in
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
li saturation probability of guaranteed loss service i
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Bk buffering capacity at queue k
Mi measured average rate of priority level i.
8. A switching node as claimed in
9. A switching node as claimed in
10. A switching node as claimed in
11. A switching node as claimed in
queues 1 to n are provided for storing the guaranteed delay service data; queues n+1 to m are provided for storing the guaranteed loss service data; and a queue m+1 is provided for storing best effort service data.
12. A switching node according to
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
εd saturation probability (should be smaller than the loss values l1, l2, l3, . . . , e.g. 10⁻⁸)
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Mi measured average rate of priority level i
dk delay guarantee of priority level k.
13. A switching node according to
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
li saturation probability of guaranteed loss service i
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Bk buffering capacity at queue k
Mi measured average rate of priority level i.
15. An integrated services Packet-Switched Network as claimed in
16. An integrated services Packet-Switched Network as claimed in
17. An integrated services Packet-Switched Network as claimed in
18. An integrated services Packet-Switched Network as claimed in
queues 1 to n are provided for storing the guaranteed delay service data; queues n+1 to m are provided for storing the guaranteed loss service data; and a queue m+1 is provided for storing best effort service data.
19. An integrated services Packet-Switched Network according to
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
εd saturation probability (should be smaller than the loss values l1, l2, l3, . . . , e.g. 10⁻⁸)
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Mi measured average rate of priority level i
dk delay guarantee of priority level k.
20. An integrated services Packet-Switched Network according to
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
li saturation probability of guaranteed loss service i
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Bk buffering capacity at queue k
Mi measured average rate of priority level i.
The invention relates to a service architecture for a telecommunication network, in particular an Integrated Services Packet-Switched Network, which makes it possible to support different applications requiring different levels of quality-of-service (QoS).
The current trend of integrating communication networks requires the development of network architectures that are capable of supporting the diverse quality-of-service needs of a diverse range of applications. Applications differ in the traffic they generate and in the level of data loss and delay they can tolerate. For example, audio data does not require the packet-error reliability required of data services, but it cannot tolerate excessive delays. Other applications can be sensitive to both data loss and delay.
The network architectures under consideration are based on the packet (or cell) switching paradigm, for example Transmission Control Protocol/Internet Protocol (TCP/IP) or Asynchronous Transfer Mode (ATM). The basic idea behind integrated services packet-switched networks is that all traffic is carried over the same physical network but the packets belonging to flows with different QoS requirements receive different treatment in the network. A flow represents a stream of packets having a common traffic (e.g. peak rate) and QoS (e.g. loss) description, and also having the same source and destination.
This differentiation in treatment is generally implemented by a switch mechanism that first classifies packets arriving at a Switching Node (SN) according to their QoS commitment, and then schedules packets for transmission based on the result of the classification. Ideally, the classifier and scheduler in each switching node should be simple, fast, scalable and cheap.
In order to protect existing commitments, the network must be able to refuse any new request. This is accomplished using Admission Control (AC) during which some mechanism (usually distributed) decides whether the new request can be admitted or not.
The several proposed solutions to the problem can be categorised into two main groups, depending upon whether or not they use what is known as "per-flow" scheduling.
Solutions which use this form of per-flow scheduling have the disadvantage of requiring complex queue handlers and classifying/scheduling hardware. For each packet, the classifier 10 must determine the corresponding buffer Q1 to Qn that the packet should be put into. The large and variable number of queues makes the queue handler's function complex. When the next packet is to be sent, the scheduler 14 must select the appropriate buffer to send from. The scheduler 14 can also be a bottleneck due to the large number of queues it must service. The per-packet processing cost can be very high and increases with the number of flows. In addition, the algorithms are not scalable, which means that as the volume of traffic and the number of flows increase, the processing complexity grows beyond what can be handled.
Proposed solutions which do not use per-flow scheduling suffer from one of two limitations. First, some provide only very loose QoS guarantees, with QoS metric values that are not explicitly defined (e.g. differentiated services architectures). Second, others provide only deterministic guarantees, which means that no statistical multiplexing gain can be exploited; as a result, network utilisation is very low.
The aim of the present invention is to overcome the disadvantages of the prior art listed above by providing a service architecture having a simple and scalable way to guarantee different levels of quality-of-service in an integrated services packet-switched network. This is achieved using switching nodes that combine simple packet scheduling and measurement based admission control to provide explicit quality-of-service guarantees.
According to a first aspect of the present invention, there is provided an admission control method for a switching node of an integrated services packet-switched network, the method comprising the steps of:
allocating each incoming flow to a respective selected one of the plurality of priority levels, based on service guarantees required by said flow;
determining whether, if the incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; and, if so,
admitting the incoming flow.
According to another aspect of the invention, a switching node comprises:
means for allocating each incoming flow to a respective one of the priority levels, based on service guarantees required by said flow;
admission control means for determining whether, if the incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; and,
means for admitting the incoming flow if this is so.
According to a further aspect of the invention, an integrated services packet switched network comprises a plurality of switching nodes, wherein at least one switching node comprises:
means for allocating each incoming flow to a respective one of the priority levels, based on service guarantees required by said flow;
admission control means for determining whether, if the incoming flow is admitted, the service guarantees can be met for the incoming flow and all previously admitted flows; and,
means for admitting the incoming flow if this is so.
For a better understanding of the present invention reference will now be made, by way of example, to the accompanying drawings, in which:
The integrated services packet-switched network of the preferred embodiment offers three service categories. These are:
i. maximum delay guaranteed,
ii. maximum guaranteed packet loss, and
iii. best effort.
Within the delay and loss categories there are a fixed number of services. For services in the delay category the network provides different levels of maximum guaranteed delay (d1, d2, . . . ) and strict loss guarantee, while for services in the loss category different levels of maximum guaranteed loss (l1, l2, . . . ) are provided, but no explicit delay. For the best effort service no guarantees are given at all, but no admission control is performed either. This means that all bandwidth left unused by the other services can be exploited.
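The three service categories and their per-level bounds can be sketched as a small data model. This is an illustrative Python sketch; the class names and the example bound values (5 ms, 20 ms, 10⁻⁴, 10⁻²) are assumptions for illustration, not values from the patent:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Category(Enum):
    DELAY = 1        # maximum guaranteed delay, strict loss guarantee
    LOSS = 2         # maximum guaranteed loss, no explicit delay
    BEST_EFFORT = 3  # no guarantees, but no admission control either

@dataclass
class ServiceClass:
    category: Category
    bound: Optional[float]  # delay bound d_k in seconds, loss bound l_k
                            # as a probability, or None for best effort

# Hypothetical service set: ordered from most to least stringent,
# mirroring the priority ordering described in the text.
SERVICES = [
    ServiceClass(Category.DELAY, 0.005),   # d1 = 5 ms (example)
    ServiceClass(Category.DELAY, 0.020),   # d2 = 20 ms (example)
    ServiceClass(Category.LOSS, 1e-4),     # l1 (example)
    ServiceClass(Category.LOSS, 1e-2),     # l2 (example)
    ServiceClass(Category.BEST_EFFORT, None),
]
```

The list index doubles as the priority level: more stringent services appear earlier and therefore receive higher priority, with best effort last.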
The edge devices are in turn connected to switching nodes 3. The purpose of each edge device 2 is to police the traffic of the admitted connections, and ensure that the service fields in the packet headers are set to the service class of the connection.
The operation of the network relies on a signalling protocol which communicates a new request from the end system to the network and among network nodes. The signalling protocol also signals the acceptance or rejection of a request, and the termination of the request from the end system. The exact protocol to be used does not form part of this invention, so long as it meets the criteria set out above.
In operation, an end system 1 signals a request for a new flow to an edge device 2. The signalling message contains the service descriptor and the traffic descriptor. The traffic descriptor may contain a peak rate or a leaky bucket descriptor, or both. The edge device 2 passes the request along the path of the flow and each switching node 3 makes an admission control decision locally. Rejection or acceptance of a request is signalled back to the end system 1.
If the flow is accepted, the end system 1 starts to transmit data to the edge device 2. The edge device 2 is responsible for identifying the flow to which each packet belongs. It checks whether the packet conforms to the flow's traffic descriptor. If not, the edge device 2 drops the packet. If the packet conforms, the edge device assigns a priority level to the packet based on the QoS requirements of the flow. The priority level is stored in the packet header (for example, the TOS field in IP packets or the VPI/VCI field in ATM cells). The packet travels along the path to its destination. In each switching node 3 the packet is sent to the queue corresponding to the value in the packet header. Queues are serviced on a strict priority basis, so the scheduler is work-conserving.
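The strict-priority servicing just described can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the class and method names are assumptions:

```python
from collections import deque

class StrictPriorityScheduler:
    """Work-conserving strict-priority scheduler: always serves the
    highest-priority (lowest-index) non-empty queue."""

    def __init__(self, num_levels):
        self.queues = [deque() for _ in range(num_levels)]

    def enqueue(self, packet, level):
        # 'level' is read from the priority field in the packet header.
        self.queues[level].append(packet)

    def dequeue(self):
        # Scan from highest priority (index 0) downwards; the link is
        # never idle while any queue holds a packet (work conservation).
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty
```

Because selection is a single scan over a fixed number of levels rather than over per-flow state, the per-packet work does not grow with the number of flows.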
A priority classifier 4 assigns priorities based on the service request of the flow, which is communicated in the service descriptor in the signalling messages. Admitted packets are sent to a number of separate priority queues 6 depending upon the value contained in the packet header. The services within the "delay" service category always receive higher priority than services in the "loss" service category. Priority levels 1 to n provide guaranteed delay services, and priority levels n+1 to m provide guaranteed loss services. Within each type of category, the more stringent services receive higher priority than less stringent ones.
For example, d1 has a more demanding service guarantee, and hence a higher priority level, than d2, which is higher than d3, and so on. Likewise, within the loss category, l1 has a more demanding service guarantee, and hence a higher priority, than l2, which has a higher priority than l3, and so on. The best effort service is always assigned the lowest priority level Pm+1.
The switching node 3 has means 51 to 5m for continually measuring the average bit rate entering each priority level buffer P1 to Pm, except the lowest, Pm+1. These measurements are used to aid the admission control algorithm of the present invention.
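The patent specifies only that the average bit rate entering each queue is continually measured; one plausible realisation (an assumption, not the patent's method) is an exponentially weighted moving average per queue, with illustrative window and smoothing parameters:

```python
class RateMeter:
    """Continual average-rate measurement for one priority queue.
    EWMA is an illustrative choice; the patent does not fix the
    measurement mechanism."""

    def __init__(self, interval=0.1, weight=0.3):
        self.interval = interval   # measurement window length (seconds)
        self.weight = weight       # EWMA smoothing factor in (0, 1]
        self.bits_in_window = 0
        self.rate = 0.0            # current estimate of M_i, in bit/s

    def on_packet(self, size_bits):
        # Called for every packet entering this priority queue.
        self.bits_in_window += size_bits

    def on_window_end(self):
        # Called once per measurement window: fold the window sample
        # into the running average and reset the counter.
        sample = self.bits_in_window / self.interval
        self.rate = self.weight * sample + (1 - self.weight) * self.rate
        self.bits_in_window = 0
```

One such meter per guaranteed-service queue yields the measured rates Mi consumed by the admission conditions below.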
When a request arrives with the kth priority level, the network capacities for priority levels l=k . . . Pm are calculated, as shown in step S2. For each choice of l, the measured traffic loads of levels l and higher are taken into account. These capacity requirements are necessary to guarantee the quality of service for all flows already admitted to the lower priority queues. These capacities are compared to the network capacity, step S3, and if there is sufficient capacity, step S4, the request is accepted, step S5. Otherwise, the request is rejected, step S6.
Thus, for example, if there are 10 admitted flows on each priority P1 to Pm+1, and a new request arrives with the 3rd priority level P3, then the admission control means determines if all 31 flows, those in P1-P3 and the new flow, can be guaranteed d3 delay using the whole link capacity. If the request arrives at the 6th priority level and this is the queue for loss service with guarantee l2, then the admission control means determines if l2 loss can be guaranteed to all of the 61 flows, namely those in P1-P6 and the new flow. This guarantees the quality of service for the kth level. However, it must be assured that the service guarantees for the lower priority levels can also be met. Therefore, as stated above, when a request is received, the capacity required for the kth level and lower levels is calculated. The new request is admitted if there is sufficient capacity for the kth level and all lower priority levels.
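The decision steps S2 to S6 can be sketched as a loop over the affected priority levels. This is an illustrative Python sketch; `required_capacity` is a hypothetical stand-in for the patent's per-level capacity formulas, which are not reproduced in this extract:

```python
def admit(request_level, m, required_capacity, link_capacity):
    """Admission check for a new flow at priority level
    'request_level' (k). For every level l = k .. m (step S2), the
    capacity needed to serve the new flow together with all flows of
    level l and higher must fit the link (steps S3/S4); otherwise the
    request is rejected (step S6).

    required_capacity(l) -> float is a caller-supplied function
    embodying the per-level condition (e.g. the delay or loss
    inequalities given below)."""
    for level in range(request_level, m + 1):
        if required_capacity(level) > link_capacity:
            return False  # step S6: reject
    return True           # step S5: accept
```

Checking every level from k down to m is what guarantees that admitting the new flow cannot break the service guarantees already given to lower priority queues.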
In this manner, the disadvantages of prior art priority scheduling techniques are avoided, thereby preventing the higher priority traffic from severely blocking the lower priority traffic.
Preferably, admission control for the guaranteed delay services relies on the following condition being met:
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
εd saturation probability (should be smaller than the loss values l1, l2, l3, . . . , e.g. 10⁻⁸)
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Mi measured average rate of priority level i
dk delay guarantee of priority level k
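The inequality itself did not survive in this extract. As a loose, deterministic stand-in consistent with the listed symbols (and omitting the statistical term parameterised by εd), one can check that the aggregate burst of the new flow and all flows in A1 . . . k drains within dk using the capacity left over after the measured average load. This is an assumption for illustration, not the patent's actual condition:

```python
def delay_admission_ok(sigma0, flows, M_levels, C, d_k):
    """Plausible stand-in for the guaranteed-delay condition (the
    exact inequality is not reproduced in this extract).

    sigma0   -- bucket size of the new flow (bits)
    flows    -- [(rho_i, sigma_i)] for flows in A_{1..k}
    M_levels -- measured average rates M_i of levels 1..k (bit/s)
    C        -- output link rate (bit/s)
    d_k      -- delay guarantee of level k (seconds)

    Condition: the worst-case burst must drain within d_k at the
    rate left after serving the measured average load."""
    burst = sigma0 + sum(sigma for (_rho, sigma) in flows)
    spare = C - sum(M_levels)
    return spare > 0 and burst <= d_k * spare
```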
Preferably, admission control for the guaranteed loss services relies on the following two conditions being met:
where:
ρ0, σ0 the token rate and bucket size of the new flow
k assigned priority level of the new flow
li saturation probability of guaranteed loss service i
ρi, σi token rate and bucket size of flow i
A1 . . . k set of flows belonging to the first k (1 . . . k) priority levels
C output link rate
Bk buffering capacity at queue k
Mi measured average rate of priority level i
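Again, the two inequalities themselves are not reproduced in this extract. A loose stand-in consistent with the listed symbols checks a rate condition against the link rate C and a burst condition against the buffer Bk; this is an assumption for illustration (the per-service loss targets li would tighten both bounds in the actual conditions):

```python
def loss_admission_ok(sigma0, rho0, flows, M_levels, C, B_k):
    """Plausible stand-in for the two guaranteed-loss conditions
    (exact inequalities not reproduced in this extract).

    sigma0, rho0 -- bucket size and token rate of the new flow
    flows        -- [(rho_i, sigma_i)] for flows in A_{1..k}
    M_levels     -- measured average rates M_i of levels 1..k (bit/s)
    C            -- output link rate (bit/s)
    B_k          -- buffering capacity at queue k (bits)

    (1) the measured load plus the new flow's token rate fits the
        link; (2) the aggregate burst fits the buffer at queue k."""
    rate_ok = sum(M_levels) + rho0 <= C
    buffer_ok = sigma0 + sum(sigma for (_rho, sigma) in flows) <= B_k
    return rate_ok and buffer_ok
```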
After flows have been admitted according to the admission control criteria listed above, a priority scheduler 7 (shown in
The admission control method described above overcomes the problems of the prior art in that the admission algorithm is scalable, i.e. does not depend upon the number of different priority levels, and does not allow the higher priority traffic to severely block the lower priority traffic.
The invention has the advantage that the amount of work required per-packet in the scheduler is minimal. The architecture is able to admit more flows than schemes based on WFQ, and statistical multiplexing can be done among several traffic classes which increases the amount of traffic that can be admitted.
When admitting to the kth priority level, the invention treats all higher priority traffic as if it belonged to that class. In this way, admission control is carried out not in separate classes but in groups of classes, thereby increasing the statistical multiplexing gain. The scalable, aggregate-level measurements used to monitor the real usage of the network resources mean that network efficiency is improved.
Inventors: Turányi, Zoltán; Veres, András
Executed on | Assignor | Assignee | Conveyance
Jun 11 1999 | Telefonaktiebolaget LM Ericsson (publ) | (assignment on the face of the patent)
Sep 20 1999 | VERES, ANDRAS | TELEFONAKTIEBOLAGET LM ERICSSON | ASSIGNMENT OF ASSIGNORS INTEREST
Sep 20 1999 | TURANYI, ZOLTAN | TELEFONAKTIEBOLAGET LM ERICSSON | ASSIGNMENT OF ASSIGNORS INTEREST