A method for optimally serving stations (STAs) on a Wireless Local Area Network (LAN) using a controlled contention/resource reservation protocol of the IEEE 802.11e standard. The Wireless LAN includes multiple STAs, mobile or stationary, air-linked to an access point as a Basic Service Set (BSS). A Hybrid Coordinator (HC) is co-located with the access point for allocating bandwidth in the BSS using the controlled contention/resource reservation protocol defined in the IEEE 802.11e standard. The HC transmits contention control (CC) frames, each initiating a controlled contention interval (CCI) having a selected number of slotted intervals. The HC receives resource reservations (RRs) detailing bandwidth needs from STA contenders during the CCI. Several parameters are installed in each CC frame for contention control purposes. These parameters are controlled to optimize efficient use of the wireless medium and reduce access delays for RR frames contending for the wireless medium.
1. A method for optimally serving stations on Wireless local area networks using a controlled contention/resource reservation protocol of the IEEE 802.11e standard comprising:
(a) setting a counter empty_cci to 0;
(b) estimating the number of contenders according to prior results for the contention control/resource reservation protocol and observed traffic patterns;
(c) conducting a test to determine if the number of contenders is less than 1;
(d) determining optimum controlled contention opportunities (CC_OPs) and approximating a number of slots required to report results to a station as 1 or 2 slots;
(e) performing a test CC_OPs&lt;1, wherein a “yes” condition sets CC_OPs to 1 and a “no” condition transfers to step (f); and
(f) conducting a test: Empty_CCI&lt;Max_Empty_CCI, wherein empty_cci is a number of empty controlled contention intervals (CCIs), wherein Max_Empty_CCI is a selected number of empty CCIs, and wherein a “yes” condition transfers to step (b).
5. A method for serving stations on Wireless LANs using a controlled contention/resource reservation protocol of the IEEE 802.11e standard comprising:
(a) transmitting a contention control (CC) frame that specifies a time period for at least one controlled contention interval (CCI) during which station contenders can transmit resource reservations (RRs) detailing their bandwidth needs, the at least one controlled contention interval (CCI) having a selected number of controlled contention opportunities (CC_OPs) or slotted intervals; and
(b) receiving resource reservations (RRs) from one or more of the station contenders during respective ones of the controlled contention opportunities (CC_OPs) or slotted intervals;
(c) wherein the number of the controlled contention opportunities (CC_OPs) or slotted intervals within the at least one controlled contention interval (CCI) is equal to one of a) an estimate of the number of station contenders, b) an estimate of the number of station contenders+1, or c) an estimate of the number of station contenders+2.
9. Apparatus for serving stations on Wireless LANs using a controlled contention/resource reservation protocol of the IEEE 802.11e standard comprising:
(a) transmitting apparatus which transmits contention control (CC) frames each initiating at least one specified time interval called a controlled contention interval (CCI);
(b) receiving apparatus which receives resource reservations (RRs) detailing bandwidth needs from station contenders during the controlled contention interval (CCI); and
(c) installing apparatus which installs in each CC frame several parameters for contention control purposes, one of said parameters specifying a number of controlled contention opportunities (CC_OPs) or slotted intervals that are included within the controlled contention interval (CCI), the resource reservations (RRs) being transmitted within respective ones of the controlled contention opportunities (CC_OPs) or slotted intervals;
(d) wherein the number of controlled contention opportunities (CC_OPs) or slotted intervals of a controlled contention interval (CCI) is equal to one of a) an estimate of the number of station contenders, b) an estimate of the number of station contenders+1, or c) an estimate of the number of station contenders+2.
4. The method of
(g) calculating a Permission Probability wherein, if Cntndrs&gt;Max_Cntndrs, the Permission Probability=Max_Cntndrs/Cntndrs and Cntndrs is set to Max_Cntndrs, else the Permission Probability=1 and Cntndrs is left as is; and
(h) resetting the empty_cci counter to 0 on a “no” condition of the test of step (c) and incrementing the empty_cci counter on a “yes” condition.
6. The method of claim 5 wherein the contention control (CC) frame further specifies a Permission Probability (PP).
7. The method of claim 5 wherein the contention control (CC) frame includes a set of flags indicating Traffic Categories (TC) that may compete for the controlled contention opportunities (CC_OPs).
8. The method of claim 5 wherein said at least one controlled contention interval (CCI) comprises two or more concatenated controlled contention intervals (CCIs) each having the same number of controlled contention opportunities (CC_OPs) or slotted intervals.
10. The apparatus of claim 9 wherein at least one of the contention control (CC) frames initiates two or more concatenated controlled contention intervals (CCIs) each having the same number of controlled contention opportunities (CC_OPs) or slotted intervals.
This application claims the benefit of the filing date of Provisional Application Serial No. 60/335,504, filed Oct. 31, 2001, entitled “Methods For Allocating Controlled Opportunities In A Mediaplex Controlled Interval”, assigned to the same assignee as that of the present invention and fully incorporated herein by reference.
1. Field of the Invention
This invention relates to wireless communication methods and systems. More particularly, the invention relates to a method and system for optimally serving stations on Wireless LANs using the Controlled Contention/Resource Reservation protocol of the IEEE 802.11e standard.
2. Description of the Prior Art
IEEE 802.11 is a standards body developing Wireless Local Area Network (WLAN) standards [802.11, 802.11a, 802.11b]. Recently, that body has started development of a supplement that would specify the support of Quality of Service (QoS) within 802.11 WLANs. This work is being carried out by the 802.11e Task Group, and the most current draft of the QoS extensions being developed (as of the writing of this application) can be found in [802.11e]. A set of protocols has been proposed for use in 802.11e based on centralized control of the wireless medium. In this protocol set, during specified periods of time called Contention Free Periods (CFPs) and Contention Free Bursts (CFBs), stations (STAs) may use the wireless medium only when granted permission by the Hybrid Coordinator (HC). The HC is responsible for allocating bandwidth on the wireless medium and ensuring that QoS needs are met. The HC generally grants the use of the medium to a STA by polling it. This transfers control of the medium to that STA for a limited period of time. Control of the medium must then revert to the HC.
A problem which is addressed within the 802.11e draft is how to make the HC aware of the changing bandwidth needs of the STAs it serves. A protocol included in 802.11e for doing this was first proposed in [00/33] to the 802.11 community. The protocol is termed the Contention Control/Resource Reservation (CC/RR) protocol. In this protocol, the HC grants the medium for use by Resource Reservation (RR) frames by transmitting a Contention Control (CC) frame. Only RR frames may be transmitted during the time period specified by the CC frame. This time period is called the Controlled Contention Interval (CCI). The RR frames detail the bandwidth needs of the STAs transmitting them. Several parameters for the CCI are specified by the CC frame. These include a Permission Probability (PP), the number of Controlled Contention Opportunities (CC_OPs), and a set of flags indicating which Traffic Categories (TCs or priorities) may compete for the medium with RR frames during an upcoming CCI. The protocol states that when a STA receives a CC message and wishes to send an RR for an appropriate TC, it will choose a random number between 0 and 1. If the random number is less than or equal to the PP value, the STA is permitted to transmit the RR. It then randomly selects a CC_OP in which to transmit. (Note: the current draft of 802.11e has eliminated the PP value, so STAs transmit an RR during a CCI whenever a permitted TC has one to transmit.) Since other STAs may be transmitting RR frames, there is a possibility that multiple RRs will be transmitted in a CC_OP and none will be received correctly (though it is possible, due to RF capture effects, that one will be correctly received despite the contention). Such a CC_OP would be considered collided or “busy.” If only one RR is transmitted in a CC_OP, it most likely will be received correctly (although, due to interference or noise on the wireless medium or propagation issues, it is possible it will be lost anyway). And finally, some CC_OPs will not contain any RR, and in some sense those “empty” CC_OPs waste bandwidth on the medium.
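By way of illustration only, the following sketch captures the STA-side decision just described (draw a random number against the PP, then pick a CC_OP at random). The function and parameter names are hypothetical and not taken from the 802.11e draft; permitted_tcs and pending_tcs are assumed to be sets of traffic category identifiers, and the earlier draft behavior in which PP is present is assumed.

```python
import random

def sta_handle_cc(pp, num_cc_ops, permitted_tcs, pending_tcs):
    """Return the CC_OP index chosen for an RR frame, or None if the STA
    stays silent for this CCI. All names here are illustrative."""
    # Only traffic categories flagged in the CC frame may contend.
    if not (pending_tcs & permitted_tcs) or num_cc_ops < 1:
        return None
    # Draw a uniform random number in [0, 1); transmit only if it does not
    # exceed PP. (The later 802.11e draft drops PP, equivalent to PP = 1.)
    if random.random() > pp:
        return None
    # Select one CC_OP (slot) uniformly at random for the RR transmission.
    return random.randrange(num_cc_ops)
```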
While [00/33] and 802.11e detail the overall protocol, required frame formats, and how the transmitted CC parameters are used by the STAs, there is no detail on how the key parameters are determined and set by the HC. What is needed in the art are methods which advantageously set the parameters for the CC/RR protocol so as to optimize performance for efficient use of the medium.
Wireless LANs operating under the IEEE 802.11e protocol include an Access Point serving a plurality of Mobile STAs (MS). The protocols provide centralized control of the wireless medium during specified periods of time called Contention Free Periods (CFPs) and Contention Free Bursts (CFBs). A Hybrid Coordinator (HC), typically co-located with the Access Point, allocates bandwidth among the MS contenders. The HC regularly transmits Contention Control (CC) frames, which initiate Controlled Contention Intervals (CCIs) having a selected number of slotted intervals. The HC receives Resource Reservations (RRs) detailing bandwidth needs from the MS contenders during a specified time interval of the CC called the Controlled Contention Interval (CCI). Each CCI has several parameters including a number of slots or Controlled Contention Opportunities (CC_OPs), an optional Permission Probability (PP) and a set of flags indicating which Traffic Categories (TCs) may compete for the CC_OPs. When an MS contender receives a CC, it transmits an RR specifying its bandwidth needs in a randomly chosen CC_OP slot if it succeeds in drawing a random number less than the PP. Since other MS contenders may be transmitting RR frames, there is the possibility that RRs collide and none is received in a CC_OP, which wastes bandwidth; some CC_OPs or slots may contain no RR at all, which also wastes bandwidth.

An algorithm sets the CCI parameters to optimize efficient use of the medium and reduce access delays for RR frames contending for the wireless medium. Efficient use is defined in terms of network service time or bandwidth utilization. The algorithm assumes: first, each CCI contains at least one slot or CC_OP; second, there is no limit, or at least a large limit, on the number of CC_OPs in a CCI; third, perfect knowledge or an estimate of the number of contenders is available. The algorithm stores the values (a) Max_Empty_CCI, defined as a selected number of empty CCIs that ends the cycle for serving contenders, and (b) Max_Cntndrs, defined as the maximum number of contenders the HC desires to serve in a single CCI.

In Step 1, a counter Empty_CCI is set to 0. Step 2 estimates the number of contenders based on prior CCI results and traffic models. Step 3 conducts a test to determine if the number of contenders is less than 1: a “no” condition resets the Empty_CCI counter to 0; a “yes” condition increments the Empty_CCI counter. Step 3 transfers to Step 4, which starts a test: Cntndrs &gt; the stored Max_Cntndrs. A “yes” condition calculates a Permission Probability Max_Cntndrs/Cntndrs in Step 5 and sets Cntndrs=Max_Cntndrs. A “no” condition indicates a Permission Probability of 1. Both Steps 4 and 5 transfer to Step 6, which determines the optimum CC_OPs and approximates the overhead as 1 or 2 slots. Step 6 transfers to Step 7, which performs a test: CC_OPs&lt;1. A “yes” condition sets CC_OPs to 1 and transfers to Step 8; a “no” condition also transfers to Step 8, which conducts a test: Empty_CCI&lt;Max_Empty_CCI. If “no,” the process ends; if “yes,” the CCI is conducted and the process iterates, returning to Step 2.
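For concreteness, a minimal sketch of this Step 1-8 cycle follows, assuming the optimum number of CC_OPs is approximated as the contender estimate plus a one-slot (or two-slot) overhead allowance, as described later in this disclosure. The callback functions estimate_contenders() and conduct_cci(), and all names used here, are illustrative and not part of the 802.11e draft.

```python
def run_cc_parameter_cycle(estimate_contenders, conduct_cci,
                           max_empty_cci, max_cntndrs, overhead_slots=1):
    """Sketch of the HC parameter-setting cycle (Steps 1-8) summarized above.
    estimate_contenders() and conduct_cci() are illustrative callbacks."""
    empty_cci = 0                                  # Step 1: reset the empty-CCI counter
    while True:
        cntndrs = estimate_contenders()            # Step 2: prior CCI results + traffic model
        if cntndrs < 1:                            # Step 3: any contenders expected?
            empty_cci += 1
        else:
            empty_cci = 0
        if cntndrs > max_cntndrs:                  # Steps 4-5: cap the expected load with PP
            pp = max_cntndrs / cntndrs
            cntndrs = max_cntndrs
        else:
            pp = 1.0
        cc_ops = cntndrs + overhead_slots          # Step 6: optimum CC_OPs, overhead ~1-2 slots
        if cc_ops < 1:                             # Step 7: every CCI needs at least one slot
            cc_ops = 1
        if empty_cci >= max_empty_cci:             # Step 8: too many empty CCIs ends the cycle
            break
        conduct_cci(cc_ops=cc_ops, pp=pp)          # conduct the CCI, then iterate from Step 2
```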
One aspect of the invention sets the number of slots or CC_OPs equal to the number of contending STAs where the efficiency is calculated on a slot basis or on an overall CCI basis, not taking into account overhead or the number of slots required to report results to the STA. The efficiency is lowered when taking into account overhead where the overhead is assumed to be one or two slots.
Another aspect uses multiple concatenated CCIs, which maximizes efficiency where every CCI uses the optimum number of slots for the number of contenders in existence.
Another aspect estimates the number of MS contenders based on prior CCI results or contender arrival rates, where the number of contenders may be estimated as two times the number of busy slots in the last CCI, increased by the predicted contender arrivals since the last CCI.
Another aspect calculates a permission probability to limit the expected number of contenders in a CCI, given the number of contenders and the maximum allowed number of slots, and limits the number of contenders based on the calculated probability.
The invention will be further understood from the following detailed description of a preferred embodiment taken in conjunction with an appended drawing.
Before describing the present invention, a brief review of the IEEE 802.11 Wireless LAN Standard is believed appropriate for a better understanding of the invention.
The IEEE 802.11 standard defines over-the-air protocols necessary to support networking in a local area. The standard provides a specification for wireless connectivity of fixed, portable and moving STAs within the local area. The logical architecture of the 802.11 standard comprises a Medium Access Control (MAC) layer interfacing with a Logical Link Controller (LLC) and providing access control functions for shared medium physical layers. The primary service of the 802.11 standard is to deliver Medium Access Control Service Data Units (MSDU) between the LLC in a network interface card at a STA and an access point. Physical layers are defined to operate in the 2.4 GHz ISM frequency band with frequency hopping or Direct Sequence (DS) modulation. Other physical layers are also defined. The MAC layer provides access control functions for shared medium physical layers in support of the logical link control layer.
The medium used by WLANs is often very noisy and unreliable. The MAC implements a frame exchange protocol to allow the source of a frame to determine when the frame has been successfully delivered at the destination. The minimal MAC frame exchange consists of two frames: a frame sent from the source to the destination and an acknowledgement from the destination that the frame was received correctly. Multicast frames are not acknowledged. The MAC recognizes five timing intervals. Two intervals are determined by the physical layer and include a short interframe space (SIFS) and a slot time. Three additional intervals are built from the two basic intervals: a PCF interframe space (PIFS), a distributed interframe space (DIFS) and an extended interframe space (EIFS). The PIFS is equal to the SIFS plus one slot time; the DIFS is equal to the SIFS plus two slot times; the EIFS is much larger than any of the other intervals. If present, a point coordination function (PCF) uses a poll and response protocol to remedy contention for the medium. The point coordinator (PC) located in the access point regularly polls STAs for traffic while delivering traffic to the STAs. The PCF makes use of the PIFS to seize and maintain control of the medium. The PC begins a period of operation called the contention free period (CFP). The CFP alternates with a contention period (CP) where normal distributed control functions operate.
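As a small worked example, the derived interframe spaces can be computed from the two basic PHY intervals. The SIFS and slot-time values below are assumptions for illustration, chosen to be consistent with the 10-microsecond SIFS and 30-microsecond PIFS quoted later in this description for the DS PHY.

```python
# Derived interframe spaces built from the two basic PHY intervals.
SIFS_US = 10   # short interframe space for the DS PHY (assumed example value)
SLOT_US = 20   # slot time for the DS PHY (assumed example value)

PIFS_US = SIFS_US + 1 * SLOT_US   # PCF interframe space = SIFS + one slot time
DIFS_US = SIFS_US + 2 * SLOT_US   # DCF interframe space = SIFS + two slot times
# EIFS is much larger than any of these and is derived separately.
print(PIFS_US, DIFS_US)           # 30, 50 microseconds
```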
A combination of both physical and virtual carrier sense mechanisms enables the MAC to determine whether the medium is busy or idle. If the medium is not in use for an interval of DIFS, the MAC may begin transmission of a frame if back-off requirements have been satisfied. If a back-off requirement exists, the time when the medium is idle after DIFS is used to satisfy it. If either the physical or virtual carrier sense mechanism indicates the medium is in use during the DIFS interval, the MAC remains idle (defers) and waits for the medium to clear. Periodically, a beacon frame is transmitted by the PC after gaining access to the medium using PIFS timing.
After the PC has control of the medium, traffic is delivered to STAs in its network and STAs deliver traffic, if polled, during the contention-free period. The PC also sends a contention-free poll (CF-POLL) frame to those STAs that have requested contention-free service. A requesting STA may transmit one frame for each CF-POLL received. The STA responds with a null data frame if there is no traffic to send. A frame sent from the STA to the PC may include an acknowledgement of a data frame just received from the PC. The PC may use a minimal spacing of SIFS between frame sequences when the CFP is in progress. When a PC sends a data frame to a STA, a responding frame includes an acknowledgement using a SIFS interval between the data and Acknowledgement frame. When a PC sends a poll frame, minimally a null data frame must be sent in response to the PC, again using the SIFS timing. Acknowledgments and polls may be “piggybacked” on data frames, permitting a wide variety of allowed frame sequences. The PC may transmit its next frame if a response was not initiated before a PIFS interval expires, or may back-off if it so desires.
Further details on the 802.11 standard are described in the text “Wireless LANs—Implementing Inter-Operable Networks” by J. Geier, published by Macmillan Technical Publishing, 1999 (International Standard Book No. 1-57870-081-7), and “The IEEE 802.11 Handbook—A Designer's Companion” by R. O'Hara and A. Petrick, published by the IEEE, New York, N.Y., 1999 (ISBN 0-7381-1855-9).
Now turning to the description of the invention,
The WLAN 100 includes MS 101, 102 and 103 which serve as a Basic Service Set (BSS) and are air-linked 104 to an access point 105 via an Unlicensed National Information Infrastructure (U-NII) band, as in one embodiment, or in other frequency bands consistent with the requirements of 802.11 (e). The access point 105 or a wireless local bridge interfaces the BSS with a wired path 107 linked to a wired network 111 (e.g., PSTN) which in turn may be linked to other networks, e.g., the Internet. The access point 105 may be linked to other access points 105a . . . 105n in an Extended Service Set (ESS) via wired paths 113 and 115 (or via a wireless path). The text Wireless LANs Implementing Inter-operable Networks, supra at pages 44 and 53, provides further details on access points functioning as wireless local or remote bridges.
A Hybrid Coordinator (HC) 117 co-located with and connected to the access point 105 is responsible for allocating bandwidth on the wireless medium 104. The HC serves as a Point Coordinator (PC) that implements the frame exchange sequences and MSDU handling rules defined by the hybrid coordination function. The HC operates during the contention period and contention-free period and performs bandwidth management including the allocation of transmission opportunities (TXOP) to STAs and the initiation of controlled contention intervals. In performing the hybrid coordination function, the distributed coordination function (DCF) and the point coordination function (PCF) provide selective handling of MSDUs required for a QoS facility.
The STAs 101, 102 and 103 operate as a fully connected wireless network via the access point, which provides the distribution services necessary to allow mobile STAs to roam freely within the extended service set (ESS). The APs communicate among themselves to forward traffic from one BSS to another and to facilitate the movement of mobile STAs between BSSs. Each AP includes a distribution service layer that determines whether communications received from the BSS are relayed back to a destination in the BSS, forwarded to a BSS associated with another AP, or sent to the wired network infrastructure to a destination not in the ESS. Further details on the distribution system are described in the text IEEE 802.11 Handbook—A Designer's Companion, supra, at pages 12-15.
The HC includes a QoS facility, which provides a set of enhanced functions, formats, frame exchange sequences and management objects to support the selective handling of eight traffic categories or streams per bilateral wireless link. A traffic category is any of the identifiers usable by higher layer entities to distinguish MSDUs to MAC entities that support quality of service within the MAC data service. The handling of MSDUs belonging to different traffic categories may vary based on the relative priority indicated for that MSDU, as well as the values of other parameters that may be provided by an external management entity in a traffic specification for the particular traffic category, link and direction.
Now turning to
The superframe is initiated from the hybrid coordinator with beacon messages sent by the AP at regular intervals to the BSS. Beacon messages contain the domain ID, the WLAN network ID of the access point, communications quality information and cell search threshold values. The domain ID identifies the access points and mobile STAs that belong to the same WLAN roaming network. A mobile STA listening for beacons will only interpret beacon messages with the same domain ID. Additional details relating to the beacon messages are described in the text Wireless LANs—Implementing Inter-Operable Networks, supra at pages 210-212.
The CFP 202 includes contention control (CC) frames 208, during which period enhanced STAs may request transmission opportunities from the HC without the highly variable delays of DCF-based contention in a busy BSS, supporting LAN applications with quality requirements. Each instance of controlled contention occurs solely among a set of STAs that need to send reservation requests meeting criteria defined by the HC. Controlled contention takes place during a controlled contention interval (CCI), the starting time and duration of which are determined by the HC. Correct reception of RR frames received during a CCI is acknowledged in the next transmitted CC frame. Each controlled contention interval (CCI) 210 begins a PIFS interval after the end of a CC control frame. Only the HC is permitted to transmit CC control frames. CC frames may be transmitted during both the CP and the CFP, subject to the restriction that the entirety of the CC frame and the CCI which follows it shall fit within a single CP or CFP. When initiating controlled contention, the HC generates and transmits a control frame of subtype CC that provides the length of the CCI in terms of the number of controlled contention opportunities or slots and specifies the duration of the slot and the CCI. The duration of a slot is the number of microseconds required to send a reservation request frame at the same data rate, coding and preamble options as used to send the CC frame, plus one SIFS.
Returning to
When initiating controlled contention, the HC shall generate and transmit a control frame of subtype CC that includes a priority mask, the duration of each CC_OP and the number of CC_OPs within the CCI. The priority mask allows the HC to specify a subset of the priority values for which requests are permitted within the particular CCI to reduce the likelihood of collisions under high load conditions.
Upon receipt of a control frame of subtype CC, the STA(s) performs the CCI response procedure as follows (an illustrative sketch follows the list):
a) If the CCI length value in the received CC frame is 0, or if the STA has no pending request, the STA makes no further transmission until after the CCI, as determined by an elapsed time following the end of the CC frame equal to the number of microseconds indicated in the duration/ID field of the CC frame.
b) If the priority of the traffic belonging to the traffic category (TC) for which the request is pending corresponds to a bit position which is set to 0 in the priority mask field of the CC frame, no request is transmitted for that TC during the current CCI. Each STA may transmit no more than one request during each CCI. However, a STA with multiple TCs in need of new or modified transmission opportunities is permitted to select the TC for which a request is sent based on the values in the priority mask field of the CC frame. At the end of this step, each contending STA proceeds to step (c) below having selected exactly one request to be transmitted during the current CCI; all other STAs make no further transmission until after the CCI is completed.
c) The STA transmits a control frame of subtype RR with values in the quality of service control field that identify the traffic category and either the transmission duration or the transmission category queue size. The start of the RR transmission follows the end of the CC frame by a number of microseconds determined by the selected CC_OP. The RR shall be transmitted at the same data rate as the CC frame that initiated the CCI. After transmitting the RR frame, or determining the RR cannot be transmitted because a network allocation vector is set, the STA makes no further transmission until after the CCI is completed.
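The following is a compressed sketch of steps (a) through (c), assuming a simple bitmask representation of the priority mask and a highest-priority-first choice among eligible TCs. The names, the tie-breaking rule, and the (tc, amount) return value are illustrative only, not part of the standard.

```python
def cci_response(cci_length, priority_mask, pending, nav_set):
    """pending maps traffic category -> queued amount (duration or queue size).
    Returns the single (tc, amount) request to carry in an RR, or None if the
    STA stays silent for this CCI. Illustrative only."""
    # (a) Zero-length CCI, or nothing pending: no transmission until the CCI ends.
    if cci_length == 0 or not pending:
        return None
    # (b) Only TCs whose bit is set in the priority mask may contend; at most
    # one request per STA per CCI, so pick one eligible TC (highest here).
    eligible = [tc for tc in pending if priority_mask & (1 << tc)]
    if not eligible:
        return None
    tc = max(eligible)
    # (c) Transmit the RR in the chosen CC_OP unless the NAV forbids it.
    if nav_set:
        return None
    return tc, pending[tc]
```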
Now turning to addressing proper setting of the CC parameters, one must first consider what information is available upon which to base decisions. One piece of information is an estimate of the number of contending STAs (contenders) trying to deliver RR frames. It is also assumed that each STA would attempt to place no more than one RR per CCI. It should be obvious to one skilled in the art that if a constant probability for successfully placing a RR during a single CCI is desired, then the larger the number of contenders, the greater the number of CC_OPs (time slots or just slots) required. The cost of placing a failed RR is delay and a degree of wasted bandwidth (collisions or collided slots). However, the cost of over provisioning the number of slots is wasted bandwidth as well (empty slots). Note that throughout this application the terms RR and contenders, as well as the terms CC_OPs and slots, will be used interchangeably.
Clearly there is a tradeoff between delay and wasted bandwidth, as well as how the bandwidth is wasted. Depending on the system's configuration, the delay cost of a failed RR can be high or low. If the HC is configured to send back-to-back CC frames (initiating back-to-back CCI) until it believes all desired RR frames have been received, then the delay impact of a failed RR may be quite low. If on the other hand there is a substantial gap between CCIs (to allow for priority traffic and/or to simplify implementation), then there may be a large delay penalty for failure.
To proceed with an analysis, some system assumptions are required. First it is assumed that no maximum number of CC_OPs per CCI exists. In many systems there may be such a maximum. Ideally permission probability (PP) would be used in such a case to limit the number of contenders so as to guarantee a reasonable probability of success within a CCI. However, the invention described can operate without PP, and illustrative examples of this invention do not require permission probability. This application will mostly assume that no limit on the number of CC_OPs in a CCI exists (which will often be the case in practice).
Another assumption is that of perfect knowledge of the number of contenders. In practice this will never be the case. However, the invention described can operate with imperfect knowledge of this input as well. An estimate of the number of contenders can be made using one of several possible algorithms.
If no knowledge of prior CCI results or contender arrival rates exists, initially assume there are no contenders. Each CCI must contain at least one CC_Op (even if there are no contenders). If data from the last CCI is available, that data is used to estimate the number of contenders. For example, if the initial estimate of the number of contenders was zero, and there was one CC_Op in the first CCI, and that CC_Op was detected as busy, there most likely was a collision. This means there are probably at least two contenders. Thus this method assumes that there are two unreceived RR frames (contenders) for each CC_OP (slot) that is detected as busy.
In addition, based on observations of traffic patterns, or existing traffic specifications, it may be possible to estimate the number of new contenders since the last CCI. For example, if it was known that the current traffic loading was resulting in approximately five new web page accesses every second (each of which would require sending an RR), and it had been 200 milliseconds since the last CCI, then the system could assume that one more RR (contender) was probably waiting for service. The contender estimate from the prior CCI would then be updated to account for the additional contender estimated.
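A short sketch of this estimate follows, combining the two-RRs-per-busy-slot rule with the predicted arrivals; the function name and the rounding choice are illustrative assumptions.

```python
def estimate_contenders(busy_slots_last_cci, arrival_rate_per_s, secs_since_last_cci):
    """Estimate pending RRs: assume two unreceived RRs per slot detected as
    busy (collided) in the last CCI, plus predicted new arrivals since then."""
    carried_over = 2 * busy_slots_last_cci
    new_arrivals = round(arrival_rate_per_s * secs_since_last_cci)
    return carried_over + new_arrivals

# Example from the text: ~5 new web-page accesses per second and 200 ms since
# the last CCI suggest about one additional contender is waiting for service.
print(estimate_contenders(busy_slots_last_cci=0, arrival_rate_per_s=5.0,
                          secs_since_last_cci=0.2))   # -> 1
```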
Note that the CCI rates for different service categories (classes) can be isolated from one another. That is, by properly setting the category field in the CC, a single category or set of categories can be serviced to the exclusion of others. For example, voice with silence suppression would require more frequent CCIs than web browsing traffic. CCIs to service the voice traffic could be sent every 10 milliseconds, while the web browsing CCIs could be sent every 200 milliseconds, or even be aperiodic. This capability could be useful since it can be shown that larger numbers of RRs (contenders) can often be serviced more efficiently than smaller numbers. Since the web browsing traffic is less time critical, it is possible to have longer intervals between CCIs for that traffic, allowing greater efficiency than would be possible if it were serviced with the voice traffic.
Given the assumptions identified so far, the question arises as to the most efficient way to service a set of contenders. However, one must first define what one means by efficient. Before that can be done, one must also define what it means to service a contender. For this application, servicing a contender is defined as correctly receiving its RR and responding with a notification of receipt in a following CC frame, or by providing implicit notification by polling the contender. Given this, efficiency can be defined as minimizing the time required to receive the RR and to notify the contender of its receipt. More efficient methods service a set of contenders in less time than less efficient methods. This time will be a random variable, so the mean time or possibly the distribution of times around that mean must be considered. For instance, rather than the mean, one could measure the time sufficient to service a contender 95% of the time in a given set of conditions.
An alternative definition to service time efficiency is bandwidth efficiency. This is the number of RRs serviced on the link divided by their occupancy of the medium. The most efficient protocol in this case would be the one that consumes the least time on the medium per RR. The definition of medium occupancy here includes all time reserved for use by RRs exclusively, including empty and collided CC_OPs. If one discounts notification of receipt, bandwidth efficiency would roughly be equivalent to the mean of service time efficiency. However, its emphasis is different. Rather than focusing on the time required to respond to a request, it focuses on making sure that bandwidth on the medium is not wasted. Bandwidth efficiency is the focus of this invention. However, it is believed that the average service time is also minimized in some sense by the invention.
Given the assumptions and definition of efficiency above, one may now start to design and analyze methods of servicing contenders. By way of prior art, one straightforward method which can be applied is to attempt to service all contenders in a single CCI. It could be desired, for instance, to construct a CCI where 95% of the time all contenders are serviced within the CCI. One could then measure its bandwidth efficiency as the number of contenders serviced divided by the number of CC_OPs or slots required in the CCI.
Constructing such a CCI for a given number of contenders requires a basic understanding of probability theory and the working of some equations. Appendix A contains some key equations and calculations in this regard. The resulting efficiencies for a varying number of contenders are given in
The key problem with the approach of using a single CCI to serve all the contenders is that there is no second chance (it is assumed there is a long time between CCIs). So we need to be very sure that we got all (or almost all) the contenders on the first shot. If we allow for multiple CCIs to be used in serving the contenders, we can then try to optimize the individual CCI for bandwidth efficiency. By concatenating enough CCIs it is possible to serve all the contenders with a reasonable probability of service. Note that if we optimize the CCIs for bandwidth efficiency, by definition they should be serving on average the most contenders possible. So it should take the least amount of time to service all the contenders. Thus (at least in a mean sense) service time is also optimized by this approach.
Optimization of CCI efficiency can be considered in several different ways. One way is on a per-slot (CC_OP) basis. Another way is overall within a CCI. Finally, a third is overall across multiple CCIs. All three will be considered here. Starting with the per-slot case, it should be clear that if each contender picks a slot at random, the probability of it picking a specific slot is 1/(the number of slots). Each contender contends within a slot independently. It is well known that a binomial distribution results for such a situation. It can be shown that the per-slot probability of success (exactly one contender in the slot) is maximized when the number of contenders equals the number of slots. A proof of this is provided in Appendix B.
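A quick numerical check of the per-slot claim, assuming each contender independently picks one of the slots uniformly at random; the function name is illustrative.

```python
def per_slot_success_prob(contenders, slots):
    """P(exactly one of the contenders lands in a given slot), with each
    contender choosing uniformly among the slots: binomial with p = 1/slots."""
    p = 1.0 / slots
    return contenders * p * (1.0 - p) ** (contenders - 1)

# For 4 slots the probability peaks when contenders == slots (the value for
# contenders == slots - 1 ties with it), then falls off as contenders grow.
for n in range(1, 9):
    print(n, round(per_slot_success_prob(n, slots=4), 4))
```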
To further validate this statement, consider
Based solely on the per-slot data, one would expect that ideal performance is achieved when, during each CCI, the number of slots is set equal to the number of contenders. However, this presumes that the results in each slot are independent of each other when in fact they are not. Consider the following example. If it is assumed that there are four slots and three contenders, the probability of success in a single slot could be computed as 3 × (1/4) × (3/4)^2 = 27/64 ≈ 0.42.
If the result for each slot were independent, the probability of success in all four slots could be computed as (27/64)^4 ≈ 0.03.
Of course, since there are only three contenders, there cannot possibly be four successes, and the answer must be zero. Clearly the results in different CC_OPs of a CCI are not independent. Therefore it cannot be assumed that the per-slot solution optimizes the efficiency of the overall CCI. For the three-contender, four-slot case described above, per-CCI efficiency (cci_eff) is actually a weighted average over the possibility of one, two and three successes in the CCI, and could be written for this case as cci_eff = [1·Ps(1) + 2·Ps(2) + 3·Ps(3)]/4,
where Ps(x) is the probability of exactly x successes in the CCI. Since the distributions in each slot are interdependent, there is no reason that the single slot efficiency should be the same as the overall efficiency for the CCI. As a counter example for this, consider that if the slots were independent, there would be a nonzero probability for Ps(4), which would contribute to the sum. We would then expect the average efficiency for the CCI to be the same as the single slot efficiency given that the slots are now independent. However, we already know that Ps(4) must be zero (slots are not independent). While it is possible that for the interdependent case the terms for Ps(1) through Ps(3) might be such that the average CCI efficiency is equal to the single slot efficiency, there is no reason to expect it.
Surprisingly, the overall efficiency for a CCI with interdependent distributions in each CC_OP seems to be identical to the efficiency of a single independent CC_OP. However showing this requires quite a bit of work. Consider first Appendix D as reflected in the table of
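This equality can be checked numerically by brute force for the small case above. The following sketch enumerates every way three contenders can choose among four slots; the function name is illustrative and the approach only scales to small cases.

```python
from itertools import product

def cci_efficiency(contenders, slots):
    """Expected fraction of slots carrying exactly one RR, computed by
    enumerating all ways the contenders can pick slots (small cases only)."""
    outcomes = list(product(range(slots), repeat=contenders))
    successes = sum(sum(1 for s in range(slots) if choice.count(s) == 1)
                    for choice in outcomes)
    return successes / (len(outcomes) * slots)

# 3 contenders, 4 slots: the per-CCI efficiency comes out to 27/64, the same
# value as the single-slot probability, despite the slots being interdependent.
print(cci_efficiency(3, 4), 27 / 64)
```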
In Appendix F, the values from Appendix E are used to compute the per CCI efficiency for cases of up to 16 slots, and 16 contenders. This efficiency is evaluated with and without the overhead of CC frames.
Another important evaluation in Appendix F is the efficiency including overhead, as shown in
While an exact estimate of the CC frame overhead is difficult, a good approximation is not. A CC frame's size varies with the number of successful RRs being reported from the prior CCI and is always at least slightly larger than a RR frame. However, to a first order of approximation, CC and RR frames are about the same size. Thus, the overhead of a CC frame is roughly the size of a “slot” being used in the analysis. This assumption implies that the overhead may be estimated as between one and two slots per a CCI. It is believed that the overhead value is generally closer to one slot rather than two slots.
This being said, consider the data in
Clearly the optimum operating point is bounded between the data on
One could ask, “Is it possible for the optimum value to drift more than one to two values away from slots=contenders?” The equations in Appendix G were used to look out to values up to 50 contenders. These numbers are much larger than anything one would normally expect to see in an 802.11 infrastructure. Even for these large numbers, the optimum point was slots=contenders+1 for one slot of overhead, and slots=contenders+2 for two slots of overhead.
Another question which could be asked is “does the efficiency with overhead ever peak for a particular number of contenders?” Clearly without overhead the efficiency peaks at 50% for two contenders, and then seems to decrease forever. One might expect that with overhead the efficiency would initially increase, and then eventually start to decrease as with the no overhead case, causing a peak where the number of contenders is optimum. The answer is that there is no peak. At the end of Appendix B, there is a small proof that shows without overhead, the efficiency limit as the number of contenders goes to infinity is exp (−1) or roughly 0.3679. So while the no overhead solution decreases forever, it never drops below 36.79%. For the overhead solution, as the number of contenders goes to infinity, the overhead becomes less and less significant (since it is constant even as the number of contenders increase). So the overhead solution also approaches the 36.79% solution. However, for small numbers of contenders the overhead hurts the efficiency a lot and drags it below the 36.79% point. So while the no overhead solution approaches 36.79% from above, the overhead solutions approach it from below. However they do not ever reach it, so they never peak (at least for practical values of the parameters).
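The limit claimed here is easy to reproduce numerically. A minimal sketch, assuming slots are set equal to contenders and no CC overhead; the function name is illustrative.

```python
import math

def no_overhead_efficiency(n):
    """Efficiency with slots == contenders == n and no CC overhead:
    P(exactly one contender in a slot) = (1 - 1/n)**(n - 1)."""
    return (1.0 - 1.0 / n) ** (n - 1)

for n in (2, 5, 10, 50, 1000):
    print(n, round(no_overhead_efficiency(n), 4))   # 0.5, 0.4096, 0.3874, ...
print("limit:", round(math.exp(-1), 4))             # ~0.3679, approached from above
```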
Yet another question concerns the robustness of the optimum solution. It turns out that if one uses the simple algorithm for estimating contenders given earlier, one is more likely to underestimate contenders than to overestimate. This is because if more than two contenders collide, it looks the same to the system as if two contenders have collided. Since it is normally more likely that a collision is two-way than three-way or more, it makes sense to assume only two contenders are present. Fortunately, if we set the number of slots to the “optimum” operating point, it is fairly robust to underestimates. As seen clearest from the data in
So far we have discussed estimating bandwidth efficiency at two levels: within a single slot, and within a single CCI. The next level is efficiency over multiple CCIs. If we presume a given number of pending RRs, we can ask what is the most efficient mechanism to convey them over multiple CCIs. The problem, of course, is that due to collisions not all RRs may be successful during a given CCI. This means that even if no further RRs arrive, it may take multiple CCIs to convey all the desired RR frames. The question is what is the optimum strategy for conveying the RRs over multiple CCIs, and what is the resulting efficiency.
It seems obvious that if every CCI uses the optimum number of slots for the number of contenders believed to be in existence, then the overall efficiency is maximized. At least, that is the assumption for the invention in this disclosure. Appendix H analyzes the issue of efficiency over multiple CCIs, with and without overhead.
So finally, it is possible to describe an algorithm that is designed to realize the optimum efficiency for the CC/RR protocol which takes into account a method 1000 of estimating the number of contenders, as described in
In
A crucial element of the invention is the step 1119 titled “Determine Optimum CC_OPs”. The term optimum is used very loosely here, as there are degrees of optimality, all of which would be considered within the spirit of the invention. As a first approximation, this block might simply set CC_OPs=Cntndrs. This is actually a very good approximation of the optimum and is considered within the spirit of the invention. However, accounting for the overhead can lead to slightly more optimal and more robust solutions. As noted, the overhead depends on the specifics of the PHY and CC/RR implementation. In general it is believed to be within one to two CC_OPs, but for most implementations it is closer to one.
As an example, consider the following calculations for the basic 802.11 Direct Sequence (DS) PHY running at 1 Mbps. MAC frame sizes are 144 bits for an RR, and 144 bits per CC plus 16 bits per feedback in each CC. At 1 Mbps this translates to 144 microseconds for the MAC portion of the RR frame, and 144 microseconds plus 16 microseconds for each feedback in a CC frame. The PHY overhead on all frames at this rate is 192 microseconds. The current protocol requires a Short Inter-Frame Space (SIFS) of 10 microseconds before the RR in each CC_OP, and a PCF Inter-Frame Space (PIFS) of 30 microseconds before each CC frame. This means that an RR requires 346 microseconds, each CC without feedback requires 366 microseconds, and each feedback is 16 microseconds. Thus, without feedback, each CC represents 1.06 CC_OPs of overhead. For back-to-back CCIs it may be possible to use a SIFS before all CCs but the first; a CC without feedback that used a SIFS would be exactly one CC_OP of overhead.
As for feedback, only successful RRs would require feedback, and some of those might get implicit feedback through direct polling. If every CC_OP contained a successful RR which required feedback, the total additional overhead would be 4.6%. Since at best only 50% of the RRs are expected to be successful, this value would be capped around 2.3% and in general would be less. If back-to-back CCIs are used, typically there will only be one CC frame of overhead. Thus a total overhead of 1.09 CC_OPs is expected. The final CCI may require an additional CC frame, and that entire CC frame should be counted as overhead. However, it may be permissible to delay feedback until the next service cycle, in which case no additional CC frame penalty is incurred. Also as noted, feedback may be done implicitly by simply polling the STA that sent the RR.
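The arithmetic behind these figures can be reproduced directly. A minimal sketch using the frame sizes and interframe spaces quoted above; the constant names are illustrative.

```python
# Overhead arithmetic for the basic 802.11 DS PHY at 1 Mbps, as quoted above.
MAC_RR_BITS, MAC_CC_BITS, FEEDBACK_BITS = 144, 144, 16
PHY_OVERHEAD_US, SIFS_US, PIFS_US, RATE_MBPS = 192, 10, 30, 1

rr_slot_us = MAC_RR_BITS / RATE_MBPS + PHY_OVERHEAD_US + SIFS_US    # 346 us per CC_OP
cc_no_fb_us = MAC_CC_BITS / RATE_MBPS + PHY_OVERHEAD_US + PIFS_US   # 366 us per CC frame
feedback_us = FEEDBACK_BITS / RATE_MBPS                             # 16 us per reported RR

print(round(cc_no_fb_us / rr_slot_us, 2))   # ~1.06 CC_OPs of overhead per CC, no feedback
print(round(feedback_us / rr_slot_us, 3))   # ~0.046: worst-case extra per successful RR
```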
If desired, the overhead could simply be approximated as one CC_Op, in which case the block titled “Determine Optimum CC_OPs” would set CC_OPs=Cntndrs+1. Note that particularly if the method used to estimate the contenders tends to underestimate, it is useful to use Cntndrs+1. This would be within the spirit of the invention. However if an exact estimate was known for the overhead in a particular implementation (say 1.09 CC_OPs for the example provided above), it is possible to generate a table such as in
Finally, something not dealt with till now is the fact that the number of contenders is actually a random variable, and the described embodiments to this point have treated this random variable as a given constant. The block “Determine Optimum CC_OPs” could use more sophisticated statistical methods to refine the value of CC_OPs chosen even further than is described here. However, any value found will be close to the value of CC_OPs=Cntndrs, and such a method would be considered to be within the spirit of this invention.
Appendices incorporated into the specification include:
While the invention has been shown and described in terms of a preferred embodiment, various changes can be made therein without departing from the spirit and scope of the invention, as defined in the appended claims.