Buffer Sizing for 802.11 Based Networks
Tianji Li, Douglas Leith, David Malone
Hamilton Institute, National University of Ireland Maynooth, Ireland
Email: {tianji.li, doug.leith, david.malone}@nuim.ie
Abstract—We consider the sizing of network buffers in 802.11 based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel dynamic buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab testbed.

Index Terms—IEEE 802.11, IEEE 802.11e, Wireless LANs (WLANs), Medium access control (MAC), Transmission control protocol (TCP), Buffer sizing, Stability analysis.

This work is supported by the Irish Research Council for Science, Engineering and Technology and by Science Foundation Ireland Grant 07/IN.1/I901.

I. INTRODUCTION

In communication networks, buffers are used to accommodate short-term packet bursts so as to mitigate packet drops and to maintain high link efficiency. Packets are queued if too many packets arrive in a sufficiently short interval of time during which a network device lacks the capacity to process all of them immediately.

For wired routers, the sizing of buffers is an active research topic ([31] [5] [27] [32] [9]). The classical rule of thumb for sizing wired buffers is to set buffer sizes to be the product of the bandwidth and the average delay of the flows utilizing this link, namely the Bandwidth-Delay Product (BDP) rule [31]. See Section VII for a discussion of other related work.

Surprisingly, however, the sizing of buffers in wireless networks (especially those based on 802.11/802.11e) appears to have received very little attention within the networking community. Exceptions include the recent work in [21] relating to buffer sizing for voice traffic in 802.11e [2] WLANs, work in [23] which considers the impact of buffer sizing on TCP upload/download fairness, and work in [29] which is related to 802.11e parameter settings.

Buffers play a key role in 802.11/802.11e wireless networks. To illustrate this, we present measurements from the production WLAN of the Hamilton Institute, which show that the current state of the art, which makes use of fixed size buffers, can easily lead to poor performance. The topology of this WLAN is shown in Fig. 23; see the Appendix for further details of the configuration used. We recorded RTTs before and after one wireless station started to download a 37 MByte file from a web-site. Before starting the download, we pinged the access point (AP) from a laptop 5 times, each time sending 100 ping packets. The RTTs reported by the ping program were between 2.6-3.2 ms. However, after starting the download and allowing it to continue for a while (to let the congestion control algorithm of TCP probe for the available bandwidth), the RTTs to the AP hugely increased to 2900-3400 ms. During the test, normal services such as web browsing experienced obvious pauses/lags on wireless stations using the network. Closer inspection revealed that the buffer occupancy at the AP exceeded 200 packets most of the time and reached 250 packets from time to time during the test. Note that the increase in measured RTT could be almost entirely attributed to the resulting queuing delay at the AP, and indicates that a more sophisticated approach to buffer sizing is required. Indeed, using the A* algorithm proposed in this paper, the RTTs observed when repeating the same experiment fall to only 90-130 ms. This reduction in delay does not come at the cost of reduced throughput, i.e., the measured throughput with the A* algorithm and the default buffers is similar.

In this paper, we consider the sizing of buffers in 802.11/802.11e ([1] [2]) based WLANs. We focus on single-hop WLANs since these are rapidly becoming ubiquitous as the last hop on home and office networks as well as in so-called "hot spots" in airports and hotels, but note that the proposed schemes can be easily applied in multi-hop wireless networks. Our main focus in this paper is on TCP traffic since this continues to constitute the bulk of traffic in modern networks (80–90% [35] of current Internet traffic and also of WLAN traffic [28]), although we extend consideration to UDP traffic at various points during the discussion and also during our experimental tests.

Compared to sizing buffers in wired routers, a number of fundamental new issues arise when considering 802.11-based networks. Firstly, unlike wired networks, wireless transmissions are inherently broadcast in nature, which leads to the packet service times at different stations in a WLAN being strongly coupled. For example, the basic 802.11 DCF ensures that the wireless stations in a WLAN win a roughly equal number of transmission opportunities [19], hence the mean packet service time at a station is an order of magnitude longer when 10 other stations are active than when only a single station is active. Consequently, the buffering requirements at each station also differ, depending on the number of other active stations in the WLAN. In addition to variations in the mean service time, the distribution of packet service times is also strongly dependent on the WLAN offered load. This directly affects the burstiness of transmissions and so buffering requirements (see Section III for details). Secondly, wireless stations dynamically adjust the physical transmission rate/modulation used in order to regulate non-congestive channel losses. This rate adaptation, whereby the transmit rate may change by a factor of 50 or more (e.g. from 1 Mbps to
54 Mbps in 802.11a/g), may induce large and rapid variations in required buffer sizes. Thirdly, the ongoing 802.11n standards process proposes to improve throughput efficiency by the use of large frames formed by aggregation of multiple packets ([3] [18]). This acts to couple throughput efficiency and buffer sizing in a new way since the latter directly affects the availability of sufficient packets for aggregation into large frames.

TABLE I. MAC/PHY parameters used in simulations, corresponding to 802.11g.
  T_SIFS (µs): 10
  Idle slot duration (σ) (µs): 9
  Retry limit: 11
  Packet size (bytes): 1000
  PHY data rate (Mbps): 54
  PHY basic rate (Mbps): 6
  PLCP rate (Mbps): 6
It follows from these observations that, amongst other things, there does not exist a fixed buffer size which can be used for sizing buffers in WLANs. This leads naturally to consideration of dynamic buffer sizing strategies that adapt to changing conditions. In this paper we demonstrate the major performance costs associated with the use of fixed buffer sizes in 802.11 WLANs (Section III) and present two novel dynamic buffer sizing algorithms (Sections IV and V) that achieve significant performance gains. The stability of the feedback loop induced by the adaptation is analyzed, including when cascaded with the feedback loop created by TCP congestion control action. The proposed dynamic buffer sizing algorithms are computationally cheap and suited to implementation on standard hardware. Indeed, we have implemented the algorithms in both the NS-2 simulator and the Linux MadWifi driver [4]. In this paper, in addition to extensive simulation results we also present experimental measurements demonstrating the utility of the proposed algorithms in a testbed located in an office environment and with realistic traffic. The latter includes a mix of TCP and UDP traffic, a mix of uploads and downloads, and a mix of connection sizes.

The remainder of the paper is organized as follows. Section II introduces the background of this work. In Section III simulation results with fixed size buffers are reported to further motivate this work. The proposed algorithms are then detailed in Sections IV and V. Experiment details are presented in Section VI. After introducing related work in Section VII, we summarize our conclusions in Section VIII.

II. PRELIMINARIES

A. IEEE 802.11 DCF

IEEE 802.11a/b/g WLANs all share a common MAC algorithm called the Distributed Coordination Function (DCF), which is a CSMA/CA based algorithm. On detecting the wireless medium to be idle for a period DIFS, each wireless station initializes a backoff counter to a random number selected uniformly from the interval [0, CW-1] where CW is the contention window. Time is slotted and the backoff counter is decremented each slot that the medium is idle. An important feature is that the countdown halts when the medium is detected busy and only resumes after the medium is idle again for a period DIFS. On the counter reaching zero, a station transmits a packet. If a collision occurs (two or more stations transmit simultaneously), CW is doubled and the process repeated. On a successful transmission, CW is reset to the value CW_min and a new countdown starts.

B. IEEE 802.11e EDCA

The 802.11e standard extends the DCF algorithm (yielding the EDCA) by allowing the adjustment of MAC parameters that were previously fixed. In particular, the values of DIFS (called AIFS in 802.11e) and CW_min may be set on a per class basis for each station. While the full 802.11e standard is not implemented in current commodity hardware, the EDCA extensions have been widely implemented for some years.

C. Unfairness among TCP Flows

Consider a WLAN consisting of n client stations each carrying one TCP upload flow. The TCP ACKs are transmitted by the wireless AP. In this case TCP ACK packets can be easily queued/dropped due to the fact that the basic 802.11 DCF ensures that stations win a roughly equal number of transmission opportunities. Namely, while the data packets for the n flows have an aggregate n/(n + 1) share of the transmission opportunities, the TCP ACKs for the n flows have only a 1/(n + 1) share. Issues of this sort are known to lead to significant unfairness amongst TCP flows but can be readily resolved using 802.11e functionality by treating TCP ACKs as a separate traffic class which is assigned higher priority [15]. With regard to throughput efficiency, the algorithms in this paper perform similarly when the DCF is used and when TCP ACKs are prioritized using the EDCA as in [15]. Per flow behavior does, of course, differ due to the inherent unfairness in the DCF and we therefore mainly present results using the EDCA to avoid flow-level unfairness.

D. Simulation Topology

In Sections III, IV and V-G, we use the simulation topology shown in Fig. 1 where the AP acts as a wireless router between the WLAN and the Internet. Upload flows originate from stations in the WLAN on the left and are destined to wired host(s) in the wired network on the right. Download flows are from the wired host(s) to stations in the WLAN. We ignore differences in wired bandwidth and delay from the AP to the wired hosts, which can cause TCP unfairness issues on the wired side (an orthogonal issue), by using the same wired-part RTT for all flows. Unless otherwise stated, we use the IEEE 802.11g PHY parameters shown in Table I and the wired backhaul link bandwidth is 100 Mbps with RTT 200 ms. For TCP traffic, the widely deployed TCP Reno with SACK extension is used. The advertised window size is set to be 4096 packets (each has a payload of 1000 bytes), which is the default size of current Linux kernels. The maximum value of the TCP smoothed RTT measurements (sRTT) is used as the measure of the delay experienced by a flow.

Fig. 1. WLAN topology used in simulations. Wired backhaul link bandwidth 100 Mbps. MAC parameters of the WLAN are listed in Table I.
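To make the DCF contention procedure of Section II-A concrete, the following is a minimal Python sketch of one station's backoff loop for a single packet. It is illustrative only, not the authors' simulation or driver code; the constants `CW_MIN` and `CW_MAX` and the callbacks `medium_idle` and `collision` are assumptions introduced for the example (only the retry limit mirrors Table I).

```python
import random

CW_MIN = 16          # illustrative minimum contention window (not from Table I)
CW_MAX = 1024        # illustrative cap on contention window doubling
RETRY_LIMIT = 11     # per Table I

def transmit_one_packet(medium_idle, collision):
    """Sketch of the DCF backoff loop for one packet.

    medium_idle(): returns True if the medium is sensed idle in this slot.
    collision():   returns True if the attempted transmission collides.
    """
    cw = CW_MIN
    for _attempt in range(RETRY_LIMIT):
        # Backoff counter drawn uniformly from [0, CW-1].
        backoff = random.randint(0, cw - 1)
        while backoff > 0:
            # Countdown proceeds only in idle slots; it halts while the medium
            # is busy and resumes after the medium is idle again for DIFS.
            if medium_idle():
                backoff -= 1
        if not collision():
            return True          # success: CW is reset for the next packet
        cw = min(2 * cw, CW_MAX)  # collision: CW is doubled and we retry
    return False                 # retry limit reached, packet is discarded
```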
Fig. 2. Measured distribution of per packet MAC service time. Solid vertical lines mark the mean values of the distributions. Physical layer data/basic rates are 11/1 Mbps. (a) 2 stations; (b) 12 stations.

III. MOTIVATION AND OBJECTIVES

Wireless communication in 802.11 networks is time-varying in nature, i.e., the mean service time and the distribution of service time at a wireless station vary in time. The variations are primarily due to (i) changes in the number of active wireless stations and their load (i.e. offered load on the WLAN) and (ii) changes in the physical transmit rate used (i.e. in response to changing radio channel conditions). In the latter case, it is straightforward to see that the service time can be easily increased/decreased using low/high physical layer rates. To see the impact of offered load on the service time at a station, Fig. 2 plots the measured distribution of the MAC layer service time when there are 2 and 12 stations active. It can be seen that the mean service time changes by over an order of magnitude as the number of stations varies. Observe also from these measured distributions that there are significant fluctuations in the service time for a given fixed load. This is a direct consequence of the stochastic nature of the CSMA/CA contention mechanism used by the 802.11/802.11e MAC.

This time-varying nature directly affects buffering requirements. Figure 3 plots link utilization¹ and max sRTT (propagation plus smoothed queuing delay) vs buffer size for a range of WLAN offered loads and physical transmit rates. We can make a number of observations.

¹Here the AP throughput percentage is the ratio between the actual throughput achieved using the buffer sizes shown on the x-axis and the maximum throughput using the buffer sizes shown on the x-axis.

First, it can be seen that as the physical layer transmit rate is varied from 1 Mbps to 216 Mbps, the minimum buffer size to ensure at least 90% throughput efficiency varies from about 20 packets to about 800 packets. No compromise buffer size exists that ensures both high efficiency and low delay across this range of transmit rates. For example, a buffer size of 80 packets leads to RTTs exceeding 500 ms (even when only a single station is active and so there are no competing wireless stations) at 1 Mbps and throughput efficiency below 50% at 216 Mbps. Note that the transmit rates in currently available draft 802.11n equipment already exceed 216 Mbps (e.g. 300 Mbps is supported by current Atheros chipsets) and the trend is towards still higher transmit rates. Even if we confine attention to the restricted range of transmit rates 1 Mbps to 54 Mbps supported by 802.11a/b/g, it can be seen that a buffer size of 50 packets is required to ensure throughput efficiency above 80%, yet this buffer size induces delays exceeding 1000 and 3000 ms at transmit rates of 11 and 1 Mbps, respectively.

Second, delay is strongly dependent on the traffic load and the physical rates. For example, as the number of competing stations (marked as "uploads" in the figure) is varied from 0 to 10, for a buffer size of 20 packets and physical transmit rate of 1 Mbps the delay varies from 300 ms to over 2000 ms. This reflects that the 802.11 MAC allocates available transmission opportunities equally on average amongst the wireless stations, and so the mean service time (and thus delay) increases with the number of stations. In contrast, at 216 Mbps the delay remains below 500 ms for buffer sizes up to 1600 packets.

Our key conclusion from these observations is that there exists no fixed buffer size capable of ensuring both high throughput efficiency and reasonable delay across the range of physical rates and offered loads experienced by modern WLANs. Any fixed choice of buffer size necessarily carries the cost of significantly reduced throughput efficiency and/or excessive queuing delays.

This leads naturally therefore to the consideration of adaptive approaches to buffer sizing, which dynamically adjust the buffer size in response to changing network conditions to ensure high utilization of the wireless link while avoiding unnecessarily long queuing delays.

IV. EMULATING BDP

We begin by considering a simple adaptive algorithm based on the classical BDP rule. Although this algorithm cannot take advantage of statistical multiplexing opportunities, it is of interest both for its simplicity and because it will play a role in the more sophisticated A* algorithm developed in the next section.

As noted previously, and in contrast to wired networks, in 802.11 WLANs the mean service time is generally time-varying (dependent on WLAN load and the physical transmit rate selected by a station). Consequently, there does not exist a fixed BDP value. However, we note that a wireless station can measure its own packet service times by direct observation, i.e., by recording the time between a packet arriving at the head of the network interface queue, t_s, and being successfully transmitted, t_e (which is indicated by receiving correctly the corresponding MAC ACK). Note that this measurement can be readily implemented in real devices, e.g. by asking the hardware to raise an interrupt on receipt of a MAC ACK, and incurs only a minor computational burden. Averaging these per packet service times yields the mean service time T_serv. To accommodate the time-varying nature of the mean service time, this average can be taken over a sliding window. In this paper, we consider the use of exponential smoothing, T_serv(k+1) = (1 - W)T_serv(k) + W(t_e - t_s), to calculate a running average since this has the merit of simplicity and statistical robustness (by central limit arguments). The choice of smoothing parameter W involves a trade-off between accommodating time variations and ensuring the accuracy of the estimate; this choice is considered in detail later.
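The per-packet measurement and exponential smoothing just described can be expressed compactly; the Python sketch below is illustrative (hypothetical class and method names, not the NS-2 or MadWifi code), timestamping a packet when it reaches the head of the interface queue and again when its MAC ACK is received.

```python
import time

class ServiceTimeEstimator:
    """Exponentially smoothed per-packet MAC service time (a sketch)."""

    def __init__(self, weight=0.001, initial=0.001):
        self.W = weight        # smoothing weight W
        self.T_serv = initial  # current estimate of the mean service time, in
                               # seconds (initial value is an arbitrary assumption)

    def packet_enters_service(self):
        # Called when the packet reaches the head of the interface queue (t_s).
        self._t_s = time.monotonic()

    def mac_ack_received(self):
        # Called when the MAC ACK for that packet is received (t_e).
        t_e = time.monotonic()
        sample = t_e - self._t_s
        # T_serv(k+1) = (1 - W) * T_serv(k) + W * (t_e - t_s)
        self.T_serv = (1 - self.W) * self.T_serv + self.W * sample
        return self.T_serv
```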
Fig. 3. Throughput efficiency and maximum smoothed round trip delays (max sRTT) for the topology in Fig. 1 when fixed size buffers are used. Here, the AP throughput efficiency is the ratio between the download throughput achieved using buffer sizes indicated on the x-axis and the maximum download throughput achieved using fixed size buffers. Rates before and after the '/' are the physical layer data and basic rates used. For the 216 Mbps data, 8 packets are aggregated into each frame at the MAC layer to improve throughput efficiency in an 802.11n-like scheme. The wired RTT is 200 ms. Panels: (a) 1/1 Mbps, throughput; (b) 1/1 Mbps, delay; (c) 11/1 Mbps, throughput; (d) 11/1 Mbps, delay; (e) 54/6 Mbps, throughput; (f) 54/6 Mbps, delay; (g) 216/54 Mbps, throughput; (h) 216/54 Mbps, delay.
Given an online measurement of the mean service time T_serv, the classical BDP rule yields the following eBDP buffer sizing strategy. Let T_max be the target maximum queuing delay. Noting that 1/T_serv is the mean service rate, we select buffer size Q_eBDP according to Q_eBDP = min(T_max/T_serv, Q_max^eBDP), where Q_max^eBDP is the upper limit on buffer size. This effectively regulates the buffer size to equal the current mean BDP. The buffer size decreases when the service rate falls and increases when the service rate rises, so as to maintain an approximately constant queuing delay of T_max seconds. We may measure the flows' RTTs to derive the value for T_max in a similar way to measuring the mean service rate, but in the examples presented here we simply use a fixed value of 200 ms since this is an approximate upper bound on the RTT of the majority of current Internet flows.

We note that the classical BDP rule is derived from the behavior of TCP congestion control (in particular, the reduction of cwnd by half on packet loss) and assumes a constant service rate and fluid-like packet arrivals. Hence, for example, at low service rates the BDP rule suggests use of extremely small buffer sizes. However, in addition to accommodating TCP behavior, buffers have the additional role of absorbing short-term packet bursts and, in the case of wireless links, short-term fluctuations in packet service times. It is these latter effects that lead to the steep drop-off in throughput efficiency that can be observed in Fig. 3 when there are competing uploads (and so stochastic variations in packet service times due to channel contention, see Fig. 2) plus small buffer sizes. We therefore modify the eBDP update rule to Q_eBDP = min(T_max/T_serv + c, Q_max^eBDP), where c is an over-provisioning amount to accommodate short-term fluctuations in service rate. Due to the complex nature of the service time process at a wireless station (which is coupled to the traffic arrivals etc at other stations in the WLAN) and of the TCP traffic arrival process (where feedback creates coupling to the service time process), obtaining an analytic value for c is intractable. Instead, based on the measurements in Fig. 3 and others, we have found empirically that a value of c = 5 packets works well across a wide range of network conditions. Pseudo-code for eBDP is shown in Algorithms 1 and 2.

Algorithm 1 Drop tail operation of the eBDP algorithm.
1: Set the target queuing delay T_max.
2: Set the over-provisioning parameter c.
3: for each incoming packet p do
4:   Calculate Q_eBDP = min(T_max/T_serv + c, Q_max^eBDP), where T_serv is from Algorithm 2.
5:   if current queue occupancy < Q_eBDP then
6:     Put p into the queue.
7:   else
8:     Drop p.
9:   end if
10: end for
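For concreteness, a compact Python rendering of the eBDP sizing rule in Algorithm 1 follows. It is a sketch under illustrative constants (the queue is a plain list; `QMAX_EBDP` is an assumed upper limit), not the authors' NS-2 or MadWifi implementation.

```python
QMAX_EBDP = 400   # upper limit on the buffer size, packets (illustrative)
T_MAX = 0.2       # target maximum queuing delay, seconds
C_OVER = 5        # over-provisioning amount, packets

def ebdp_limit(mean_service_time):
    """eBDP buffer limit: min(T_max / T_serv + c, Q_max) as in Algorithm 1."""
    if mean_service_time <= 0:
        return QMAX_EBDP
    return min(int(T_MAX / mean_service_time) + C_OVER, QMAX_EBDP)

def on_packet_arrival(queue, packet, mean_service_time):
    """Drop-tail test against the current eBDP limit (illustrative)."""
    if len(queue) < ebdp_limit(mean_service_time):
        queue.append(packet)   # enqueue
        return True
    return False               # drop
```

For example, with T_max = 200 ms a mean service time of 2 ms gives a limit of about 105 packets, while a mean service time of 20 ms gives about 15 packets, reflecting the order-of-magnitude variation in service times discussed in Section III.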
Algorithm 2 Measurement of the mean service time for the eBDP algorithm.
1: Set the averaging parameter W.
2: for each outgoing packet p do
3:   Record the service start time t_s for p.
4:   Wait until the MAC ACK for p is received, record the service end time t_e.
5:   Calculate the service time of p: T_serv = (1 - W)T_serv + W(t_e - t_s).
6: end for

The effectiveness of this simple adaptive algorithm is illustrated in Fig. 4. Fig. 4(a) shows the buffer size and queue occupancy time histories when only a single station is active in a WLAN, while Fig. 4(b) shows the corresponding results when ten additional stations also contend for channel access. Comparing with Fig. 3(e), it can be seen that buffer sizes of 330 packets and 70 packets, respectively, are needed to yield 100% throughput efficiency, and eBDP selects buffer sizes which are in good agreement with these thresholds.

Fig. 4. Histories of buffer size and buffer occupancy with the eBDP algorithm. In (a) there is one download and no upload flows. In (b) there are 1 download and 10 upload flows. 54/6 Mbps physical data/basic rates.

In Fig. 5 we plot the throughput efficiency (measured as the ratio of the achieved throughput to that with a fixed 400-packet buffer) and max smoothed RTT over a range of network conditions obtained using the eBDP algorithm. It can be seen that the adaptive algorithm maintains high throughput efficiency across the entire range of operating conditions. This is achieved while maintaining the latency approximately constant at around 400 ms (200 ms propagation delay plus T_max = 200 ms queuing delay); the latency rises slightly with the number of uploads due to the over-provisioning parameter c used to accommodate stochastic fluctuations in service rate.

Fig. 5. Performance of the eBDP algorithm as the number of upload flows is varied. Data is shown for 1, 10 download flows and 0, 2, 5, 10 uploads. Wired RTT 200 ms. Here the AP throughput percentage is the ratio between the throughput achieved using the eBDP algorithm and that by a fixed buffer size of 400 packets (i.e. the maximum achievable throughput in this case).

While T_max = 200 ms is used as the target drain time in the eBDP algorithm, realistic traffic tends to consist of flows with a mix of RTTs. Fig. 6 plots the results as we vary the RTT of the wired backhaul link while keeping T_max = 200 ms. We observe that the throughput efficiency is close to 100% for RTTs up to 200 ms. For an RTT of 300 ms, we observe a slight decrease in throughput when there is 1 download and 10 contending upload flows, which is to be expected since T_max is less than the link delay and so the buffer is less than the BDP. This could be improved by measuring the average RTT instead of using a fixed value, but it is not clear that the benefit is worth the extra effort. We also observe that there is a difference between the max smoothed RTT with and without upload flows. The RTT in our setup consists of the wired link RTT, the queuing delays for TCP data and ACK packets, and the MAC layer transmission delays for TCP data and ACK packets. When there are no upload flows, TCP ACK packets can be transmitted with negligible queuing delays since they only have to contend with the AP. When there are upload flows, however, stations with TCP ACK packets have to contend with other stations sending TCP data packets as well. TCP ACK packets therefore can be delayed accordingly, which causes the increase in RTT observed in Fig. 6.

Fig. 6. Performance of the eBDP algorithm as the RTT of the wired backhaul is varied. Data is shown for 1, 10 downloads and 0, 10 uploads. Here the AP throughput percentage is the ratio between the throughput achieved using the eBDP algorithm and that by a fixed buffer size of 400 packets (i.e. the maximum achievable throughput in this case).

Fig. 7 demonstrates the ability of the eBDP algorithm to respond to changing network conditions. At time 300 s the number of uploads is increased from 0 to 10 flows. It can be seen that the buffer size quickly adapts to the changed conditions when the weight W = 0.001. This roughly corresponds to averaging over the last 1000 packets². When the number of uploads is increased at time 300 s, it takes 0.6 seconds (the current throughput is 13.5 Mbps, so t = 1000 × 8000/(13.5 × 10⁶) ≈ 0.6) to send 1000 packets, i.e., the eBDP algorithm is able to react to network changes roughly on a timescale of 0.6 second.

²As per [8], the current value is averaged over the last t observations for x% accuracy, where x = 1 - (1 - W)^t and t is the number of updates (which are packets in our case). When W = 0.001 and t = 1000 we have that x = 0.64.
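The reaction-time figure and the accuracy figure quoted in footnote 2 can be checked with two lines of arithmetic; a small sketch (plain arithmetic, not driver code):

```python
# Rough reaction-time and accuracy check for the eBDP smoothing weight.
W = 0.001             # smoothing weight
t = 1000              # number of packet updates averaged over
packet_bits = 8000    # 1000-byte packets
throughput_bps = 13.5e6

reaction_time = t * packet_bits / throughput_bps   # ~0.59 s, i.e. roughly 0.6 s
accuracy = 1 - (1 - W) ** t                        # ~0.63, in line with footnote 2

print(f"reaction time ~ {reaction_time:.2f} s, accuracy x ~ {accuracy:.2f}")
```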
Fig. 7. Convergence of the eBDP algorithm following a change in network conditions. One download flow. At time 200 s the number of upload flows is increased from 0 to 10.

V. EXPLOITING STATISTICAL MULTIPLEXING: THE A* ALGORITHM

The eBDP algorithm emulates the BDP rule and so does not exploit the statistical multiplexing of TCP cwnd backoffs when multiple flows share a link. The potential gain is illustrated in Fig. 8: while a fixed buffer size of 338 packets (the BDP) is needed to maximize throughput when a WLAN carries a single download flow, this falls to around 100 packets when 10 download flows share the link. However, in both cases the eBDP algorithm selects a buffer size of approximately 350 packets (see Figs. 4(a) and 9). It can be seen from Fig. 9 that as a result, with the eBDP algorithm the buffer rarely empties when 10 flows share the link. That is, the potential exists to lower the buffer size without loss of throughput.

Fig. 8. Impact of statistical multiplexing. There are 1/10 downloads and no uploads. Wired RTT 200 ms.

In this section we consider the design of a measurement-based algorithm (the ALT algorithm) that is capable of taking advantage of such statistical multiplexing opportunities.

A. Adaptive Limit Tuning (ALT) Feedback Algorithm

Our objective is to simultaneously achieve both efficient link utilization and low delays in the face of stochastic time-variations in the service time. Intuitively, for efficient link utilization we need to ensure that there is a packet available to transmit whenever the station wins a transmission opportunity. That is, we want to minimize the time that the station buffer lies empty, which in turn can be achieved by making the buffer size sufficiently large (under fairly general traffic conditions, buffer occupancy is a monotonically increasing function of buffer size [13]). However, using large buffers can lead to high queuing delays, and to ensure low delays the buffer should be as small as possible. We would therefore like to use the smallest buffer size that ensures sufficiently high link utilization. This intuition suggests the following approach. We observe the buffer occupancy over an interval of time. If the buffer rarely empties, we decrease the buffer size. Conversely, if the buffer is observed to be empty for too long, we increase the buffer size. Of course, further work is required to convert this basic intuition into a well-behaved algorithm suited to practical implementation. Not only do the terms "rarely", "too long" etc need to be made precise, but we note that an inner feedback loop is created whereby the buffer size is adjusted depending on the measured link utilization, which in turn depends on the buffer size. This loop is in addition coupled with the outer feedback loop created by TCP congestion control, whereby the offered load is adjusted based on packet losses, which in turn depend on the buffer size. Analysis of the stability of these cascaded loops is therefore essential.

We now introduce the following Adaptive Limit Tuning (ALT) algorithm. The dynamics and stability of this algorithm will then be analyzed in later sections. Define a queue occupancy threshold q_thr and let t_i(k) (referred to as the idle time) be the duration of time that the queue spends at or below this threshold in a fixed observation interval t, and t_b(k) (referred to as the busy time) be the corresponding duration spent above the threshold. Note that t = t_i(k) + t_b(k), and the aggregate amounts of idle/busy time t_i and t_b over an interval can be readily observed by a station. Also, the link utilization is lower bounded by t_b/(t_b + t_i). Let q(k) denote the buffer size during the k-th observation interval. The buffer size is then updated according to

q(k+1) = q(k) + a_1 t_i(k) - b_1 t_b(k),   (1)

where a_1 and b_1 are design parameters. Pseudo-code for this ALT algorithm is given in Algorithm 3. This algorithm seeks to maintain a balance between the time t_i that the queue is idle and the time t_b that the queue is busy. That is, it can be seen that when a_1 t_i(k) = b_1 t_b(k), the buffer size is kept unchanged. When the idle time is larger, so that a_1 t_i(k) > b_1 t_b(k), the buffer size is increased. Conversely, when the busy time is large enough that a_1 t_i(k) < b_1 t_b(k), the buffer size is decreased.

More generally, assuming q converges to a stationary distribution (we discuss this in more detail later), then in steady-state we have that a_1 E[t_i] = b_1 E[t_b], i.e., E[t_i] = (b_1/a_1) E[t_b], and the mean link utilization is therefore lower bounded by

E[t_b/(t_i + t_b)] = E[t_b]/t = 1/(1 + b_1/a_1),   (2)

where we have made use of the fact that t = t_i(k) + t_b(k) is constant. It can therefore be seen that choosing b_1/a_1 to be small ensures high utilization. Choosing values for the parameters a_1 and b_1 is discussed in detail in Section V-B, but we note here that values of a_1 = 10 and b_1 = 1 are found to work well and, unless otherwise stated, are used in this paper.

Fig. 9. Histories of buffer size and buffer occupancy with the eBDP algorithm when there are 10 downloads and no uploads.
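As a concrete illustration of the ALT update (1) with the clamping used later in Algorithm 3, the Python sketch below applies one update per observation interval. It is illustrative only (the helper name `alt_update` and the constants are assumptions chosen to mirror the values in the text), not the authors' implementation.

```python
A1, B1 = 10, 1           # increase / decrease step sizes
Q_MIN, Q_MAX = 5, 1600   # minimum and maximum buffer sizes, packets
T_OBS = 1.0              # observation interval t, seconds

def alt_update(q_alt, idle_time, obs_interval=T_OBS):
    """One ALT step: q(k+1) = q(k) + a1*ti(k) - b1*tb(k), clamped to [Q_MIN, Q_MAX]."""
    busy_time = obs_interval - idle_time
    q_alt = q_alt + A1 * idle_time - B1 * busy_time
    return min(max(q_alt, Q_MIN), Q_MAX)
```

At the balance point a_1 t_i = b_1 t_b, so with a_1 = 10 and b_1 = 1 the queue is idle (at or below q_thr) for roughly 1/11 of each interval, consistent with the utilization lower bound 1/(1 + b_1/a_1) ≈ 0.91 in (2).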
With regard to the choice of observation interval t, this is largely determined by the time required to obtain accurate estimates of the queue idle and busy times. In the remainder of this paper we find a value of t = 1 second to be a good choice.

It is prudent to constrain the buffer size q to lie between minimum and maximum values q_min and q_max. In the following, the maximum size q_max and the minimum buffer size q_min are set to be 1600 and 5 packets respectively.

Algorithm 3 The ALT algorithm.
1: Set the initial queue size, the maximum buffer size q_max and the minimum buffer size q_min.
2: Set the increase step size a_1 and the decrease step size b_1.
3: for every t seconds do
4:   Measure the idle time t_i.
5:   q_ALT = q_ALT + a_1 t_i - b_1 (t - t_i).
6:   q_ALT = min(max(q_ALT, q_min), q_max).
7: end for

B. Selecting the Step Sizes for ALT

Define a congestion event as an event where the sum of all senders' TCP cwnd decreases. This cwnd decrease can be caused by the response of TCP congestion control to a single packet loss, or to multiple packet losses that are lumped together in one RTT. Define a congestion epoch as the duration between two adjacent congestion events.

Let Q(k) denote the buffer size at the k-th congestion event. Then,

Q(k+1) = Q(k) + a T_I(k) - b T_B(k),   (3)

where T_I is the "idle" time, i.e., the duration in seconds when the queue occupancy is below q_thr during the k-th congestion epoch, and T_B the "busy" time, i.e., the duration when the queue occupancy is above q_thr. This is illustrated in Fig. 10 for the case of a single TCP flow.

Fig. 10. Illustrating the evolution of the buffer size.

Notice that a = a_1 and b = b_1, where a_1 and b_1 are the parameters used in the ALT algorithm. In the remainder of this section we investigate conditions to guarantee convergence and stability of the buffer dynamics with TCP traffic, which naturally lead to guidelines for the selection of a_1 and b_1. We first define some TCP related quantities before proceeding.

Consider the case where TCP flows may have different round-trip times and drops need not be synchronized. Let n be the number of TCP flows sharing a link, w_i(k) the cwnd of flow i at the k-th congestion event, and T_i the round-trip propagation delay of flow i. To describe the cwnd additive increase we define the following quantities: (i) α_i is the rate in packets/s at which flow i increases its congestion window³, (ii) α_T = Σ_{i=1}^n α_i is the aggregate rate at which flows increase their congestion windows, in packets/s, and (iii) A_T = Σ_{i=1}^n α_i/T_i approximates the aggregate rate, in packets/s², at which flows increase their sending rate. Following the k-th congestion event, flows back off their cwnd to β_i(k)w_i(k). Flows may be unsynchronized, i.e., not all flows need back off at a congestion event. We capture this with β_i(k) = 1 if flow i does not back off at event k. We assume that the α_i are constant and that the β_i(k) (i.e. the pattern of flow backoffs) are independent of the flow congestion windows w_i(k) and the buffer size Q(k) (this appears to be a good approximation in many practical situations, see [26]).

³Standard TCP increases the flow congestion window by one packet per RTT, in which case α_i ≈ 1/T_i.

To relate the queue occupancy to the flow cwnds, we adopt a fluid-like approach and ignore sub-RTT burstiness. We also assume that q_thr is sufficiently small relative to the buffer size that we can approximate it as zero. Considering now the idle time T_I(k), on backoff after the k-th congestion event, if the queue occupancy does not fall below q_thr then T_I(k) = 0. Otherwise, immediately after backoff the send rate of flow i is β_i(k)w_i(k)/T_i and we have that

T_I(k) = (E[B] - Σ_{i=1}^n β_i(k)w_i(k)/T_i) / A_T,   (4)

where E[B] is the mean service rate of the considered buffer.

At congestion event k the aggregate flow throughput necessarily equals the link capacity, i.e.,

Σ_{i=1}^n w_i(k)/(T_i + Q(k)/E[B]) = E[B].

We then have that

Σ_{i=1}^n w_i(k)/T_i = Σ_{i=1}^n [w_i(k)/(T_i + Q(k)/E[B])] · [(T_i + Q(k)/E[B])/T_i]
                    = Σ_{i=1}^n w_i(k)/(T_i + Q(k)/E[B]) + (Q(k)/E[B]) Σ_{i=1}^n [w_i(k)/(T_i + Q(k)/E[B])] (1/T_i).

Assume that the spread in flow round-trip propagation delays and congestion windows is small enough that Σ_{i=1}^n [w_i(k)/(E[B]T_i + Q(k))] (1/T_i) can be accurately approximated by 1/T_T, where T_T = n / (Σ_{i=1}^n 1/T_i) is the harmonic mean of the T_i. Then

Σ_{i=1}^n w_i(k)/T_i ≈ E[B] + Q(k)/T_T,

and

T_I(k) ≈ ((1 - β_T(k)) E[B] - β_T(k) Q(k)/T_T) / A_T,   (5)

where β_T(k) = (Σ_{i=1}^n β_i(k)w_i(k)/T_i) / (Σ_{i=1}^n w_i(k)/T_i) is the effective aggregate backoff factor of the flows. When flows are synchronized, i.e., β_i = β for all i, then β_T = β. When flows are unsynchronized but have the same average backoff factor, i.e., E[β_i] = β, then E[β_T] = β.
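To make the aggregate quantities α_T, A_T and T_T concrete, the short sketch below computes them for an illustrative set of round-trip times under the standard AIMD increase α_i ≈ 1/T_i; the RTT values and function name are assumptions introduced purely for illustration.

```python
def aggregate_quantities(rtts):
    """alpha_T, A_T, harmonic-mean RTT T_T and alpha_T/(A_T*T_T) (a sketch)."""
    n = len(rtts)
    alpha = [1.0 / T for T in rtts]                 # alpha_i ~ 1/T_i, packets/s
    alpha_T = sum(alpha)                            # aggregate cwnd growth rate
    A_T = sum(a / T for a, T in zip(alpha, rtts))   # aggregate send-rate growth, packets/s^2
    T_T = n / sum(1.0 / T for T in rtts)            # harmonic mean of the T_i
    return alpha_T, A_T, T_T, alpha_T / (A_T * T_T)

# Example: ten flows with RTTs spread between 40 ms and 220 ms.
rtts = [0.04 + 0.02 * i for i in range(10)]
alpha_T, A_T, T_T, ratio = aggregate_quantities(rtts)
print(alpha_T, A_T, T_T, ratio)   # the ratio lies in [1/n, 1], as used in Section V-C
```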
If the queue empties after backoff, the queue busy time T_B(k) is directly given by

T_B(k) = Q(k+1)/α_T,   (6)

where α_T = Σ_{i=1}^n α_i is the aggregate rate at which flows increase their congestion windows, in packets/s. Otherwise,

T_B(k) = (Q(k+1) - q(k))/α_T,   (7)

where q(k) is the buffer occupancy after backoff. It turns out that for the analysis of stability it is not necessary to calculate q(k) explicitly. Instead, letting δ(k) = q(k)/Q(k), it is enough to note that 0 ≤ δ(k) < 1.

Combining (3), (5), (6) and (7),

Q(k+1) = λ_e(k)Q(k) + γ_e(k)E[B]T_T,   if q(k) ≤ q_thr,
Q(k+1) = λ_f(k)Q(k),                    otherwise,

where

λ_e(k) = (α_T - a β_T(k) α_T/(A_T T_T)) / (α_T + b),
λ_f(k) = (α_T + b δ(k)) / (α_T + b),    γ_e(k) = a (1 - β_T(k)) α_T / ((α_T + b) A_T T_T).

Taking expectations,

E[Q(k+1)] = E[λ_e(k)Q(k) + γ_e(k)E[B]T_T | q(k) ≤ q_thr] p_e(k) + E[λ_f(k)Q(k) | q(k) > q_thr] (1 - p_e(k)),

with p_e(k) the probability that the queue occupancy is at or below q_thr following the k-th congestion event. Since the β_i(k) are assumed independent of Q(k) we may assume that E[Q(k) | q(k) ≤ q_thr] = E[Q(k) | q(k) > q_thr] = E[Q(k)], and

E[Q(k+1)] = λ(k)E[Q(k)] + γ(k)E[B]T_T,   (8)

where

λ(k) = p_e(k) E[λ_e(k) | q(k) ≤ q_thr] + (1 - p_e(k)) E[λ_f(k) | q(k) > q_thr],
γ(k) = p_e(k) E[γ_e(k) | q(k) ≤ q_thr].

C. A Sufficient Condition for Stability

Provided |λ(k)| < 1, the queue dynamics in (8) are exponentially stable. In more detail, λ(k) is the convex combination of E[λ_e(k)] and E[λ_f(k)] (where the conditional dependence of these expectations is understood, but omitted to streamline notation). Stability is therefore guaranteed provided |E[λ_e(k)]| < 1 and |E[λ_f(k)]| < 1. We have that 0 < E[λ_f(k)] < 1 when b > 0, since α_T is non-negative and 0 ≤ δ(k) < 1. The stability condition is therefore that |E[λ_e(k)]| < 1.

Under mild independence conditions,

E[λ_e(k)] = (α_T - a E[β_T(k)] α_T/(A_T T_T)) / (α_T + b).

Observe that

α_T/(A_T T_T) = (1/n) (Σ_{i=1}^n 1/T_i)² / (Σ_{i=1}^n 1/T_i²)

when we use the standard TCP AIMD increase of one packet per RTT, in which case α_i ≈ 1/T_i. We therefore have that 1/n ≤ α_T/(A_T T_T) ≤ 1. Also, when the standard AIMD backoff factor of 0.5 is used, 0.5 < E[β_T(k)] < 1. Thus, since a > 0, b > 0 and α_T > 0, it is sufficient that

-1 < (α_T - a)/(α_T + b) ≤ E[λ_e(k)] ≤ α_T/(α_T + b) < 1.

A sufficient condition (from the left inequality) for stability is then that a < 2α_T + b. Using again (as in the eBDP algorithm) 200 ms as the maximum RTT, a rough lower bound on α_T is 5 (corresponding to 1 flow with RTT 200 ms). The stability constraint is then that

a < 10 + b.   (9)

Fig. 11(a) demonstrates that the instability is indeed observed in simulations. Here, a = 100 and b = 1 are used as example values, i.e., the stability conditions are not satisfied. It can be seen that the buffer size at congestion events oscillates around 400 packets rather than converging to a constant value. We note, however, that in this example and others the instability consistently manifests itself in a benign manner (small oscillations). However, we leave detailed analysis of the onset of instability as future work.

Fig. 11. Instability and stability of the ALT algorithm. In (a), a = 100, b = 1, and the maximum buffer size is 50000 packets. In (b), a = 10, b = 1, and the maximum buffer size is 400 packets. In both figures, there is 1 download and no upload.

Fig. 11(b) shows the corresponding results with a = 10 and b = 1, i.e., when the stability conditions are satisfied. It can be seen that the buffer size at congestion events settles to a constant value, thus the buffer size time history converges to a periodic cycle.

D. Fixed point

When the system dynamics are stable and p_e = 0, from (8) we have that

lim_{k→∞} E[Q(k)] = ((1 - E[β_T]) / (b/a + E[β_T])) E[B]T_T.   (10)
| |
9
500 500
%)100 OBucfcfuepr asnizcey OBucfcfuepr asnizcey
malized throughput ( 468000 11 0d odwownlnolaoda,d 0s, u0p ulopalodad AP buffer (pkts)123400000000 AP buffer (pkts)123400000000
Nor 20 11 0d odwownlnolaoda,d 1s,0 1 u0p ulopalodasds
100−3 1/(1+b/a)10−2 10−1 100 0 200 Ti4m00e (seco60n0d) 800 1000 0 200 Ti4m00e (seco60n0d) 800 1000
b/a
(a) TheALTalgorithm (b) TheA*algorithm
Fig.12. Impactofb/aonthroughputefficiency. Themaximumbuffersize
Fig. 13. Convergence rate of the ALT and A* algorithms. One download
is400packets, andtheminimumbuffersizeis2packets.
flow,a=10,b=1.Attime500s the numberofupload flowsis increased
from0to10.
b(/1a−+EE[β[βTT])]E[B]TT reduces to the BDP when b/a = 0. This 500 OBucfcfuepr asnizcey 500 OBucfcfuepr asnizcey
indicates that for high link utilization we would like the ratio 400 400
bst/aatetothbeeesxmpaelclt.eUdsliinngk(u5t)1i,li(z6a)tiaonndi(s10lo)1wweerhbaovuentdheadtibnysteady- P buffer (pkts)230000 P buffer (pkts)230000
. (11) A A
1+ b αT ≥ 1+ b 100 100
aATTT a
0 100 200 300 400 500 0 200 400 600 800 1000
This lower bound is plotted in Fig. 12 together with the Time (seconds) Time (second)
measured throughput efficiency vs b/a in a variety of traffic (a) 10downloads only (b) 10 downloads, with 10 uploads
starting attime500s
conditions.Notethatinthisfigurethelowerboundisviolated
by the measured data when b/a > 0.1 and we have a Fig.14. Buffertimehistories withtheA*algorithm, a=10,b=1.
large number of uploads. At such large values of b/a plus
manycontendingstations,thetargetbuffersizesareextremely
small and micro-scale burstinessmeans thatTCP RTOs occur example in Fig. 13(a) is essentially a worst case. In the next
frequently.It is this that leads to violationof the lower bound section, we address the slow convergence by combining the
(11) (since p =0 does not hold). However, this corresponds ALT and the eBDP algorithms to create a hybrid algorithm.
e
to an extreme operating regime and for smaller values of b/a
the lower boundis respected. Itcan be seen from Fig. 12 that F. Combining eBDP and ALT: The A* Algorithm
the efficiency decreases when the ratio of b/a increases. In
We can combine the eBDP and ALT algorithms by using
order to ensure throughput efficiency 90% it is required
the mean packet service time to calculate Q as per the
≥ eBDP
that
eBDP algorithm (see Section IV), and the idle/busy times to
b
0.1. (12) calculate q as per the ALT algorithm. We then select the
a ≤ ALT
buffersizeasmin Q ,q toyieldahybridalgorithm,
eBDP ALT
Combined with the stability condition in inequality (9), we { }
referred to as the A* algorithm, that combines the eBDP and
havethata=10,b=1 arefeasibleintegervalues,thatis, we
the ALT algorithms.
choose a =10 and b =1 for the A* algorithm.
1 1 When channel conditions change, the A* algorithm uses
the eBDP measured service time to adjust the buffer size
E. Convergence rate promptly. The convergence rate depends on the smoothing
weight W. As calculated in Section IV, it takes around 0.6
In Fig. 13(a) we illustrate the convergencerate of the ALT
secondforQ toconverge.TheA* algorithmcanfurther
algorithm.Thereisonedownload,andattime500sthenumber eBDP
use the ALT algorithm to fine tune the buffer size to exploit
of upload flows is increased from 0 to 10. It can be seen
the potential reduction due to statistical multiplexing. The
that the buffer size limit convergesto its new value in around
effectiveness of this hybrid approach when the traffic load
200 seconds or 3 minutes. In general, the convergencerate is
is increased suddenly is illustrated in Fig. 13(b) (which can
determined by the product λ(0)λ(1)...λ(k). In this example,
be directly compared with Fig. 13(a)). Fig. 14(b) shows the
the buffer does not empty after backoff and the convergence
corresponding time histories for 10 download flows and a
rate is thus determined by λ (k) = αT+bδ(k). To achieve
f αT+b changing number of competing uploads.
fast convergence,we require small λ (k) so that Q(k+1)=
f
λ (k)Q(k) is decreasedquicklyto the desiredvalue.We thus
f
G. Performance
need large b to achieve fast convergence. However, b = 1 is
used here in order to respect the stability conditionin (9) and The basic impetus for the design of the A* algorithm is
the throughput efficiency condition in (12). Note that when to exploit the possibility of statistical multiplexing to reduce
conditions change such that the buffer size needs to increase, buffer sizes. Fig. 14(a) illustrates the performance of the A*
the convergence rate is determined by the a parameter. This algorithm when there are 10 downloads and no upload flows.
has a value of a = 10 and thus the algorithm adapts much Comparing with the results in Fig. 9 using fixed size buffers,
morequicklytoincreasethebufferthanto decreaseitandthe we can see that the A* algorithm can achieve significantly
10
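The A* combination rule of Section V-F amounts to taking the minimum of the two limits each observation interval. The sketch below shows this wiring using the hypothetical helpers from the earlier sketches; it is illustrative, not the MadWifi implementation.

```python
def a_star_limit(q_ebdp, q_alt):
    """A* buffer limit: the minimum of the eBDP and ALT limits (a sketch)."""
    return min(q_ebdp, q_alt)

# Illustrative use, once per observation interval, reusing the earlier
# hypothetical helpers ebdp_limit and alt_update:
#   q_ebdp = ebdp_limit(T_serv)            # tracks service-rate changes quickly
#   q_alt  = alt_update(q_alt, idle_time)  # fine-tunes for statistical multiplexing
#   buffer_limit = a_star_limit(q_ebdp, q_alt)
```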
Fig. 16. Performance of the A* algorithm as the wired RTT is varied. Physical layer data and basic rates are 54 and 6 Mbps. Here the AP throughput percentage is the ratio between the throughput achieved using the A* algorithm and the maximum throughput using fixed size buffers. (a) Throughput efficiency; (b) Delay.

Fig. 17. Performance of the A* algorithm when the channel has a BER of 10⁻⁵. Physical layer data and basic rates are 54 and 6 Mbps. Here the AP throughput percentage is the ratio between the throughput achieved using the A* algorithm and the maximum throughput using fixed size buffers. (a) Throughput; (b) Delay.
G. Performance

The basic impetus for the design of the A* algorithm is to exploit the possibility of statistical multiplexing to reduce buffer sizes. Fig. 14(a) illustrates the performance of the A* algorithm when there are 10 downloads and no upload flows. Comparing with the results in Fig. 9 using fixed size buffers, we can see that the A* algorithm can achieve significantly smaller buffer sizes (i.e., a reduction from more than 350 packets to approximately 100 packets) when multiplexing exists. Fig. 15 summarizes the throughput and delay performance of the A* algorithm for a range of network conditions (numbers of uploads and downloads) and physical transmit rates ranging from 1 Mbps to 216 Mbps. This can be compared with Fig. 3. It can be seen that, in comparison with the use of a fixed buffer size, the A* algorithm is able to achieve high throughput efficiency across a wide range of operating conditions while minimizing queuing delays.

In Fig. 16 we further evaluate the A* algorithm when the wired RTTs are varied from 50-300 ms and the number of uploads is varied from 0-10. Comparing these with the results (Figs. 5 and 6) of the eBDP algorithm, we can see that the A* algorithm is capable of exploiting statistical multiplexing where feasible. In particular, significantly lower delays are achieved with 10 download flows whilst maintaining comparable throughput efficiency.

H. Impact of Channel Errors

In the foregoing simulations the channel is error free and packet losses are solely due to buffer overflow and MAC-layer collisions. In fact, channel errors have only a minor impact on the effectiveness of buffer sizing algorithms, as errors play a similar role to collisions with regard to their impact on link utilization. We support this claim first using a simulation example with a channel having i.i.d. noise inducing a bit error rate (BER) of 10⁻⁵. Results are shown in Fig. 17, where we can see a similar trend as in the cases when the medium is error free (Figs. 15(e), 15(f)).

We further confirm this claim in our test-bed implementations, where tests were conducted in 802.11b/g channels and noise related losses were observed. See Section VI for details.

I. DCF Operation

The proposed buffer sizing algorithms remain valid for the DCF since link utilization and delay considerations remain applicable, as does the availability of service time measurements (for the eBDP algorithm) and idle/busy time measurements (for the ALT algorithm). In particular, if the considered buffer is heavily backlogged, then to ensure low delays the buffer size should be reduced. If instead the buffer lies empty, it may be because the current buffer size is too small, which causes the TCP source to back off after buffer overflow. To accommodate more future packets, the buffer size can be increased. Note that increasing the buffer size in this case would not lead to high delays but has the potential to improve throughput. This tradeoff between throughput and delay thus holds for both EDCA and DCF.

However, the DCF allocates roughly equal numbers of transmission opportunities to stations. A consequence of using the DCF is thus that when the number of upload flows increases, the uploads may produce enough TCP ACK packets to keep the AP's queue saturated. In fact, once there are two upload flows, TCP becomes unstable due to repeated timeouts (see [20] for a detailed demonstration), causing the unfairness issue discussed in Section II-C. Therefore, we present results for up to two uploads in Fig. 18, as this is the greatest number of upload flows where TCP with DCF can exhibit stable behavior using both fixed size buffers and the A* algorithm. Note that in this case using the A* algorithm on upload stations can also decrease the delays and maintain high throughput efficiency if their buffers are frequently backlogged.

We also present results when there are download flows only (so the unfairness issue does not exist). Fig. 19 illustrates the throughput and delay performance achieved using the A* algorithm and fixed 400-packet buffers. As in the EDCA cases, we can see that the A* algorithm is able to maintain high throughput efficiency with comparatively low delays.

Note that the DCF is also used in the production WLAN test where the A* algorithm is observed to perform well (see Section I).

J. Rate Adaptation

We did not implement rate adaptation in our simulations. However, we did implement the A* algorithm in the Linux MadWifi driver, which includes rate adaptation algorithms. We tested the A* algorithm in the production WLAN of the Hamilton Institute with the default SampleRate algorithm enabled. See Section I.

VI. EXPERIMENTAL RESULTS

We have implemented the proposed algorithms in the Linux MadWifi driver, and in this section we present tests on an