
WIRELESS COMMUNICATIONS AND MOBILE COMPUTING
Wirel. Commun. Mob. Comput. 2006; 6:865–876
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/wcm.446

Impact of Bluetooth MAC layer on the performance of TCP traffic

Jelena Mišić 1, Ka Lok Chan 2 and Vojislav B. Mišić 1*,†

1 University of Manitoba, Winnipeg, Manitoba, Canada
2 The Hong Kong University of Science and Technology, Hong Kong, China

Summary

Recent updates of the Bluetooth specification have introduced significant changes in the Bluetooth protocol stack, including optional flow control. When the Bluetooth piconet is used to carry TCP traffic, complex interactions between TCP congestion control mechanisms and the data link layer controls of Bluetooth will occur. In this paper, we model the performance of the piconet with TCP traffic, expressed through segment loss probability, round-trip time, and goodput, through both probabilistic analysis and discrete-event simulations. We show that satisfactory performance for TCP traffic may be obtained through proper dimensioning of the Bluetooth architecture parameters. Copyright © 2006 John Wiley & Sons, Ltd.

KEY WORDS: Bluetooth; Bluetooth piconet; TCP; token bucket

1. Introduction

Bluetooth is an emerging standard for wireless personal area networks (WPANs), in which devices are organized in small networks (piconets) with one device acting as the master and up to seven others acting as active slaves [1–3]. The initial use of Bluetooth was simple cable replacement, which is why most of the initial performance analyses of data traffic in Bluetooth focused on scheduling techniques, that is, the manner in which the master visits the slaves. As the official Bluetooth specification does not prescribe any particular scheduling technique, a number of proposals have been made [4–6]. Most of these assume that the traffic consists of individual UDP datagrams or TCP segments. These TCP segments are embedded into IP datagrams and sent to the L2CAP layer, where they are segmented into Bluetooth packets and placed in infinite-size buffers; the baseband layer then takes individual packets and sends them over the radio link.

*Correspondence to: V. B. Mišić, Department of Computer Science, University of Manitoba, Winnipeg, MB R3T 2N2, Canada.
†E-mail: [email protected]

In this case, the TCP congestion window and transmission rate will increase until time-out events occur due to high end-to-end delays. However, such time-outs will be caused by very high loads, more or less independently of the scheduling policy at the baseband layer.

However, the range of possible uses of Bluetooth is growing to include various networking tasks between computers and computer-controlled devices such as PDAs, mobile phones, smart peripherals, and the like. As a consequence, the majority of the traffic over Bluetooth networks will belong to different flavors of the ubiquitous TCP/IP family of protocols.


In order to cater to such applications, the recently adopted version 1.2 of the Bluetooth specification allows each L2CAP channel to operate in Basic L2CAP mode (essentially the L2CAP mode supported by the previous version of the specification [1]), flow control mode, or retransmission mode [2,3]. All three modes offer segmentation, but only the latter two control buffering through protocol data unit (PDU) sequence numbers and control the flow rate by limiting the required buffer space. Additionally, the retransmission mode uses a go-back-n repeat mechanism with an appropriate timer to retransmit missing or damaged PDUs as necessary.
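Since the text above only names the go-back-n mechanism, the following minimal Python sketch may help illustrate it. The class, its parameters, and the tx callback are illustrative assumptions on our part, not the L2CAP retransmission-mode state machine.

class GoBackNSender:
    # Illustrative go-back-n sender; window size is an assumed parameter.
    def __init__(self, window=8):
        self.window = window      # maximum number of unacknowledged PDUs
        self.base = 0             # oldest unacknowledged sequence number
        self.next_seq = 0         # next sequence number to assign
        self.pending = {}         # seq -> PDU, kept until acknowledged

    def send(self, pdu, tx):
        # Refuse to send when the window is full; caller must retry later.
        if self.next_seq >= self.base + self.window:
            return False
        self.pending[self.next_seq] = pdu
        tx(self.next_seq, pdu)    # tx is the lower-layer transmit callback
        self.next_seq += 1
        return True

    def on_ack(self, seq):
        # Cumulative acknowledgment: everything up to seq has been received.
        for s in range(self.base, seq + 1):
            self.pending.pop(s, None)
        self.base = max(self.base, seq + 1)

    def on_timer_expiry(self, tx):
        # Go-back-n: retransmit every outstanding PDU, starting from base.
        for s in range(self.base, self.next_seq):
            tx(s, self.pending[s])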

It is clear that complex interactions between the TCP congestion control and the L2CAP flow control and baseband scheduling mechanisms may be expected in a Bluetooth network carrying TCP traffic. The main focus of the present paper is to investigate those interactions, both analytically and through simulations, and thus provide insight into the performance of TCP data traffic in Bluetooth piconets. We model the segment loss probability, the probability distribution of the TCP round-trip time (including the L2CAP round-trip time), and the TCP sending rate. We also discuss the dimensioning of various parameters of the architecture with regard to the trade-off between end-to-end delay and achievable throughput under multiple TCP connections from different slave devices.

The paper is organized as follows. Section 2 describes the system model and the basic assumptions about the piconet operation under TCP traffic, and discusses some recent related work in the area of performance modeling and analysis of TCP traffic. Section 3 discusses the segment loss probability and the probability distribution of the congestion window size, and presents the analytical results. Sections 4 and 5 discuss some interesting observations regarding the interaction of the many parameters involved and their impact on network performance and fairness. Section 6 concludes the paper.

2. System Model and Related Work

Bluetooth piconets are small, centralized networks with µ ≤ 8 active nodes or devices [2,3]. One of the nodes acts as the master, while the others are slaves. All communications in the piconet take place under the master's control: a slave can talk only when addressed by the master, and only immediately after being addressed by the master. As slaves can only talk to the master, all communications in the piconet, including slave-to-slave ones, must be routed through the master.

We consider a piconet in which individual slaves create TCP connections with each other, so that each TCP connection traverses two hops in the network; there are no TCP connections starting or ending at the master. We focus on TCP Reno [7], which is probably the most widely used variant of TCP at present. We assume that the application layer at slave i (where i = 1, …, µ − 1) sends messages of predefined size at a rate of $\lambda_i$. Two different message sizes will be considered: 480 and 1460 bytes, the latter being quite common since it is the maximum size that fits in an Ethernet packet [8]; we will explicitly mention the packet size used in each of our experiments. Either way, the message will be sent within a single TCP segment, provided the TCP congestion window is not full. The TCP segment is encapsulated in an IP packet with the appropriate header; this packet is then passed on to the L2CAP layer. We assume that the segmentation algorithm produces the minimum number of Bluetooth baseband packets [9]: the smaller packets fit into two DH5 packets, while the longer ones require four DH5 packets and one DH3 packet per TCP segment. We assume that the enhanced data rate facility defined in version 2.0 of the specification is not used, since it is optional and many existing devices do not support it [3]. We also assume that each TCP segment is acknowledged with a single, empty TCP segment carrying only the TCP ACK flag; such an acknowledgment requires one DH3 baseband packet. Therefore, the total throughput per piconet can reach a theoretical maximum of $0.5 \cdot 480 \cdot 8/((2 \cdot 5 + 3) \cdot 625\,\mu s) \approx 240$ kbps in the case of 480-byte packets, and $0.5 \cdot 1460 \cdot 8/((4 \cdot 5 + 2 \cdot 3) \cdot 625\,\mu s) \approx 360$ kbps in the case of 1460-byte ones; the slot count covers the baseband packets of the segment and of its acknowledgment, while the factor 0.5 accounts for the two hops. As we consider that all slaves have identical traffic, each slave can achieve a goodput of at most about 240/i or 360/i kbps, respectively.
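As a quick sanity check, the following Python fragment reproduces this arithmetic; the slot counts and the two-hop factor of 0.5 are taken directly from the text above.

SLOT = 625e-6  # duration of one baseband slot, in seconds

def max_goodput_bps(payload_bytes, slots_per_exchange):
    # One exchange = the baseband packets of a TCP segment plus its TCP ACK;
    # the factor 0.5 accounts for the two hops (slave -> master -> slave).
    return 0.5 * payload_bytes * 8 / (slots_per_exchange * SLOT)

# 480-byte segments: 2 DH5 (data) + 1 DH3 (TCP ACK) = 13 slots
print(max_goodput_bps(480, 2 * 5 + 3) / 1e3)        # ~236 kbps, quoted as ~240

# 1460-byte segments: 4 DH5 + 1 DH3 (data) + 1 DH3 (TCP ACK) = 26 slots
print(max_goodput_bps(1460, 4 * 5 + 2 * 3) / 1e3)   # ~359 kbps, quoted as ~360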

Our analysis setup implements the L2CAP flow control mode through a token bucket [10], with a data queue of size S and a token queue of size W, wherein tokens arrive at a constant rate tb. Furthermore, there is an outgoing queue of size L for baseband data packets, from which the data packets are serviced by the scheduler using the chosen intra-piconet scheduling policy. Note that the devices acting as slaves have one outgoing, or uplink, queue, whilst the device operating as the piconet master maintains several downlink queues, one per active slave, in parallel. This implementation is shown in Figure 1.
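The following Python sketch captures the token bucket filter just described. The parameter names S, W, and tb match the text; the discrete-time refill loop and the queue bookkeeping are simplifications of ours, not part of the specification.

class TokenBucket:
    def __init__(self, S, W, tb):
        self.S = S              # data queue size, in segments
        self.W = W              # token queue (bucket) size
        self.tb = tb            # token arrival rate, tokens per second
        self.data = []          # segments waiting for a token
        self.tokens = 0.0       # currently available tokens

    def refill(self, dt):
        # Tokens arrive at a constant rate tb and accumulate up to W.
        self.tokens = min(self.W, self.tokens + self.tb * dt)

    def enqueue(self, segment):
        # A segment arriving to a full data queue is lost at this filter.
        if len(self.data) >= self.S:
            return False
        self.data.append(segment)
        return True

    def release(self):
        # Segments holding a token move on to the baseband output queue.
        out = []
        while self.data and self.tokens >= 1.0:
            self.tokens -= 1.0
            out.append(self.data.pop(0))
        return out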

The parameters of this architecture, namely the sizes of the queues and the token rate of the token bucket, may be adjusted in order to achieve the desired delay-throughput trade-off for each slave.


[Figure 1. Architectural blocks of the Bluetooth L2CAP layer with our implementation of the flow control function (control paths not shown for clarity). Applications using TCP/IP pass service data units (SDUs) through segmentation/reassembly to the token-bucket flow control (data queue of size S, token queue of size W, token rate tb); packets then pass through encapsulation and scheduling into the output queue of size L at the HCI/baseband level, and through fragmentation/recombination to the RF interface.]

This scheme can limit the throughput of a slave through the token rate, while simultaneously limiting the length of the burst of baseband packets submitted to the network. Depending on the token buffer size and the traffic intensity, overflows of the token bucket queues manifest themselves through TCP loss events, namely three duplicate acknowledgments or time-outs. In the former case, the size of the congestion window is halved and TCP continues working in its additive increase, multiplicative decrease (AIMD) phase. In the latter case, the congestion window shrinks to one, and TCP enters its slow-start routine.
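For readers less familiar with TCP Reno, the two loss reactions just described can be summarized in a few lines of Python. This is an illustrative sketch of the window update rules, not a complete Reno implementation.

def reno_on_loss(cwnd, ssthresh, event):
    # 'dupack' models three duplicate acknowledgments, 'timeout' a time-out.
    ssthresh = max(cwnd / 2.0, 2.0)
    if event == "dupack":
        cwnd = ssthresh         # window halved, stay in the AIMD phase
    elif event == "timeout":
        cwnd = 1.0              # window collapses, slow start follows
    return cwnd, ssthresh

def reno_on_ack(cwnd, ssthresh):
    if cwnd < ssthresh:
        return cwnd + 1.0       # slow start: one segment per acknowledgment
    return cwnd + 1.0 / cwnd    # AIMD: roughly one segment per RTT

The 1/cwnd growth term is the same 1/w increment used in the window analysis of Section 3.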

Of course, the performance of Bluetooth networks is also affected by the intra-piconet scheduling policy.

As the master cannot know the status of all slaves' queues at any given time, simpler policies, such as limited or exhaustive service, are to be preferred [4]. Both limited and exhaustive service are limiting cases of a family of schemes known as E-limited service [11]. In this polling scheme, the master and a slave exchange up to M packets, or fewer if both outgoing queues are emptied, before the master moves on to the next slave (the limiting cases M = 1 and M = ∞ correspond to the limited and exhaustive service policies, respectively). Our previous work indicates that E-limited service offers better performance than either limited or exhaustive service [12]. E-limited service does not require that the master know the status of the slaves' uplink queues, and it does not waste bandwidth when there are no data packets to exchange. Furthermore, it provides fairness by default, since the limit on the number of frames exchanged with any single slave prevents individual slaves from monopolizing the network. Of course, the TCP protocol itself provides mechanisms to regulate fairness among TCP connections from one slave, but the bandwidth that can be achieved in Bluetooth networks is too small to warrant a detailed analysis in this direction.
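One polling round of E-limited service fits in a few lines; the sketch below is illustrative, with per-slave queues modeled simply as Python deques.

from collections import deque

def e_limited_round(uplink_queues, M):
    # The master visits each slave in turn and exchanges up to M frames,
    # or fewer if the queue empties first. M = 1 is limited (round-robin)
    # service; letting M grow without bound approaches exhaustive service.
    exchanged = []
    for slave, queue in uplink_queues.items():
        for _ in range(M):
            if not queue:
                break                       # nothing left: move on
            exchanged.append((slave, queue.popleft()))
    return exchanged

# Example: two slaves with pending baseband packets, M = 3
queues = {1: deque(["p1", "p2", "p3", "p4"]), 2: deque(["q1"])}
print(e_limited_round(queues, M=3))   # slave 1 sends 3 frames, slave 2 sends 1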

The performance of TCP traffic, in particular the steady-state send rate of bulk-transfer TCP flows, has recently been assessed as a function of the segment loss rate and the round-trip time (RTT) [13]. The system is modeled at the ends of rounds that are equal to the round-trip times and during which a number of segments equal to the current size of the congestion window is sent. The model assumes that both the RTT and the loss probability can be obtained by measurement, and derives the average value of the congestion window size.

The same basic model, but with improved accuracy with regard to the latency and steady-state transmission rate for the TCP Tahoe, Reno, and SACK variants, has been used in Reference [14]. The authors have also studied the impact of correlated segment losses under the three TCP versions.

In our approach, we model the system at the moments of acknowledgment arrivals and time-out events, instead of at the ends of successive rounds as in both papers mentioned above. In this manner, we are able to obtain more accurate performance information, and to derive the TCP congestion window size and throughput in both non-saturated and saturated operating regimes. In addition, the blocking probabilities of all the queues along the path are known, and each packet loss can be treated independently.

We note that both of the papers mentioned above consider the RTT as a constant due to the large number of hops, whereas in our work there are only two hops, and the RTT must be modeled as a random variable that depends on the current congestion window size.


Recently, the additive increase, multiplicative decrease (AIMD) technique was applied to control the flow over a combined wireless and wireline path, using a concept similar to the token bucket filter [15]. It is assumed that packets can be lost over the wireless link only, and that all packet transmission times take exactly one slot; the system is modeled at the ends of the packet transmission times. This approach is orthogonal to ours, since we assume that wireless channel errors are handled through forward error correction (FEC), and focus on finite-queue losses at the data link layer instead.

3. Analysis of the TCP Window Size

Under the assumptions outlined above, the probability generating function (PGF) for the burst size of data packets at the baseband layer is $G_{bd}(z) = z^5$, and the corresponding PGF for the acknowledgment packet burst size is $G_{ba}(z) = z$. The PGF for the size distribution of baseband data packets is $G_{pd}(z) = 0.8z^5 + 0.2z^3$, while the PGF for the size distribution of acknowledgment packets is $G_{pa}(z) = z^3$. Therefore, the PGF for the packet size is $G_p(z) = 0.5G_{pd}(z) + 0.5G_{pa}(z)$, and the mean packet size is $L_{ad} = 0.5G'_{pd}(1) + 0.5G'_{pa}(1) = 3.8$ (all time variables are expressed in units of time slots of the Bluetooth clock, $T = 625\,\mu s$). We will also assume that the receiver advertised window is larger than the congestion window at all times.

The paths traversed by the TCP segment sent from slave i to slave j, and by the corresponding acknowledgment sent in the opposite direction, are shown schematically in Figure 2. A TCP segment or acknowledgment can be lost if any of the buffers along the path is full (and consequently blocks the reception of the packet in question). Probability distributions of the token bucket queue lengths and the outgoing buffer queue lengths at arbitrary times, as well as the corresponding blocking probabilities for TCP segments ($P_{Bd}$, $P_{BdL}$) and TCP acknowledgments ($P_{Ba}$, $P_{BaL}$), can be calculated according to the approach outlined in Reference [16]. Segments and acknowledgments also pass through the outgoing downlink queue at the master. However, we assume that this queue is much longer than the corresponding queues at the slaves, so the blocking probabilities $P_{BMd}$, $P_{BMa}$ are much smaller and may safely be ignored in calculations. Then, the total probability $p$ of losing a TCP segment or its acknowledgment is

$$p = 1 - (1 - P_{Bd})(1 - P_{BdL})(1 - P_{BMd})(1 - P_{Ba})(1 - P_{BaL})(1 - P_{BMa}) \qquad (1)$$

[Figure 2. The paths of the TCP segment and its acknowledgment, together with the blocking probabilities and LSTs of the delays in the respective queues (adapted from Reference [16]). TCP segments from the host application at slave 1 pass through the L2CAP flow control (token bucket) and the output (uplink) queue, are polled by the master (E-limited), traverse the master's output (downlink) queue to slave 2, and the TCP acknowledgments return along the symmetric path.]


We will model the length of the TCP window at the moments of acknowledgment arrivals and segment loss events. Since a TCP window of size $w$ grows by $1/w$ after each successful acknowledgment, it is not possible to model the system using probability generating functions; we have to find the probability density function instead. This probability distribution is a hybrid function, represented by the probability mass $w_1$ of the window size being 1, and by the continuous probability density function $w(x)$ for window sizes from 2 to $\infty$. We also need to determine the probability density function $t(x)$ of the congestion window threshold at the moments of acknowledgment arrival or packet loss. We will represent the probability of a time-out event as $P_{to} = p(1 - (1 - p)^3)$, and the probability of loss detected through three duplicate acknowledgments as $P_{td} = p(1 - p)^3$. In order to simplify the derivation, we assume that the window size immediately after the change of state is $x$; if the event that changed the state was a positive acknowledgment, the window size before this event was $x - 1/x$, so the probability density function of the congestion window size before the acknowledgment can be approximated as $w(x - 1/x) \approx w(x) - w'(x)/x$. These probability density functions can be described by the following equations:

$$w_1 = P_{to} + P_{td} \int_{2}^{3} w(x)\,dx \qquad (w = 1)$$

$$t(x) = t(x)(1 - p) + w(2x)\,p \qquad (w \ge 2)$$

$$w(x) = \left( w(x) - \frac{w'(x)}{x} \right)(1 - p) \int_{0}^{x - 1/x} t(y)\,dy + w(0.5x)(1 - p) \int_{0.5x}^{\infty} t(y)\,dy + w(2x)\,P_{td} \qquad (2)$$

where $\int_0^{x-1/x} t(y)\,dy$ denotes the probability that the congestion window size $x - 1/x$ before the acknowledgment is above the threshold size (i.e., that TCP is working in the AIMD mode), while $\int_{0.5x}^{\infty} t(y)\,dy$ is the probability that the current congestion window size $0.5x$ is lower than the threshold (i.e., that the system is in the slow start mode). The system (2) of integro-differential equations could be solved numerically, but sufficient accuracy may be obtained through the following approximation:

$$w(x) = C_1 e^{-0.25 p x^2 (1 + 3p - 3p^2 + p^3)/(1 - p)} \qquad (3)$$

where the normalization constant $C_1$ is determined from the condition $1 - w_1 = \int_2^{\infty} w(x)\,dx$. Furthermore, the mean TCP window size $\bar{w}$ and the mean threshold size $\bar{t}$ can be calculated as

$$\bar{w} = w_1 + \int_2^{\infty} x\,w(x)\,dx, \qquad \bar{t} = \int_1^{\infty} x\,w(2x)\,dx \qquad (4)$$

since $t(x) = w(2x)$.
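The approximation (3) is straightforward to evaluate numerically. The sketch below computes $C_1$ and the mean window size for an assumed loss probability p and an assumed mass w1; in the actual analysis, both would follow from Equations (1) and (2).

import numpy as np

p = 0.01                     # segment loss probability (assumed, from eq. (1))
w1 = 0.001                   # probability mass at window size 1 (assumed)

a = 0.25 * p * (1 + 3*p - 3*p**2 + p**3) / (1 - p)
x = np.linspace(2, 400, 40000)
dx = x[1] - x[0]
shape = np.exp(-a * x**2)                    # unnormalized w(x) from eq. (3)

C1 = (1 - w1) / (shape.sum() * dx)           # from 1 - w1 = integral of w(x)
w = C1 * shape

mean_window = w1 + (x * w).sum() * dx        # first part of eq. (4)
print(C1, mean_window)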

As mentioned above, it is possible to find the probability distribution of delays through all the token bucket filters and the baseband buffers along the path of the TCP segment and its acknowledgment [16]. The delay is calculated from the moment the TCP segment enters the token bucket filter at the source device until the acknowledgment is received by that same device, and it is equal to the sum of delays in all the buffers of the Bluetooth protocol stack along the path. We will refer to this delay as the L2CAP round-trip delay; its probability distribution can be described through the corresponding Laplace–Stieltjes transform (LST)

$$D^*_{L2CAP}(s) = D^*_{tb,d}(s)\,D^*_{L,d}(s)\,D^*_{LM,d}(s)\,D^*_{tb,a}(s)\,D^*_{L,a}(s)\,D^*_{LM,a}(s) \qquad (5)$$

We can also calculate the probability distribution of the number of outstanding (unacknowledged) segments at the moments of segment acknowledgment arrivals. This number for an arbitrary TCP segment is equal to the number of segment arrivals during the L2CAP round-trip time of that segment. The PGF for the number of segment arrivals during the L2CAP round-trip time can be calculated, using the approach from Reference [11], as

$$A(z) = D^*_{L2CAP}(\lambda_i - z\lambda_i)$$

and the probability of $k$ segment arrivals during that time is

$$a_k = \frac{1}{k!}\,\frac{d^k}{dz^k} D^*_{L2CAP}(\lambda_i - z\lambda_i)\Big|_{z=0}$$

Using the PASTA property (Poisson arrivals see time averages), we conclude that an arriving segment will see the same probability distribution of outstanding segments as an incoming acknowledgment. Therefore, the probability $P_t$ that the arriving segment will find a free token and leave the TCP buffer immediately is


$$P_t = \sum_{k=1}^{\infty} a_k \int_{k+1}^{\infty} w(x)\,dx \qquad (6)$$

and the probability that the segment will be stored in the TCP buffer is $1 - P_t$. The rate at which TCP sends segments to the token bucket filter, which we will refer to as the TCP send rate $R_i$, has two components: one is contributed by the segments that find acknowledgments waiting in the token queue, and thus can leave the TCP buffer immediately; the other comes from the segments that have to wait in the TCP buffer until an acknowledgment (for an earlier segment) arrives. The TCP send rate thus satisfies $R_i = P_t \lambda_i + R_i (1 - p)(1 - P_t)$, which may be solved to give

$$R_i = \lambda_i \frac{P_t}{P_t + p - p P_t} \qquad (7)$$

As both components of the previous expression can be modeled as Poisson processes, the TCP sending process can also be modeled as a Poisson process.
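The chain from arrivals to send rate can be illustrated numerically. The sketch below assumes an exponential L2CAP round-trip time with mean 1/mu, so that $D^*(s) = \mu/(\mu + s)$ and the number of Poisson arrivals per RTT is geometric; all numeric values are illustrative, not taken from the analysis.

import numpy as np

lam, mu = 0.05, 0.2          # arrival rate and 1/(mean RTT), per slot (assumed)
p = 0.01                     # total loss probability from eq. (1) (assumed)
w1 = 0.001                   # mass at window size 1 (assumed)

# Window pdf from the approximation (3), as in the previous sketch
a_coef = 0.25 * p * (1 + 3*p - 3*p**2 + p**3) / (1 - p)
x = np.linspace(2, 400, 40000)
dx = x[1] - x[0]
w = np.exp(-a_coef * x**2)
w *= (1 - w1) / (w.sum() * dx)

# Exponential RTT: a_k = rho^k (1 - rho) with rho = lam / (mu + lam)
rho = lam / (mu + lam)
def a_k(k):
    return rho**k * (1 - rho)

# Eq. (6): Pt = sum over k of a_k * P{window size > k + 1}
Pt = sum(a_k(k) * w[x >= k + 1].sum() * dx for k in range(1, 300))

# Eq. (7): TCP send rate
Ri = lam * Pt / (Pt + p - p * Pt)
print(Pt, Ri)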

In order to calculate the delay through the TCP buffer, we need to determine the probability distribution of the number of segments buffered by TCP due to the insufficient size of the congestion window (we assume that the TCP buffer has infinite capacity). The probability that $k$ segments are buffered is

$$q_k = \sum_{i=1}^{\infty} a_{i+k} \int_{i}^{i+1} w(x)\,dx + a_{k+1} w_1 \qquad (8)$$

and the LST for the delay through the TCP buffer is

$$D^*_{TCP}(s) = \sum_{k=0}^{\infty} q_k e^{-sk} \qquad (9)$$

The entire round-trip time of the TCP segment then becomes

$$RTT^*(s) = D^*_{TCP}(s)\,D^*_{L2CAP}(s) \qquad (10)$$

4. Simulation Results for TCP Performance in the Bluetooth Piconet

In order to verify the results of the queueing theoretic analysis, we have built a simulator of a Bluetooth piconet using the ns-2 network simulator [17] with Bluehoc extensions [18]. The network was compliant with version 1.2 of the Bluetooth specification [2], as the more recent specification 2.0, which allows higher data rates [3], is not supported by all commercially available devices. We have considered a piconet with two slaves only, having two simultaneous but independent TCP connections: one from slave 1 to slave 2, and the other from slave 2 to slave 1. This setup, while admittedly simplistic, offers a much better possibility to observe, identify, and interpret different interactions than would be possible in a piconet with more members.

In the first set of experiments, the independent variables were the token bucket buffer size S (expressed in TCP segments) and the offered load per connection. The incoming message size was set to 1460 bytes. Other parameters had fixed values, as follows:

1. The value of the scheduling parameter was M = 5, so that the entire TCP segment can be sent in one piconet cycle, that is, during a single visit of the master to the source slave.
2. The token rate was fixed to tb = 250 kbps.
3. The token buffer capacity was W = 3 KB.
4. The output rate of the token bucket queue was set to max rate = 1 Mbps (note that the theoretical maximum data rate supported in the Bluetooth piconet is only about 1 Mbps for versions 1.1 and 1.2 of the specification).
5. The outgoing (uplink) queue size was L = 20 packets.

[Figure 3. TCP performance as the function of the buffer size S, for offered loads of 80 to 240 kbps and token bucket buffer sizes S of 5 to 25. (a) Round-trip time (RTT); (b) goodput.]

The values of TCP goodput and RTT obtained through simulation are shown in Figure 3, with the token bucket buffer size varied from 5 to 25 and the offered load varied from 80 to 240 kbps. We observe that the goodput reaches the physical limit for the given segment size, approximately 180 kbps per connection, only for S = 25; at the same time, the round-trip delay reaches 400 ms.

The mean congestion window size and the mean slow start threshold are shown in Figure 4, again for token bucket buffer sizes S from 5 to 25 and offered loads from 80 to 240 kbps. The mean congestion window size grows with S, which is expected since larger S means fewer buffer losses. Furthermore, the congestion window grows with the segment arrival rate under very low loads, since there are not enough packets to expand the window size. For moderate and high offered loads, the average window size experiences a sharp decrease with the offered load. This is consistent with the analytical results, since the packet loss probability is directly proportional to the offered load.

[Figure 4. TCP performance as the function of the buffer size S. (a) Mean size of the congestion window (segments); (b) mean value of the slow start threshold (segments).]

[Figure 5. TCP performance as the function of the buffer size S. (a) Time-out rate (time-outs per second); (b) fast retransmission rate.]

Finally, Figure 5 shows the rate of TCP time-outs and fast retransmissions as functions of the offered load and the buffer size S. Similar conclusions regarding parameter values can be drawn.

5. Interaction of Different Parameters and its Impact on Performance

In order to demonstrate the interaction between TCP congestion control mechanisms and Bluetooth data link layer protocols, we have plotted the time dependencies of different performance parameters for a number of sample simulation runs.


[Figure 6. Pertaining to the issue of fairness among TCP connections when 1-limited scheduling is used for intra-piconet polling; each panel plots RTT (msec), total TCP goodput (kbps), per-slave TCP goodput (kbps), and the congestion window sizes of both slaves against time (sec). (a) Token bucket queue size 1 segment, token rate 100 kbps; (b) token bucket queue size 1 segment, token rate 150 kbps; (c) token bucket queue size 3 segments, token rate 100 kbps; (d) token bucket queue size 3 segments, token rate 150 kbps; (e) token bucket queue size 10 segments, token rate 100 kbps; (f) token bucket queue size 10 segments, token rate 150 kbps.]


Performance parameters measured include the TCP goodput, both overall and per-slave (or, more precisely, per-connection), the size of the congestion window for each slave, and the round-trip time (RTT); the independent variables were the token bucket buffer size S and the token rate tb. The resulting diagrams, obtained when the 1-limited polling discipline (i.e., M = 1) is used for intra-piconet polling, are shown in Figure 6.

As in the previous experiment, the piconet had two slaves and two simultaneous TCP connections. However, the incoming message size was set to 480 bytes, which allows for a maximum achievable goodput of approximately 240 kbps. The baseband buffer size L was set to 10 segments in all simulation experiments. The total offered load was modeled as a Poisson process, and its average rate was increased by 50 kbps every 50 s.

We will now explain the variation of the performance parameters using Figure 6(a), where the token bucket buffer size S is one segment only and the token rate is 100 kbps.

5.1. First 50 Seconds

In this period, the average TCP segment arrival rate per connection is 25 kbps. Since this rate is much smaller than the token bucket rate, all TCP segments are passed on to the baseband buffer without any loss. The average piconet cycle time is around 16 baseband slot times, which is much larger than the average segment inter-arrival time. Due to the variability of the Poisson arrival process, only a small number of segments will be lost through overflow of the baseband buffer. Such rare losses will be detected through three duplicate acknowledgments, and the connection that experiences such a loss will be penalized by having its congestion window size halved. Since the impact of the various parameters does not show properly in this time period, it is not depicted in the diagrams.

5.2. Next 50 Seconds

In this time period, the average TCP segment arrival rate per connection is 50 kbps. In this load range, the first time-out loss events begin to occur, and the connections affected by them are pushed into slow start mode. However, due to the relatively large segment arrival rate, the slow start mechanism will cause frequent buffer overflows. As a result, the congestion window size of either connection will fluctuate around the values 1, 2, and 4. Depending on the pattern of segment arrivals, some connections may suffer repeated losses and thus be unable to exit the slow start regime. This is the cause of unfairness among the connections, since connections with larger congestion window sizes will achieve larger throughput.

5.3. Period Between 150 and 200 Seconds

In this period, the average TCP segment arrival rate is equal to 75 kbps. Compared to the token rate of 100 kbps, this means that losses at the token bucket queue will be quite frequent. Moreover, losses at the baseband buffer L will start to occur. The particular pattern of the arrival process might push some connections into repeated slow starts, which will again result in unfair throughput.

5.4. Period After 200 Seconds

In this period, the average segment arrival rate is the same as the token rate, and thus close to the maximum connection throughput. In this regime, time-out losses are inevitable. As a result, the maximum goodput is only around 90 kbps, which is less than 50% of the maximum achievable value for this segment size.

If we increase the token rate to 150 kbps, as shown in Figure 6(b), losses at the token bucket filter are avoided for average TCP segment arrival rates of up to 100 kbps. This improves fairness, since packets can be lost under the same conditions for both connections, and the congestion window sizes will be more uniform across time for both connections. Consequently, the achieved goodput is close to the maximum theoretical value.

When the token buffer size increases to three segments while the token bucket rate remains 100 kbps, as shown in Figure 6(c), fairness improves significantly. This is due to the decoupling of time-out events and the variability of the TCP segment arrival process. However, the maximum goodput is still severely clipped due to the insufficient token rate. One would expect that a simple increase of the token rate to 150 kbps would restore the goodput, as was observed for the smaller token bucket buffer size (diagrams in the topmost row of Figure 6). However, Figure 6(d) shows improvement only for average segment arrival rates up to and including 75 kbps (i.e., in the first 150 s), where the average goodput per connection indeed approaches the average segment arrival rate. At higher segment arrival rates, frequent time-out loss events push the TCP congestion control into the slow start regime, and the throughput decreases accordingly.

When the token bucket buffer size is increased to 10 segments while the token rate remains 100 kbps, as shown in Figure 6(e), the congestion windows of both connections grow smoothly in the first 100 s or so, that is, up to average segment arrival rates of 50 kbps per TCP connection.


[Figure 7. Pertaining to the issue of fairness among TCP connections when 3-limited scheduling is used for intra-piconet polling; each panel plots RTT (msec), total TCP goodput (kbps), per-slave TCP goodput (kbps), and the congestion window sizes of both slaves against time (sec). (a) Token bucket queue size 1 segment, token rate 100 kbps; (b) token bucket queue size 1 segment, token rate 150 kbps; (c) token bucket queue size 3 segments, token rate 100 kbps; (d) token bucket queue size 3 segments, token rate 150 kbps; (e) token bucket queue size 10 segments, token rate 100 kbps; (f) token bucket queue size 10 segments, token rate 150 kbps.]


However, once the segment arrival rate reaches 75 kbps, duplicate acknowledgment and time-out loss events begin to appear. When the arrival rate reaches 100 kbps per connection, loss events are mostly due to time-outs, which essentially clips the goodput.

Moreover, due to the variability of the arrival process, spurious synchronization of arrival and loss events can occur. This effect may lead to unfairness, which lowers the throughput even more. Due to such coupling, at the token rate of 100 kbps the goodput for the token buffer size of 10 segments, Figure 6(e), is lower than the corresponding goodput for the token buffer size of three segments, Figure 6(c).

Finally, Figure 6(f) shows the performance for the token buffer size of 10 segments and token rate of 150 kbps. Under these conditions, loss events do not occur until the average segment arrival rate per connection reaches 100 kbps. In fact, in the first 200 s (where the arrival rate per connection is below 100 kbps), only one duplicate-acknowledgment event per connection can be observed. In subsequent time periods, a mixture of duplicate acknowledgments and time-out events occurs, which allows the throughput to approach its maximum value.

Similar effects may be observed in Figure 7, which shows the performance of the network when the scheduling parameter has the value M = 3; the other variables were assigned the same values as in the previous experiment. Note that, in this setting, the entire TCP segment can be transferred from the slave to the master (or vice versa) within a single piconet polling cycle, which was not possible in the previous experiment.

When the token buffer size is one segment and the token rate is 100 kbps, as shown in Figure 7(a), the goodput is clipped at about 140–145 kbps, which is well below the maximum theoretical rate. This is caused by the insufficient token rate. At the same time, fairness is reasonably good, though not ideal, due to the decoupling of time-out events and the variability of the TCP segment arrival process.

When the token rate is increased to 150 kbps, Figure 7(b), the performance actually worsens because the small size of the token buffer becomes the limiting factor. Frequent time-out loss events keep the congestion windows small and cause large fluctuations in goodput for one or both connections.

Increasing the token buffer size to three segments does not bring much improvement in goodput if the token rate is still only 100 kbps, as can be seen from Figure 7(c). Fairness, on the other hand, is much improved compared to the previous case.

Significant improvements in goodput can be noticed only when the token rate is increased as well, as in Figure 7(d). While both connections are essentially tied down in the slow-start regime (with congestion window sizes below 10) due to frequent time-out events, the token buffer size and token rate are sufficiently high to allow the piconet to achieve close to 200 kbps of goodput.

When the buffer size increases, as in Figure 7(e) and (f), losses tend to decrease, but at the same time the RTT increases, since each packet has to wait for all the packets that precede it. The limiting factor in this case is the token rate: at the lower token rate of 100 kbps, the goodput gets clipped to values much lower than the theoretical maximum, in fact only about half of that value in Figure 7(e). Long-term fairness is reasonably good, since each flow manages to get out of the slow-start regime, temporarily at least, at the expense of the other flow. The large fluctuations of individual goodput per connection are due to the spurious synchronization effects explained above.

At the higher token rate of 150 kbps, Figure 7(f), both congestion window sizes can increase without major time-out loss events until the average segment arrival rate reaches 200 kbps. Subsequent losses reduce the congestion window sizes, yet the piconet is able to achieve a goodput value much closer to the theoretical maximum. Fairness is also rather good, despite some initial imbalance caused by the switch to the slow-start regime.

6. Conclusion

In this paper, we have analyzed the performance of a Bluetooth piconet carrying TCP traffic, assuming TCP Reno is used. We have analytically modeled the segment loss probability, the TCP send rate, and the probability density functions of the congestion window size and the round-trip time. We have investigated the impact of the token bucket buffer size, token rate, outgoing baseband buffer size, and scheduling parameter on TCP performance indicators such as goodput, round-trip time, congestion window size, time-out rate, and fast retransmission rate. Our results show that buffer sizes of around 25 baseband packets are sufficient for achieving maximal goodput. Scheduling should be confined to the E-limited policy, with the scheduling parameter set to the value corresponding to the number of baseband packets in a TCP segment. Finally, the token rate should be set to a value at least 50% higher than the portion of the total piconet throughput dedicated to the particular slave.


References

1. Bluetooth SIG. Specification of the Bluetooth System. Version 1.1, February 2001.
2. Bluetooth SIG. Specification of the Bluetooth System. Version 1.2, November 2003.
3. Bluetooth SIG. Draft Specification of the Bluetooth System. Version 2.0, November 2004.
4. Capone A, Kapoor R, Gerla M. Efficient polling schemes for Bluetooth picocells. In Proceedings of IEEE International Conference on Communications ICC 2001, Vol. 7, Helsinki, Finland, June 2001; 1990–1994.
5. Das A, Ghose A, Razdan A, Saran H, Shorey R. Enhancing performance of asynchronous data traffic over the Bluetooth wireless ad-hoc network. In Proceedings of the Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies IEEE INFOCOM 2001, Vol. 1, Anchorage, AK, April 2001; 591–600.
6. Kalia M, Bansal D, Shorey R. MAC scheduling and SAR policies for Bluetooth: a master driven TDD pico-cellular wireless system. In Proceedings of the Sixth IEEE International Workshop on Mobile Multimedia Communications (MOMUC'99), San Diego, CA, November 1999; 384–388.
7. Jacobson V. Modified TCP congestion avoidance algorithm. Note posted on the end2end-interest mailing list, available at ftp://ftp.isi.edu/end2end/end2end-interest-1990.mail, April 1990.
8. Bluetooth SIG. Bluetooth network encapsulation protocol (BNEP) specification. Technical report, Revision 0.95a, June 2001.
9. Kalia M, Bansal D, Shorey R. Data scheduling and SAR for Bluetooth MAC. In Proceedings of the IEEE 51st Vehicular Technology Conference VTC2000-Spring, Vol. 2, Tokyo, Japan, May 2000; 716–720.
10. Bertsekas DP, Gallager R. Data Networks (2nd edn). Prentice-Hall: Englewood Cliffs, NJ, 1991.
11. Takagi H. Queueing Analysis, Vol. 1: Vacation and Priority Systems. North-Holland: Amsterdam, The Netherlands, 1991.
12. Mišić J, Chan KL, Mišić VB. Admission control in Bluetooth piconets. IEEE Transactions on Vehicular Technology 2004; 53(3): 890–911.
13. Padhye J, Firoiu V, Towsley DF, Kurose JF. Modeling TCP Reno performance: a simple model and its empirical validation. IEEE/ACM Transactions on Networking 2000; 8(2): 133–145.
14. Sikdar B, Kalyanaraman S, Vastola KS. Analytical models for the latency and steady-state throughput of TCP Tahoe, Reno, and SACK. IEEE/ACM Transactions on Networking 2003; 11(6): 959–971.
15. Cai L, Shen X, Mark JW. Delay analysis for AIMD flows in wireless/IP networks. In Proceedings of Globecom'03, Vol. 6, San Francisco, CA, December 2003; 3221–3225.
16. Mišić J, Mišić VB. Performance Modeling and Analysis of Bluetooth Networks: Polling, Scheduling, and Traffic Control. CRC Press: Boca Raton, FL, 2006.
17. The Network Simulator ns-2. Software and documentation available from http://www.isi.edu/nsnam/ns/, 2003.
18. Bluehoc. The Bluehoc Open-Source Bluetooth Simulator, version 3.0. Software and documentation available from http://www-124.ibm.com/developerworks/opensource/bluehoc/, 2003.

Authors’ Biographies

Jelena Mišić is associate professor of Computer Science at the University of Manitoba, Winnipeg, Manitoba, Canada. Her previous posts include the Hong Kong University of Science and Technology, Hong Kong, and Institute Vinča, Belgrade, Serbia, Yugoslavia. She received her Ph.D. in Computer Engineering in 1993 from the University of Belgrade, Serbia, Yugoslavia. Her current research interests include wireless networks, sensor networks, and mobile computing. She is a member of IEEE.

Ka Lok Chan received his M.Phil. degree in Computer Science at the Hong Kong University of Science and Technology. His research interests include modeling and performance evaluation of wireless networks.

Vojislav B. Mišić received his Ph.D. in Computer Science in 1993 from the University of Belgrade, Serbia, Yugoslavia. He is currently associate professor of Computer Science at the University of Manitoba, Winnipeg, Manitoba, Canada. His research interests include modeling and performance evaluation of wireless networks, but also systems and software engineering. He is a member of ACM, AIS, and IEEE.
