NCCU.MCLab

A New TCP Congestion Control Mechanism over Mobile Ad Hoc Networks by Router-Assisted Approach

Student: Ho-Cheng Hsiao
Advisor: Yao-Nan Lien
2006.10.5


MCLab@NCCU

Outline

• Introduction

• Related work

• Our router-assisted approach

• Performance evaluation

• Conclusion


Introduction

• TCP congestion control
  – Trial-and-error flow control used to manage congestion
    • Connection-oriented
    • End-to-end
    • Reliable
  – Slow Start
  – Congestion Avoidance
  – Fast Retransmit and Fast Recovery

Introduction

[Figure: congestion window over time (RTT) — Slow Start up to the threshold, then Congestion Avoidance; the threshold is reset after a timeout or 3 duplicate ACKs]

Introduction

• Objectives of TCP congestion control
  – Utilize resources as much as possible
  – Dissolve congestion
  – Avoid congestive collapse
    • Congestion generally occurs at a bottleneck point
• However, the nature of MANETs exposes the weaknesses of TCP
  – Lack of infrastructure
  – Unstable medium
  – Mobility
  – Limited bandwidth
  – Difficulty distinguishing losses due to congestion from losses due to link failure

Introduction

• Analysis of TCP problems over MANET (1)
  – Slow Start
    • Slow Start takes several RTTs to probe the maximum available bandwidth in a MANET
    • Connections spend most of their time in the Slow Start phase due to frequent timeouts
      – up to 40% of connection time
    • Slow Start tends to generate too many packets, overloading the network
• Slow Start always overshoots and causes periodic packet loss

Introduction

• Analysis of TCP problems over MANET (2)
  – Loss-based congestion indication
    • In a regular network, a packet loss event implies congestion
      – signaled by three duplicate ACKs or a timeout
    • Packet losses in a MANET can be classified into
      – Congestion loss
      – Random loss
        » Link failure or route change (in most cases)
        » Transmission error
• Not every loss is due to congestion

Introduction

• Analysis of TCP problems over MANET (3)
  – AIMD (Additive Increase, Multiplicative Decrease)
    • Additive Increase
      – Slow convergence to the full available bandwidth
    • Multiplicative Decrease
      – Unnecessary decrease of the congestion window when packet losses are detected
    • AIMD handles congestion well in a regular network
      » but it is not a good scheme in a MANET!
• Avoiding unnecessary congestion window drops is the key to better performance
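The AIMD behavior criticized above can be sketched in a few lines. This is an illustrative sketch, not the exact NewReno code; the function name and the `alpha`/`beta` defaults (1-segment increase, halving on loss) are the textbook values.

```python
# Minimal AIMD sketch: additive increase per RTT, multiplicative
# decrease on a loss event. Names and constants are illustrative.

def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """Return the next congestion window (in segments)."""
    if loss:
        return max(1.0, cwnd * beta)  # multiplicative decrease
    return cwnd + alpha               # additive increase (per RTT)

# Climbing slowly, then halving on a single loss:
w = 8.0
w = aimd_step(w, loss=False)  # 9.0
w = aimd_step(w, loss=True)   # 4.5
```

The asymmetry is the point: the window grows by one segment per RTT but is cut in half on any loss, so a random (non-congestion) loss in a MANET costs many RTTs to recover from.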

Introduction

[Diagram: route failures and random losses falsely trigger end-to-end congestion control, while Slow Start overshooting causes under- or over-utilization — both degrade performance because TCP is unaware of the network condition]

• What happens if we have more information about the network condition?

Introduction

• Explicit router-assisted techniques
  – Explicit router feedback can indicate the internal network condition
  – Several kinds of explicit information provided by routers can enhance transport-protocol performance
    • Available bandwidth (with respect to a path)
    • Queue length
    • Queue size
    • Loss rate

Introduction

• Our approach
  – Design a new TCP congestion control mechanism that is aware of the network condition over a MANET
  – The protocol dynamically responds to different situations according to explicit information from routers

Outline

• Introduction

• Related work

• Our router-assisted approach

• Performance evaluation

• Conclusion


Related Work

• Router-assisted congestion control
  – TCP-F (TCP-Feedback)
  – TCP-ELFN (Explicit Link Failure Notification)
  – ATCP (Ad hoc TCP)
• Other proposals
  – Adaptive CWL (Congestion Window Limit)

Related Work

• TCP-F (TCP-Feedback)
  – The sender can distinguish route failures from network congestion
  – Network nodes detect a route failure and notify the sender with an RFN (Route Failure Notification) packet
  – The sender then freezes all its variables (RTO and cwnd) until it receives an RRN (Route Recovery Notification) packet

Related Work

• TCP-ELFN (Explicit Link Failure Notification)
  – This scheme is based on DSR (Dynamic Source Routing)
  – An ELFN message is similar to a "host unreachable" ICMP message. It contains:
    • Sender and receiver addresses
    • Ports
    • Packet sequence number
  – The sender disables its retransmission timer and enters a "standby mode" after receiving an ELFN
  – The sender then keeps probing the network by sending a small packet until the route is restored (exploiting the nature of DSR)

Related Work

• ATCP (Ad hoc TCP)
  – A layer called ATCP is inserted between the TCP and IP layers of the source node
  – ATCP listens to the network state via
    • ECN (Explicit Congestion Notification) messages: congestion!
    • ICMP "Destination Unreachable" messages: network partitioning!
  – The sender can be put into 3 states:
    • Persist state (triggered by an ICMP message)
    • Congestion-control state (triggered by an ECN message)
    • Retransmit state (triggered by packet loss without the ECN flag)
      – Note: after receiving three duplicate ACKs, the sender does not invoke congestion control; it puts TCP in the Persist state and quickly retransmits the lost packet from its buffer (multipath routing or channel loss)
  – The congestion window size is recomputed after route re-establishment

Related Work

• Adaptive CWL (Congestion Window Limit)
  – If the congestion window grows beyond an upper bound, TCP performance degrades
  – Find the BDP (Bandwidth-Delay Product) of a path in the MANET
  – Use this BDP as an upper bound to dynamically adjust TCP's maximum window size

Related Work

• [29], [31], and [33] show that TCP with a small congestion window (e.g., 1 or 2) tends to outperform TCP with a large congestion window in wireless multihop networks
• In [29], Fu et al. report that there exists an optimal TCP window size W* at which TCP achieves its best throughput
  – However, TCP operates at an average window size much larger than W* (overshooting)

Outline

• Introduction

• Related work

• Our router-assisted approach

• Performance evaluation

• Conclusion


Design Philosophy

• Fully utilize the available bandwidth along the path
• Reduce the chance of congestion
• Distinguish congestion losses from random losses

Design Procedure

• Estimation of the available bandwidth
• Dynamic adjustment of the sender's sending rate according to router feedback
• Recovery of randomly lost packets

Objectives

• Allow the sender to reach an appropriate sending rate quickly
• Maintain throughput
• Dissolve congestion
• Provide fairness with other TCP variants

TCP Muzha

• Window-based, router-assisted congestion control
• Sender function
  – Modification of Slow Start
• Router function
  – Estimation of available bandwidth
  – Computation of the DRAI (Data Rate Adjustment Index)
  – Handling of random loss
• Receiver function
  – Return ACKs carrying the DRAI back to the sender

TCP Muzha

• Modification of Slow Start
  – The sender dynamically adjusts its sending rate according to router feedback collected by the receiver
    • without causing periodic packet loss
    • avoiding the overshooting problem
  – Router feedback
    • We use the available bandwidth

TCP Muzha

• Router function
  – Estimation of available bandwidth
    • Most traffic flows tend to pass through the routers that have more bandwidth
    • We assume each router is aware of:
      – its incoming and outgoing traffic state
      – its aggregate bandwidth state
  – Routers have more precise information about the TCP flows crossing a bottleneck node

TCP Muzha

• How should the available bandwidth be used?
  – Direct publication of the actual available bandwidth?
    • Seduces greedy TCP senders
    • Bandwidth fluctuation
• Our approach
  – Routers compute a fuzzified index from the available bandwidth as a guideline for senders to adjust their sending rate

[Figure: one router serving Sender1–Sender5 and Destination1–Destination5; the published value is shared by the 5 connections]

TCP Muzha

• Why an index?
  – Consistency of bandwidth computation
  – Simplicity
  – Ease of implementation
• The simplest case: ECN
  – a bi-level data rate adjustment index (0, 1)
• With the routers' assistance, the sender can control its sending rate more precisely
  – a multi-level data rate adjustment index

Data Rate Adjustment Index Conversion - Multi-level (1/6)

• How many levels are appropriate?
  – Currently there is no research on this question
• We use simulation to find a workable setting adapted to the nature of MANETs
  – Our leveling goals
    • Avoid congestion
    • Fine, delicate adjustment of the data rate
    • Maintain throughput

Data Rate Adjustment Index Conversion - Multi-level (2/6)

• Our previous study

[Figure: index regions — Aggressive Deceleration / Moderate Acceleration and Deceleration / Aggressive Acceleration]

Data Rate Adjustment Index Conversion - Multi-level (3/6)

[Topology: Sender - router - ... - router - Receiver (chain)]

  Parameter            | Range
  ---------------------|---------
  Node number          | 4 ~ 32
  Bandwidth            | 2 Mbps
  MAC                  | 802.11
  Receiver window size | 4, 8, 32

• Parameters explored: number of levels, per-level settings, level ranges, number of nodes, bandwidth, ...

[Chart: avg. throughput (kbps, 0–200) vs. number of levels (3–7), for rwnd = 4, 8, and 32]

Data Rate Adjustment Index Conversion - Multi-level (4/6)

Data Rate Adjustment Index (1–5):
  Index 1: Aggressive Deceleration
  Index 2: Moderate Deceleration
  Index 3: Stabilizing
  Index 4: Moderate Acceleration
  Index 5: Aggressive Acceleration

Data Rate Adjustment Index Conversion - Multi-level (5/6)

• Each router estimates its own available bandwidth and converts it to a DRAI
  – roughly, total bandwidth / number of flows, then converted to a level
• Each router then compares its DRAI with the one carried by every passing packet, marking the packet
  – If the router's value is greater, the packet's DRAI stays untouched
  – Otherwise, it is replaced with the router's DRAI
• The packet thus delivers the smallest DRAI along the path
  – the Minimal data Rate Adjustment Index (MRAI)
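The per-router conversion and marking described above can be sketched as follows. The fair-share thresholds used to pick a level are our own illustrative assumption (the deck derives the actual levels by simulation); the minimum-keeping marking rule is the slide's.

```python
# Sketch of per-router DRAI computation and packet marking.
# Level thresholds are illustrative assumptions.

def drai_level(total_bw, n_flows, current_rate):
    """Map a flow's fair share vs. its current rate to an index 1..5."""
    share = total_bw / max(1, n_flows)
    ratio = share / current_rate      # >1: headroom, <1: over fair share
    if ratio >= 2.0:
        return 5                      # aggressive acceleration
    if ratio >= 1.2:
        return 4                      # moderate acceleration
    if ratio >= 0.8:
        return 3                      # stabilizing
    if ratio >= 0.5:
        return 2                      # moderate deceleration
    return 1                          # aggressive deceleration

def mark_packet(pkt_drai, router_drai):
    """Each router keeps only the smaller index, so the receiver
    ends up with the minimum along the path (the MRAI)."""
    return min(pkt_drai, router_drai)

# Three routers along a path; the bottleneck dominates:
mrai = 5
for router_index in (5, 1, 3):
    mrai = mark_packet(mrai, router_index)
# mrai is now 1
```

This min-marking is what makes the scheme bottleneck-aware: whichever hop is most congested determines the index the sender sees.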

Data Rate Adjustment Index Conversion - Multi-level (6/6)

[Figure: Sender - Router - Router - Receiver, with link bandwidths 11 Mbps, 1 Mbps (bottleneck), and 5 Mbps and available bandwidths 11 Mbps, 1 Mbps, and 1 Mbps; the per-hop DRAIs are 5, 1, 1, so the MRAI delivered to the receiver is 1]

TCP Muzha

• Handling of random loss
  – Effects of random loss
    • Retransmission
    • Timeout
    • Reduction of the congestion window
  – Original indication of packet loss: 3 duplicate ACKs
  – Our approach
    » 3 duplicate ACKs with a deceleration marking: congestion
    » 3 duplicate ACKs with an acceleration marking or no marking: random loss
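The duplicate-ACK classification above can be sketched as below. The function name, and representing "no marking" as `None`, are our conventions; the index semantics follow the 5-level DRAI, where 1 and 2 are deceleration markings.

```python
# Sketch of Muzha-style loss classification: three duplicate ACKs plus
# a deceleration index (1 or 2) mean congestion; an acceleration index
# or no marking suggests a random (link/route) loss, so the sender can
# retransmit without shrinking its congestion window.

def classify_loss(dup_acks, drai):
    """drai is the marking carried by the duplicate ACKs, or None."""
    if dup_acks < 3:
        return "none"
    if drai is not None and drai <= 2:   # deceleration marking
        return "congestion"
    return "random"

classify_loss(3, 1)     # congestion: back off
classify_loss(3, 5)     # random: just retransmit
classify_loss(3, None)  # random: just retransmit
```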

TCP Muzha

• We simplify the three phases of TCP NewReno into two phases
  – Congestion Avoidance (CA)
  – Fast Retransmit and Fast Recovery (FF)

[State diagram: Start → CA; in CA, each new ACK or timeout updates cwnd by the MRAI and sends packets; a triple duplicate ACK moves CA → FF; a new ACK returns FF → CA]


Outline

• Introduction

• Related work

• TCP Muzha

• Performance evaluation

• Conclusion


Performance Evaluation

• Parameters
  – Chain topology (4 ~ 32 hops)
  – Link layer: IEEE 802.11 MAC
  – Bandwidth: 2 Mb/s
  – Transmission radius: 250 m
  – Routing: AODV

[Figure: chain of nodes 1–9 carrying a single TCP flow]

Performance Evaluation

• Evaluation metrics
  – Congestion window size change
  – Throughput
  – Retransmissions
  – Fairness

Performance Evaluation

• Metric: congestion window size change

[Figures: congestion window size over time for the compared TCP variants (charts omitted)]

Performance Evaluation

• Metric: throughput

[Charts: throughput (kbps, 0–250) vs. number of hops (4–32) for NewReno, SACK, Vegas, and Muzha, with receiver window sizes 4, 8, and 32]

Performance Evaluation

• Metric: retransmissions

[Charts: number of retransmissions vs. number of hops (4–32) for NewReno, SACK, Vegas, and Muzha, with receiver window sizes 4, 8, and 32]

Performance Evaluation

• Metric: fairness

Performance Evaluation

• Fairness test 1: cross topology
  – 4, 6, and 8 hops
  – Bandwidth: 2 Mb/s
  – Simulation time: 50 s
  – Two sets
    • TCP Vegas vs. TCP NewReno
    • TCP Muzha vs. TCP NewReno
• Fairness test 2: throughput dynamics

[Figure: cross topology with TCP flow 1 and TCP flow 2 intersecting]

Performance Evaluation

• Fairness index
  – Jain's fairness index [ ]
    n: number of flows
    x_i: throughput of the i-th flow
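Jain's index is computed as J(x) = (Σ x_i)² / (n · Σ x_i²): it equals 1 when all flows get equal throughput and 1/n in the most unfair case. A minimal sketch:

```python
# Jain's fairness index: (sum of x_i)^2 / (n * sum of x_i^2).
# 1.0 means perfectly equal shares; 1/n is the worst case.

def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    return (s * s) / (n * sum(x * x for x in throughputs))

jain_index([100.0, 100.0])  # equal shares -> 1.0
jain_index([150.0, 50.0])   # unequal shares -> 0.8
```

The 0.99 values reported later for Muzha + NewReno correspond to nearly equal per-flow throughput under this index.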

Performance Evaluation

• Fairness comparison: throughput

[Charts: fairness test 1 (cross topology) — average throughput (kbps, 0–250) at 4, 6, and 8 hops for TCP NewReno vs. TCP Muzha and for TCP NewReno vs. TCP Vegas, with the aggregate throughput of each pair]

Performance Evaluation

• Fairness comparison: Jain's index

[Charts: fairness index (0–1.2) at 4, 6, and 8 hops — Muzha vs. NewReno and Vegas vs. NewReno]

Performance Evaluation

• Fairness test 2: throughput dynamics
  – Three flows, each starting at a different time

Performance Evaluation

[Charts: throughput dynamics — congestion window size (pkts) over time (0–28 s) for three staggered flows under NewReno, Muzha, SACK, and Vegas]

Outline

• Introduction

• Related work

• TCP Muzha

• Performance evaluation

• Conclusion


Conclusion

• TCP is still the dominant transport-layer protocol for conventional and emerging applications
• We proposed a new TCP scheme over MANETs using a router-assisted approach to improve TCP performance
• With the routers' assistance, our scheme achieves roughly 5% ~ 10% higher throughput and fewer retransmissions in a MANET
• Our protocol provides fair service to different flows while coexisting with other TCP variants
• Future work: further investigation of the DRAI function and of other kinds of explicit information from routers

Discussion

• AIMD

• LIMD

• MIMD


Discussion

• Estimation of available bandwidth
  – Measure link utilization at each time t
  – Smooth it with a moving average
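One common form of moving average a router could use is an exponentially weighted one. The smoothing factor below is an illustrative assumption, not a value from the slides.

```python
# A minimal exponentially weighted moving average of link utilization,
# one way a router could smooth its bandwidth estimate. The smoothing
# factor alpha = 0.25 is an illustrative assumption.

def ewma(prev, sample, alpha=0.25):
    """Blend a new utilization sample into the running average."""
    return (1 - alpha) * prev + alpha * sample

# Converging toward a steady 40% utilization:
u = 0.0
for sample in (0.4, 0.4, 0.4):
    u = ewma(u, sample)   # 0.1, then 0.175, then 0.23125
```

A small alpha damps transient bursts (addressing the bandwidth-fluctuation concern raised earlier), at the cost of reacting more slowly to real changes.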

Q&A

Discussion

• Model?
  – BDP (Bandwidth-Delay Product)?
    • The TCP receiver estimates the optimal window size from the router feedback and sets the CWL via awnd (the advertised window)
• Use of pacing?


Summary

• A MANET is a very unstable network
• Most versions of TCP suffer severe performance degradation over such networks
• A router-assisted approach may not be easy to implement on a WAN
• However, thanks to the unique characteristics of a MANET (every host plays a hybrid role) and the ease of modification, a router-assisted approach is a practical way to improve TCP performance over a MANET

TCP Muzha Ver. 2

• Modifications to TCP Muzha
  – Congestion control
    • based on TCP NewReno
  – New Data Rate Adjustment Index (DRAI)
  – Link-error and packet-loss handling

Conclusion

• TCP is still the dominant transport-layer protocol for conventional and emerging applications
• We proposed a new TCP scheme over MANETs using a router-assisted approach to improve TCP performance
• Future work: consideration of mobility

Related Work

• TCP Reno, NewReno, and SACK
  – TCP Reno
    • Slow Start
    • AIMD (Additive Increase, Multiplicative Decrease)
    • Fast Retransmit and Fast Recovery
      – Only a single packet drop within one window can be recovered
      – Long retransmission timeout in case of multiple packet losses
  – NewReno
    • Deals with multiple losses within a single window
    • Retransmits one lost packet per RTT until all the lost packets from the same window are recovered
  – SACK
    • Deals with the same problem as NewReno
    • Uses the SACK option field, which contains a number of SACK blocks
    • Lost packets in a single window can be recovered within one RTT

Related Work

• TCP Vegas
  – Vegas measures the RTT to calculate the amount of packets the sender can transmit
  – The congestion window can only be doubled every other RTT, and is reduced by 1/8, to keep a proper amount of packets in the network
    • smooths the change of the data rate

Related Work

• TCP Veno
  – A combination of TCP Vegas and Reno
  – Veno uses the Vegas estimator to determine the type of a packet loss
    • random loss or actual congestion
  – Veno modifies Reno with a less aggressive sending rate
    • prevents unnecessary throughput degradation

Related Work

• ECN (Explicit Congestion Notification)
  – ECN must be supported by both TCP senders and receivers
  – ECN-compliant TCP senders invoke their congestion avoidance algorithm after receiving marked ACK packets from the TCP receiver
  – A RED extension that marks packets to signal congestion

Related Work

• Anti-ECN
  – Routers set a bit in the packet header to indicate an under-utilized link
  – Allows the sender to increase as fast as Slow Start over an uncongested path

Related Work

• Quick-Start
  – Slow Start requires a significant number of RTTs and a large amount of data to fully use the available bandwidth
  – Quick-Start allows the sender to use a higher sending rate after explicitly requesting permission from the routers along the path
    • If the routers are underutilized, they may approve the sender's request for a higher sending rate

Related Work

• XCP (eXplicit Control Protocol)
  – XCP generalizes ECN
    • Instead of a one-bit congestion indication, XCP routers inform the senders about the degree of congestion at the bottleneck
  – Decouples utilization control from fairness control

Related Work

• TCP Muzha
  – Router-assisted approach
  – Finds where the bottleneck is and obtains information about it
  – Multi-level Data Rate Adjustment Index
    • fuzzy multi-level congestion notification

Our Approach

• AIMD vs. AIAD vs. MIAD vs. MIMD
  – AI: 1, 2, 3
  – AD: 1, 2, 3
  – MI: 1.125, 0.19, 0.25
  – MD: 0.5, 0.65, 0.8
• Final four settings
  – AIMD (1, 0.8)
  – AIAD (1, 3)
  – MIMD (1.125, 0.8)
  – MIAD (1.125, 3)
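The four increase/decrease rules with the final settings above can be sketched as follows; the helper function and dictionary names are ours, and only the four final (increase, decrease) pairs are taken from the slide.

```python
# Sketch of the four window-update rules compared in the study.
# inc/dec are the final settings from the slide; mult_inc/mult_dec
# select multiplicative vs. additive behavior.

def step(cwnd, loss, inc, dec, mult_inc, mult_dec):
    """One update of the congestion window under a given rule."""
    if loss:
        return cwnd * dec if mult_dec else max(1, cwnd - dec)
    return cwnd * inc if mult_inc else cwnd + inc

SETTINGS = {
    "AIMD": dict(inc=1,     dec=0.8, mult_inc=False, mult_dec=True),
    "AIAD": dict(inc=1,     dec=3,   mult_inc=False, mult_dec=False),
    "MIMD": dict(inc=1.125, dec=0.8, mult_inc=True,  mult_dec=True),
    "MIAD": dict(inc=1.125, dec=3,   mult_inc=True,  mult_dec=False),
}

step(8, False, **SETTINGS["AIMD"])  # 9: add 1 per step
step(8, False, **SETTINGS["MIMD"])  # 9.0: multiply by 1.125
step(8, True,  **SETTINGS["AIAD"])  # 5: subtract 3 on loss
```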

Performance Evaluation

[Charts: average delay (ms) vs. number of hops (4–32) for NewReno, SACK, Vegas, and Muzha, with receiver window sizes 4, 8, and 32]

Performance Evaluation

[Figure: comparison of Muzha, NewReno, SACK, and Vegas (charts omitted)]

Introduction

• The TCP overshooting problem

[Diagram: network overloaded by overshooting → MAC loss due to contention → route failure → route recovery (generates more traffic) → TCP connection failure, then timeout → TCP restart (Slow Start again)]

Related Work

• TCP-BuS (Buffering capacity and Sequence information)
  – Based on the source-initiated, on-demand ABR routing protocol
  – If a route fails, the pivoting node notifies the source
    • Explicit Route Disconnection Notification (ERDN)
      – carrying the sequence number of the TCP segment at the head of the queue
  – During route re-establishment, packets already sent are buffered and the RTO is doubled
  – When a new route is discovered, the receiver sends the sender the last sequence number it has successfully received
  – The sender selectively retransmits only the lost packets, and the intermediate nodes start sending their buffered packets
  – Reliable retransmission of the control messages (ERDN/ERSN)
    • probe the channel after sending a control message
    • retransmit it if sending fails
  – Contribution: packet buffering and reliable transmission of control packets

TCP Muzha

  DRAI | Meaning                 | Change of cwnd
  -----|-------------------------|------------------
  5    | Aggressive Acceleration | cwnd = cwnd * 2
  4    | Moderate Acceleration   | cwnd = cwnd + 1
  3    | Stabilizing             | cwnd = cwnd
  2    | Moderate Deceleration   | cwnd = cwnd - 1
  1    | Aggressive Deceleration | cwnd = cwnd / 2
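The DRAI-to-cwnd table maps directly to a lookup. This is a sketch of the sender-side update only; the function name is ours, and the five rules are exactly the ones in the table.

```python
# Sender-side congestion window update driven by the received DRAI,
# following the five-level table: double, +1, hold, -1, or halve.

def update_cwnd(cwnd, drai):
    return {
        5: cwnd * 2,   # aggressive acceleration
        4: cwnd + 1,   # moderate acceleration
        3: cwnd,       # stabilizing
        2: cwnd - 1,   # moderate deceleration
        1: cwnd / 2,   # aggressive deceleration
    }[drai]

update_cwnd(8, 5)  # 16
update_cwnd(8, 1)  # 4.0
```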

Performance Evaluation

• Index comparison (4 hops)

                          | TCP NewReno | TCP Vegas | TCP Muzha
  Avg. throughput (kbps)  | 175         | 214       | 186
  Avg. retransmissions    | 14          | 6         | 10
  Fairness (with NewReno) | -           | 0.925     | 0.99

Performance Evaluation

• Index comparison (6 hops)

                          | TCP NewReno | TCP Vegas | TCP Muzha
  Avg. throughput (kbps)  | 54          | 89        | 65
  Avg. retransmissions    | 16          | 6         | 10
  Fairness (with NewReno) | -           | 0.74      | 0.99

Performance Evaluation

• Index comparison (8 hops)

                          | TCP NewReno | TCP Vegas | TCP Muzha
  Avg. throughput (kbps)  | 34          | 35        | 37
  Avg. retransmissions    | 25          | 9         | 18
  Fairness (with NewReno) | -           | 0.89      | 0.99

Performance Evaluation

• 4-hop cross topology

  Vegas + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Vegas)   | 35.7              | 72.4
    Flow 2 (NewReno) | 118.0             | 100.4
    Aggregate        | 153.7             |
    Fairness         | 0.72              |

  Muzha + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Muzha)   | 106.0             | 63.9
    Flow 2 (NewReno) | 102.0             | 65.3
    Aggregate        | 208               |
    Fairness         | 0.99              |

Performance Evaluation

• 6-hop cross topology

  Vegas + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Vegas)   | 23.2              | 264.0
    Flow 2 (NewReno) | 89.9              | 101.6
    Aggregate        | 113.1             |
    Fairness         | 0.74              |

  Muzha + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Muzha)   | 65.3              | 119.5
    Flow 2 (NewReno) | 53.6              | 112.4
    Aggregate        | 118.9             |
    Fairness         | 0.99              |

Performance Evaluation

• 8-hop cross topology

  Vegas + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Vegas)   | 20.2              | 91.9
    Flow 2 (NewReno) | 41.7              | 168.0
    Aggregate        | 61.9              |
    Fairness         | 0.89              |

  Muzha + NewReno:
                     | Throughput (kbps) | Delay (ms)
    Flow 1 (Muzha)   | 43.7              | 97.2
    Flow 2 (NewReno) | 36.8              | 112.4
    Aggregate        | 80.5              |
    Fairness         | 0.99              |