Characterization of TCP flows over Large-Fat Networks


1 PFLDNet 2003

Characterization of TCP flows over Large-Fat Networks

Antony Antony, Johan Blom, Cees de Laat, Jason Lee, Wim Sjouw

University of Amsterdam

03/02/2003

2 PFLDNet 2003

Networks used

[Slide diagram of the three test setups:
• SURFnet Amsterdam to Chicago, 96 msec RTT, 2001 - 2002: PCs behind switches over OC-12 (622 Mbps) and OC-48 (2.5 Gbps)
• SURFnet Amsterdam to Chicago, 96 msec RTT, iGrid 2002: PCs behind routers over OC-192 (10 Gbps)
• DataTAG CERN to Chicago, 110 msec RTT, 2002 till now: PCs behind routers and switches over a 2.4 Gbps path]

3 PFLDNet 2003

Layer 2 requirements from layers 3/4

TCP is bursty due to the sliding window protocol and the slow start algorithm.

Window = Bandwidth * RTT, where the bandwidth is that of the slow (bottleneck) link

Memory-at-bottleneck = ((fast - slow) / fast) * slow * RTT

[Slide diagram: two workstations (WS) on fast links, with a fast->slow L2 transition on one side and a slow->fast transition on the other, across a high-RTT path.]
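To put numbers on this, a small worked example in Python, assuming the 1 Gb/s and OC-12 (622 Mbps) rates and the 96 msec Amsterdam to Chicago RTT from the networks slide (a sketch, not the authors' tooling):

```python
# Worked example of the memory-at-bottleneck formula above, with
# assumed values: a 1 Gb/s sender draining into a 622 Mb/s (OC-12)
# bottleneck over the 96 msec Amsterdam - Chicago RTT.
fast = 1_000_000_000   # fast-side rate, bits/s
slow = 622_000_000     # slow (bottleneck) rate, bits/s
rtt = 0.096            # round-trip time, seconds

buffer_bits = (fast - slow) / fast * slow * rtt
print(f"buffer needed at bottleneck: {buffer_bits / 8 / 1e6:.1f} MByte")
# -> about 2.8 MByte, which the L2 device at the rate transition must hold
```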

4 PFLDNet 2003

5000 1-kByte UDP packets [slide plot]
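For illustration, a minimal sketch of the kind of burst generator behind this measurement: 5000 back-to-back 1-kByte UDP datagrams. The destination address and port are placeholders, and this is not the authors' actual tool.

```python
# Hypothetical burst generator: 5000 1-kByte UDP packets sent
# back-to-back, as fast as the host allows.
import socket

DEST = ("192.0.2.1", 5001)   # placeholder receiver address and port
payload = b"\x00" * 1024     # 1 kByte of payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(5000):
    sock.sendto(payload, DEST)
sock.close()
```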

5 PFLDNet 2003

Self-clocking of TCP

[Slide diagram: the same fast->slow->fast path with high RTT; inter-packet spacings of 14 µsec and 20 µsec are annotated along the links, the slow segment stretching the spacing that the fast return path then preserves.]
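The annotated spacings follow from serialization time: a packet occupies a slow link longer, and TCP's ACK clock carries that wider spacing back onto the fast links. A sketch of the arithmetic, assuming 1500-byte frames (the frame size is our assumption):

```python
# Serialization time per frame at the two link rates; the slower link
# stretches the inter-packet spacing, which self-clocking preserves.
frame_bits = 1500 * 8   # assumed MTU-sized Ethernet frame

for name, rate in [("GigE (1 Gb/s)", 1e9), ("OC-12 (622 Mb/s)", 622e6)]:
    print(f"{name}: {frame_bits / rate * 1e6:.1f} usec per packet")
# -> 12.0 usec on GigE vs 19.3 usec on OC-12, close to the ~20 usec
#    spacing annotated on the slide
```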

6 PFLDNet 2003

Forbidden area: solutions for s when f = 1 Gb/s, M = 0.5 MByte, and not using flow control

[Slide plot: slow-link rate s versus rtt, showing the region where the bottleneck buffer cannot sustain the flow; marks labeled OC1, OC3, OC6, OC9, OC12, and 158 ms flagged as the Amsterdam to Vancouver RTT. Caption: possible BW due to lack of buffer at the bottleneck.]
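The forbidden area follows from the memory formula two slides back: M = ((f - s) / f) * s * RTT is quadratic in s, so for a given RTT any rate s between its two roots needs more buffer than M provides. A sketch of that computation, with f and M taken from the slide title (the helper function is ours):

```python
# Solve M = ((f - s) / f) * s * rtt for s; rates between the two
# roots need more than M of bottleneck buffer and are "forbidden".
import math

f = 1e9           # fast-side rate f, bits/s (from the slide title)
M = 0.5 * 8e6     # M = 0.5 MByte of bottleneck buffer, in bits

def forbidden_band(rtt):
    """Range of slow rates s (bits/s) that exceed the buffer M at this
    RTT, or None if M suffices for every s."""
    disc = f * f - 4 * M * f / rtt
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((f - root) / 2, (f + root) / 2)

for rtt_ms in (20, 96, 158):   # 158 ms = Amsterdam - Vancouver RTT
    band = forbidden_band(rtt_ms / 1e3)
    if band:
        lo, hi = band
        print(f"RTT {rtt_ms} ms: s in {lo/1e6:.0f}-{hi/1e6:.0f} Mb/s is forbidden")
```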

7 PFLDNet 2003

Characterising a TCP Flow

• Three different phases of TCP
  – Bandwidth discovery phase
  – Steady state
  – Congestion avoidance

• Is it due to implementation, protocol, or philosophy?

8 PFLDNet 2003

Receiver can’t cope with burst

9 PFLDNet 2003

TCP flow falls out of slow start without a packet loss

10 PFLDNet 2003

Modifications

• Faster Host/NIC

• Pacing out packets at the device level

• HSTCP, using Net100 2.1

• Queue length on the interface (Linux-specific); see the sketch after this list
  – IFQ manipulation using Net100
  – Changing TXQ using ifconfig
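As an illustration of the last item, a hedged sketch that applies the TXQ change by shelling out to ifconfig; the interface name and queue length are placeholders, and root privileges are required:

```python
# Raise the Linux transmit queue length (TXQ) on an interface,
# equivalent to running: ifconfig eth0 txqueuelen 2000
import subprocess

IFACE = "eth0"    # placeholder interface name
TXQLEN = 2000     # value to try, versus the Linux default of 100

subprocess.run(["ifconfig", IFACE, "txqueuelen", str(TXQLEN)], check=True)
```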

11 PFLDNet 2003

Adding Delay at Device level

12 PFLDNet 2003

NIKHEF -> EVL: 618 Mbps, 3 min

13 PFLDNet 2003

NIKHEF -> ANL: 410 Mbps, ~1 hour, over a 622 Mbps path

14 PFLDNet 2003

Throughput vs TXQ, Amsterdam to Chicago (Linux)

[Slide plot: throughput as a function of TXQ length; the Linux default of 100 is marked.]

15 PFLDNet 2003

TCP performance comparison (throughput, Mbps):

Network                   Single Stream   Multi Stream   UDP
Lambda (622M)             80              540            560
Lambda (1.2G)             120             580            800
iGrid2002                 120             580            800
Post iGrid2002 (HSTCP)    730             730            900
DataTAG (HSTCP)           950             950            950

16 PFLDNet 2003

Near GigE using vanilla TCP!

• 980 Mbps Sunnyvale to Amsterdam (196 msec): Jumbo Frames, no congestion, entire path was OC-192

• Throughput drops when there is congestion
  – With HSTCP (Net100) the flow recovers from congestion events

Path: Sunnyvale – Chicago – Amsterdam – CERN – Amsterdam, 196 msec
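For scale, the bandwidth-delay product of this path sets the window a single vanilla TCP flow needs to fill a GigE link; a small worked example with the numbers from the slide:

```python
# Worked bandwidth-delay product for the 196 msec Sunnyvale - Amsterdam
# path: a single TCP flow must keep a full RTT of data in flight.
rate = 1e9       # path bottleneck, bits/s (~GigE, from the slide)
rtt = 0.196      # round-trip time, seconds

window_bytes = rate * rtt / 8
print(f"window needed: {window_bytes / 1e6:.1f} MByte")
# -> about 24.5 MByte of TCP window (and socket buffer) on both ends
```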

17 PFLDNet 2003

Ideal Flow (196 msec, 980 Mbps, Jumbo Frames) [slide plot]

18 PFLDNet 2003

Conclusion

• Throughput of a TCP flow depends on slow start behavior
  – If you get early congestion, your flow will probably not recover before you finish

• TCP is not robust.

• Is this an implementation, protocol, or philosophical problem?

19 PFLDNet 2003

Future Work

• Higher speed single TCP flows in the WAN
  – Using 10 Gig NICs on the end hosts

• Use traces captured from the wire to examine the behavior of TCP.

• Closer look at TCP behavior over Lambdas vs. routed networks.

20 PFLDNet 2003

Thanks!

URL http://www.science.uva.nl/research/air/