Issues in High Performance Simulation
David M. Nicol
University of Illinois at Urbana-Champaign
PADS 2002 Tutorial – p. 1/70
Outline
This is a discussion about methodology and modeling. In particular, details matter.
How do we evaluate, compare, and validate simulators?
How can we abstract traffic behavior with high accuracy?
This lecture ponders these questions.
Demonstration of Need
An illustrative experience highlights the need for thinking this through. A November 2001 report was released comparing JavaSim, SSFNet, and ns. Notable characteristics of the report:
Proposed an architecture for study
Varied parameters of that architecture in order to increase model size
Used run-time as the metric of merit
Documented the software used and the experiments run, and provided code that built and ran the models.
But then other things cropped up...
The Architecture: Dumbbell
[Diagram: dumbbell topology. Server hosts and client hosts sit on 100Mbps, 10ms access links, joined by a 1.5Mbps, 100ms bottleneck link; TCP sessions run end to end between server and client hosts.]
Model size increased by increasing the number of sessions (up to 10,000)
Sample Results from Original Study
Conclusions were that while JavaSim starts off slower, it scales better as the problem size grows.
Our Replication: Heterogeneous Parameters
We downloaded and built the tools. Measuring the upper edge of the send window on one session, we see different behaviors!
[Figure: number of segments transmitted (0-350,000) vs. simulation time (0-2000 seconds) for JavaSim, SSFNet, and ns2; the curves diverge.]
Why?
Different Parameters
Further investigation showed that different TCP parameters were used.
Parameter                        JavaSim   ns2    SSFNet
Maximum receiver window size     128       20     32
Maximum sender window size       128       20     32
Initial ssthresh value           16        20     2^16 bytes
MSS (bytes)                      950       1000   960
TCP+IP header size (bytes)       50        -      40
Need to run the same problem on each simulator!
Our Replication: Bringing Them In Plumb
Need to run the same problem on each simulator
Aligned parameters to ns defaults
[Figure: next segment ID (0-180,000) vs. simulation time (0-2000 seconds); with aligned parameters, the JavaSim, SSFNet (Java), ns2, and SSFNet (C++) curves coincide.]
But is it exact?
Periodically subtract off ns upper window edge
[Figure: difference in next sequence ID relative to ns2 vs. simulation time (0-3500 seconds); the SSFNet and JavaSim differences stay within roughly +/-40.]
No. SSFNet uses a 3-way handshake. But the differences aren't large.
Our Replication: Performance
Running time as a function of the number of connections, for 1000 simulated seconds, on a faster machine with more memory.
[Figure: running time (1-1000 seconds, log scale) vs. number of connections (10-10,000, log scale) for JavaSim, SSFNet, and ns2.]
Question: Why the steep climb at the end?
Look at the Workload
Each simulator reports “events” as a measure of work
[Figure: number of events (10^6-10^8, log scale) vs. connections (10-10,000) for JavaSim, SSFNet, and ns2; SSFNet's event count grows while the others stay flat.]
Huh????
Workload Should be Constant
As the number of connections increases, the bottleneck link saturates.
Ack-based feedback in TCP holds back transmission.
Not a suitable architecture to use for a study of scalability! Computational load should increase, as well as memory load.
So why does the event count rise in SSFNet?
TCP Review
Sliding window protocol
1. Send a “round” of packets
2. Wait for acknowledgements to return
3. Repeat
Lost packet detection
If an acknowledgement is not returned within a certain time, the packet is considered lost.
The timeout period is a function of the measured Round Trip Time (RTT).
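The slide does not spell out how the timeout is derived from measured RTT. A common choice is the classic Jacobson/Karels estimator; this Python sketch is an illustration of that standard rule, not code from any of the simulators discussed, and the gains alpha and beta are the conventional values, not values taken from these slides.

```python
def rto_estimate(rtt_samples, alpha=0.125, beta=0.25):
    """Hedged sketch of the classic Jacobson/Karels retransmission
    timeout: fold each RTT sample into a smoothed RTT (SRTT) and a
    variance estimate (RTTVAR), then return RTO = SRTT + 4 * RTTVAR."""
    srtt = rttvar = None
    for rtt in rtt_samples:
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2          # first measurement
        else:
            # variance is updated before the smoothed mean, per the usual rule
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
    return srtt + 4 * rttvar
```

With a single 100 ms sample this gives a deliberately conservative 300 ms timeout; repeated consistent samples shrink the variance term.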
A Matter of Implementation
Detective work showed that most of SSFNet's time is spent in timer events. Ah hah!
ns and JavaSim both schedule a timeout timer equal to the TCP timeout period, one timer per session.
SSFNet emulates BSD and schedules one slow timer every 0.5 sec, for all sessions, checking for expired implicit timers.
The number of these timers (firing off every 0.5 sec) increases linearly with the number of hosts.
Round Trip Time
The topology increases the buffering with the number of connections.
The RTT becomes dominated by the time spent in queue for the saturated bottleneck link. The values are unrealistic.
[Figure: average TCP RTT (0.1-100 seconds, log scale) vs. simulation time (0-1000 seconds) for 1, 10, 100, and 1000 connections, with estimated curves for 100 and 1000 connections.]
Adjusted Event Count
SSFNet adjusted (temporarily)
The upturn in event count after 3600 is due to the segment lifetime timer.
[Figure: number of events (10^6-10^8, log scale) vs. connections; the adjusted SSFNet curve now tracks JavaSim and ns2.]
More Puzzles
If the workload is constant, why do the ns and JavaSim curves turn up? We used an execution profiler and discovered:
ns spends a lot of time in event cancellation, linearly scanning for the event to remove.
JavaSim spends a lot of time scanning channels when a message is received.
Communicating with the ns and JavaSim developers, patches were obtained to fix the linearizations.
Modified Performance Curves
[Figure: running time (1-1000 seconds, log scale) vs. connections (10-10,000) for the adjusted JavaSim, SSFNet, and ns2.]
JavaSim is uniformly 10x slower than SSFNet
Difference between SSFNet and ns tends to < 50%
Lessons Learned
Compare apples-to-apples, to the largest extent possible. An exact match is often difficult; validate empirically and assess the magnitude of the differences.
Make sure that the model reflects reality. Thousands of connections per interface does not; TCP RTTs in the tens or hundreds of seconds do not.
Look for explanations of behavior in the data. We may not have noticed the linearizations if we hadn't asked why the performance curves turn up.
Scour the tools for hidden implementations that give rise to non-scalable behavior.
Java and Memory Use
Original performance reports suggest there is a significant difference in memory footprint between JavaSim and SSFNet.
The upturn in SSFNet behavior can be explained by garbage collection costs.
To understand these costs we need to understand how garbage collection works in Java. JDK 1.3 and 1.4 use the HotSpot system.
HotSpot Garbage Collection
All new objects are created in Eden.
Survivors of a "scavenge" in Eden and the Survivor space are copied to the alternate Survivor space.
Objects that survive a certain number of copies are tenured to the Old space.
A full mark-and-compact operation is done on the whole heap when Old fills. We call the remainder the "core"...
[Diagram: heap generations: Eden, Survivor, Old.]
Measuring Memory Use
One can get a snapshot of GC activity using the -Xloggc command-line argument. Example:

25.4:[GC 342945K->326329K(382080K),0.25 secs]
29.6:[GC 350009K->332307K(382080K),0.24 secs]
30.0:[Full GC 355986K->246448K(382080K),5.03 secs]
36.9:[GC 270128K->264248K(382080K),0.42 secs]
40.3:[GC 287928K->272803K(382080K),0.26 secs]

Minor reclamation is fast; a Full GC is expensive.
The size after a Full GC is a measure of intrinsic memory demand.
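Log lines in this format can be mined mechanically to separate minor collections from Full GCs. A minimal Python sketch, assuming exactly the sample layout shown above (real -Xloggc output varies by JVM version, so the regex is illustrative only):

```python
import re

# Matches lines like: 30.0:[Full GC 355986K->246448K(382080K),5.03 secs]
GC_LINE = re.compile(
    r"(?P<t>[\d.]+):\[(?P<kind>Full GC|GC) "
    r"(?P<before>\d+)K->(?P<after>\d+)K\((?P<heap>\d+)K\),(?P<secs>[\d.]+) secs\]"
)

def summarize_gc(log_text):
    """Return (total minor GC seconds, total Full GC seconds,
    heap size in K after the last Full GC, i.e. the 'core')."""
    minor, full, core = 0.0, 0.0, None
    for line in log_text.splitlines():
        m = GC_LINE.match(line.strip())
        if not m:
            continue
        if m.group("kind") == "Full GC":
            full += float(m.group("secs"))
            core = int(m.group("after"))   # what survives compaction
        else:
            minor += float(m.group("secs"))
    return minor, full, core

sample = """25.4:[GC 342945K->326329K(382080K),0.25 secs]
29.6:[GC 350009K->332307K(382080K),0.24 secs]
30.0:[Full GC 355986K->246448K(382080K),5.03 secs]
36.9:[GC 270128K->264248K(382080K),0.42 secs]
40.3:[GC 287928K->272803K(382080K),0.26 secs]"""
```

Running it over the sample trace confirms the slide's point: four minor collections cost about a second combined, while the single Full GC costs five.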
Example
Plot the memory size when triggered, and when packed.
20 simulated seconds, 5000 connections, scaled bandwidth.
JavaSim time in GC: 205 s; SSFNet: 126 s.
[Figure: heap size (Kbytes, 0-800,000) at trigger and after packing vs. minor reclamation index, for JavaSim and SSFNet.]
Memory Costs and Scalability
Simple model: long-lived and short-lived objects play different roles.
phi_l : fraction of objects that are long-lived
S_l : size of a long-lived object
S_s : size of a short-lived object
mu_s : average object size, phi_l S_l + (1 - phi_l) S_s
p_s : Pr{ a short-lived object survives reclamation }
T : number of copies to tenure
lambda_obj(n) : object demand rate, problem size n
gamma_y, gamma_o : copy and compaction costs (per byte)
m_e, m_o : Eden and Old memory sizes
Minor Reclaimation Cost Rate
Steady State analysis is upper bound on costs
Compute average number of objects copied at minorreclaimation
(1 ! !l)me/((1.0 ! ps)µs) short-lived objects!lmeT/µs long-lived objects
Each copied, rate of memory demand is "obj(n) " µs.
Cost rate is thus
#y"obj(n)
!
Ss(1 ! !l)
(1.0 ! ps)+ Sl!lT
"
Key thing to note for purposes of scalability is that this islinear in n
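The cost-rate expression above evaluates directly. A small Python sketch using ASCII names for the slide's symbols (phi_l for the long-lived fraction, gamma_y for the per-byte copy cost); the parameter values in the usage note are made up for illustration:

```python
def minor_gc_cost_rate(lam_obj, phi_l, S_l, S_s, p_s, T, gamma_y):
    """Steady-state minor-reclamation cost rate from the slide's model:
    gamma_y * lambda_obj(n) * ( S_s(1-phi_l)/(1-p_s) + S_l*phi_l*T ).
    Proportional to lam_obj, hence linear in problem size n."""
    return gamma_y * lam_obj * (S_s * (1 - phi_l) / (1 - p_s) + S_l * phi_l * T)
```

Doubling the object demand rate doubles the cost rate, which is the scalability point the slide makes.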
Major Reclaimation Cost Rate
Let k(n) be the “memory kernel” for a problem of size n.
Assume that the cost of a Full GC is #o " k(n)
Between Full GCs, mo ! k(n) bytes are tenured
me(!lSl)/(!lSl + (1 ! !l)Ss) bytes tenured per MR
Rate of bytes being tenured to Old!
"obj(n)(!lSl + (1 ! !l)Ss
me
"!
me!lSl
!lSl + (1 ! !l)Ss
"
Rate of GC cost accumulation is then cubic in n
#o " "obj(n) "!
k(n) " !lSl
mo ! k(n)
"
PADS 2002 Tutorial – p. 27/70
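The Full-GC cost rate above is the per-collection cost divided by the interval between collections. A Python sketch of that arithmetic (parameter values in the test are invented for illustration); note how the rate explodes as the kernel k(n) approaches the Old-space size m_o:

```python
def full_gc_cost_rate(lam_obj, phi_l, S_l, k_n, m_o, gamma_o):
    """Rate of Full-GC cost accumulation under the slide's model:
    gamma_o * lambda_obj(n) * phi_l * S_l * k(n) / (m_o - k(n))."""
    tenure_rate = lam_obj * phi_l * S_l     # bytes tenured to Old per unit time
    interval = (m_o - k_n) / tenure_rate    # time between Full GCs
    return gamma_o * k_n / interval         # cost per Full GC / interval
```

If both lambda_obj(n) and k(n) grow linearly in n, the numerator grows like n^2 while the denominator m_o - k(n) shrinks, matching the predicted superlinear blow-up.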
Scalability Revisited
The model predicts linear growth, then a Full GC explosion.
Example from the Dumbbell architecture, SSFNet, scaled bandwidth.
[Figure: execution time per connection per simulated second vs. number of connections (0-12,000), for heap sizes of 375 MB and 750 MB; the time explodes once Full GCs dominate.]
Performance vs. Problem Size (Again)
Performance revisited
Scaled-bandwidth Dumbbell, 60 simulated seconds.
IBM T23 Thinkpad, 1.1 GHz CPU, 1 GB memory.
[Figure: execution time (0-12,000) for 60 simulated seconds vs. number of connections (0-16 K), for SSFNet:Java, SSFNet:C++, and ns2.]
Performance vs. Problem Size (Again)
Performance revisited
Scaled-bandwidth Dumbbell, 60 simulated seconds.
IBM T23 Thinkpad, 1.1 GHz CPU, 1 GB memory.
[Figure: execution time (0-25,000) for 60 simulated seconds vs. number of connections (0-16 K), now including JavaSim alongside SSFNet:Java, SSFNet:C++, and ns2.]
Core Memory vs. Problem Size
Measurements from run-time traces
[Figure: core memory (K, 0-800,000) vs. number of connections (0-16 K) for SSFNet:Java, SSFNet:C++, and JavaSim.]
And Now For Something Completely Different
Fluid Modeling of TCP
TCP traffic dominates the Internet.
Any serious simulation study of an Internet application must take the effects of TCP into consideration.
Segment-level simulation is expensive, even just for background traffic. Fluid models offer hope of workload reduction. Example:
Secure multicast, with a 100-to-1 ratio of background to application traffic: only 1% of the traffic relates to the application. Find an efficient way to model the background traffic and its effect on the application traffic.
What is Fluid Modeling?
Continuous simulation, with traffic described by fluid flow rates. "Classical" approaches:
Use of stochastic differential equations
Rather advanced math, focused on a single switch, to derive loss probabilities
"Fluid Stochastic Petri Nets"
A classical Petri net extended by fluid places and transitions.
Fluid flows to/from fluid places at rates described by piece-wise continuous functions.
Analytic solution involves integration of time-dependent PDEs.
FSPN Example : Alternating Switch
A switch with two arrival streams: one bursty (ON/OFF), the other slow and constant. The switch allocates time slots (exponentially distributed) to each stream. Traffic may be buffered.
[Diagram: FSPN of the alternating switch, with ON/OFF transition rates lambda_on/lambda_off, arrival rates lambda_high/lambda_low, fluid fill rates r_high/r_low, and service rates mu_high/mu_low.]
Fluid Modeling for Simulation
The objective is speed, so we put further constraints on the flow description.
A flow is defined by a source, a sink, and a static path between them.
For any flow, at any point on its path, the flow rate in simulation time is a piece-wise constant function. The volume of data over time is found by integration.
Fluid can accumulate in finite-capacity buffers (or servers), and be lost when arriving to a full buffer.
Computation happens only when a rate changes.
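With piece-wise constant rates, integrating the volume reduces to summing rectangles between rate-change events. A minimal Python sketch; the event-list representation here is an assumption of this example, not any tool's API:

```python
def flow_volume(rate_changes, t_end):
    """Integrate a piece-wise constant rate function from time 0 to t_end.
    rate_changes: list of (time, new_rate) pairs, sorted by time; the rate
    is 0 before the first change and holds its last value thereafter."""
    total, prev_t, prev_rate = 0.0, 0.0, 0.0
    for t, rate in rate_changes:
        if t >= t_end:
            break
        total += prev_rate * (t - prev_t)   # rectangle for the previous segment
        prev_t, prev_rate = t, rate
    total += prev_rate * (t_end - prev_t)   # final rectangle up to t_end
    return total
```

The simulator only does work at the (few) rate-change times, which is exactly the speed argument the slide makes.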
Example
A packet (1000 bits) is inserted every 10 ms on a 100Mbps link.
[Figure: packet-oriented view (used bandwidth spikes to the full 100Mbps for each packet) vs. continuous fluid view (a single constant rate).]
Much faster to simulate, and the difference matters little at larger time-scales (?)
Context
Several groups have demonstrated that fluid models have great potential.
We have shown that, with careful formulation of the latency and loss equations, a fluid model can yield aggregate statistics very close to those of a similar discrete model.
We and others have observed certain sensitivities in performance under congestion conditions; a solution is needed.
No one has modeled a complex transmission control algorithm, with feedback, using fluids.
Transmission Control Protocol
TCP is system-layer software that receives data to send from an application (through the socket interface, typically), and delivers it at the other end through the socket interface.
TCP Primer
TCP is a full-duplex protocol for data exchange.
The specification is segment oriented (MSS typically 1500 bytes).
An acknowledgement is required for every segment sent.
Flow and congestion control are implemented through a "window".
The window and maximum window size change dynamically.
Time-outs and acknowledgement analysis identify probable lost segments, which are retransmitted.
Flow Control
        LBA                       LBS
 (last byte acked)         (last byte sent)
 -------[...................]---------
        <-------------------------->
              max window size
A full window means acknowledgements throttle transmission.
A large window allows a "pipeline" through a network with a large bandwidth-delay product.
The maximum window size controls the volume of data inserted into the network.
The window size is the minimum of the advertised receive window and the congestion control window.
Congestion Control (Slow Start)
Dynamically change the maximum window size to discover the available bandwidth.
Slow-start avoids an impulse dump. The maximum window size starts at 1 segment and increases by 1 with each acknowledged segment.
Transition to Congestion Avoidance when the size crosses the threshold ssthresh, or a segment is lost.
Congestion Control (Congestion Avoidance)
Congestion Avoidance is a “tuning” phase
The congestion window cwnd increases by 1.0/cwnd with each acknowledgement.
Transition to Slow Start on a time-out (or lost packet):
  set ssthresh = cwnd/2
  set cwnd = one segment
  enter slow-start
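The two growth rules and the loss reaction can be sketched in a few lines of Python. This is an illustration of the rules as stated on the slides, not any simulator's implementation; the per-ACK stepping and the loss-event indices are simplifying assumptions of this example.

```python
def cwnd_trace(num_acks, ssthresh, losses=()):
    """Evolve the congestion window (in segments), one step per ACK event.
    losses: indices of ACK events replaced by a time-out/loss."""
    cwnd, trace = 1.0, []
    loss_events = set(losses)
    for i in range(num_acks):
        if i in loss_events:
            ssthresh = cwnd / 2     # halve the threshold ...
            cwnd = 1.0              # ... and restart slow start
        elif cwnd < ssthresh:
            cwnd += 1.0             # slow start: +1 segment per ACK
        else:
            cwnd += 1.0 / cwnd      # congestion avoidance: +1/cwnd per ACK
        trace.append(cwnd)
    return trace
```

Plotting the trace reproduces the familiar exponential-then-sawtooth shape referred to on the next slide.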
Example : Congestion Control
Window size as a function of "bursts" (which fill the window), not of the number of segments.
Reliable Transmission
TCP offers data to the reader in the order it was sent.
Segments can be lost due to congestion.
TCP uses time-outs and acknowledgement logic to detect missing segments.
A segment noted as lost is retransmitted by the sender.
Logic based on TCP header information
Source Port | Destination Port
Sequence Number
Acknowledgement Number
Header Length | Flags | Advertised Receive Window
Retransmission Logic
The sender sets a timer on every segment sent. A time-out causes retransmission.
The acknowledgement number is the largest byte number of contiguously acknowledged bytes.
A segment that reveals a "hole" at the receiver is acked with the same acknowledgement number as the last acknowledgement.
The "Fast Retransmit" feature is that 3 acks with the same acknowledgement number trigger retransmission of the first segment indicated as lost.
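The duplicate-ack rule can be sketched as follows. This is a hypothetical helper written for this transcript, not code from ns, JavaSim, or SSFNet:

```python
def detect_fast_retransmit(acks, dupthresh=3):
    """Scan a stream of acknowledgement numbers and return the indices
    at which the third duplicate ack would trigger Fast Retransmit."""
    triggers, last, dups = [], None, 0
    for i, a in enumerate(acks):
        if a == last:
            dups += 1               # another ack repeating the same number
            if dups == dupthresh:
                triggers.append(i)  # 3 duplicates: retransmit now
        else:
            last, dups = a, 0       # ack advanced; reset the counter
    return triggers
```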
Fluid Modeling : Traffic
Delays through network depend on traffic volume
We describe a flow in terms of a rate change event:
  simulation time
  raw flow rate (bytes per unit simulation time)
  delivered fraction
  data ratio (data per delivered byte)
  ack ratio (acked data per delivered byte)
  window alignment (byte indices at transmission)
As the network model shapes the raw flow rate, the effects on TCP-related quantities can be inferred.
Rate change events are generated and passed throughthe network
Fluid Modeling : Communication Latency
Communication channels have a fixed latency for transmission of a byte from end to end.
If a channel has latency L, a rate event that occurs at one end at time t is not observed at the other end until time t + L.
Channels are necessarily FIFO
Fluid Modeling : Acknowledgements
Bytes acknowledged, rather than segments
Normally, the acknowledged byte flow out of a TCP agent is identical to the data byte flow into the agent.
A rate change on the incoming flow is reflected immediately as a change in the ack byte ratio component of the outflow.
But since TCP acknowledgements account only for data that is contiguously received, something needs to happen when it's not.
Fluid Modeling : Send Window
Model the segment outflow and ack inflow with piece-wise constant rate functions of simulation time.
The volume of data in the window is thus a piece-wise constant function.
The maximum size of the window is a piece-wise linear function of simulation time.
Event times are defined by:
  the solution to a linear equation (e.g. time to window saturation)
  the arrival of changes in external flows (e.g. application to TCP, acknowledgement rate from the TCP partner)
Some Subtleties
Important to model growth in maximum window size
In real TCP, the send data throughput increases as the window size increases, until saturation or loss.
Two modes:
Slow-Start: every byte acknowledged increases the max window size by one byte.
Congestion-Avoidance: the real growth rate is non-linear, increasing by MSS each acknowledged "round". cwnd(t) has a rate function lambda_cwnd(t).
Example: Slow-Start
Assume 20 units round-trip delay
[Figure: segments vs. time (ticks at 0, 20, 40, 60, 68): LBS(t) and LBA(t) step upward each round trip, with cumulative segment counts 1, 3, 7, 15 as the window doubles per round.]
TCP Sender Output
TCP always sends output "as fast as possible", subject to constraints:
Application data rate lambda_app(s)
Available outbound bandwidth lambda_bw(s)
Received acknowledgement rate lambda_ack(s)
Growth in cwnd(s)
lambda_send(s) =
  min{ lambda_bw(s), lambda_app(s) }                                  if LBS(s) - LBA(s) < cwnd(s)
  min{ lambda_bw(s), lambda_app(s), lambda_ack(s) + lambda_cwnd(s) }  if LBS(s) - LBA(s) = cwnd(s)
  0                                                                   if LBS(s) - LBA(s) > cwnd(s)
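The three-case output rate translates directly into code. A Python sketch with ASCII names for the lambda terms (an illustration of the rule, not the fluid simulator's actual implementation):

```python
def send_rate(lam_bw, lam_app, lam_ack, lam_cwnd, lbs, lba, cwnd):
    """Fluid TCP sender output rate for the three cases above.
    lbs/lba correspond to LBS(s)/LBA(s); lam_* are the rate terms."""
    in_flight = lbs - lba
    if in_flight < cwnd:                    # window not yet a constraint
        return min(lam_bw, lam_app)
    if in_flight == cwnd:                   # riding the window edge
        return min(lam_bw, lam_app, lam_ack + lam_cwnd)
    return 0.0                              # window exceeded: sender stalls
```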
TCP Sender Model
The fluid model state sets off on some trajectory as a function of the inflow rates.
An event happens when the state encounters a constraint.
A timer is scheduled to fire when the constraint is encountered.
[Figure: over [s, s+d], the in-flight data y = LBS(s) - LBA(s) + (x - s)(lambda_send(s) - lambda_ack(s)) rises toward the constraint line y = cwnd(s) + (x - s) lambda_cwnd(s); the timer fires at the crossing.]
Timers
Timers are scheduled to adjust state trajectory and TCPoutput function when constraints reached
ConstrainedTimer: scheduled when LBS(s) - LBA(s) != cwnd(s); the sender is in one of three modes, depending on the operative inequality/equality.
ModeTransition: scheduled in slow-start, when lambda_ack(s) > 0.
IncreaseCWND: scheduled in congestion-avoidance, when lambda_ack(s) > 0.
Important Optimization
Even if lambda_ack(s) > 0, IncreaseCWND need be scheduled only if an increase in cwnd() changes the output behavior, provided that we can compute cwnd when we need to.
Example: LBS(s) - LBA(s) < cwnd(s). Increasing cwnd(s) won't increase lambda_out(s), because cwnd(s) isn't the constraint.
cwnd(s) can be reconstructed as needed by book-keeping and some arithmetic.
TCP Sender State Space
Describe the state as (W, M, C, I_a), where
W in {unconstrained, constrained, exceeded}, depending on the relationship of LBS(s) - LBA(s) to cwnd(s)
M in {slow-start, congestion-avoidance, suspended}
C in {+, -, 0}, depending on the sign of the difference lambda_send(s) - lambda_ack(s) - lambda_cwnd(s)
I_a in {0, 1}, depending on whether lambda_ack(s) > 0
The state determines the output rate, and which timers are scheduled to fire.
Externally received rate changes can force a state change, and cancel (and/or reschedule) timers.
Input Delay Element
A naive fluid simulation runs “too fast”
The first bit of a flow is immediately served, while a packet must wait for the whole thing to show up.
This can be corrected with an Input Delay Element, which accumulates one packet's worth of data before signalling transitions. The details are unimportant; the concept is. It is used:
at inputs in switches and routers
at input of TCP receiver
at input of ack flow, at TCP sender
TCP Receiver
Acks all intended flow; it just reflects rate changes on the input flow, translated to the ack flow.
Buffering may be needed if outgoing bandwidth is limited
[Diagram: receiver pipeline: the network inflow passes through an Input Delay Element and a fluid buffer at the network interface; the received flow lambda_rcv generates the ack flow lambda_ack, shaped by the available bandwidth lambda_bw.]
Data Loss
Model elements in the network interior may discard data.
This creates a rate change event with a diminished delivered-fraction component and an attached "cork".
Fluid model response:
  pass the cork along back to the sender
  the sender then:
    suspends and changes the congestion window size
    accumulates (and eventually resends) the lost data
    waits for all outstanding acks
    enters slow start
Accuracy
Theorem
If
network latency is the same in the packet and fluid formulations, and
there are no lost packets,
then the first bit of every fluidized segment leaves the TCP sender at the same time as the corresponding discrete packet.
The proof is by induction on the ordered departure times of segments and acks.
Event Reduction
Compare the events needed for TCP (not the network) in the packet and fluid formulations.
In packet formulation, count 4 events/packet
In fluid formulation count 8 events per “round”
View the problem in terms of rounds
cwnd increases by computable amount each round
The number of packets increases each round, until cwnd is not a constraint: once the window is large enough that the sender does not stall, the fluid formulation needs no new events until the flow stops!
Important transitions occur when cwnd reaches ssthresh, and when it ceases to be a constraint.
Ratio of Packet/Fluid Events : Slow Start
p is the packet number. In slow-start,
  R(p) = 4p / (8 log2 p) = p / (2 log2 p)
Not quite linear in p. Let s be the number of segments in the last slow-start round; approximately log2 ssthresh = s - 1.
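A quick numeric check of the slow-start ratio; base-2 logarithms are assumed here, since the window doubles each round, so roughly log2(p) rounds cover p packets:

```python
import math

def slow_start_event_ratio(p):
    """R(p) = 4p / (8 log2 p) = p / (2 log2 p): packet-level events
    (4 per packet) over fluid events (8 per round, ~log2 p rounds)
    while the transfer is still in slow start. Valid for p >= 2."""
    return p / (2 * math.log2(p))
```

Even modest transfers favor the fluid formulation: 16 packets already give a 2x event reduction, and the ratio keeps growing (sublinearly) with p.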
Ratio of Packet/Fluid Events : Congestion Avoidance
Before pipe fills
After round log 2 + m + 1
p = 2s ! 1 + (m ! 1)s + m(m + 1)/2,
so thatR(p) =
m2 + (2s ! 1)m + 2(s ! 1)
4(log s + 1 + m)
m % &p, implies R(p) grows as &p.
PADS 2002 Tutorial – p. 63/70
Ratio of Packet/Fluid Events : Full Pipe
HOWEVER, once pipe fills
R(p) =p
4(log s + 1 + ms)
where ms is the congestion avoidance round number whenthe pipe fills. Linear growth in p
PADS 2002 Tutorial – p. 64/70
Ratios depend on problem parameters
ssthresh = 16 segments, lambda_app = 100,000 segs/sec
[Figure: packet events / fluid events (1-100,000, log scale) vs. transfer length (1-10^6 segments, log scale), for RTTs of 1 ms, 10 ms, 100 ms, and 1000 ms.]
Ratios depend on problem parameters
ssthresh = 256 segments, lambda_app = 100,000 segs/sec
[Figure: packet events / fluid events (1-100,000, log scale) vs. transfer length (1-10^6 segments, log scale), for RTTs of 1 ms, 10 ms, 100 ms, and 1000 ms.]
Ratios depend on problem parameters
ssthresh = 16 segments, lambda_app = 1000 segs/sec
[Figure: packet events / fluid events (1-100,000, log scale) vs. transfer length (1-10^6 segments, log scale), for RTTs of 1 ms, 10 ms, 100 ms, and 1000 ms.]
Ratios depend on problem parameters
ssthresh = 256 segments, lambda_app = 1000 segs/sec
[Figure: packet events / fluid events (1-100,000, log scale) vs. transfer length (1-10^6 segments, log scale), for RTTs of 1 ms, 10 ms, 100 ms, and 1000 ms.]
Transfer Lengths Needed for Event Reductions
                                        RTT = 1 ms                 RTT = 100 ms
ssthresh   Application rate (seg/s)   10x    100x    1000x      10x    100x    1000x
16         100,000                    472   17861   178061      472   76156       ?
256        100,000                    255    1655    16055      255    5821       ?
16         1,000                       21     201     2001      472   17861  178061
256        1,000                       21     201     2001      255    1655   16055
Conclusions
Many features of modern TCP can be modeled in a fluid context.
The promise of significant performance advantages is high.
We're working on an implementation, and will compare it with real TCP in the SSFNet framework.