Measurement Study of Low-bitrate Internet Video Streaming
Dmitri Loguinov, Hayder Radha
City University of New York and Philips Research
Motivation
• Market research by Telecommunications Reports International (TRI) from August 2001
• The report estimates that out of 70.6 million households in the US with Internet access, an overwhelming majority use dialup modems (Q2 of 2001)
[Bar chart: households (million) by access type – paid dialup ISPs 52.2, free ISPs 9.1, cable 3.1, DSL 4.9, Internet TV 1.2, satellite 0.1]
Motivation (cont’d)
• Consider broadband (cable, DSL, and satellite) Internet access vs. narrowband (dialup and webTV)
• 89% of households use modems and 11% broadband
[Bar chart: subscribers (percent) – dialup 88.45%, broadband 11.55%]
Motivation (cont’d)
• Customer growth in Q2 of 2001:
– +5.2% dialup
– +0.5% cable modem
– +29.7% DSL
– Even though DSL grew fast in early 2001, the situation is now different, with many local providers going bankrupt and telcos raising prices
• A report from Cahners In-Stat Group found that despite strong forecasted growth in broadband access, most households will still be using dial-up access in the year 2005 (similar forecast in Forbes ASAP, Fall 2001, by IDC market research)
• Furthermore, 30% of US households state that they still have no need or desire for Internet access
Motivation (cont’d)
• Our study was conducted in late 1999 – early 2000
• At that time, even more Internet users connected through dialup modems, which sparked our interest in using modems as the primary access technology for our study
• However, even today, our study could be considered up-to-date given the numbers from recent market research reports
• In fact, the situation is unlikely to change until year 2005
• Our study examines the performance of the Internet from the angle of an average Internet user
Motivation (cont’d)
• Note that previous end-to-end studies positioned end-points at universities or campus networks with typically high-speed (T1 or faster) Internet access
• Hence, the performance of the Internet perceived by a regular home user remains largely undocumented
• Rather than using TCP or ICMP probes, we employed real-time video traffic in our work, because performance studies of real-time streaming in the Internet are quite scarce
• The choice of NACK-based error control was suggested by RealPlayer and Windows MediaPlayer, which dominate the current Internet streaming market
Overview of the Talk
• Experimental Setup
• Overview of the Experiment
• Packet Loss
• Underflow Events
• Round-trip Delay
• Delay Jitter
• Packet Reordering
• Asymmetric Paths
• Conclusion
Experimental Setup
• MPEG-4 client-server real-time streaming architecture
• NACK-based retransmission and fixed streaming bitrate (i.e., no congestion control)
• Stream S1 at 14 kb/s (16.0 kb/s IP rate), Nov-Dec 1999
• Stream S2 at 25 kb/s (27.4 kb/s IP rate), Jan-May 2000
[Diagram: server (campus network, T1 link) → Internet backbone (UUNET/MCI) → national dial-up ISP → modem link → client]
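The NACK-based recovery in this setup can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the function name and interface are hypothetical:

```python
# Illustrative sketch of NACK-based loss detection (not the authors'
# code): the client tracks sequence numbers and requests retransmission
# of any gap it observes in the arriving stream.
def detect_gaps(received_seq, expected_next):
    """Return (nacks, new_expected): sequence numbers to NACK and the
    updated next-expected sequence number."""
    nacks = []
    if received_seq > expected_next:
        # packets expected_next .. received_seq - 1 are missing
        nacks = list(range(expected_next, received_seq))
    return nacks, max(expected_next, received_seq + 1)

# Example: expecting packet 5, packet 8 arrives -> NACK packets 5, 6, 7
nacks, nxt = detect_gaps(8, 5)
```

Because the streaming bitrate is fixed, the server simply resends the NACKed packets alongside the ongoing stream; there is no congestion-control reaction to the loss.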
Overview
• Three ISPs (Earthlink, AT&T WorldNet, IBM Global Net)
• Phone database included 1,813 dialup points in 1,188 cities
• The experiment covered 1,003 points in 653 US cities
• Over 34,000 long-distance phone calls
• 85 million video packets, 27.1 GBytes of data
• End-to-end paths with 5,266 Internet router interfaces
• 51% of routers belonged to the dialup ISPs and 45% to UUNET
• Sets D1p and D2p contain successful sessions with streams S1 and S2, respectively
Overview (cont’d)
[US map: number of participating cities per state; legend (653 cities total): 20–29 cities (14 states), 14–20 (6), 9–14 (10), 6–9 (9), 1–6 (12)]
• Cities per state that participated in the experiment
Overview (cont’d)
• Streaming success rate during the day shown below
• Varied from 80% during the night (midnight – 6 am) to 40% during the day (9 am – noon)
[Bar chart: streaming success rate per three-hour interval of the day (EDT)]
Overview (cont’d)
• Average end-to-end hop count 11.3 in D1p and 11.9 in D2p
• The majority of paths contained between 11 and 13 hops
• Shortest path – 6 hops, longest path – 22 hops
[Histogram: distribution of end-to-end hop counts (6–22) in D1p and D2p]
Packet Loss
• Average packet loss was 0.5% in both datasets
• 38% of 10-min sessions with no packet loss
• 75% with loss below 0.3%
• 91% with loss below 2%
• During the day, average packet loss varied between 0.2% (3 am – 6 am) and 0.8% (9 am – 6 pm EDT)
• Per-state packet loss varied between 0.2% (Idaho) and 1.4% (Oklahoma), but did not depend on the average RTT or the average number of end-to-end hops in the state (correlation –0.16 and –0.04, respectively)
Packet Loss (cont’d)
[Chart: per-state average packet loss (left axis, 0–1.6%) and average hop count (right axis, 0–16)]
Packet Loss (cont’d)
• 207,384 loss bursts and 431,501 lost packets
• Loss burst lengths (PDF and CDF):
[Chart: PDF and CDF of loss burst lengths (packets)]
Packet Loss (cont’d)
• Most bursts contained no more than 5 packets (however, the tail reached over 100 packets)
• RED was disabled in the backbone; still, 74% of loss bursts contained only 1 packet, apparently dropped in FIFO (drop-tail) queues
• Average burst length was 2.04 packets in D1p and 2.10 packets in D2p
• Conditional probability of packet loss was 51% and 53%, respectively
• Over 90% of loss burst durations were under 1 second (maximum 36 seconds)
• The average distance between lost packets was 21 and 27 seconds, respectively
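The burst metrics on this slide (burst lengths, average burst length, conditional loss probability) can be derived from a per-packet loss indicator sequence; a hypothetical Python sketch, not the authors' analysis code:

```python
# Sketch: derive loss-burst statistics from a per-packet loss sequence
# (1 = lost, 0 = received), in send order.
def burst_stats(losses):
    bursts = []          # lengths of consecutive-loss runs
    run = 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    avg_burst = sum(bursts) / len(bursts) if bursts else 0.0
    # Conditional loss probability: P(loss | previous packet lost)
    pairs = [(a, b) for a, b in zip(losses, losses[1:]) if a]
    cond = sum(b for _, b in pairs) / len(pairs) if pairs else 0.0
    return bursts, avg_burst, cond

bursts, avg, cond = burst_stats([0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0])
# bursts == [2, 1, 3], average 2.0, conditional loss probability 0.5
```

On this toy trace the conditional loss probability (0.5) already sits close to the 51–53% measured in the study: a conditional probability well above the unconditional 0.5% loss rate is what makes the losses bursty.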
Packet Loss (cont’d)
• Apparently heavy-tailed distribution of loss burst lengths
• Pareto with α = 1.34; however, note that the data was non-stationary (time-of-day and access-point non-stationarity)
[Log-log plot: CCDF of loss burst lengths with power-law (Pareto) and exponential fits]
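One rough way to obtain a Pareto index like α = 1.34 is least-squares regression on the log-log CCDF. The slide does not state how the fit was produced, so the following is only an assumed method, sketched in Python:

```python
# Sketch: estimate the Pareto tail index alpha by regressing
# log P(X >= x) on log x; for a Pareto distribution the empirical
# CCDF is approximately x^(-alpha), i.e. a line of slope -alpha
# on log-log axes.
import math

def pareto_alpha(samples):
    n = len(samples)
    xs = sorted(set(samples))
    pts = [(math.log(x), math.log(sum(1 for s in samples if s >= x) / n))
           for x in xs]
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope
```

This simple estimator is biased in the extreme tail (few samples there), which is one reason the slide hedges the fit with the non-stationarity caveat.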
Underflow events
• Missing packets (frames) at their decoding deadlines cause buffer underflows at the receiver
• Startup delay used in the experiment was 2.7 seconds
• 63% (271,788) of all lost packets were discovered to be missing before their deadlines
• Out of these 63% of lost packets:
– 94% were recovered in time
– 3.3% were recovered late
– 2.1% were never recovered
• Retransmission appears quite effective in dealing with packet loss, even in the presence of large end-to-end delays
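Whether a retransmission arrives "in time" can be modeled with a simple deadline check: the packet's decoding deadline is its send time plus the startup delay, and a retransmission requested at detection time returns roughly one RTT later. This is an assumed model for illustration, not the authors' exact logic:

```python
# Assumed model (illustrative): a lost packet is recoverable if the
# retransmission, requested when the gap is detected, arrives one RTT
# later and no later than the packet's decoding deadline.
def recoverable(send_time, detect_time, rtt, startup_delay=2.7):
    deadline = send_time + startup_delay   # 2.7 s startup delay used here
    return detect_time + rtt <= deadline

# Packet sent at t = 10.0 s, gap noticed at t = 10.4 s, RTT 0.7 s:
# retransmission arrives at 11.1 s, deadline is 12.7 s -> recoverable.
```

The 2.7-second startup delay leaves room for one or even several retransmission attempts at typical sub-second RTTs, which is why 94% of the losses detected in time were recovered in time.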
Underflow events (cont’d)
• 37% (159,713) of lost packets were discovered to be missing after their deadlines had passed
• This effect was caused by large one-way delay jitter
• Additionally, one-way delay jitter caused 1,167,979 data packets to be late for decoding
• Overall, 1,342,415 packets were late (1.7% of all sent packets), out of which 98.9% were late due to large one-way delay jitter rather than due to packet loss combined with large RTT
• All late packets caused the “freeze-frame” effect for 10.5 seconds on average in D1p and 8.5 seconds in D2p (recall that each session was 10 minutes long)
Underflow events (cont’d)
• 90% of late retransmissions missed the deadline by no more than 5 seconds, 99% by no more than 10 seconds
• 90% of late data packets missed the deadline by no more than 13 seconds, 99% by no more than 27 seconds
[CDF of lateness (seconds, log scale) for data packets and retransmissions]
Round-trip Delay
• 660,439 RTT samples
• 75% of samples below 600 ms and 90% below 1 second
• Average RTT was 698 ms in D1p and 839 ms in D2p
• Maximum RTT was over 120 seconds
• Data-link retransmission combined with the low-bitrate connection was responsible for pathologically high RTTs
• However, we found access points with 6-7 second IP-level buffering delays
• Generally, RTTs over 10 seconds were considered pathological in our study
Round-trip Delay (cont’d)
• Distributions of the RTT in both datasets (PDF) were similar and contained a very long tail
[PDF of the RTT (seconds, log scale) for D1p and D2p]
Round-trip Delay (cont’d)
• Distribution tails closely matched hyperbolic distributions (Pareto with α between 1.16 and 1.58)
[Log-log plot: CCDF of the RTT for D1p and D2p with a power-law fit]
Round-trip Delay (cont’d)
• The average RTT varied during the day between 574 ms (3 am – 6 am) and 847 ms (3 pm – 6 pm) in D1p
• Between 723 ms and 951 ms in D2p
• Relatively small increase in the RTT during the day (by only 30-45%) compared to that in packet loss (by up to 300%)
• Per-state RTT varied between 539 ms (Maine) and 1,053 ms (Alaska); Hawaii and New Mexico also had average RTTs above 1 second
• Little correlation between the RTT and geographical distance of the state from NY
• However, much stronger positive correlation between the number of hops and the average state RTT: ρ = 0.52
Delay Jitter
• Delay jitter is one-way delay variation
• Positive values of delay jitter correspond to packet expansion events
• 97.5% of positive samples below 140 ms
• 99.9% under 1 second
• Highest sample 45 seconds
• Even though large delay jitter was not frequent, once a single packet was delayed by the network, the following packets were also delayed, creating a “snowball” of late packets
• Large delay jitter was very harmful during the experiment
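Delay jitter as defined on this slide can be computed directly from send and receive timestamps; an illustrative sketch (not the study's measurement code):

```python
# Sketch: per-packet one-way delay jitter, defined as the change in
# one-way delay between consecutive packets. Equivalently, it is the
# inter-arrival gap minus the inter-send gap: positive values mean the
# gap expanded in transit (a packet expansion event).
def delay_jitter(send_times, recv_times):
    return [(r2 - r1) - (s2 - s1)
            for (s1, s2), (r1, r2) in zip(
                zip(send_times, send_times[1:]),
                zip(recv_times, recv_times[1:]))]

# Packets sent every 100 ms; the third packet is delayed by 40 ms extra.
j = delay_jitter([0.0, 0.1, 0.2], [0.50, 0.60, 0.74])
# j is approximately [0.0, 0.04]
```

Note that no clock synchronization is needed: constant clock offset between sender and receiver cancels out of the difference, which is why one-way *jitter* is measurable even when one-way delay is not.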
Packet Reordering
• Average reordering rates were low, but noticeable
• 6.5% of missing packets (or 0.04% of sent) were reordered
• Out of 16,852 sessions, 1,599 (9.5%) experienced at least one reordering event
• The highest reordering rate per ISP occurred in AT&T WorldNet, where 35% of missing packets (0.2% of sent packets) were reordered
• In the same set, almost half of the sessions (47%) experienced at least one reordering event
• Earthlink had a session where 7.5% of sent packets were reordered
Packet Reordering (cont’d)
• Reordering delay Dr is the time between detecting a missing packet and receiving the reordered packet
• 90% of samples Dr below 150 ms, 97% below 300 ms, 99% below 500 ms, and the maximum sample was 20 seconds
[Histogram: reordering delay Dr (0–1000 ms)]
Packet Reordering (cont’d)
• Reordering distance is the number of packets received during the reordering delay (84.6% of the time a single packet, 6.5% exactly 2 packets, 4.5% exactly 3 packets)
• TCP’s triple-ACK avoids 91.1% of redundant retransmits and quadruple-ACK avoids 95.7%
[Histogram: reordering distance dr (packets): 1: 84.6%, 2: 6.5%, 3: 4.5%, 4: 2.2%, 5: 1.4%, 6: 0.5%, 7: 0.2%, 8: 0.1%, 9–10: 0.0%]
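The triple-ACK figures follow directly from this distance distribution: a TCP sender using a k-duplicate-ACK threshold retransmits spuriously only when the reordering distance reaches k, so the fraction of reordering events it absorbs is the cumulative mass below k. A small sketch, with the percentages taken from this slide:

```python
# Arithmetic behind the triple-ACK figure: with a k-dup-ACK threshold,
# a reordering event with distance < k generates too few duplicate ACKs
# to trigger a (redundant) fast retransmit.
dist_pdf = {1: 84.6, 2: 6.5, 3: 4.5, 4: 2.2, 5: 1.4,
            6: 0.5, 7: 0.2, 8: 0.1}          # percent, from the chart

def absorbed(k):
    """Percent of reordering events with distance below k."""
    return sum(p for d, p in dist_pdf.items() if d < k)

triple = absorbed(3)   # 84.6 + 6.5 = 91.1
quad = absorbed(4)     # + 4.5 = 95.6 (the slide's 95.7 presumably
                       # reflects the unrounded measurement data)
```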
Path Asymmetry
• Asymmetry detected by analyzing the TTL of the returned packets during the initial traceroute
• Each router reset the TTL to a default value (such as 255) when sending a “TTL expired” ICMP message
• If the number of forward and reverse hops was different, the path was “definitely asymmetric”
• Otherwise, the path was “possibly (or probably) symmetric”
• No fail-proof way of establishing path symmetry using end-to-end measurements (even using two traceroutes in reverse directions)
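The TTL heuristic can be sketched as follows, assuming the router's initial TTL for its ICMP "TTL expired" replies is known (e.g., 255); illustrative Python, with hypothetical names:

```python
# Sketch of the TTL heuristic above: the reverse hop count is the
# router's initial ICMP TTL (assumed known, e.g. 255) minus the TTL
# observed in the arriving ICMP message. If forward and reverse hop
# counts differ, the path is definitely asymmetric; if they match,
# symmetry is only possible, not proven.
def classify_path(forward_hops, icmp_ttl, initial_ttl=255):
    reverse_hops = initial_ttl - icmp_ttl
    if reverse_hops != forward_hops:
        return "definitely asymmetric"
    return "possibly symmetric"

# 11 forward hops, ICMP reply arrives with TTL 242 -> 13 reverse hops.
```

In practice routers use several common initial TTLs (64, 128, 255), so a real tool must first infer the initial value, typically by rounding the observed TTL up to the nearest common default.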
Path Asymmetry (cont’d)
• 72% of sessions operated over definitely asymmetric paths
• Almost all paths with 14 or more end-to-end hops were asymmetric
• Even the shortest paths (with as few as 6 hops) were prone to asymmetry
• “Hot potato” routing is more likely to cause asymmetry in longer paths, because they are more likely to cross AS borders than shorter paths
• Longer paths also exhibited a higher reordering probability than shorter paths
Conclusion
• Dialing success rates were quite low during the day (as low as 40%)
• Retransmission worked very well even for delay-sensitive traffic and high-latency end-to-end paths
• Both RTT and packet-loss bursts appeared to be heavy-tailed
• Our clients experienced huge end-to-end delays, due both to large IP buffers and to persistent data-link retransmission
• Reordering was frequent even given our low bitrates
• Most paths were in fact asymmetric, and longer paths were more likely to be identified as asymmetric