ShadowStream: Performance Evaluation as a Capability in Production Internet Live Streaming Networks



SHADOWSTREAM: PERFORMANCE EVALUATION AS A CAPABILITY IN PRODUCTION INTERNET LIVE STREAMING NETWORKS

ACM SIGCOMM 2012, 2012.10.15

CING-YU CHU

MOTIVATION
• Live streaming is a major Internet application today
• Evaluation of live streaming
  • Lab/testbed, simulation, modeling
  • Scalability
  • Realism
• Live testing

CHALLENGE
• Protection
  • Real viewers' QoE
  • Masking failures from real viewers
• Orchestration
  • Orchestrating desired experimental scenarios (e.g., flash crowd)
  • Without disturbing QoE

MODERN LIVE STREAMING
• Complex hybrid systems
  • Peer-to-peer network
  • Content delivery network
• BitTorrent-like
  • Tracker: peers watching the same channel form the overlay network topology
  • Basic unit: pieces

MODERN LIVE STREAMING
• Modules
  • P2P topology management
  • CDN management
  • Buffer and playpoint management
  • Rate allocation
  • Download/upload scheduling
  • Viewer interfaces
  • Shared bottleneck management
  • Flash-crowd admission control
  • Network friendliness

METRICS
• Piece missing ratio
  • Pieces not received by the playback deadline
• Channel supply ratio
  • Total bandwidth capacity (CDN + P2P) to total streaming bandwidth demand
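The two metrics above can be sketched as small helper functions. This is an illustrative sketch only; the function names and signatures are assumptions, not from the paper.

```python
def piece_missing_ratio(deadlines, arrivals):
    """Fraction of pieces not received by their playback deadline.

    `deadlines` maps piece id -> playback deadline; `arrivals` maps
    piece id -> receive time (a piece absent from `arrivals`, or one
    arriving after its deadline, counts as missing).
    """
    missing = sum(1 for p, d in deadlines.items()
                  if p not in arrivals or arrivals[p] > d)
    return missing / len(deadlines)


def channel_supply_ratio(cdn_bw, p2p_bw, stream_rate, num_viewers):
    """Total bandwidth capacity (CDN + P2P) over total streaming demand."""
    return (cdn_bw + p2p_bw) / (stream_rate * num_viewers)
```

For example, 500 units of total capacity serving 300 viewers of a unit-rate stream gives a supply ratio of about 1.67, matching the "small scale" setting on the next slides.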

MISLEADING RESULTS: SMALL SCALE
• EmuLab: 60 clients vs. 600 clients
• Supply ratio
  • Small: 1.67
  • Large: 1.29
• Content bottleneck!

MISLEADING RESULTS: SMALL SCALE
• With a connection limit
  • The CDN server's neighbor connections are exhausted by the clients that join earlier

MISLEADING RESULTS: MISSING REALISTIC FEATURES
• Network diversity
  • Network connectivity
  • Amount of network resources
  • Network protocol implementations
  • Router policies
  • Background traffic

MISLEADING RESULTS: MISSING REALISTIC FEATURES
• LAN-like network vs. ADSL-like network
• Hidden buffers
  • ADSL has larger buffers but limited upload bandwidth

SYSTEM ARCHITECTURE

STREAMING MACHINE
• A self-complete set of algorithms to download and upload pieces
• Multiple streaming machines
  • Experiment (E)
  • Play buffer

R+E TO MASK FAILURES
• Another streaming machine
  • For protection
  • Repair (R)

R+E TO MASK FAILURES
• Virtual playpoint
  • Introduces a slight delay
  • Hides failures from real viewers
• R = rCDN
  • Dedicated CDN resources
  • Bottleneck

R = PRODUCTION
• Production streaming engine
  • Fine-tuned algorithms (hybrid architecture)
  • Larger resource pool
  • More scalable protection
  • Serves clients before the experiment starts

PROBLEM OF R = PRODUCTION
• Systematic bias
  • Competition between experiment and production
  • Protecting QoE gives production higher priority, which underestimates the experiment's performance

PCE
• R = P + C
  • C: CDN (rCDN) with bounded resources (δ)
  • P: production

PCE
• rCDN as a filter
  • It "lowers" the piece missing ratio curve of the experiment visible to production by δ
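The filtering effect of the bounded rCDN can be summarized in one line. A minimal sketch of the idea, assuming δ is expressed as a piece missing ratio bound; this is not the paper's code.

```python
def production_visible_missing(experiment_missing, delta):
    """Sketch of the PCE filtering idea (illustrative): the bounded
    rCDN (C) absorbs up to `delta` of the experiment's piece missing
    ratio, so production (P) only has to repair the excess above delta.
    """
    return max(0.0, experiment_missing - delta)
```

So an experiment with a 6% piece missing ratio and δ = 4% presents production with only a 2% repair load, while an experiment missing less than δ is invisible to production entirely.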

IMPLEMENTATION
• Modular process for streaming machines
• Sliding window to partition downloading tasks

STREAMING HYPERVISOR
• Task window management: sets up the sliding window
• Data distribution control: copies data among streaming machines
• Network resource control: bandwidth scheduling among streaming machines
• Experiment transition

STREAMING HYPERVISOR

TASK WINDOW MANAGEMENT
• Informs a streaming machine about the pieces that it should download
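One way such sliding task windows could be handed out is sketched below; the function name, the fixed-size windows, and the exact placement of the repair window are illustrative assumptions, not the paper's scheme.

```python
def task_windows(playpoint, window_size, repair_lag):
    """Hypothetical sketch of task window management: the experiment
    machine (E) is assigned the pieces ahead of the playpoint, while
    the repair machine (R) trails by `repair_lag` pieces (the
    virtual-playpoint delay), re-fetching anything E missed before
    the delayed playpoint reaches it."""
    return {
        "experiment": list(range(playpoint, playpoint + window_size)),
        "repair": list(range(playpoint - repair_lag, playpoint)),
    }
```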

DATA DISTRIBUTION CONTROL
• Data store
  • Shared data store
  • Each streaming machine holds a pointer into it
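A pointer-based shared data store could look like the sketch below (class and method names are illustrative assumptions): payloads are stored once, and "copying" data between streaming machines is just a pointer update.

```python
class SharedDataStore:
    """Illustrative sketch of the hypervisor's shared data store:
    each piece payload is stored exactly once, and each streaming
    machine keeps only a set of piece ids (its pointer view)."""

    def __init__(self):
        self.pieces = {}   # piece id -> payload, stored exactly once
        self.views = {}    # machine name -> set of piece ids it holds

    def register(self, machine):
        self.views[machine] = set()

    def put(self, machine, piece_id, payload):
        # Store the payload once; record that this machine has it.
        self.pieces.setdefault(piece_id, payload)
        self.views[machine].add(piece_id)

    def copy(self, src, dst, piece_id):
        # "Copying" between machines updates a pointer, not a payload.
        if piece_id in self.views[src]:
            self.views[dst].add(piece_id)
```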

NETWORK RESOURCE CONTROL
• Production bears higher priority
• LEDBAT to perform bandwidth estimation
  • Avoids congestion in hidden network buffers
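Strict-priority bandwidth scheduling between the machines can be reduced to a couple of lines; a sketch under the assumption of a two-way split, with capacity estimation (e.g. LEDBAT-style, delay-based) assumed to happen elsewhere.

```python
def allocate_upload(capacity, production_demand, experiment_demand):
    """Sketch of priority bandwidth scheduling (illustrative names):
    production traffic is served first, and the experiment machine
    only receives the leftover capacity."""
    prod = min(capacity, production_demand)
    exp = min(capacity - prod, experiment_demand)
    return prod, exp
```

When production alone saturates the estimated capacity, the experiment machine receives nothing, which is exactly the systematic-bias concern raised for R = production.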

EXPERIMENT ORCHESTRATION
• Triggering
• Arrival
• Experiment transition
• Departure

SPECIFICATION AND TRIGGERING
• Testing behavior pattern
  • Multiple classes
  • Each class:
    • Arrival rate function λ(t) during the interval
    • Duration function L
• Triggering condition
  • tstart

ARRIVAL
• Independent arrivals to achieve the global arrival pattern
• Network-wide common parameters
  • tstart, texp, and λ(t)
  • Included in keep-alive messages
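One way independent clients could realize a shared rate function λ(t) is rejection sampling: each client draws its own arrival time from the common parameters, and the aggregate of many independent draws follows λ(t). This is a sketch of the general technique, not necessarily the paper's exact scheme; `lam_max` is an assumed upper bound on λ(t) over the experiment interval.

```python
import random


def draw_arrival_time(t_start, t_exp, lam, lam_max, rng):
    """Each client independently rejection-samples an arrival time
    in [t_start, t_start + t_exp] with density proportional to
    lam(t), using only the network-wide parameters it already has."""
    while True:
        t = rng.uniform(t_start, t_start + t_exp)
        if rng.random() < lam(t) / lam_max:
            return t
```

With a ramp-shaped λ(t), for instance, most independently drawn arrivals land late in the interval, producing a flash-crowd pattern without any central coordinator.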

EXPERIMENT TRANSITION
• At current time t0, a client scheduled to join the experiment at ae,i stays in production during [t0, ae,i]
• Connectivity transition
  • A production neighbor's view stays production (not in test)
  • Production rejoins

EXPERIMENT TRANSITION
• Playbuffer state transition
  • Legacy removal

DEPARTURE
• Early departure
  • Capturing a client state snapshot
  • Using a disconnection message
• Substitution
  • Runs the arrival process again
  • Only equal to or more frequent than the real viewer departure pattern

EVALUATION
• Software framework
• Experimental opportunities
• Protection and accuracy
• Experiment control
• Deterministic replay

SOFTWARE FRAMEWORK
• Compositional run-time
  • Block-based architecture
• Total ~8,000 lines of code
• Flexibility

EXPERIMENTAL OPPORTUNITIES
• Real traces from two live streaming test channels (impossible in a testbed)
  • Flash crowd
  • No client departs

PROTECTION AND ACCURACY
• EmuLab (weakness)
  • Multiple experiments with the same settings
  • 300 clients
  • δ ≈ 4%
  • Buggy code!

EXPERIMENT CONTROL
• Trace-driven simulation
• Accuracy of distributed arrivals
  • Impact of clock synchronization
  • Up to 3 seconds

DETERMINISTIC REPLAY
• Minimize logged data
• Hypervisor
  • Protocol packets: whole payload
  • Data packets: only headers
