Berkeley-Helsinki Summer Course
Lecture #15: Content Distribution Networks and Service Composition Paths
Randy H. Katz, Computer Science Division, Electrical Engineering and Computer Science Department, University of California, Berkeley, CA 94720-1776

TRANSCRIPT

Page 1

Berkeley-Helsinki Summer Course

Lecture #15: Content Distribution Networks and Service Composition Paths

Randy H. Katz

Computer Science Division

Electrical Engineering and Computer Science Department

University of California

Berkeley, CA 94720-1776

Page 2

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 3

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 4

Services Within the Network: Caching and Distribution

[Diagram: an "Internet Grid" of parallel network backbones and Internet exchange points, with co-location facilities, scalable servers, and Web caches]

Page 5

Caching Advantages for Service Providers

• Move data closer to the consumer
• Backbone caches save bandwidth
• Edge caches for QoS
• 4 billion hits/day at AOL!
• Even more crucial for broadband access networks, e.g., cable, DSL

[Diagram: caches ($$) at the ISP backbone and at local POPs, between subscribers and the Internet]

Eric Brewer

Page 6

Reverse Caching

• Forward proxy cache: the cache handles client requests
• Reverse proxy cache: the cache fronts the origin server

[Diagram: a forward proxy cache ($) sits between clients and the Internet; a reverse proxy cache ($) sits between the Internet and the origin server]

Eric Brewer

Page 7

Surge Protection via Clustered Caches

Reverse caches buffer load across multiple sites.

[Diagram: a reverse proxy cluster in the hosting provider network fronts www.site1.com through www.site6.com, absorbing request surges from the Internet]

Eric Brewer

Page 8

Content Distribution

We can connect these caches! Push content out to the edge.

[Diagram: the reverse proxy cluster in the hosting provider network feeds forward caches in the ISP networks across the Internet]

Eric Brewer

Page 9

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 10

Example: Application-level Multicast

Solve the multicast management and peering problems by moving up the protocol stack.

[Diagram: isolated multicast clouds interconnected only by traditional unicast peering]

Steve McCanne

Page 11

Example: Application-level Multicast

Solve the multicast management and peering problems by moving up the protocol stack

Steve McCanne

Page 12

Multicast as an Infrastructure Service

• Global multicast as an "infrastructure service", not a core network primitive
– Circumvents the technical, operational, and business barriers that have blocked interdomain multicast routing, management, and billing
• No coherent architecture for infrastructure services, because of the end-to-end principle
• Needed: a service stack to complement the IP protocol stack
– Open redirection
– Content-level peering

Steve McCanne

Page 13

The Service Stack

[Diagram: applications over TCP service over IP service; network services live in routers, end-host services in end hosts; the end-to-end argument applies here, between the end hosts]

Steve McCanne

Page 14

The Service Stack

[Diagram: DNS as the first infrastructure service: applications on the end host use a DNS stub over TCP and IP, while DNS itself runs on overlay nodes, alongside network services in routers and end-host services]

Steve McCanne

Page 15

The Service Stack

[Diagram: cache services and proxy services join DNS as infrastructure services hosted on overlay nodes]

Steve McCanne

Page 16

The Service Stack

[Diagram: redirection added to the infrastructure service layer alongside DNS, cache, and proxy services]

Steve McCanne

Page 17

Service Elements for Internet Broadcast

[Diagram: the service stack specialized for broadcast: IP and scoped IP multicast as network services; broadcast redirection as an infrastructure service on overlay nodes; applications with DNS and redirection stubs over TCP on the end host]

Steve McCanne

Page 18

Incremental Path

[Diagram: the same broadcast service stack deployed incrementally: today's streaming clients (G2, WMT, QT4) speak RTSP and RTP to the overlay, over IP and scoped IP multicast]

Steve McCanne

Page 19

Broadcast Overlay Architecture

[Diagram: broadcasters feed a content broadcast network; content is distributed through a multicast overlay network to edge servers; a redirection fabric provides load balancing through server redirection and inter-ISP redirection peering; a management platform and tools oversee the system; clients attach at the edge servers]

Steve McCanne

Page 20

A New Kind of Internet

• Actively push services towards the edges: caches, content distribution points

• Manage redirection, not routes• New applications-specific protocols

– Push content to the edge– Invalidate remote content for freshness– Collate remote logs into a single log– Internet TV/Radio: streaming media that works

• Twilight of the end-to-end argument– Trusted service providers/network intermediaries– Service providers create own application-specific

overlays, e.g., cache and streaming media content distribution

Page 21

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 22

Web Caching Service: Akamai

Number of Servers: 5,000
Number of Networks: 350
Number of Countries: 50
(as of Fall 2000)

Typically serves 70-90% of a site's Web content

Page 23

ARLs and Akamai Traffic Redirection

• http://www.foo.com/images/logo.gif when Akamaized becomes:

– http://a836.g.akamaitech.net/7/836/123/e358f5db004e9/www.foo.com/images/logo.gif

» Akamai Domain: redirection to an Akamai server
» Type Code: identifies the Akamai service
» Serial #: content "bucket" served from the same server
» Content Provider Code: identifies the Akamai content provider
» Object Data: expiration/version information
» URL: the original locator, used if the content is not at the server
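To make the field layout concrete, here is a small Python sketch (not Akamai tooling; the path layout is read off the example ARL above) that splits an Akamaized URL into the fields just listed:

```python
# A minimal sketch: split the example ARL from this slide into its fields.
from urllib.parse import urlparse

def parse_arl(arl: str) -> dict:
    """Split an Akamaized URL (ARL) into the components listed above."""
    parsed = urlparse(arl)
    # Path layout assumed from the slide's example:
    #   /<type code>/<serial #>/<provider code>/<object data>/<original URL>
    type_code, serial, provider, object_data, *origin = parsed.path.strip("/").split("/")
    return {
        "akamai_domain": parsed.netloc,        # e.g. a836.g.akamaitech.net
        "type_code": type_code,                # identifies the Akamai service
        "serial": serial,                      # content "bucket"
        "provider_code": provider,             # Akamai content provider
        "object_data": object_data,            # expiration/version information
        "original_url": "/".join(origin),      # fallback locator
    }

print(parse_arl("http://a836.g.akamaitech.net/7/836/123/"
                "e358f5db004e9/www.foo.com/images/logo.gif"))
```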

Page 24

Akamai’s DNS Extensions

• *.g.akamai.net mapped onto an IP address
• Two-level hierarchy (see the toy resolver below)
– HLDNS: redirects the lookup to an LLDNS "close" to the client; the network map is recomputed every O(10 minutes); the resolution has a TTL of 30 minutes
– LLDNS: redirects to the "optimally located" Akamai server for the client; the network map is recomputed every O(10 seconds); the resolution has a TTL of 30 seconds
– Map generation based on:
» Internet congestion
» System load
» User demands
» Server status
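The effect of the two TTLs can be illustrated with a toy resolver (all names, maps, and the least-loaded policy below are hypothetical stand-ins for Akamai's measured maps):

```python
# Toy sketch of two-level DNS redirection with different TTLs: the region
# choice is cached for 30 minutes, the edge-server choice for only 30 seconds,
# so server selection can track load on a much finer time scale.
import time

HL_TTL = 30 * 60   # HLDNS answer: which LLDNS to ask (TTL 30 minutes)
LL_TTL = 30        # LLDNS answer: which edge server to use (TTL 30 seconds)

# Hypothetical maps; Akamai recomputes these every ~10 minutes / ~10 seconds.
REGION_TO_LLDNS = {"eu": "lldns-helsinki", "us": "lldns-berkeley"}
LLDNS_TO_SERVERS = {"lldns-helsinki": ["edge-hel-1", "edge-hel-2"],
                    "lldns-berkeley": ["edge-bk-1", "edge-bk-2"]}

_cache = {}  # name -> (answer, expiry time)

def resolve(region, server_load):
    """Two-step resolution with TTL caching at both levels."""
    now = time.time()
    lldns, expiry = _cache.get(region, (None, 0.0))
    if now >= expiry:                                # HLDNS answer expired
        lldns = REGION_TO_LLDNS[region]              # LLDNS "close" to the client
        _cache[region] = (lldns, now + HL_TTL)
    server, expiry = _cache.get(lldns, (None, 0.0))
    if now >= expiry:                                # LLDNS answer expired
        server = min(LLDNS_TO_SERVERS[lldns],        # "optimal" = least loaded here
                     key=lambda s: server_load.get(s, 0.0))
        _cache[lldns] = (server, now + LL_TTL)
    return server

print(resolve("eu", {"edge-hel-1": 0.9, "edge-hel-2": 0.2}))  # -> edge-hel-2
```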

Page 25

Akamai Fault Tolerance

• Machine failures
– Buddy system with paired back-up servers
– Recovery time is 1 second once detected
• Network outages/data center outages
– Continuous monitoring
– Set the response to infinity when out, thereby driving the site from the network maps
– Recovery time is 1-2 minutes due to frequent map updates
• Content provider home site must be robust!
• 7x24x365 NOC
• Geoflow Monitoring Software/Traffic Analyzer

Page 26

Internet Cache Protocols

• Internet Cache Protocol (ICP)
– Peer-to-peer: check whether missing content is in a nearby cache
• Cache Array Routing Protocol (CARP)
– Confederation of caches forming a larger, unified cache (see the hash-routing sketch below)
• Cache Digest Protocol
– Exchange descriptions of what is contained in each cache
– Used to manage peered caches
– Stale cached data can be an issue
• Web Cache Coordination Protocol (WCCP)
– Intercept HTTP and redirect it to a cache
– Cisco Cache Engine: WCCP manages router redirection
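CARP's key idea is deterministic hash routing: every client scores each (URL, cache) pair the same way and sends the request to the top scorer, so the array behaves as one unified cache with no inter-cache query traffic. A minimal rendezvous-hashing sketch in that spirit (simplified; real CARP specifies its own hash functions and load-factor weighting):

```python
# Rendezvous ("highest random weight") hashing in the spirit of CARP.
import hashlib

def carp_route(url, proxies):
    """Pick the array member deterministically responsible for a URL."""
    def score(proxy):
        digest = hashlib.md5((url + proxy).encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(proxies, key=score)

proxies = ["cache-a.example.net", "cache-b.example.net", "cache-c.example.net"]
print(carp_route("http://www.foo.com/images/logo.gif", proxies))
```

A useful property of this scheme: adding or removing a member remaps only the URLs that scored highest on that member, unlike modulo hashing, which reshuffles almost everything.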

Page 27

Impediments to Caching

• Cache busting
– Server actively prevents content from being cached
– E.g., set the EXPIRES field to a value in the past, or CACHE-CONTROL: no-cache or no-store (see the sketch below)
– Responses
» Hit metering: inform the origin server of the number of users accessing cached content
» Ad insertion: the proxy server inserts ads, freeing the origin server from doing so
• Replication
– Mirror sites
– In a sense, content distribution is selective mirroring!
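For concreteness, a minimal sketch of the cache-busting headers named above, served from Python's standard http.server (the handler is illustrative, not taken from the lecture):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CacheBustingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>uncacheable page</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        # EXPIRES in the past: HTTP/1.0 caches treat the object as stale.
        self.send_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT")
        # no-cache forces revalidation; no-store forbids keeping a copy at all.
        self.send_header("Cache-Control", "no-cache, no-store")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CacheBustingHandler).serve_forever()
```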

Page 28

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 29

Example CDN Application: Internet Broadcast

• Media Distribution
– Application-level multicast
– Enforceable "content QoS"
• Content Peering
– Channel peering
– Redirection peering

McCanne, FFNets

Page 30

Media Distribution

[Diagram: application-level multicast and redirection connecting access networks across the backbones]

McCanne, FFNets

Page 31

Media Distribution

[Diagram: application-level multicast with redirection, build continued]

McCanne, FFNets

Page 32

Media Distribution

[Diagram: application-level multicast with redirection, build continued]

McCanne, FFNets

Page 33

Media Distribution

McCanne, FFNets

Page 34

Congested Peering Points

McCanne, FFNets

Page 35

CDN Quality of Service

• How to overcome congestion at peering points?
• Hard because peering policies evolve and hot spots move
• A few existing approaches
– Route around hot spots
– Satellite bypass
– Dispersity routing (Maxemchuk, 1977)
• A new alternative
– Provision the overlay network
» Seemingly intractable (QoS across ISP boundaries)
» But in reality, not so hard to get approximately right…

McCanne, FFNets

Page 36

CDN Quality of Service

• Build on intradomain SLAs
– A given ISP typically offers a great SLA for on-net destinations

McCanne, FFNets

Page 37

CDN Quality of Service

• Build on intradomain SLAs
– A given ISP typically offers a great SLA for on-net destinations from a "transit/colo" connection
– But all bets are off when you cross a peering point

McCanne, FFNets

Page 38

CDN Quality of Service

• Solution
– Create private, content-level peering points
– Bypass congested Internet peering points
– Enforce application-level QoS policies

[Diagram: co-located transit links on each side of a private peering point]

McCanne, FFNets

Page 39

CDN Quality of Service

• Solution
– Create private, content-level peering points
– Bypass congested Internet peering points
– Enforce application-level QoS policies

McCanne, FFNets

Page 40

Congested Peering Points

McCanne, FFNets

Page 41

Congested Peering Points

[Diagram: two P-NAPs connected by transit links around the congested peering points]

McCanne, FFNets

Page 42

Bypassing Congestion

[Diagram: traffic flows between the two P-NAPs, bypassing the congested peering points]

Page 43

Broadcast from Anywhere

[Diagram: with the P-NAPs in place, broadcast can originate from anywhere]

McCanne, FFNets

Page 44

Content-level QoS

• Mark and police traffic at the injection point (see the marking sketch below)
• Signal QoS policies across the overlay network
• Ensure content QoS on each overlay hop
– Map content QoS to the underlying network QoS
– e.g., diffserv, RSVP, MPLS
• No need for ubiquitous, end-to-end QoS in the network
• No need to modify apps or end hosts

[Diagram: overlay hops realized over an MPLS unicast mesh, ATM PVCs, IP multicast with diffserv, and DSL unicast, with ingress policing at the injection point]

McCanne, FFNets
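A minimal sketch of marking at the injection point, assuming an IP substrate that honors diffserv code points (the EF class choice and addresses are illustrative):

```python
import socket

EF_DSCP = 46          # "expedited forwarding", a common class for streaming media
TOS = EF_DSCP << 2    # the DSCP occupies the upper six bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Every datagram on this socket now carries the EF mark; a policer at the
# overlay's ingress would drop or re-mark traffic exceeding its contract.
sock.sendto(b"media frame", ("127.0.0.1", 5004))  # stand-in for the next hop
```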

Page 45

Content-level QoS

[Diagram: the application-level multicast and redirection core is completely managed and provisioned; the broadcast edge is moved as close as possible to the user]

McCanne, FFNets

Page 46

Channel Peering

• Establish data peering relationships at "content exchange points"
– Easy with application-level multicast
• Enforce QoS across the peering point
• The catch
– How to do settlement?
– The same problem as with IP multicast peering (providers don't want to turn it on because of the lost revenue stream)
• The solution: audience tracking
– As in EXPRESS Multicast (Holbrook & Cheriton)

McCanne, FFNets

Page 47

Audience Tracking

[Diagram: with audience tracking, a network can now be a transit carrier for broadcast traffic… not viable with vanilla multicast]

McCanne, FFNets

Page 48

Audience Tracking

So, given such information, you can actually make the channel peering component of content peering viable…

McCanne, FFNets

Page 49

Redirection Peering

[Diagram: three tiers: content aggregators (broadcasters), broadcast transit providers, and access networks (affiliates)]

McCanne, FFNets

Page 50

Redirection Peering

[Diagram: redirection peering among content aggregators (broadcasters), broadcast transit providers, and access networks (affiliates), build continued]

McCanne, FFNets

Page 51

Redirection Peering

[Diagram: redirection peering among content aggregators (broadcasters), broadcast transit providers, and access networks (affiliates), build continued]

McCanne, FFNets

Page 52

Redirection Peering

• Need a common architecture to allow different vendors to create different pieces that work with one another (yet still compete)
• The challenges
– Define the redirection architecture
– A new client/infrastructure protocol and API (a la DNS)
– Do so in a backward-compatible way
– Others…

• One of the next big architectural issues for the Internet…

McCanne, FFNets

Page 53

Summary

• The "Broadcast Internet" is upon us
– Media distribution with app-level multicast
– Content peering
• Lots of intelligence in the network
– At odds with end-to-end?

• Ultimately, these technologies will emerge as the “BGP for Internet broadcast” and truly catalyze convergence

McCanne, FFNets

Page 54

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 55

Alternative Broadband Content Delivery Models

• Push Model
– DirecTV, Broadcast.com
• Pull Model
– Web browsing
• Interactive
– Push-Pull Model
» Mix of broadcast data and on-demand requests
» WebTV, OpenTV, …
– Interactive Game Model

Page 56

Content-Deliver-Present: XM Radio

[Diagram: a content-distribution-presentation pipeline with end-to-end controlled QoS: a dedicated publisher (XM Radio, 100 channels) pushes content over a dedicated satellite plus XM Radio-managed terrestrial repeaters to a dedicated terminal]

Page 57

Case Study: XM Radio

• Wide-area, unidirectional, high-bandwidth distribution-based access to vehicles and homes
• Framework for multimedia content dissemination beyond real-time audio
• Localization services at redirection points

Page 58

Content-Deliver-Present: DirecTV

[Diagram: independent channels push content over a dedicated satellite with end-to-end controlled QoS to a dedicated terminal: a dedicated set-top box plus TV]

Page 59

Content-Deliver-Present: Internet

[Diagram: publishers (Web sites) push and pull content across the Internet to access networks and terminals (TV, PC, cell phone, …); computing/storage in the net provides supporting services like Web caching and content distribution; an application-specific overlay network gives controlled QoS on both the content and presentation sides]

Page 60

Content Delivery Service: Cidera (formerly SkyCache)

• Satellite-based broadcast overlay network to improve the movement of Internet information
– Web pages, software updates, streaming media
• Customers
– Content publishers: quick access to the network edges, redundancy
– Enterprises: Virtual VSAT Network, redundancy
– ISPs: quick access to nationwide POPs, redundancy
• Distribution ONLY; not servers, not content, not access

Page 61

Content Dissemination and Caching: Edgix

• Accelerated content delivery, worldwide

• “One router hop away from end customer”

• For ISPs and large corporate customers

• Satellite bypass of Akamai

Page 62

Content-Deliver-Present: WebTV

[Diagram: Web sites and cable channels feed the WebTV service, which performs Web-page transformation and e-mail; subscribers reach it over dial-up access or coax cable access, with controlled QoS; presentation is on a WebTV set-top box plus TV]

Page 63

Content-Deliver-Present: Internet Access over Cellular

[Diagram: Web sites feed a cellular data service, which performs Web-page transformation and e-mail; traffic crosses the Internet and the PSTN into the cellular access network, with controlled QoS; presentation is on a cellular handset/modem]

Page 64

In-Vehicle Service Scenario

[Diagram: a vehicle LAN (computers, displays, audio out, etc.) served by a broadband downlink for radio/TV/digital media and hybrid networking with a narrowband uplink; scalable servers, caches, and info content (news/maps) sit in the access/ISP backbone; a vehicle portal provides info, repair records, and ads]

Revenue model: subscription fees and equipment purchase vs. advertiser pays for targeted ad insertion based on location, activity, vehicle-owner demographics, etc.

Web-based interface available in-vehicle, at home, at work

Page 65

In-Home Service Scenario

[Diagram: a home LAN (digital recorder, home control, set-top box) served by a broadband downlink for radio/TV/digital media over LMDS/MMDS, cable, or DSL access and hybrid networking; scalable servers, caches, and info content (news/maps) sit in the ISP backbone; a home portal provides info, repair records, and ads]

Revenue model: subscription fees and equipment purchase vs. advertiser pays for targeted ad insertion based on location, activity, home-owner demographics, etc.

Web-based interface available at home, at work, in-vehicle

Page 66

Outline

• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition

Page 67

Service Composition

• Assumptions
– Providers deploy services throughout the network
– Portals are constructed via service composition
» Quickly enable new functionality on new devices
» Possibly through SLAs
– Code is initially non-mobile
» Service placement is managed: fixed locations, evolving slowly
– New services are created via composition (see the sketch below)
» Across service providers in the wide area: a service-level path
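A toy sketch of composition as function chaining, borrowing the e-mail-to-cell-phone example from the next slide (all provider and service names are hypothetical stand-ins):

```python
from typing import Callable

Service = Callable[[bytes], bytes]

def fetch_email(_: bytes) -> bytes:                # Provider R: e-mail repository
    return b"Subject: lecture 15\nCDNs and service composition..."

def text_to_speech(text: bytes) -> bytes:          # Provider Q: text-to-speech
    return b"PCM:" + text                          # stand-in for synthesized audio

def transcode_for_cellular(pcm: bytes) -> bytes:   # codec for the handset
    return b"GSM:" + pcm[4:]

def compose(*services: Service) -> Service:
    """Chain independently deployed services into one service-level path."""
    def path(data: bytes) -> bytes:
        for svc in services:
            data = svc(data)
        return data
    return path

email_on_the_phone = compose(fetch_email, text_to_speech, transcode_for_cellular)
print(email_on_the_phone(b""))
```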

Page 68

Service Composition

[Diagram: two composed service-level paths: Provider R's e-mail repository feeds Provider Q's text-to-speech service, which delivers to a cellular phone; Provider A's video-on-demand server feeds Provider B's transcoder (with replicated instances), which delivers to a thin client]

Page 69

Architecture for Service Composition and Management

[Diagram: three layers: a hardware platform of service clusters; a logical platform of peering relations and an overlay network; an application plane of composed services. Service-level path creation draws on service location and network performance; failure handling comprises detection and recovery]

Page 70

Architecture

[Diagram: service clusters (compute clusters capable of running services) form an overlay over the Internet; peering links carry monitoring and cascading; a composed service runs along a path from source to destination]

• Overlay nodes are clusters
– Compute platform
– Hierarchical monitoring
– The overlay network provides the context for service-level path creation and failure handling

Page 71

Service-Level Path Creation

• Connection-oriented network
– Explicit session setup plus state at intermediate nodes
– Connectionless protocol for connection setup
• Three levels of information exchange
– Network path liveness
» Low overhead, but very frequent
– Performance metrics: latency/bandwidth
» Higher overhead, not so frequent
» Bandwidth changes only once in several minutes
» Latency changes appreciably only once an hour
– Information about service location in clusters
» Bulky, but does not change very often
» Also use an independent service location mechanism

Page 72

Service-Level Path Creation

• Link-state algorithm for info exchange
– Reduced measurement overhead: finer time scales
– Service-level path created at the entry node
– Allows all-pairs shortest-path calculation on the graph (see the sketch below)
– Path caching
» Remember what previous clients used
» Another use of clusters
– Dynamic path optimization
» Since session transfer is a first-order feature
» The first path created need not be optimal
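A minimal sketch of path computation at the entry node: with link-state dissemination, the entry node holds the full overlay graph with measured latencies and can run an ordinary shortest-path calculation (the topology and numbers below are made up):

```python
import heapq

# Overlay links annotated with measured latency (ms), as disseminated by the
# link-state exchange described above.
LINKS = {
    "entry":     {"cluster-a": 12, "cluster-b": 30},
    "cluster-a": {"cluster-b": 9, "exit": 41},
    "cluster-b": {"exit": 18},
    "exit":      {},
}

def shortest_path(src, dst):
    """Dijkstra over the overlay graph; returns (total latency, path)."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, latency in LINKS[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + latency, nbr, path + [nbr]))
    return float("inf"), []

print(shortest_path("entry", "exit"))
# -> (39.0, ['entry', 'cluster-a', 'cluster-b', 'exit'])
```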

Page 73

Session Recovery: Design Tradeoffs

• End-to-end:
– Pre-establishment possible
– But failure information has to propagate
– Performance of the alternate path could have changed
• Local-link:
– No need for information to propagate
– But additional overhead

[Diagram: failure handling in the overlay network spans detection and recovery, alongside service-level path creation, service location, network performance, and finding entry/exit points]

Page 74

The Overlay Topology: Design Factors

• How many nodes?
– A large number of nodes implies reduced latency overhead
– But scaling concerns
• Where to place nodes?
– Close to the edges, so that hosts have points of entry and exit close to them
– Close to the backbone, to take advantage of good connectivity
• Who to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links

Page 75

Problem: Internet Badly Suited to Mission-Critical Applications

• Commercial peering architecture:
– Directly conflicts with robustness
– Ignores many existing alternate paths
• The Internet's global scale:
– Prevents sophisticated algorithms
– Route selection uses fixed, simple metrics
– Routing isn't sensitive to path quality

[Diagram: after a network problem between A and B, the Internet takes a bad path]

MIT RON Project

Page 76

Proposed Solution: Resilient Overlay Network (RON)

• One RON per distributed app
• RON nodes in different ASes form an overlay network
• RON nodes run an application-specific routing protocol among themselves (see the sketch below)
• Application data is tunneled over reliable, secure transport between RON nodes

[Diagram: after the same network problem between A and B, the RON routes around it along a better path]

MIT RON Project
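The heart of RON-style routing can be sketched in a few lines: probe your peers, then compare the direct path against every one-hop detour (the measurements below are hypothetical, and the real RON uses richer metrics such as loss and throughput):

```python
# Toy RON-style route selection: fail over to a one-hop indirect path when
# the direct Internet path between two overlay nodes degrades or dies.
INF = float("inf")
PROBES = {                      # measured latencies (ms); INF = path down
    ("A", "B"): INF,            # direct path has failed
    ("A", "C"): 25, ("C", "B"): 30,
    ("A", "D"): 40, ("D", "B"): 70,
}

def best_route(src, dst, nodes):
    """Compare the direct path against every single-relay alternative."""
    best = (PROBES.get((src, dst), INF), [src, dst])
    for relay in nodes:
        if relay in (src, dst):
            continue
        cost = PROBES.get((src, relay), INF) + PROBES.get((relay, dst), INF)
        if cost < best[0]:
            best = (cost, [src, relay, dst])
    return best

print(best_route("A", "B", ["A", "B", "C", "D"]))  # -> (55, ['A', 'C', 'B'])
```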

Page 77

Advantages

• Better robustness
– Less susceptible to DoS attacks
– Wider choice of routes
– Route selection tailored to application needs
• Better security
– Traffic is encrypted and authenticated
– Routing protocol is authenticated
– Single administrator for the entire RON
• Better responsiveness
– Application-specific routing metrics for QoS

MIT RON Project

Page 78

Research Questions

• How to design overlay networks?
– Self-configuration
– Understanding the performance of the underlying net
• How to design robust, responsive routing protocols?
– Fast fail-over
– Sophisticated metrics
– Application-directed path selection
• Solutions take advantage of RON properties
– Just one RON per application
– Each RON run by a single administrator

MIT RON Project

Page 79

Building the RON Prototype

• Explore end-to-end Internet performance
• Simulate RON path selection algorithms
• Deploy a realistic RON using:
– Co-located hosts on different backbones
– An end-system API for application-directed routing
• Test the prototype using multi-party, secure video-conferencing

[Diagram: RON node software architecture: a RON router with control and data paths over UDP/TCP/IP, supported by a resource manager, topology manager, performance database, and active prober]

MIT RON Project