Introduction to IP Multicast
David Meyer, Cisco Systems


Introduction to IP Multicast
IP Multicast Host-to-Router Protocols
IP Multicast Routing Protocols
Protocol Independent Multicast—PIM
Better bandwidth utilization
Less host/router processing
Receivers’ addresses unknown
*
A multicast group is identified by a class D IP address
Members of the group could be present anywhere in the Internet
Members join and leave the group
and indicate this to the routers
Senders and receivers are distinct:
i.e., a sender need not be a member
Routers listen to all multicast addresses
and use multicast routing protocols
to manage groups
IP group addresses
Range from 224.0.0.0 through 239.255.255.255
Well known addresses designated by IANA
Reserved use: 224.0.0.0 through 224.0.0.255
224.0.0.1—all multicast systems on subnet
224.0.0.2—all routers on subnet
Transient addresses, assigned
and reclaimed dynamically
Global scope: 224.0.1.0-238.255.255.255
Limited scope: 239.0.0.0-239.255.255.255
Site-local scope: 239.253.0.0/16
Organization-local scope: 239.192.0.0/14
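As a rough illustration of the ranges above, here is a small Python sketch (not part of the original presentation) that classifies a group address by scope; the function name and the example addresses are purely illustrative.

import ipaddress

# Illustrative sketch: classify a class D address into the scopes listed above.
def classify_group(addr: str) -> str:
    ip = ipaddress.IPv4Address(addr)
    if ip not in ipaddress.IPv4Network("224.0.0.0/4"):
        return "not a multicast (class D) address"
    if ip in ipaddress.IPv4Network("224.0.0.0/24"):
        return "reserved (e.g. 224.0.0.1 all systems, 224.0.0.2 all routers)"
    if ip in ipaddress.IPv4Network("239.192.0.0/14"):
        return "organization-local scope"
    if ip in ipaddress.IPv4Network("239.253.0.0/16"):
        return "site-local scope"
    if ip in ipaddress.IPv4Network("239.0.0.0/8"):
        return "limited (administratively scoped)"
    return "global scope"

print(classify_group("224.0.0.2"))    # reserved
print(classify_group("239.255.1.1"))  # limited scope
print(classify_group("224.2.2.2"))    # global scope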
Mapping IP group addresses to data link multicast addresses
RFC 1112 defines OUI 0x01005e
Low-order 23 bits of the IP address map into the low-order 23 bits of the IEEE address (e.g., 224.2.2.2 maps to 01-00-5e-02-02-02)
Ethernet and FDDI use this mapping
Token Ring uses the functional address c000.4000.0000
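The 23-bit mapping above fits in a few lines of Python; this is an illustrative sketch rather than anything from the slides, and it also shows why 32 different group addresses share one MAC address (the reason IP-level filtering is needed, as noted later).

# Sketch of the RFC 1112 mapping: OUI 01-00-5e plus the low-order 23 bits
# of the group address. The high-order bit of the second octet is lost,
# so 32 IP group addresses map onto the same Ethernet address.
def group_to_ethernet_mac(group: str) -> str:
    o = [int(x) for x in group.split(".")]
    return "01-00-5e-%02x-%02x-%02x" % (o[1] & 0x7F, o[2], o[3])

print(group_to_ethernet_mac("224.2.2.2"))    # 01-00-5e-02-02-02
print(group_to_ethernet_mac("225.130.2.2"))  # same MAC: 01-00-5e-02-02-02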
*
Internet Group Management Protocol (IGMP)
How hosts tell routers about group membership
Routers solicit group membership from directly connected hosts
RFC 1112 specifies first version of IGMP
IGMP v2 and IGMP v3 enhancements
Supported on UNIX systems, PCs, and Macs
IGMP v1:
Router sends periodic membership queries to 224.0.0.1 with TTL = 1
Query interval 60–120 seconds
Hosts answer with membership reports; a report heard from one member suppresses sending by others
Unsolicited reports sent by a host when it first joins the group
IGMP v2:
Leave messages, sent when a host leaves the group and is the last member (reduces leave latency in comparison to v1)
Group-specific queries, so a router can check whether it must keep forwarding data for the group for that subnet
Standard querier election
IGMP v3 adds source filtering, letting a receiver select among the hosts sending to the group
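As a concrete host-side example, the sketch below joins a group with the standard socket API; the group and port are arbitrary values chosen for illustration, and it is the host's IP stack that actually emits the IGMP membership report described above.

import socket
import struct

GROUP, PORT = "224.2.2.2", 5000   # arbitrary example values

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group: the kernel sends the unsolicited IGMP report
# and answers later queries on our behalf.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)
print("received %d bytes from %s" % (len(data), sender))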
*
Multicast Routing Protocols
(Reverse Path Forwarding)
What is RPF?
A router forwards a multicast datagram if received on the interface used to send unicast datagrams to the source
[Figure: RPF example topology with routers A–F, a Source, and a Receiver; the unicast path back to the Source is shown alongside the multicast forwarding path]
If the datagram arrives on the RPF interface, it is forwarded
If it arrives on any other interface, it is typically silently discarded
When a datagram is forwarded, it is sent out each interface in the outgoing interface list
Packet is never forwarded back out the
RPF interface!
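A minimal sketch of that rule, assuming a hypothetical unicast routing table keyed by source address: forward only if the packet arrived on the interface used to reach the source, then replicate out every outgoing interface except the RPF interface.

# unicast_rib: hypothetical mapping from source address to the interface
# used to send unicast traffic back toward that source.
def rpf_forward(src, iif, oif_list, unicast_rib):
    rpf_iface = unicast_rib.get(src)
    if iif != rpf_iface:
        return []                                          # fails RPF check: silently discard
    return [oif for oif in oif_list if oif != rpf_iface]   # never back out the RPF interface

rib = {"10.1.1.1": "Serial0"}
print(rpf_forward("10.1.1.1", "Serial0",   ["Ethernet0", "Ethernet1"], rib))  # forwarded
print(rpf_forward("10.1.1.1", "Ethernet0", ["Ethernet0", "Ethernet1"], rib))  # discarded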
Multicast Routing Protocols—Characteristics
Distribution trees
Source tree
Uses more memory, O(S x G), but gives optimal paths from source to all receivers and minimizes delay
Shared tree
Uses less memory, O(G), but paths from source to receivers may be suboptimal and may introduce extra delay
Pruned branches can later be grafted
to reduce join latency
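A back-of-the-envelope illustration of the memory trade-off (the numbers below are invented for the example, not from the presentation):

sources_per_group, groups = 5, 1000               # arbitrary example figures

source_tree_entries = sources_per_group * groups  # O(S x G): one (S, G) entry per source per group
shared_tree_entries = groups                      # O(G): one (*, G) entry per group

print(source_tree_entries, shared_tree_entries)   # 5000 vs. 1000 state entries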
Dense-mode PIM—Protocol Independent Multicast
*
Sparse-mode protocols
Assumes group membership is sparsely populated across a large region
Uses either source or shared distribution trees
Explicit join behavior—assumes no one wants
the packet unless asked
Tree rooted at a Rendezvous Point (sparse-mode PIM) or Core (Core Based Tree)
*
DVMRP:
Flood-and-prune behavior, based on the RPF rule
Uses its own routing table
Many implementations: mrouted, Bay, …
Dense-mode PIM:
Designed for dense groups
Flood-and-prune behavior, based on the RPF rule
Branches without receivers are pruned, and unneeded state is eventually torn down
Uses asserts to determine the forwarder for a multi-access LAN
Rate-limited prunes on RPF P2P links
*
[Figures: dense-mode PIM example with routers A–I; data floods from the Source, asserts determine a single forwarder on the multi-access LAN, and prunes remove branches without receivers]
Sparse-mode PIM
Senders register with the RP
Data flows down the shared tree and goes only
to places that need the data from the sources
Last-hop routers can join the source tree, if the data rate warrants, by sending joins toward the source
RPF check for the shared tree uses the RP
RPF check for the source tree uses the source
Only one RP is chosen for a particular group
RP statically configured or dynamically learned (Auto-RP, PIM v2 candidate RP advertisements)
Data forwarded based on the source state (S, G)
if it exists, otherwise use the shared state (*, G)
Draft: draft-ietf-idmr-pim-sm-specv2-00.txt
Draft: draft-ietf-idmr-pim-arch-04.txt
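The forwarding preference described above (use (S, G) state if it exists, otherwise fall back to (*, G)) can be sketched as follows; the mrib dictionary and the interface names are hypothetical illustrations, not taken from the drafts cited.

# mrib: hypothetical multicast routing table keyed by (source, group).
def lookup_forwarding_state(mrib, source, group):
    return mrib.get((source, group)) or mrib.get(("*", group))

mrib = {
    ("*", "224.1.1.1"):        {"iif": "toward-RP",     "oifs": ["Ethernet0"]},
    ("10.1.1.1", "224.1.1.1"): {"iif": "toward-source", "oifs": ["Ethernet0", "Ethernet1"]},
}
print(lookup_forwarding_state(mrib, "10.1.1.1", "224.1.1.1"))  # (S, G) state wins
print(lookup_forwarding_state(mrib, "10.2.2.2", "224.1.1.1"))  # falls back to (*, G)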
[Figure sequence: PIM-SM switchover from the shared tree to the shortest-path tree]
The RP forwards data to receivers through the shared tree
The last-hop router sends joins towards the source, building the shortest-path (SPT) tree, and holds state for both (*, G) and (S, G)
Data from the source arrives at router E
The last-hop router then sends prunes up the RP tree for the source; the RP deletes the (S, G) OIF and sends an (S, G) prune towards the source
What are the technical scaling issues?
Using directory services
This presentation focuses on large-scale multicast routing in the Internet
The problems/solutions presented concern Internet-wide (inter-domain) deployment of IP multicast
We believe the current set of deployed technology is sufficient
for enterprise environments
Introduction—Why Would You Want to Deploy IP Multicast?
You don’t want the same data traversing your links many times—bandwidth saver
*
Introduction—Why Would You Want to Deploy IP Multicast?
You want to discover a resource but don’t know who is providing it or, if you did, don’t want to configure it—expanding ring search
Reduce startup latency for subscribers
*
Internet Service Providers are seeing revenue potential for deploying IP multicast
Initial applications
Map network layer address to link layer address
Routers will figure out where receivers are and
are not
*
Hosts can be receivers and not send to the group
Hosts can send but not be receivers of the group
Or they can be both
*
Multiple IP group addresses map into a single link-layer address
You need IP-level filtering
Hosts join groups, which means they receive traffic from all sources sending to the group
*
There are some protocol and architectural issues (continued)
*
Basic Router Model
Since hosts can send any time to any group, routers must be prepared to receive on all link-layer group addresses
And know when to forward or drop packets
*
Routers keep track of the interfaces leading to receivers, and of sources when utilizing source distribution trees
*
Routers maintain state to deliver data down a distribution tree
Source trees
Router keeps (S,G) state so packets can flow from the source to all receivers
Trades off low delay from source against router state
*
Shared trees
Router keeps (*,G) state so packets flow from the root of the tree to all receivers
*
Distribution trees can be built on demand, in response to data arrival:
Dense-mode protocols (PIM-DM and DVMRP)
MOSPF
*
Building distribution trees requires knowledge of where members are
flood data to find out where members are not (Dense-mode protocols)
flood group membership information (MOSPF), and build tree as data arrives
send explicit joins and keep join state (Sparse-mode protocols)
*
Construction of source trees requires knowledge of source locations
In dense-mode protocols you learn them when data arrives (at each depth of the tree)
Same with MOSPF
In sparse-mode protocols you learn them when data arrives on the shared tree (in leaf routers only)
On the shared tree, routers can ignore the source since routing is based on direction from the RP, but must pay attention when moving to the source tree
*
Data Distribution Concepts
To build a shared tree you need to know where the core (RP) is
Can be learned dynamically in the routing protocol (Auto-RP, PIMv2)
Can be configured in the routers
Could use a directory service
*
Broadcast radio transmissions
Expanding ring search
Generic few-sources-to-many-receivers applications, from a routing point of view
Many low-rate sources
Applications that don’t require low delay
Consistent policy and access control across most participants in a group
*
Is the service what runs on top of multicast?
Or is it the transport itself?
Do you bill based on sender or receiver, or both?
How to control access
Should receivers be rate-controlled?
Deployment Obstacles—Non-Technical Issues
Making your peers fan out instead of you (save replication in your network)
Closest exit vs latest entrance — all a wash
Multicast-related security holes
Eavesdropping is simpler since receivers are unknown
*
Per-source state will become a problem as IP multicast gains popularity, when policy and access control per source are the rule rather than the exception
Group state will become a problem as IP multicast gains popularity
e.g., 10,000 three-member groups across the Internet
Deployment Obstacles—Technical Issues
*
ISPs don’t want to depend on competitor’s RP
Do we connect shared trees together?
Do we have a single shared tree across domains?
Where do we put the RP for inter-domain groups?
Unicast and multicast topologies may not be congruent across domains
Due to physical/topological constraints
Due to policy constraints
*
How to Control Multicast Routing Table State in the Network?
Fundamental problem of learning group membership
Flood and Prune
Where to put root of shared tree (RP)
ISP third-party RP problem
*
Four possibilities for distributing group-to-RP mappings:
(1) Multi-level RP
(2) Anycast clusters
(3) MSDP
(4) Directory services
(1) Multi-level RP:
Level-0 RPs are inside domains
They propagate joins from downstream routers to a Level-1 RP that may be in another domain
Level-0 shared trees connected via a Level-1 RP
If multiple Level-1 RPs, iterate up to Level-2 RPs
*
Requires PIM protocol changes
If you don’t locate the Level-0 RP at the border, intermediate PIM routers think there may be two RPs for the group
Still has the third-party problem, there is ultimately one node at the root of the hierarchy
Data has to flow all the way to the highest-
level RP
(2) Anycast clusters:
Shares burden among ISPs
Build RP clusters at interconnect points (or in dense-mode clouds)
*
Closest border router in cluster is used as the RP
Routers within a domain will use that domain’s RP
Provided you have an RP for that group range at an interconnect point
*
Idea: connect domains together
If you can’t connect shared trees together easily, then don’t
Multicast Source Discovery Protocol
*
(3) MSDP:
An RP in a domain has an MSDP peering session with an RP in another domain
Runs over TCP
Source Active (SA) messages indicate active sending sources in a domain
*
When a new source starts sending, its packets get PIM-registered to the domain’s RP
RP sends SA message to its MSDP peers
Those peers forward the SA to their peers away from the originating RP
*
So each domain can depend solely on its own RP (no third-party problem)
Do not need to store SA state at each MSDP peer
Could encapsulate data in SA messages for low-rate bursty sources
Could cache SA state to speed up join latency
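A much-simplified sketch of SA propagation, just to make the flow concrete: the originating RP floods an SA to its peers, and each peer re-floods it away from the side it was received on (real MSDP uses a peer-RPF check; the peer names and the send callback below are invented).

def originate_sa(rp_name, source, group, peers, send_to_peer):
    # The RP that first learns of the source (via PIM register) announces it.
    sa = {"origin_rp": rp_name, "source": source, "group": group}
    for peer in peers:
        send_to_peer(peer, sa)
    return sa

def forward_sa(sa, received_from, peers, send_to_peer):
    # Re-flood the SA to every peer except the one it arrived from.
    for peer in peers:
        if peer != received_from:
            send_to_peer(peer, sa)

log = []
send = lambda peer, sa: log.append((peer, sa["source"], sa["group"]))
sa = originate_sa("RP-A", "10.1.1.1", "224.1.1.1", ["RP-B", "RP-C"], send)
forward_sa(sa, "RP-A", ["RP-A", "RP-D"], send)   # RP-B re-floods, skipping RP-A
print(log)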
*
Group-to-RP mapping distribution:
(4) Directory Services
*
a) single shared tree across domains
Put RP in client’s domain
Optimal placement of the RP if the domain had a multicast source or receiver active
Policy for RP is consistent with policy for domain’s unicast prefixes
Use directory to find RP address for a given group
First-hop router DNS resolves
First-hop router sends PIM join toward RP
*
All routers have consistent RP addresses via DNS
*
Group-to-RP mapping distribution:
(4) Directory Services
When domain group allocation exists, a domain can be authoritative for a DNS zone
1.224.pim.mcast.net
128/17.1.224.pim.mcast.net
Build PIM-SM source trees across domains
Put multiple A records in DNS to describe sources for the group
1.0.2.224.sources.pim.mcast.net IN CNAME dino-ss20
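To make the directory idea concrete, here is a sketch that builds a lookup name following the slide's reversed-octet naming pattern and resolves it; the zone names are the illustrative ones above, and the records would of course have to be published for the lookup to succeed.

import socket

def rp_lookup_name(group_prefix_octets):
    # e.g. the 224.1/16 allocation -> "1.224.pim.mcast.net"
    return ".".join(reversed(group_prefix_octets)) + ".pim.mcast.net"

def find_rp(group_prefix_octets):
    name = rp_lookup_name(group_prefix_octets)
    try:
        return name, socket.gethostbyname(name)   # first-hop router resolves, then joins toward the RP
    except socket.gaierror:
        return name, None                          # no such record published

print(find_rp(["224", "1"]))   # ('1.224.pim.mcast.net', ...)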
Standards Solutions
Ultimate scalability of both routing and group allocation can be achieved using BGMP/MASC
Use BGP4+ (MBGP) to deal with topology non-congruency
*
Use a PIM-like protocol between domains (“BGP for multicast”)
BGMP builds shared tree of domains for a group
So we can use a rendezvous mechanism at the domain level
Shared tree is bidirectional
*
Runs in routers that border a multicast routing domain
Runs over TCP like BGP
Joins and prunes travel across domains
Can build unidirectional source trees
The M-IGP (the domain’s interior multicast routing protocol) tells the borders about group membership
*
Multicast Address Set Claim (MASC)
How does one determine the root domain for a given group?
Group prefixes are temporarily leased to domains
*
Claims for group allocation resolve collisions
Group allocations are advertised across domains
Lots of machinery for aggregating group allocations
*
Tradeoff between aggregation and anticipated demand for group addresses
Group prefix allocations are not assigned to domains—they are leased
Applications must know that group addresses may go away
Work in progress
RFC 2283
MBGP allows you to build a unicast RIB and multicast RIB independently with one protocol
Can use the existing or a new (multicast) peering topology
MBGP carries unicast prefixes of multicast-capable sources
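A tiny sketch of the two-RIB idea: keep separate unicast and multicast RIBs, and perform the multicast RPF lookup against the multicast RIB when it has an entry, falling back to the unicast RIB otherwise (a common implementation choice assumed here for illustration; the prefixes and peer names are invented).

unicast_rib   = {"10.1.0.0/16": "peer-X"}   # ordinary unicast next hop
multicast_rib = {"10.1.0.0/16": "peer-Y"}   # multicast topology differs

def rpf_lookup(prefix):
    # Prefer the multicast RIB for RPF; fall back to unicast routes.
    return multicast_rib.get(prefix, unicast_rib.get(prefix))

print(rpf_lookup("10.1.0.0/16"))   # peer-Y: RPF follows the multicast topology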
*
1) single interconnect point between ISPs: an RP is placed at the interconnect; that RP as well as all border routers run MBGP
Interconnect runs dense-mode PIM
*
2) multiple interconnect points between ISPs
ISPs can multicast peer for any groups as long as their respective RPs are colocated on the same interconnect
*
MBGP Deployment Scenarios
3) address ranges that depend on DNS to rendezvous or build trees
ISPs decide which domains will have RPs that they will administer
ISPs decide which groups will use
source trees and don’t need RPs
ISPs administer DNS databases