OTV PPT by Networkers Home


OTV

Simplify DCI

Agenda

• Distributed Data Centers: Goals and Challenges

• OTV Architecture Principles

• OTV Design Considerations & New Features

Distributed Data Centers Goals

• Ensure business continuity

• Distributed applications

• Seamless workload mobility

• Maximize compute resources

Traditional Layer 2 Extension

• EoMPLS

• VPLS

• Dark Fiber

Challenges with Traditional Methods

Technology Pillars

Technology Pillars

Simplify DCI

• Nexus 7000: the first platform to support OTV (since NX-OS Release 5.0)

• ASR 1000: now also supports OTV (since IOS-XE Release 3.5)

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

Terminology

OTV Devices and Interfaces

Edge Device
– Performs all OTV functionality
– Usually located at the Aggregation Layer or at the Core Layer
– Support for multiple OTV Edge Devices (multi-homing) in the same site

Internal Interface
– Site-facing interfaces of the Edge Device
– Carry VLANs extended through OTV
– Regular Layer 2 interfaces
– No OTV configuration required
– Supports IPv4 & IPv6

Terminology

OTV Devices and Interfaces

Join Interface
– One of the uplinks of the Edge Device
– Point-to-point routed interface (physical interface, sub-interface or port-channel supported)
– Used to physically “join” the Overlay network
– No OTV-specific configuration required
– IPv4 only

Overlay Interface
– Virtual interface holding most of the OTV configuration
– Logical multi-access, multicast-capable interface
– Encapsulates Layer 2 frames in IP unicast or multicast

OTV Control Plane

Building the MAC Tables

• No unknown unicast flooding (selective unicast flooding in 6.2)

• Control Plane Learning with proactive MAC advertisement

• Background process with no specific configuration

• IS-IS used between OTV Edge Devices

OTV Control Plane

Neighbor Discovery and Adjacency Formation

Before any MAC address can be advertised, the OTV Edge Devices must:
– Discover each other
– Build a neighbor relationship with each other

The neighbor relationship is built over a transport infrastructure that is either:
– Multicast-enabled (all shipping releases)
– Unicast-only (from NX-OS Release 5.2 & IOS-XE 3.9)

OTV Control Plane

• Neighbor Discovery (over Multicast Transport)

OTV Control Plane (Multicast Transport)

OTV Control Plane (Multicast Transport)

OTV Control Plane
MAC Address Advertisements (Multicast-Enabled Transport)

• Every time an Edge Device learns a new MAC address, the OTV control plane advertises it together with its associated VLAN ID and IP next hop.

• The IP next hops are the addresses of the Edge Devices through which these MAC addresses are reachable in the core.

• A single OTV update can contain multiple MAC addresses for different VLANs.

• A single update reaches all neighbors, as it is encapsulated in the same ASM multicast group used for neighbor discovery.

[Figure: MAC address advertisement over a multicast transport]

1. Three new MACs (MAC A, MAC B, MAC C) are learned on VLAN 100 in the West site (Edge Device IP A).
2. The West Edge Device sends a single OTV update containing the new MACs.
3. The OTV update is replicated by the core and delivered to the remote sites (East and South-East).
4. The remote Edge Devices update their MAC tables: VLAN 100, MAC A/B/C reachable via IP A.

OTV Control and Data Plane over Multicast Transport

• Use a highly available Multicast Rendezvous Point (RP) configuration
  – PIM Anycast (RFC 4610) or MSDP (Multicast Source Discovery Protocol)

• Control Plane requirements
  – PIM Any-Source Multicast (ASM) Sparse Mode

• Data Plane requirements
  – PIM Source-Specific Multicast (SSM) or BiDir

OTV Control Plane
Neighbor Discovery (Unicast-only Transport)

• Ideal for connecting a small number of sites

• With a higher number of sites, a multicast transport is the best choice

OTV Control Plane

CLI Verification

• Establishment of control plane adjacencies between OTV Edge Devices (multicast or unicast transport)

• Unicast MAC reachability information (example commands follow)
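As a minimal sketch of this verification (output format varies by release), the control plane adjacencies and the MAC reachability learned through OTV can be checked on the Edge Device with standard NX-OS show commands:

    show otv adjacency     ! control plane adjacencies with remote Edge Devices
    show otv route         ! unicast MAC reachability learned via the OTV control plane
    show otv vlan          ! per-VLAN AED status for the extended VLANs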


OTV Data Plane: Inter-Site Packet Flow

[Figure: unicast packet flow from MAC 1 (West site, Edge Device IP A) to MAC 3 (East site, Edge Device IP B)]

West site MAC TABLE           East site MAC TABLE
VLAN 100  MAC 1  Eth 2        VLAN 100  MAC 1  IP A
VLAN 100  MAC 2  Eth 1        VLAN 100  MAC 2  IP A
VLAN 100  MAC 3  IP B         VLAN 100  MAC 3  Eth 3
VLAN 100  MAC 4  IP B         VLAN 100  MAC 4  Eth 4

1. Layer 2 lookup on the destination MAC. MAC 3 is reachable through IP B.

2. The Edge Device encapsulates the frame.

3. The transport delivers the packet to the Edge Device on site East.

4. The Edge Device on site East receives and decapsulates the packet.

5. Layer 2 lookup on the original frame. MAC 3 is a local MAC.

6. The frame is delivered to the destination.


OTV Data Plane: Multicast Data
Multicast State Creation

[Figure: multicast state creation over a multicast-enabled transport. Source in the West site (Edge Device IP A), receivers in the East site (Edge Device IP B). The steps below read from right (receivers) to left (source).]

1. The multicast receivers for the multicast group “Gs” on the East site send IGMP reports to join the multicast group.

2. The Edge Device (ED) snoops these IGMP reports, but it doesn’t forward them.

3. Upon snooping the IGMP reports, the ED does two things:

1. Announces the receivers in a Group-Membership Update (GM-Update) to all EDs.

2. Sends an IGMPv3 report to join the (IP A, Gd) group in the core.

4. On reception of the GM-Update, the source ED will add the overlay interface to the appropriate multicast Outbound Interface List (OIL).

It is important to clarify that the edge devices join the core multicast groups as hosts, not as routers!
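As an illustrative check (a sketch; assuming the command is available on the running release), the mapping of active site (source, group) pairs to the SSM delivery groups in the core can be inspected on the Edge Device:

    show otv data-group    ! active multicast data-group mappings used across the overlay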


OTV Data Plane: Multicast Data
Multicast Packet Flow

[Figure: multicast packet flow over a multicast-enabled transport. (1) The source-site Edge Device (IP A) performs a lookup for the site multicast traffic (IPs, Gs) and (2) encapsulates it into the core SSM data group (IP A, Gd). (3) The transport replicates the packet and (4) delivers it to the receiver sites East (IP B) and South (IP C), where (5) the Edge Devices decapsulate it and forward the original multicast frames to the local receivers.]

OTV Data Plane Encapsulation

• 42 bytes of overhead are added to the packet IP MTU size (IPv4 packet)

• Encapsulation = outer IP header + OTV shim + original L2 header (without the .1Q header)

• The 802.1Q header is removed and the VLAN field is copied into the OTV shim header

• The OTV shim header carries the VLAN, overlay number, etc.

• Consider jumbo MTU sizing (see the sketch below)
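The 42 bytes break down as a 20-byte outer IPv4 header, an 8-byte OTV shim, and the 14-byte original Ethernet header (the .1Q tag is stripped). A minimal MTU-sizing sketch follows; the interface number and MTU values are illustrative assumptions, not a mandated design:

    ! If hosts send 1500-byte IP packets, the transit path must carry 1500 + 42 bytes;
    ! with 9000-byte jumbo frames in the data center, size the join interface accordingly
    interface Ethernet1/1
      mtu 9042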

Configuration

Overlay Transport Virtualization (OTV) in a Nutshell

• OTV is a MAC-in-IP method that extends Layer 2 connectivity across a transport network infrastructure

• OTV supports both multicast and unicast-only transport networks

• OTV uses IS-IS as the control protocol

• OTV on the Nexus 7000 does not encrypt the encapsulated payload

Edge Device

• Performs OTV functions

• Multiple OTV Edge Devices can exist at each site

• OTV requires the Transport Services (TRS) license

• Creating non-default VDCs requires the Advanced Services license (a VDC sketch follows this list)
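Where the OTV Edge Device lives in its own VDC on the Nexus 7000, a minimal sketch of carving that VDC looks roughly as follows (the VDC name and interface numbers are illustrative assumptions):

    ! From the default VDC: create the OTV VDC and hand it its interfaces
    vdc OTV
      allocate interface Ethernet1/1    ! will become the join interface
      allocate interface Ethernet1/2    ! will become an internal interface

    ! Switch into the new VDC and enable the feature
    switchto vdc OTV
    feature otv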

Internal Interfaces

• Regular layer 2 interfaces facing the site

• No OTV configuration required

• Supported on M-series modules

• Support for F1 and F2E starting in 6.2

• Support for F3 in 6.2(6)

Join Interface

• Uplink on the Edge Device that joins the Overlay

• Forwards OTV control and data traffic

• Layer 3 interface

• Supported on M-series modules

• Supported on F3 modules in 6.2(6)

Overlay Interface

• Virtual Interface where the OTV configurations are applied

• Multi-access multicast-capable interface

• Encapsulates Layer 2 frames

AED

• OTV supports multiple edge devices per site

• A single OTV device is elected as AED on a per-VLAN basis

• The AED is responsible for advertising MAC reachability and forwarding traffic into and out of the site for its VLANs

OTV Overview

Site VLAN and Site Identifier

• The Site VLAN needs to be configured and active even if you do not have multiple OTV devices in the same site. The Site VLAN should not be extended across the overlay.

• The Site Identifier can be any number between 0000.0000.0001 and ffff.ffff.ffff. The value is always displayed in MAC address format.

• The Site Identifier must be unique for each site (a configuration sketch follows)
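A minimal sketch of the site-level configuration (the VLAN number and identifier value are illustrative assumptions):

    feature otv

    otv site-vlan 99                      ! must be configured and active, and not extended over the overlay
    otv site-identifier 0000.0000.0001    ! unique per site, always displayed in MAC address format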

Site VLAN and Site Identifier

Multicast Transport: Overlay

•Multicast Transport requires the configuration of a control-group and data-group

•Adjacencies are built and MAC reachability information is exchanged over the control-group

•The data-group is a source specific multicast (SSM) delivery group for extending multicast traffic across the overlay. It can be configured as any subnet within the transport’s SSM range.

• The data-group range should not overlap with the control-group (a configuration sketch follows)
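A minimal sketch of an overlay riding a multicast transport (group addresses, VLAN range, and interface number are illustrative assumptions):

    interface Overlay1
      otv join-interface Ethernet1/1    ! uplink toward the multicast-enabled core
      otv control-group 239.1.1.1       ! ASM group for adjacencies and MAC advertisements
      otv data-group 232.1.1.0/28       ! SSM range for extended multicast traffic
      otv extend-vlan 100-150           ! VLANs carried over the overlay
      no shutdown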

Multicast Transport

Multicast Enabled Core

Multicast Transport

Multicast Transport Full Picture

Unicast Transport: Overlay

• OTV can run across a unicast-only transport

• Unicast Transport requires the configuration of one or more adjacency servers. OTV devices register with the adjacency server, which in turn provides each with an OTV Neighbor List (oNL).

•Think of the adjacency server as a special process running on a generic OTV edge device

•A primary and secondary adjacency server can be configured for redundancy

Adjacency Server

• Primary and Secondary Adjacency Servers are stateless; every OTV client must register with both servers

• An OTV client will not process the oNL from the secondary server while the primary server is still active

• OTV uses graceful exit of Adjacency Servers: if the primary server is rebooted or reconfigured, it can notify the OTV clients, allowing them to immediately use the secondary (a configuration sketch follows)
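A minimal sketch of a unicast-only overlay (addresses, VLAN range, and interface numbers are illustrative assumptions). One Edge Device acts as the primary adjacency server and every other site points at it, optionally with a secondary:

    ! On the Edge Device acting as primary adjacency server
    interface Overlay1
      otv join-interface Ethernet1/1
      otv adjacency-server unicast-only
      otv extend-vlan 100-150
      no shutdown

    ! On a client Edge Device (10.1.1.1 = assumed primary, 10.2.2.2 = assumed secondary)
    interface Overlay1
      otv join-interface Ethernet1/1
      otv use-adjacency-server 10.1.1.1 10.2.2.2 unicast-only
      otv extend-vlan 100-150
      no shutdown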

Primary Adjacency Server Overlay

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

Spanning-Tree and OTV

Site Independence

• Site transparency: no changes to the STP topology

• Total isolation of the STP domain

• Default behavior: no configuration is required

• BPDUs are sent and received ONLY on Internal Interfaces

Unknown Unicast and OTV

No More Unknown Unicast Storms Across the DCI

• No requirement to forward unknown unicast frames

• Assumption: end hosts are not silent or unidirectional

• Default behavior: no configuration is required

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

New Features

•F1/F2E used as Internal Interfaces

•Selective Unicast Flooding

•Dedicated Data Broadcast Group

•OTV VLAN Translation

•OTV Fast Convergence

•Tunnel Depolarization with Secondary IP

•Loopback Join-Interface

New Feature

6.2(2) and above

• F1 or F2E line cards can be mixed with M-series in the OTV VDC and used as OTV internal interfaces

• A unicast MAC address can be statically configured to flood across OTV

• Dedicated Broadcast Group
  – Allows separation and prioritization of control traffic vs. data plane broadcast traffic (a sketch follows this list)
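A minimal sketch of these two knobs (the MAC address, VLAN, and group address are illustrative assumptions; verify availability on your release, since both were introduced in 6.2):

    ! Selectively flood one unicast MAC address across the overlay
    otv flood mac 0000.1111.2222 vlan 100

    ! Dedicated broadcast group, kept separate from the control group
    interface Overlay1
      otv broadcast-group 239.1.1.2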

OTV Supported Line Card Topologies: NX-OS 6.1 and Prior Releases

• The OTV VDC must use only M-Series ports for both Internal and Join Interfaces [M1-48, M1-32, M1-08, M2-Series]

• OTV VDC types: M-only

• Aggregation VDC types: M-only, M1-F1, or F2/F2E


OTV Supported Line Card Topologies: NX-OS 6.2 and Later Releases

• OTV VDC Join Interfaces must use only M-Series ports [M1-48, M1-32, M1-08, M2-Series]

• OTV VDC Internal Interfaces can use M-Series, F1, and F2E ports (F1 and F2E must be in Layer 2 proxy mode)

• OTV VDC types: M-only, M1-F1, M1-F2E

• Aggregation VDC types: M-only, M1-F1, M1-F2E, F2, F2E, F2/F2E


OTV VLAN Translation

• VLAN translation allows OTV to map a local VLAN to a remote VLAN (a configuration sketch follows)
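A minimal sketch, assuming the local site uses VLAN 100 and the remote site uses VLAN 200 (the VLAN numbers are illustrative and the exact syntax may vary by release):

    interface Overlay1
      otv vlan mapping 100 to 200    ! frames from local VLAN 100 are presented as VLAN 200 to the remote site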

OTV Fast Convergence

• Previously, AED election ran independently on each edge device, which required a short black-holing timer to prevent multiple active AEDs for a VLAN

• AED Server: a centralized model where a single edge device runs the AED election for each VLAN and assigns VLANs to edge devices

• Per-VLAN AED and Backup AED are assigned and advertised to all sites

• Fast Remote Convergence: on remote AED failure, OTV routes are updated to the new AED immediately

• Fast Failure Detection: detect site VLAN failures faster with BFD, and core failures with route tracking

OTV Fast Convergence

OTV Fast Convergence

OTV Fast Convergence

Loopback Join-Interface

Physical Join Interface Limitations

• Bandwidth to the core is limited to one physical link or port-channel

• Changes to the join interface churn all OTV overlay state, since the overlay encapsulation for all routes needs to be updated

• PIM cannot be enabled on the join interface, since the OTV solution assumes it is an IGMP host interface

• Unable to utilize the redundancy of multiple uplinks when available, or the flexibility of dynamic unicast routing convergence on uplink failures

• If the join interface goes down, connectivity to the core is broken; user intervention is needed to provide alternate core connectivity

Loopback Join-Interface

Tunnel Depolarization with Secondary IP

• All encapsulated traffic between AEDs has the same source and destination IPs, limiting the advantages of EtherChannel and ECMP load balancing

• Secondary IPs allow OTV to forward traffic between multiple endpoints, preventing polarization along the path (a configuration sketch follows)
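A minimal sketch of enabling depolarization by adding a secondary address to the join interface (the addresses and interface number are illustrative assumptions):

    interface Ethernet1/1
      ip address 10.1.1.1/30
      ip address 10.1.2.1/30 secondary    ! OTV can use the secondary IP as an additional tunnel endpoint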

Tunnel Depolarization with Secondary IP

ARP Neighbor-Discovery (ND) Cache

• ARP cache maintained in the Edge Device by snooping ARP replies

• The first ARP request is broadcast to all sites; subsequent ARP requests are answered by the local Edge Device

• The timeout can be adjusted (as of NX-OS 6.1(1))

• Drastic reduction of ARP traffic on the DCI

• ARP caching can be disabled

• IPv4-only feature

• Default behavior: no configuration is required (a sketch of the optional knobs follows)
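A minimal sketch of the optional knobs (the timeout value is an illustrative assumption and the exact syntax may vary by release):

    interface Overlay1
      otv arp-nd timeout 120    ! adjust ARP/ND cache aging, in seconds
      no otv suppress-arp-nd    ! disable ARP/ND caching on the Edge Device entirely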

Site VLAN and Site Identifier

Dual Site Adjacency, 5.2(1) and above

1. Site Adjacency is established across the site VLAN

2. Overlay Adjacency is established via the Join Interface across the Layer 3 network

Internal Link

• If a communication problem occurs on the site VLAN, each OTV device can still advertise AED capabilities across the overlay to prevent an active/active scenario

Join Interface Down

• Dual Site Adjacency also has a mechanism for advertising AED capabilities on local failure to improve convergence

• Join interface down

Interface VLAN Down

• Dual Site Adjacency also has a mechanism for advertising AED capabilities on local failure to improve convergence

• Join interface down
• Internal VLANs down

AED Down

• Dual Site Adjacency also has a mechanism for advertising AED capabilities on local failure to improve convergence

• Join interface down
• Internal VLANs down
• AED down or initializing

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

OTV Multi-homing

OTV Multi-homing

OTV Multi-homing

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – Mobility – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

OTV and MAC Mobility

OTV and MAC Mobility

OTV and MAC Mobility

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – Mobility – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

L2 Multicast Traffic between Sites

L2 Multicast Traffic between Sites

L2 Multicast Traffic between Sites

L2 Multicast Traffic between Sites

L2 Multicast Traffic between Sites

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – Mobility – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

QoS and OTV

QoS and OTV

Agenda

• Distributed Data Centers: Goals and Challenges
• OTV Architecture Principles – Control Plane and Data Plane – Failure Isolation – New Features – Multi-homing – Mobility – L2 Multicast Forwarding – QoS and Scalability – Path Optimization
• OTV Design Considerations & New Features

Path Optimization

Questions?
