

Cisco ASR 9000 System Architecture
BRKARC-2003
Javed Asghar, Technical Marketing Architect - Speaker
Dennis Cai, Distinguished Engineer, Technical Marketing – Question Manager
CCIE #6621, R&S, Security

Cisco ASR9000 Market Roles: a Swiss Army Knife Built for the Edge Routing World
Markets served: Carrier Ethernet, Mobile Backhaul, Multiservice Edge, Broadband Gateway, DC Gateway, Web/OTT, Large Enterprise WAN, Cable/MSO

1. High-End Aggregation & Transport
   • Mobile Backhaul
   • L2/Metro Aggregation
   • CMTS Aggregation
   • Video Distribution & Services
2. DC Gateway Router
   • DC Interconnect
   • DC WAN Edge
   • Web/OTT
3. Services Router
   • Business Services
   • Residential Broadband
   • Converged Edge/Core
   • Enterprise WAN

Other ASR9000 or Cisco IOS XR sessions you might be interested in:
• BRKSPG-2904: ASR-9000/IOS-XR – Understanding forwarding, troubleshooting the system and XR operations
• TECSPG-3001: Advanced – ASR 9000 operation and troubleshooting
• BRKSPG-2202: Deploying Carrier Ethernet services on ASR9000
• BRKARC-2024: The Cisco ASR9000 nV technology and deployment
• BRKMPL-2333: E-VPN & PBB-EVPN – the next generation of MPLS-based L2VPN
• BRKARC-3003: ASR 9000 new scale features – Flexible CLI (Configuration Groups) & scale ACLs
• BRKSPG-3334: Advanced CG NAT44 and IOS XR deployment experience

Agenda
• ASR9000 Hardware Overview
• Line Card System Architecture
  • Typhoon, Trident, SIP-700, VSM
  • Tomahawk
  • Interface QoS Capability
• Switch Fabric Architecture
  • Bandwidth and Redundancy Overview
  • Fabric QoS – Virtual Output Queuing
• Packet Flow and Control Plane Architecture
  • Unicast
  • Multicast
  • L2
• IOS-XR Overview
  • 32-bit and 64-bit OS
  • IOS-XRv 9000 Virtual Forwarder
  • Netconf/Yang

ASR9000 Hardware Overview

ASR9k Chassis Portfolio Offers Maximum Flexibility – Physical and Virtual
Roles: MSE, E-MSE, Peering, P/PE, Mobility

Compact & Powerful (Access/Aggregation)
• Small footprint with full IOS-XR feature capabilities for distributed environments (BNG, pre-aggregation, etc.)
• ASR 9001 / 9001-S: fixed, 2RU, 120 Gbps

Flexible Service Edge
• Optimized for ESE and MSE with high multi-dimensional scale for medium to large sites
• ASR 9904: 2 LC / 6RU, 2.4 Tbps
• ASR 9006: 4 LC / 10RU, 3.2 Tbps
• ASR 9010: 8 LC / 21RU, 6.4 Tbps

High Density Service Edge and Core
• Scalable, ultra-high-density service routers for large, high-growth sites
• ASR 9910: 8 LC / 21RU, 24 Tbps
• ASR 9912: 10 LC / 30RU, 30 Tbps
• ASR 9922: 20 LC / 44RU, 60 Tbps

Also part of the portfolio:
• nV Satellites: ASR 9000v, ASR 901/903
• IOS XRv on x86

Edge Linecard Silicon Slice Evolution

Past – Trident class, 120G per slice:
• Trident: 90nm, 15 Gbps
• Octopus: 130nm, 60 Gbps
• Santa Cruz: 130nm, 90 Gbps
• PowerPC: dual core, 1.2 GHz

Now – Typhoon class, 360G per slice:
• Typhoon: 55nm, 60 Gbps
• Skytrain: 65nm, 60 Gbps
• Sacramento: 65nm, 220 Gbps
• PowerPC: quad core, 1.5 GHz

Future – Tomahawk class, 800G per slice:
• Tomahawk: 28nm, 240 Gbps
• Tigershark: 28nm, 200 Gbps
• SM15: 28nm, 1.20 Tbps
• x86: 6 core, 2 GHz

ASR 9001 Compact Chassis
• Fixed 4x10G SFP+ ports
• Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
• Field-replaceable fan tray
• Field-replaceable redundant power supplies (AC or DC)
• Side-to-side airflow, 2RU
• Front-to-back airflow with airflow baffles: 4RU, requires V2 fan
• Shipping since IOS-XR 4.2.1 (May 2012)

ASR-9001 System Architecture Overview
• Two Typhoon NPUs, each with its own FIA
• Single crossbar switch fabric ASIC (sufficient for the smaller bandwidth requirements)
• "RP CPU" and "linecard CPU" – same architecture as the larger systems, connected over an internal EOBC
• On-board 2x10GE SFP+ ports
• Two MPA bays: 20xGE, 2x10GE, 4x10GE, or 1x40GE

ASR 9001-S Compact Chassis
• Pay as you grow: low entry cost
• 60G of bandwidth is disabled in software; a SW license re-enables it
• SW license upgradable to a full 9001
• Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
• Side-to-side airflow, 2RU
• Front-to-back airflow with airflow baffles: 4RU, requires V2 fan
• Shipping since IOS-XR 4.3.1 (May 2013)

ASR-9001S System Architecture Overview
• Same architecture as the ASR 9001: two Typhoon NPUs with FIAs, a single crossbar switch fabric ASIC, RP/LC CPUs as in the larger systems, internal EOBC
• On-board 2x10GE SFP+ ports
• Two MPA bays: 20xGE, 2x10GE, 4x10GE, or 1x40GE
• Half of the forwarding capacity is disabled by default, upgradable via license

Cisco ASR 9006 Overview
• Total capacity: 3.68T
• Capacity per slot: 920G
• Slots: 6 (4 line cards + 2 RSPs)
• Rack size: 10RU (13RU with front-to-back airflow baffles, vertical)
• Power: 1 power shelf, 4 power modules; 2.1 KW DC / 3.0 KW AC supplies
• Fan: side-to-side airflow, optional baffle for front-to-back airflow; 2 fan trays, FRU
• RSPs: integrated fabric, 1+1 redundancy
• Line cards: Tomahawk, Typhoon, VSM, SIP-700 & SPAs

Cisco ASR 9010 Overview
• Total capacity: 7.36T
• Capacity per slot: 920G
• Slots: 10 (8 line cards + 2 RSPs)
• Rack size: 21RU
• Power: 2 power trays; 2.1 KW DC / 3.0 KW AC or 4.4 KW DC / 6.0 KW AC supplies
• Fan: front-to-back airflow; 2 fan trays, FRU
• RSPs: integrated fabric, 1+1 redundancy
• Line cards: Tomahawk, Typhoon, VSM, SIP-700 & SPAs

Cisco ASR 9904 Overview
• Total capacity: 6T
• Capacity per slot: 3T
• Slots: 4 (2 line cards + 2 RSPs)
• Rack size: 6RU (10RU with front-to-back airflow baffles)
• Power: 1 power tray, 4 power modules; 2.1 KW DC / 3.0 KW AC supplies
• Fan: side-to-side airflow, front-to-back optional; 1 fan tray, FRU
• RSPs: integrated fabric, 1+1 redundancy
• Line cards: Tomahawk, Typhoon, SIP-700, VSM
• SW: XR 5.1.0 – September 2013

Cisco ASR 9912 Overview
• Total capacity: 30T
• Capacity per slot: 3T
• Slots: 10-slot chassis
• Rack size: 30 RU
• Power: 4 power trays; 2.1 KW DC / 3.0 KW AC or 4.4 KW DC / 6.0 KW AC supplies
• Fan: 2 fan trays, front-to-back airflow
• RP: 1+1 redundancy
• Fabric (SFC): 6+1 fabric redundancy
• SW: XR 4.3.2 – shipping

Cisco ASR 9922 Overview
• Total capacity: 60T
• Capacity per slot: 3T
• Slots: 20 line cards, 2 RPs, 7 SFCs
• Rack size: 44 RU (full rack)
• Power: 4 power trays; 2.1 KW DC / 3.0 KW AC or 4.4 KW DC / 6.0 KW AC supplies
• Fan: 4 fan trays, front-to-back airflow
• RP: 1+1 redundancy
• Fabric (SFC): 6+1 fabric redundancy
• Line cards: Tomahawk, Typhoon, VSM

Cisco ASR 9910 Overview
• Total capacity: 24T
• Capacity per slot: 3T
• Slots: 10 (8 line cards + 2 RSPs)
• Rack size: 21RU
• Power: 2 power trays; 4.4 KW DC / 6.0 KW AC supplies
• Fan: front-to-back airflow; 2 fan trays, FRU
• RSPs: integrated fabric, 1+1 redundancy
• Fabric cards: 5 fabric cards on the rear of the chassis for additional capacity; 230G per FC at FCS; up to 6+1 redundancy using the RSPs' integrated fabric
• Line cards: Tomahawk, Typhoon, VSM, SIP-700
• Target: IOS XR 6.1, Q1CY16 (note: this information may change until FCS)

Cisco ASR 9910 Details (target IOS XR 6.1, Q1CY16)
Greater capacity, greater flexibility:
• Start with 2 RSPs, then add fabric cards to scale beyond 460G per slot
• The ASR 9910 ships with sufficient power and cooling to support high-density 100G line cards
• Available with IOS XR 6.1, with feature parity with the rest of the ASR 9000 family
• Hardware: 5 fabric cards, 2 fan trays, 2 RSPs, 8 line card slots, 2 power trays
This information may change until FCS.

ASR-9910 Mid-plane Architecture – New Mid-plane Architecture for Greater Flexibility
[Diagram: line cards LC0–LC7 (1G/10G/40G/100G on the line side) connect across the mid-plane to RSP0 and RSP1 (230G each, fabric plus control) and to fabric cards 0–4 (230G each); 2 fan trays and 2 power trays complete the chassis.]

ASR 9000 Route Switch Processor – Common for ASR 9904, ASR 9006, and ASR 9010
• Common internal HW with the RP for feature parity on IOS XR
• Integrated multi-stage switch fabric
• TR and SE memory options
• Time and synchronization support

                  RSP440               RSP880
Availability      Q1CY12               Q1CY15
Processor         Four cores, 2.1GHz   Eight cores, 2.2GHz
NPU bandwidth     60G                  240G
Fabric planes     5                    7
Fabric capacity   440G                 880G
Memory            6G TR / 12G SE       16G TR / 32G SE
SSD               2x 16GB Slim SATA    2x 32GB Slim SATA
LC support        Typhoon/Trident      Tomahawk/Typhoon

Used in: ASR 9904, ASR 9006, ASR 9010

ASR 9900 Route Processor – Common for ASR 9912 and ASR 9922
• Built for massive control plane scale
• Ultra-high-speed control plane with multi-core Intel CPU
• Huge scale through high memory options
• Time and synchronization support

                  RP1                  RP2
Availability      Q1CY12               Q1CY15
Processor         Four cores, 2.1GHz   Eight cores, 2.2GHz
NPU bandwidth     60G                  240G
Fabric planes     5                    7
Memory            6G TR / 12G SE       16G TR / 32G SE
SSD               2x 16GB Slim SATA    2x 32GB Slim SATA
LC support        Typhoon              Tomahawk/Typhoon

Used in: ASR 9912, ASR 9922

Route Switch Processors and Route Processors (RSP used in ASR9904/9006/9010, RP in ASR9922/9912)

RSP440 (9904/9006/9010) – 2nd-gen RP and fabric ASIC
• Processor: Intel x86, 4 cores @ 2.27 GHz
• RAM: RSP440-TR 6GB, RSP440-SE 12GB
• SSD: 2x 16GB Slim SATA
• nV EOBC ports: 2x 1G/10G SFP+
• Punt BW: 10GE
• Switch fabric BW: 220G + 220G (9006/9010), 385G + 385G (9904); fabric integrated on the RSP

RP1 (9922/9912) – 2nd-gen RP and fabric ASIC
• Processor: Intel x86, 4 cores @ 2.27 GHz
• RAM: -TR 6GB, -SE 12GB
• SSD: 2x 16GB Slim SATA
• nV EOBC ports: 2x 1G/10G SFP+
• Punt BW: 10GE
• Switch fabric BW: 660G + 110G (separate fabric card)

RSP880 (9904/9006/9010) – 3rd-gen RP and fabric ASIC
• Processor: Intel x86 (Ivy Bridge EP), 8 cores @ 2 GHz
• RAM: -TR 16GB, -SE 32GB
• SSD: 2x 32GB Slim SATA
• nV EOBC ports: 4x 1/10G SFP+
• Punt BW: 40GE
• Switch fabric BW: 450G + 450G (9006/9010), 800G + 800G (9904)

RP2 (9922/9912) – 3rd-gen RP and fabric ASIC
• Processor: Intel x86 (Ivy Bridge EP), 8 cores @ 2 GHz
• RAM: -TR 16GB, -SE 32GB
• SSD: 2x 32GB Slim SATA
• nV EOBC ports: 4x 1/10G SFP+
• Punt BW: 40GE
• Switch fabric BW: 1.61T + 230G (separate fabric card)

ASR 9900 Switch Fabric Cards – Common for ASR 9912 and ASR 9922
• 7 fabric card slots; in-service upgrade
• Decoupled, multi-stage switch fabric hardware
• True HW separation between control and data plane
• Add bandwidth per slot easily and independently
• Similar architecture to CRS

                               SFC110                SFC2
Availability                   Q1CY12                Q1CY15
Fabric capacity per SFC        110G                  230G
Fabric capacity per LC slot    660G N+1, 770G N+0    1.38T N+1, 1.61T N+0
Fabric redundancy              N+1                   N+1
LC support                     Typhoon               Tomahawk/Typhoon

New ASR-9922 and ASR-9006 V2 Fans
• ASR 9922-FAN-V2: IOS XR 5.2.2
• ASR 9006-FAN-V2: IOS XR 5.3.0
• The V2 fan provides higher cooling capacity for ultra-high-density cards
• Motor and blade shape optimized to produce higher CFM
• New material capable of dissipating more heat
• In-service upgrade

Pay-As-You-Grow Power
• N+1 redundancy for DC, N+N redundancy for AC
• Typhoon and Tomahawk LCs supported with V2
• Plan for the future beyond 1T per slot with V3

Version   AC       DC       Chassis
V2        3.0KW    2.1KW    ASR9006, ASR9010, ASR9904, ASR9912, ASR9922
V3        6.0KW    4.4KW    ASR9010, ASR9912, ASR9922

Line card   Used BW       Consumption at 27C   Consumption at 40C
Typhoon     360G / slot   2.8 Watts/G          3.5 Watts/G
Tomahawk    1T / slot     1.6 Watts/G          1.9 Watts/G

Benefit of Power Reductions for 100G
4 Tbps: Tomahawk vs. Typhoon in a provider-owned European data center
• 4x the 100G density vs. Typhoon
• $3,460 per port per year power savings
• $1,384,000 savings over 10 years
Notes: 9922 with 20x 2x100G vs. 9912 with 5x 8x100G; 8-year facility amortization with 1.7 PUE, 40C max, France power (USD $0.19 per kWh), N:N power redundancy.
[Chart: cumulative power cost by year.]

Line Card System Architecture
1. Typhoon, Trident, SIP-700, VSM
2. Tomahawk
3. Interface QoS

Modular SPA Linecard (SIP-700)
• 20 Gbps, feature-rich, high-scale, low-speed interfaces
• Distributed control and data plane; 20 Gbps across 4 SPA bays

Scalability
• L3 interface, route, and session protocol scale sized for MSE needs

Powerful & Flexible QFP Processor
• Flexible uCode architecture for feature richness
• L2 + L3 services: FR, PPP, HDLC, MLPPP, LFI
• L3VPN, MPLS, Netflow, 6PE/6VPE

High Availability
• IC stateful switchover capability
• MR-APS
• IOS-XR base for high scale and reliability

Quality of Service
• 128k queues, 128k policers
• H-QoS
• Color policing

SPA Support
• ChOC-3/12/48 (STM1/4/16)
• POS: OC3/STM1, OC12/STM4, OC48/STM16, OC192/STM64
• ChT1/E1, ChT3/E3, CEoPs, ATM

ASR 9000 Ethernet Line Card Overview

First-generation LCs – Trident NPU: 15 Gbps, ~15 Mpps, bidirectional:
• A9K-40G, A9K-4T, A9K-8T/4, A9K-2T20G, A9K-8T, A9K-16T/8 (-L, -B, -E variants)

Second-generation LCs – Typhoon NPU: 60 Gbps, ~45 Mpps, bidirectional:
• A9K-36x10GE, A9K-24x10GE, A9K-2x100GE (A9K-1x100G), A9K-MOD80, A9K-MOD160 (-TR, -SE variants)
• MPAs for MOD80/MOD160: 20x1GE, 2x10GE, 4x10GE, 8x10GE, 1x40GE, 2x40GE

-L: low queue, -B: medium queue, -E: large queue, -TR: transport optimized, -SE: service edge optimized

ASR 9000 80-360G "Typhoon Class" Linecards – Hyper Intelligence & Service Scalability
• Variants: 24x10GE, 36x10GE, 2x100GE, modular 80G & 160G
• Micro-CPU-based forwarding chip
  • Feature flexibility, future-proof
  • Programmable forwarding tables
• High performance
  • Line-rate performance on all line cards
  • End-to-end internal system QoS
  • Efficient multicast replication
• High control plane scale
  • 4M IPv4 or 2M IPv6 FIB per line card
  • 2M MACs learned in hardware

Network Processor Architecture Details
• TCAM: VLAN tag, QoS and ACL classification
• Stats memory: interface statistics, forwarding statistics, etc.
• Frame memory: buffers, queues
• Lookup memory: forwarding tables – FIB, MAC, adjacencies
• TR vs. SE variants:
  • Different TCAM/frame/stats memory sizes give different per-LC QoS, ACL, and logical-interface scale
  • Same lookup memory size, so system-wide scale is identical; mixing different LC variants does not impact system-wide scale
[Diagram: NPU complex – a multi-core forwarding chip attached to TCAM, stats memory, frame memory, and lookup memory (FIB/MAC). TCAM, frame, and stats memories differ in size between TR and SE; lookup memory is the same size.]
-TR: transport optimized, -SE: service edge optimized

MAC Learning and Sync
Hardware-based MAC learning at ~4 Mpps per NP:
1. The NP learns the MAC address in hardware (around 4 Mpps).
2. The NP floods a MAC notification message (in the data plane) to all other NPs in the system to sync the MAC address system-wide. MAC notification and MAC sync are done entirely in hardware.
[Diagram: two line cards, each with 3x10GE SFP+ ports feeding NPs and FIAs, connected through the LC switch fabric ASIC to the RSP switch fabric and punt FPGA; the data packet arrives at one NP (step 1) and MAC notifications propagate to every other NP (step 2).]
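To check the hardware-learned MACs on a given card, a sketch (the bridge-domain contents and location 0/1/CPU0 are hypothetical):

RP/0/RSP0/CPU0:router# show l2vpn forwarding bridge-domain mac-address location 0/1/CPU0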

Typhoon LC: 24x10G Ports Architecture
• 24 ports arranged as 8 groups of 3x10GE SFP+; each 3x10G group (30G) feeds one Typhoon NPU
• Pairs of Typhoon NPUs connect to one of 4 FIAs (60G/90G links toward the FIA and fabric side)
• The 4 FIAs connect to the LC-local fabric complex (next-gen switch fabric ASIC)
• The local fabric connects to the RSP440 switch fabric on RSP0 and RSP1 over 8x55G links
• No congestion point; non-blocking everywhere

Typhoon NPU BW and performance:
1. 120G and 90 Mpps unidirectional
2. 60G and 45 Mpps full duplex (each direction, ingress/egress)

Typhoon Line Card Architectures
Variants: 36x10G port LC, 2x100G port LC, MOD80 LC, MOD160 LC

Trident vs. Typhoon – Major Feature Scale

Metric                        Trident                        Typhoon
FIB routes (v4/v6)            1.3M/650K*                     4M/2M
Multicast FIB                 32K                            128K
MAC addresses                 512K*                          2M
L3 VRFs                       4K                             8K
Bridge domains / VFI          8K                             64K
PW                            64K                            128K
L3 subif / LC                 4K                             8K (TR), 20K (SE)
L2 interfaces (EFPs) / LC     4K (-L), 16K (-B), 32K (-E)    16K (TR), 64K (SE)
MPLS labels                   256K                           1M
Queue scale (per NP)          64K egress + 32K ingress       192K egress + 64K ingress
Policer scale (per NP)        32K/64K (-L,-B/-E)             32K/256K (-TR/-SE)

* Reaching the maximum FIB or MAC scale on the Trident line card requires a scale-profile configuration. On Typhoon, FIB and MAC have dedicated memory.
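As a sketch of the scale-profile knob referenced above (exact command syntax varies by release; verify against your XR version, and note a reload is required):

RP/0/RSP0/CPU0:router(config)# hw-module profile scale l3xl
RP/0/RSP0/CPU0:router(config)# commit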

ASR9k Full-Mesh IPv4 Performance
• ASR9010 with 2x RSP440 and 8x Typhoon 24x10G LCs = 192 ports
• Full mesh of flows = ~72,000 flows (36k full-duplex flows)
• IXIA Tx/Rx attached to every port

A9k-24x10GE – L3 Multicast Scale Performance Setup and Profile
• ASR 9010 with A9K-24x10GE line cards
• Port 10: L3 sub-interface multicast source-facing port (IXIA multicast source)
• Remaining ports (0–23): L3 sub-interface multicast receiver-facing ports (IXIA multicast receivers, 10G IGMP joins)
Scale profile:
1. Unidirectional multicast
2. 2k (S,G) mroutes
3. 24 OIFs per (S,G) mroute

A9k-24x10GE – L3 Multicast Scale Throughput Performance Results
Scale: 2k (S,G), 24 OIFs per (S,G). Aggregate Rx throughput held at 240 Gbps across all frame sizes:

Frame size (B)   64    128   256   512   1024   1518   2048   4096   9000
Agg Rx (Gbps)    240   240   240   240   240    240    240    240    240
Agg Rx (Mfps)    357   203   109   56    29     20     15     7      3

Virtualized Services Module (VSM) Overview
• Data center compute: 4x Intel 10-core x86 CPUs
• 2 Typhoon NPUs for hardware network processing
• 120 Gbps of raw processing throughput
• HW acceleration: 40 Gbps of hardware-assisted crypto throughput; hardware assist for regex matching
• Virtualization hypervisor (KVM); service VM life-cycle management integrated into IOS-XR (VMM managing VM-1 through VM-4, one service per VM)
Service roadmap: CGN (shipping), IPsec (shipping), SecGW (Q2/Q3 CY15), Anti-DDoS* (Q1 CY15), Firewall (on radar), 3rd-party apps*

VSM Details
[Diagram: the virtualized services sub-module provides four crypto engines, each with DDR3 DIMMs, attached through a 10GE/FCoE SFP+ datapath switch to the router infrastructure sub-module – two Typhoon NPUs and the linecard-local fabric complex, connecting to the RSP/RP switch fabric on RSP/RP 0 and RSP/RP 1.]

New Tomahawk LCs
• Tomahawk 8x100GE CPAK line card: LAN only, shipping in 5.3.0; LAN/WAN/OTN in April 2015 (5.3.1)
• Tomahawk 4x100GE CPAK line card: LAN/WAN/OTN in April 2015 (5.3.1)

CPAK options: 100 GE LR4, 100 GE ER4, 10x10-LR, 100 GE SR10
Flex 100G CPAK modes: 100 GE LR4, 10x10 GE, 2x40 GE, 100 GE SR10
Investment protection: start with 10 GE and upgrade to 100 GE in the future

Tomahawk LC: 8x100GE Architecture – Slice-Based Architecture
• 4 slices, each with CPAK PHY + Tomahawk NP + Tigershark FIA, 240G per stage
• CPAK: 100G, 40G, 10G optics
• PHY: MACsec Suite B+, G.709, OTN, clocking
• Tomahawk NP: L2/L3/L4 lookups, all VPN types, all feature processing, multicast replication, QoS, ACL, etc.
• Tigershark FIA: VoQ buffering, fabric credits, multicast hashing, scheduler for fabric and egress port
• Central crossbar (SM15): FPOE, auto-spread, DWRR, RBH, replication
• Ivy Bridge CPU complex
Per-slice power management (100-200W power savings):
PE1(admin-config)# hw-module power disable slice [0-3] location
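For example, a minimal sketch powering down slice 2 (the location 0/1/CPU0 is hypothetical; check your chassis inventory for the actual node):

PE1(admin)# configure
PE1(admin-config)# hw-module power disable slice 2 location 0/1/CPU0
PE1(admin-config)# commit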

Tomahawk vs. Typhoon Hardware Capability

Metric                           Tomahawk-SE scale                          Typhoon-SE scale
MPLS labels                      1M                                         1M
MAC addresses                    2M @ FCS, 6M in the future                 2M
FIB routes (v4/v6), search mem   4M(v4) / 2M(v6) in XR;                     4M (v4+v6)
                                 10M(v4) / 5M(v6) possible in future release
VRF                              16K                                        8K/LC
Bridge domains                   256K/LC                                    64K/LC
TCAM                             80Mb                                       40Mb
Packet buffer                    12GB/NPU, 200ms                            2GB/NPU, 100ms
EFPs                             256K/LC (64K/NP)                           64K/LC
L3 subif (incl. BNG)             128K (64K/NP)                              20K/LC (sys)
Egress queues                    1M/NPU (4M for 8x100GE)                    256K/NPU
Policers                         512K/NPU                                   256K

Note: actual available LC/NPU capacity depends on the software release and is subject to change.

Tomahawk Line Card CPU
CPU subsystem:
• Intel Ivy Bridge EN, 6 cores @ ~2GHz, with 3 DIMMs
• Integrated acceleration engine for crypto, pattern matching and compression
• 1x 32GB SSD

HW parameter     Typhoon LC CPU         Tomahawk-SE line card
Processor        P4040, 4 core 1.5GHz   Ivy Bridge EN, 6 core 2GHz
LC CPU memory    4GB                    24GB
Cache (Ivy Bridge): L1 32KB instruction, L2 256KB, L3 2.5MB per core

Tomahawk Modular LCs – MOD400 & MOD200
• MPAs supported: 2x100GE CPAK, 1x100GE CPAK, 20x10GE SFP+, and all Typhoon MPAs
• Flexibility to use CPAK 10G/40G/100G optics on the 100G MPAs
• Two flavors for flexibility: MOD400 has 2 Tomahawk ASICs (FCS August 2015); MOD200 has 1 Tomahawk ASIC (FCS October 2015)

MOD200 support matrix:
       Comb 1        Comb 2        Comb 3
EP0    2x100G-MPA    20x10G-MPA    1x100G or any Typhoon MPA
EP1    None          None          1x100G or any Typhoon MPA

MOD400 support matrix:
       Comb 1        Comb 2        Comb 3        Comb 4
EP0    2x100G-MPA    20x10G-MPA    2x100G-MPA    1x100G or any Typhoon MPA
EP1    2x100G-MPA    20x10G-MPA    20x10G-MPA    2x100G-MPA or 20x10G-MPA

Tomahawk Line Card Architectures
Variants: 8x100G/80x10G port LC, 4x100G/40x10G port LC, MOD400 LC, MOD200 LC

12x100G Tomahawk LC “Skyhammer”

High-Level Differences: Tomahawk 8x100G and 12x100G

8x100GE "Octane"                                        12x100GE "Skyhammer"
8-port 100GE with 4 NPU slices                          12-port 100GE with 6 NPU slices
CPAK optics                                             QSFP28 optics
5 fabric planes (7-fabric in roadmap, Nov/Dec 2015 FCS) 7-fabric card
Compatible with all 99xx- and 90xx-series chassis       99xx-series chassis only
Full L3VPN/L2VPN feature support                        L3 features only in phase 1, L2 in phase 2
External TCAM for high QoS/ACL scale; 192K entries      No external TCAM; 5Mb internal TCAM only; 32K entries
32K sub-interfaces                                      1K sub-interfaces
4M v4 routes, 2M v6 routes                              Under discussion

Tomahawk TCAM Optimized Scale

Metric                        Tomahawk-TR                Tomahawk-SE               Skyhammer
MPLS labels                   1M (all variants)
MAC addresses                 2M (6M future, all variants)
FIB routes (v4/v6),           4M(v4) / 2M(v6) in XR (10M/5M future, all variants)
  search memory
Mroute/MFIB (v4/v6)           128K/32K (256K/64K future, all variants)
VRF                           8K (16K future, all variants)
Bridge domains                64K (all variants)
TCAM (ACL space v4/v6)        1/4 of the SE TCAM         TCAM (80Mbit)             Internal TCAM (5Mbit)
Packet buffer                 100ms (6G/NPU)             200ms (12G/NPU)           100ms
EFPs                          16K/LC                     128K/LC (64K/NP)          N/A
L3 subif (incl. BNG)          8K                         128K (64K/NP)             1K
IP/PPP/LAC subscriber
  sessions per LC             16K                        256K (64K/NP)             N/A
Egress queues                 8 queues/port + nV sat Qs  1M/NPU (4M for 8x100GE)   8 queues/port
Policers                      32K/NPU                    512K/NPU                  —
QoS/ACL (v4/v6)               16K v4 or 4K v6 ACEs/LC    98K v4 / 16K v6 ACEs      24K v4 / 1.5K v6

ASR9k Optical Solution Before the IPoDWDM LC: ASR9k + NCS2k Optical Shelf
• Low-cost interconnect between the ASR9000 and the NCS2k
• ASR9000 100GE linecard (SR10 CFP/CPAK transceiver) connects to an NCS2k coherent transponder (SR10 CXP/CPAK)
• Integrated nV optical system

400G IPoDWDM LC Overview (Tomahawk-Based LC)
• Tomahawk-based linecard; feature and scale parity with the other -TR and -SE Tomahawk cards
• 2x CFP2-based DWDM ports (100G, 200G)
  • BPSK, QPSK, 16QAM modulation options
  • 96 channels, ITU-T 50GHz spacing
  • FlexSpectrum support
  • HD FEC, SD FEC (3000+ km without regeneration)
• 20x 10GE SFP+ ports (SR, LR, ZR, CWDM, DWDM)
• Flexible port options up to 400 Gbps total capacity:
  • 2x 200G DWDM (CFP2), or
  • 2x 100G DWDM (CFP2) + 20x 10G (SFP+), or
  • 1x 100G + 1x 200G DWDM (CFP2) + 10x 10G (SFP+)
• OTN and pre-FEC FRR
• Target FCS: mid-CY15 (5.3.2) for 100G DWDM; 10G gray ports and 200G DWDM in a future release

400G IPoDWDM LC Internals – Specific HW Components
[Diagram: two slices (#0 and #1), each built from a Tomahawk NP (with memory and TCAM) and a VoQ FIA feeding the SM15 crossbar. Each slice includes an X240, an HD-FEC FPGA, and an Etna coherent CFP2 port (200G-capable); the 10GE SFP+ ports #1–#20 attach through a SerDes mux. An Ivy Bridge CPU complex manages the card.]

Tomahawk Per-Slice MACSEC PHY Capability
• MACSEC security standards compliance:
  • IEEE 802.1AE-2006
  • IEEE 802.1AEbn-2011 (256-bit key)
  • IEEE 802.1AEbw-2013 (extended packet numbering)
• Security suites supported:
  • AES-GCM-128, 128-bit key (32-bit packet number)
  • AES-GCM-256, 256-bit key (32-bit packet number)
  • AES-GCM-XPN-128, extended packet number counter (64 bits)
  • AES-GCM-XPN-256, extended packet number counter (64 bits)
• Unique security attributes per security association (SA): 10G port = 32 SAs, 40G port = 128 SAs, 100G port = 256 SAs
• Per-slice port combinations supported (CPAK): 2x100G, 20x10G, 4x40G, 1x100G + 10x10G, 2x40G + 10x10G, 2x40G + 1x100G
• All Tomahawk LC variations support MACSEC: 8x100G, 4x100G, MOD-400, MOD-200
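As a rough illustration of point-to-point MACSEC on XR (a sketch only; the keychain name, key material, policy name, and interface are hypothetical, and exact syntax depends on release):

key chain mka-kc macsec
 key 01
  key-string password 0123456789abcdef0123456789abcdef cryptographic-algorithm aes-256-cmac
  lifetime 00:00:00 january 01 2015 infinite
!
macsec-policy mp-xpn256
 cipher-suite GCM-AES-XPN-256
!
interface HundredGigE0/0/0/0
 macsec psk-keychain mka-kc policy mp-xpn256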

ASR9k MACSEC Phase 1: XR 5.4.0 Release – SP/CE/DCI/Enterprise Use Cases
• Use case #1: Link MACSEC in an MPLS/IP topology – MACSEC links between CE, PE, and P routers
• Use case #2: Link MACSEC over LAG members – MACSEC on the LAG with member-link inheritance
• Use case #3: CE port-mode MACSEC over L2VPN – MKA runs end-to-end between DCI/CE devices across the L2VPN/CE-WAN, with port-mode attachment on the ASR9k
• Use case #4: VLAN clear-tags MACSEC over L2VPN – as #3, but with VLAN tags left in the clear (vlan clear-tags) on the ASR9k attachment

Secure L2VPN as a Service: PW/EVPN/PBB (Any L2) Circuit Encryption Using MACSEC
• A clear frame (DA | SA | VLAN | PAYLOAD | FCS) enters EFP1 on port 1 of the PHY
• The frame is encrypted on port 2 inside the PHY via a PHY secure-channel loopback with clear tags (Cisco IPR): DA | SA | VLAN | SECTAG | encrypted PAYLOAD | ICV | FCS
• The encrypted frame bypasses MACSEC on port 3 inside the PHY and continues to the NPU and fabric
• The result is an encrypted EoMPLS PW: VC label | DA | SA | VLAN | SECTAG | encrypted PAYLOAD | ICV | FCS

Tomahawk MACSEC Raw Performance and Scale

AES-GCM-256 raw performance (full duplex):
• Per LC slice: 200 Gbps
• Per LC slot: 800 Gbps
• Per chassis: ASR9006 = 3.2 Tbps, ASR9010 = 6.4 Tbps, ASR9904 = 1.6 Tbps, ASR9912 = 8 Tbps, ASR9922 = 16 Tbps

AES-GCM-256 raw scale:
• Total MACSEC ports per system: 10G = 1,600; 40G = 320; 100G = 160
• Total MACSEC SAs per system: 10G Tx/Rx SAs = 51,200; 40G Tx/Rx SAs = 40,960; 100G Tx/Rx SAs = 40,960

Customer-Profile Tomahawk Performance
• Generally targets >3.3x the performance of a Typhoon system
• Customer benchmark: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU

Customer profile      Application profile                                              Tomahawk LR frame   Typhoon LR frame @6x10G   Perf incr ratio
Web/OTT 1             LSR: top label swap + bundle + TE tunnel + iMarking +
                      eMarking/queue                                                   160B IP @126mpps    —                         3.8x
Web/OTT 1             LER: MPLS imposition + ieQoS + iNF + ieBundle + RSVP-TE/FRR      —                   —                         5.5x
Web/OTT 2             LSR: label swap + bundle + TE tunnel + iMarking +
                      eMarking/queuing                                                 160B IP @126mpps    —                         3.8x
Web/OTT 2             LER: IP + bundle + TE tunnel + iMarking + eMarking/queuing
                      + iNF                                                            —                   —                         6.3x
Tier-1 ISP/Peering 1  IPv4 & v6 recursive + in/out ACL + uRPF + in NF + iMAC
                      accounting                                                       460B IP             —                         4.2x
Tier-1 SP/Peering 2   IP to IP (recursive) + QoS(out) + ACL(in+out) +
                      Netflow(1:5K, in) + bundle                                       286B IP @81.6mpps   iMix LR @6x10G            >3.3x

Tomahawk Baseline Performance
• Generally targets 3.3x the performance of a Typhoon system
• Benchmark set: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU

Feature                          Tomahawk                Typhoon                Perf incr ratio
NPU BW                           240G                    60G                    4x
Packet forwarding capacity       149 mpps                45 mpps                3.3x
IPv4 NR + Rx ACL                 128B Eth @ 169 mpps     178B Eth @ 37.8 mpps   4.48x
IPv6 NR + Rx ACL                 196B Eth @ 115 mpps     300B Eth @ 23.4 mpps   4.48x
IPv4 + in NF (1:1k)              225B Eth @ 102 mpps     205B Eth @ 33.4 mpps   3.07x
IPv6 in policing + out shaping   233B Eth @ 104 mpps     316B Eth @ 22.3 mpps   4.42x
L2 Xconnect                      123B Eth @ 174.6 mpps   148B Eth @ 44.6 mpps   3.91x
VPLS + QoS                       449B Eth @ 53.3 mpps    445B Eth @ 16.1 mpps   3.3x

Tomahawk Roadmap
• Jan 2015, XR 5.3.0: 8x100GE LAN PHY*; commons: RP2, RSP880, SFC2
• April 2015, XR 5.3.1: 8x100GE OTN*, 4x100GE OTN
• August 2015, XR 5.3.2: MOD400, 400G IPoDWDM
• Nov 2015, XR 6.0.0: MOD200; 12x100G and 8x100G 7-Fab (9912, 9922)
• Mar 2016, XR 6.1.0: all platforms (90xx, 9904, 9910, 9912, 9922)
* Oversubscribed on 9010, 9006 with a single RSP

Tomahawk Line Card Port QoS Overview
• 4-priority interface queuing: PQ1, PQ2, PQ3 strict priority; the fourth level is the remaining queues (CBWFQs)
• Configurable QoS policies using the IOS XR MQC CLI
• A QoS policy is applied to an interface attachment point: main interface, L3 sub-interface, or L2 sub-interface (EFP); the interface may be physical, bundle, or logical*
  • An MQC policy applied to a physical port takes effect for traffic across all sub-interfaces on that port and will NOT coexist with an MQC policy on a sub-interface**: a given physical port has either a port-based or a sub-interface-based policy
• The QoS policy is programmed into hardware microcode and the queueing ASIC on the line card NPU
* Some logical interfaces can take a QoS policy, for example PWHE and BVI
** A simple flat QoS policy on the main interface can coexist with sub-interface-level H-QoS in the ingress direction
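As a minimal sketch of the MQC style described above (the class, policy, and interface names are hypothetical):

class-map match-any VOICE
 match dscp ef
!
policy-map EDGE-OUT
 class VOICE
  priority level 1
  police rate percent 10
 class class-default
  bandwidth remaining percent 100
!
interface TenGigE0/0/0/0
 service-policy output EDGE-OUT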

Tomahawk Line Card Port QoS Overview (cont'd)
• 5-level flexible queuing/scheduling hierarchy: port groups (L0), ports (L1), logical ports (L2), sub-interfaces (L3), classes (L4)
• Egress and ingress, shaping and policing
• Three strict-priority scheduling levels with priority propagation
• Flexible and granular classification and marking: full layer 2, full layer 3/4 IPv4, IPv6, MPLS
• Dedicated queueing ASIC – one TM (traffic manager) per NPU for QoS functions
• -SE and -TR LC versions have different queue buffer/memory sizes and different numbers of queues
[Diagram: NPU with its TM, connected through the FIA to the switch fabric ASIC.]

5-Level Hierarchy QoS Queuing Overview
• The L0 (port group) and L1 (port) schedulers are automated by the TM and are not user-configurable
• L2 (logical port, e.g. an EFP for an S-VLAN or VLAN range), L3 (sub-interfaces, e.g. C-VLANs), and L4 (classes) can be flexibly mapped to the parent-level scheduler
• The hierarchy levels used are determined by how many nested levels the policy-map applied to a given sub-interface is configured with
• Up to 16 classes per child/grandchild level (L4)
Example hierarchy: port group (L0) → port (L1) → EFP1/EFP2 (L2, S-VLAN or VLANs) → C-VLANs (L3, sub-interfaces) → classes (L4)

Queue/Scheduler Hierarchy MQC Capabilities
• L1 (port): shape (PIR, or port shaper)
• L2 (logical port), 2 parameters: shape, BRR weight (Bw or BwR)
• L3 (sub-interface), 3 parameters: shape, bandwidth, BRR weight
• L4 (class): shape, BRR weight (Bw or BwR), priority, WRED/queue-limit; priority queues (P1, P2, P3) plus normal queues
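A sketch of a two-level parent/child hierarchy in this model (names and rates are hypothetical; the parent shaper corresponds to the logical-port level, the child classes to L4):

policy-map CHILD
 class VOICE
  priority level 1
  police rate 20 mbps
 class VIDEO
  bandwidth remaining ratio 20
 class class-default
  queue-limit 100 ms
!
policy-map PARENT-EVC
 class class-default
  shape average 100 mbps
  service-policy CHILD
!
interface TenGigE0/0/0/0.100 l2transport
 service-policy output PARENT-EVC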

QoS Classification Criteria

Attachment              L2 header fields                      L3/L4 header fields                        Internal marking
L2 interfaces/EFPs      Inner/outer CoS, inner/outer VLAN,    Outer EXP, DSCP/ToS, TTL, TCP flags,       Discard-class,
or L3 interfaces        DEI, source/destination MAC address   source/destination L4 ports, protocol,     qos-group
                                                              source/destination IPv4 address

Notes:
• Supports match-all or match-any
• Max 8 match statements per class; max 8 match entries per match statement
• Not all header fields can be combined in one MQC policy-map (see details on the next slide)
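For instance, a sketch of classification on a few of these fields (class names and values are hypothetical):

class-map match-any BUSINESS-IN
 match vlan 100-199
 match cos 5
!
class-map match-all PEER-V4
 match protocol tcp
 match dscp af41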

Tomahawk 3-Level Policing
• Supports grandparent policing
• Conform-aware coupling between child and parent, and between parent and grandparent; only a 1R2C policer is supported at the grandparent level
• Color-aware policing: 1R2C and MEF 2R3C (no IETF 2R3C)
  • The color value can be one of: outer CoS, EXP, precedence, qos-group, DSCP, or discard-class
  • Only one conform-color value per policer; all other values are treated as exceed-color
• Color-aware coupled policing:
  • Child level: the incoming packet's color field value is used for color matching (parent or child remarking does not affect color matching)
  • Parent level: child-level marking, if any, is effective when matching the color at the parent level
  • Grandparent level: only 1R2C, without color awareness

Child level     Parent level     Grandparent level   Allowed?
No policer      No policer       Policer             Allowed
Policer         No policer       Policer             Allowed
No policer      Policer          Policer             Allowed
Policer         Policer          Policer             Allowed
Coupled child   Coupled parent   Policer             Rejected
Coupled child   No policer       Coupled parent      Rejected
Policer         Coupled child    Coupled parent      Allowed
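A sketch of child/parent/grandparent policers in MQC form (rates, names, and the interface are hypothetical; coupling and color-aware options vary by release):

policy-map CHILD-POL
 class VOICE
  police rate 50 mbps
!
policy-map PARENT-POL
 class class-default
  police rate 200 mbps
  service-policy CHILD-POL
!
policy-map GRANDPARENT-POL
 class class-default
  police rate 500 mbps
  service-policy PARENT-POL
!
interface TenGigE0/0/0/0.200 l2transport
 service-policy input GRANDPARENT-POL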

Shaping and Policing Overhead Accounting
• By default, the L2 frame length is used, without preamble and IFG
• The same default applies for ingress and egress and for all QoS actions
• A QoS policy can be configured to take arbitrary L1 framing and L2 overhead into account
Ethernet overhead per frame: preamble (7B), SFD (1B), DA (6B), SA (6B), VLAN (4B), length/type (2B), payload (46-1500B), FCS (4B), inter-frame gap (12B)
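As a sketch, the attachment statement can add a per-packet byte offset for such overhead (the policy name is hypothetical; 20 bytes here stands for preamble + SFD + IFG, and exact keywords vary by release):

interface TenGigE0/0/0/1
 service-policy output EDGE-OUT account user-defined 20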

Switch Fabric Architecture
1. Bandwidth and Redundancy Overview
2. Fabric QoS – Virtual Output Queuing

Cisco ASR 9000 High-Level System Architecture "At a Glance"
[Diagram: on each linecard, EP0/EP1 PHYs feed NPs, which connect through FIAs and a SerDes crossbar to the switch fabric (SM15); the switch fabric is integrated on the RSP/RP or on separate fabric cards; each LC and each RSP/RP has its own CPU.]
• Multi-stage (1, 2, 3) fabric operation
• Scalable and flexible
• Slice-based data plane
• Fully distributed and redundant system

ASR 9006/9010 Switch Fabric Overview – 3-Stage Non-Blocking Fabric (Separate Unicast and Multicast Crossbars)
• Ingress Typhoon LC FIAs (stage 1) → RSP440 fabric, with an arbiter on RSP0 and RSP1 (stage 2) → egress Typhoon LC FIAs (stage 3)
• Active-active fabric with separate unicast and multicast crossbars
• Fabric bandwidth: 8x55 Gbps = 440 Gbps/slot with dual RSPs; 4x55 Gbps = 220 Gbps/slot with a single RSP
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow

ASR9k End-to-End System QoS Overview
• Ingress port QoS on the ingress side of the LC; egress port QoS on the egress side
• End-to-end priority propagation (P1, P2, 2x best-effort) across NP, FIA, and fabric
• Unicast VoQs with backpressure; unicast and multicast separation
• 4 priority egress destination queues (DQs) per SFP (VQI) virtual port, aggregated at egress port rate
• 4 VoQs for each SFP virtual port in the entire system
• Up to 8K VoQs per Tigershark (Tomahawk-generation) FIA, vs. 4K per Skytrain (Typhoon-generation) FIA

ASR9k Virtual Output Queuing (VoQ) System Architecture VoQ Components (Where are they)

Egress NPU Backpressure and VoQ in ActionResult is No Head of Line Blocking (HOLB)

ASR9904 RSP880 Switch Fabric Architecture – Active/Active 3-Stage Fabric, Scales to 1.6 Tbps LCs
• Ingress Tomahawk LC FIAs with SM15 fabric (stage 1) → RSP880 SM15 fabric, with an arbiter on both RSPs (stage 2) → egress Tomahawk LC FIAs (stage 3)
• Fabric bandwidth: 10x 115 Gbps ~ 1.15 Tbps/slot with dual RSPs (2x 5x 115 Gbps links); 5x 115 Gbps ~ 575 Gbps/slot with a single RSP
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow

ASR90xx – RSP880 and Mixed LC Operation
• Tomahawk LC ↔ RSP880 fabric: 8x115 Gbps ~ 900 Gbps/slot with dual RSPs; 4x115 Gbps ~ 450 Gbps/slot with a single RSP
• Typhoon LC ↔ RSP880 fabric: 8x55 Gbps = 440 Gbps/slot with dual RSPs; 4x55 Gbps = 220 Gbps/slot with a single RSP
• Fabric frame format: super-frame; load balancing: unicast per-packet, multicast per-flow

ASR90xx – RSP440 and Mixed LC Operation
• With RSP440, both Tomahawk and Typhoon LCs connect at 8x55 Gbps: 440 Gbps/slot with dual RSPs, 4x55 Gbps = 220 Gbps/slot with a single RSP
• Fabric frame format: super-frame; load balancing: unicast per-packet, multicast per-flow

ASR99xx Switch Fabric Card (FC2) Overview – 6+1 All-Active 3-Stage Fabric Planes, Scales to 1.6 Tbps LCs
• Each Tomahawk LC connects to the SFC2 SM15 fabrics over 5x 2x115G (120G raw) bidirectional links = 1.15 Tbps
• Fabric bandwidth: 10x115 Gbps ~ 1.15 Tbps/slot with 5x FC2; 8x115 Gbps ~ 920 Gbps/slot with 4x FC2
• Fabric frame format: super-frame; load balancing: unicast per-packet, multicast per-flow

ASR9922/12 – SFC2 and Mixed-Generation LCs When Using 3rd-Gen Fabric Cards
• Typhoon LC: (5-1) x 2x55G bidirectional = 440 Gbps (protected)
• Tomahawk LC: (5-1) x 2x115G bidirectional = 920 Gbps (protected)
• Fabric lanes 6 & 7 are only used toward 3rd-gen 7-fabric-plane linecards

ASR9K Tomahawk Fabric Redundancy and BW Allocation Summary for 8x100G LCs with RSP880s/SFCv2

System              Fabric redundancy   Per-slot fabric BW   Per-8x100G-LC data BW   QoS/priority protection
ASR9010 / ASR9006   Dual RSP880s        920G                 800G                    Y
ASR9010 / ASR9006   Single RSP880       460G                 400G                    Y
ASR9904             Dual RSP880s        1.15T                800G                    Y
ASR9904             Single RSP880       575G                 500G                    Y
ASR9922 / ASR9912   5x SFCv2            1.15T                800G                    Y
ASR9922 / ASR9912   4x SFCv2            920G                 800G                    Y
ASR9922 / ASR9912   3x SFCv2            690G                 600G                    Y
ASR9922 / ASR9912   2x SFCv2            460G                 400G                    Y

Packet Flow and Control Plane Architecture
1. Unicast
2. Multicast
3. L2

ASR9000 Fully Distributed Control Plane
[Diagram: control packets are punted from the line card NPs through the FIAs and punt switch to the LC CPU, or across the fabric and punt FPGA to the RP CPU.]
• LC CPU: ARP, ICMP, BFD, NetFlow, OAM, etc.
• RP CPU: routing, MPLS, IGMP, PIM, HSRP/VRRP, etc.
• LPTS (Local Packet Transport Services): control-plane policing
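The LPTS hardware policers can be inspected per node; a sketch (the location is hypothetical):

RP/0/RSP0/CPU0:router# show lpts pifib hardware police location 0/1/CPU0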

Layer 3 Control Plane Overview
• On the RP: BGP, ISIS, OSPF, EIGRP, and static routes feed the RIB; LDP and RSVP-TE feed the LSD; ARP feeds the AIB
• The RIB and LSD program the FIB; the AIB programs adjacencies
• The SW FIB on each LC CPU is downloaded over the internal EOBC and programs the LC NPU hardware FIB
• Hierarchical FIB table structure gives prefix-independent convergence: TE/FRR, IP/FRR, BGP, link bundles
AIB: Adjacency Information Base; RIB: Routing Information Base; FIB: Forwarding Information Base; LSD: Label Switch Database

IOS-XR 2-Stage Lookup Packet Flow – Unicast
• The ingress lookup yields the packet's egress port and applies ingress features
• The egress lookup performs the packet rewrite and applies egress features
• All ingress packets are switched by the central switch fabric
[Diagram: ingress Typhoon NPU → ingress FIA → LC switch fabric ASIC → central switch fabric → egress FIA → egress Typhoon NPU, shown for both 10GE and 100GE line cards.]

L3 Unicast Forwarding Packet Flow (Simplified) Example

Ingress NPU (from wire):
1. TCAM: packet classification; rxIDB: source interface info
2. L3 FIB lookup – key: (VRF-ID, IP DA) – yields the next hop, rx-adj, and the switch fabric port of the egress NPU (destination interface info); ACL and QoS lookups happen in parallel
3. Rx LAG hashing if the destination is a LAG (LAG ID → member SFP)
4. System headers are added; the ECH type (L3_UNICAST) tells the egress NPU which kind of lookup to execute

Egress NPU (to wire):
1. ECH type L3_UNICAST → L3 FIB lookup: next hop, tx-adj
2. Packet rewrite (rewrite, txIDB); ACL and QoS lookups happen before the rewrite
3. Tx LAG hashing if the egress interface is a LAG
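To see how a given prefix was programmed through these structures, a sketch (prefix and location hypothetical):

RP/0/RSP0/CPU0:router# show cef ipv4 10.0.0.0/24 hardware egress location 0/1/CPU0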

L3 Multicast Control Plane
1. Joins arrive (IGMP, PIM, MLD, MLDP, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each protocol updates multicast state (VRF, route, olist, etc.) into the MRIB
4. The MRIB downloads all multicast state (VRF, route, olist, etc.) to the MFIB on each LC
5. The MFIB programs the hardware with the multicast state: Typhoon NPU, FIA, and LC fabric
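The RP-level and per-LC state can be checked with, for example (the location is hypothetical):

RP/0/RSP0/CPU0:router# show mrib route
RP/0/RSP0/CPU0:router# show mfib route location 0/1/CPU0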

L2 Multicast Control Plane
1. Joins arrive (IGMP, PIM, MLD, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each snooping protocol updates the L2FIB (mrouter, port list, input port, BD, etc.)
4. The RP L2FIB downloads the L2 multicast info to each LC L2FIB
5. The LC L2FIB programs the hardware with the multicast info: Typhoon NPU, FIA, and LC fabric

Multicast Replication Model Overview – 2-Stage Replication
Multicast replication in the ASR9k follows the tree, like SSM:
1. Fabric-to-LC replication
2. Egress NP OIF replication
The ASR9k does not use the inferior "binary tree" or "root unary tree" replication models.

FGID (Slotmask)

FGIDs: 10-slot chassis (physical slots, top to bottom: LC7–LC4, RSP0, RSP1, LC3–LC0; logical slots 9–0)

Slot     Physical slot   Slotmask (binary)   Slotmask (hex)
LC7      9               1000000000          0x0200
LC6      8               0100000000          0x0100
LC5      7               0010000000          0x0080
LC4      6               0001000000          0x0040
RSP0     5               0000100000          0x0020
RSP1     4               0000010000          0x0010
LC3      3               0000001000          0x0008
LC2      2               0000000100          0x0004
LC1      1               0000000010          0x0002
LC0      0               0000000001          0x0001

FGID calculation examples (10-slot chassis):

Target linecards    FGID value
LC6                 0x0100
LC1 + LC5           0x0002 | 0x0080 = 0x0082
LC0 + LC3 + LC7     0x0001 | 0x0008 | 0x0200 = 0x0209

FGIDs: 6-slot chassis

Slot     Physical slot   Slotmask (binary)   Slotmask (hex)
LC3      5               0000100000          0x0020
LC2      4               0000010000          0x0010
LC1      3               0000001000          0x0008
LC0      2               0000000100          0x0004
RSP1     1               0000000010          0x0002
RSP0     0               0000000001          0x0001

ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow – Imposition PE

ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow – Disposition PE

2-Stage Forwarding Model – VPLS Packet Flow Example (Known Unicast)
[Diagram: the same 2-stage path as the unicast case – ingress Typhoon NPU → ingress FIA → switch fabric → egress FIA → egress Typhoon NPU.]
Header evolution along the path: ETH → (ingress lookup) ETH + VC label → ETH + VC label + local fabric header → (across the fabric) ETH + VC label + local fabric header → (egress L2 rewrite) outer ETH + LDP label + VC label + ETH

IOS-XR Architecture
1. 32-bit and 64-bit OS
2. IOS-XRv 9000 Virtual Forwarder
3. Netconf/Yang Programmability

Cisco IOS – A Recap: Incremental Development, with Industry-Leading Investment Protection
• Cisco IOS (1990s): the monolithic IOS "blob" – control, data, and management planes in one image
• Cisco IOS-XE (2000s): IOSd plus hosted apps and operational infra on a Linux/BinOS kernel with a virtualization layer
• Cisco IOS-XR (2003-14): distributed infra (BGP, OSPF, PIM, ACL, QoS, LPTS, SNMP, XML, NetFlow) on a 32-bit QNX kernel
• Cisco Virtual IOS-XR (present day): the same distributed infra on 64-bit Linux kernels, with the XR code (v1, v2) and system admin plane separated under a virtualization layer

32-bit XR Key Concepts
• Two-stage commit
• Config history database
• Rollback
• Atomic vs. best-effort commits
• Multiple config sessions
• etc.
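A sketch of the two-stage commit workflow (the interface is hypothetical): changes accumulate in the target configuration and only take effect at commit; each commit gets an ID that can be rolled back.

RP/0/RP0/CPU0:router(config)# interface GigabitEthernet0/1/0/0
RP/0/RP0/CPU0:router(config-if)# description uplink
RP/0/RP0/CPU0:router(config-if)# commit
RP/0/RP0/CPU0:router# show configuration commit list
RP/0/RP0/CPU0:router# rollback configuration last 1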

XR Command Modes
• Exec – normal operations, monitoring, routing and CEF:
RP/0/RP0/CPU0:router# show ipv4 interface brief
RP/0/RP0/CPU0:router# show running-config
RP/0/RP0/CPU0:router# show install active
RP/0/RP0/CPU0:router# show cef summary location 0/5/CPU0
• Config – configuration for the L3 node:
RP/0/RP0/CPU0:router(config)# router bgp 100
RP/0/RP0/CPU0:router(config)# taskgroup admins
RP/0/RP0/CPU0:router(config)# policy-map foo
RP/0/RP0/CPU0:router(config)# mpls ldp
RP/0/RP0/CPU0:router(config)# ipv4 access-list block-junk
• Admin – chassis operations, outside of SDRs:
RP/0/RP0/CPU0:router(admin)# show controllers fabric plane all   (CRS)
RP/0/RP0/CPU0:router(admin)# config-register 0x0
RP/0/RP0/CPU0:router(admin)# show controllers fabric clock   (12K)
RP/0/RP0/CPU0:router(admin)# install add   (also in SDR)
• Admin Config:
RP/0/RP0/CPU0:router(admin-config)# sdr backbone location 0/5/*
RP/0/RP0/CPU0:router(admin-config)# pairing reflector location 0/3/* 0/4/*

Cisco IOS XR (Virtualized)
• Distributed system architecture: scale, fault isolation, control plane expansion
• Enhanced OS infrastructure: scale, SMP support, 64-bit CPU support, open-source apps
• Virtualization: ISSU, control and admin plane separation, simplified SDR architecture; the system admin plane and XR code (v1, v2) run on 64-bit Linux kernels with the same distributed infra (BGP, OSPF, PIM, ACL, QoS, LPTS, SNMP, XML, NetFlow)
• ISSU and HA architecture: Zero Packet Loss (ZPL) and Zero Topology Loss (ZTL) to avoid service outages
• Fault management: carrier-class fault handling, correlation and alarm management
• Available since IOS XR 5.0 on NCS6K; rolling out to other platforms – NCS4K, ASR9K, Skywarp, Fretta, Sunstone and Rosco

IOS-XRv 9000: Efficient Virtual Forwarder – High-End, Feature-Rich Data Plane on x86

Forwarding and feature path:
• ACLs, uRPF
• Marking, policing
• IPv4, IPv6, MPLS
• Segment routing
• BFD

Innovative virtual forwarder – x86-optimized SW-based hardware assists:
• 2x10G line-rate FDX PCIe pass-through
• SW hierarchical traffic manager with 3-level H-QoS: 512K queues @ 20G FDX performance per core
• SW policers that are color-aware and nearly 4x faster than DPDK-based SW routers
• SW TCAM with a logical super-key and heuristic-cuts algorithms
• High-speed interface classification and fine-grained load balancing

Data plane:
• Optimized for fast convergence; 20 Gbps+ forwarding with features for IMIX traffic on a single socket
• Hierarchical QoS scheduler: 30 Gbps+ throughput on a single CPU core, ½ million queues, 3-layer H-QoS
• Portable 64-bit C code (including to ARM-based platforms)
• Common code base with the Cisco nPower X family
[Diagram: the IOS-XRv control plane sits above a pipeline of RX & interface classification → forwarding & features (TCAM, PLU) → traffic manager & TX (TM, packet replication).]

NETCONF/YANG – Supported on XR

NETCONF – NETwork CONFiguration Protocol
• Network management protocol – defines management operations
• Initial focus on configuration, later extended for monitoring operations
• First standard: RFC 4741 (December 2006); latest revision: RFC 6241 (June 2011)
• Does not define the content of management operations

YANG – Yet Another Next Generation
• Data modeling language that defines the NETCONF payload
• Defined in the context of NETCONF, but not tied to NETCONF
• Addresses gaps in SMIv2 (the SNMP MIB language); a previous attempt, SMIng, failed
• First approved standard: RFC 6020 (October 2010)

NETCONF carries YANG-modeled data.
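On XR, enabling the NETCONF agent over SSH is a short configuration; a sketch (exact commands vary by release – netconf-yang agent ssh appears around XR 6.x, and some releases also require ssh server netconf):

RP/0/RP0/CPU0:router(config)# ssh server v2
RP/0/RP0/CPU0:router(config)# netconf-yang agent ssh
RP/0/RP0/CPU0:router(config)# commit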

Complete Your Online Session Evaluation
Don't forget: Cisco Live sessions will be available for on-demand viewing after the event at CiscoLive.com/Online.
• Give us your feedback to be entered into a daily survey drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or on your computer via Cisco Live Connect.

Thank you
