Cisco Live Data Centre Design Guide - TRANSCRIPT
#clmel
Small to Medium Data Centre Designs
BRKDCT-2218
Nic Rouhotas - Data Centre Consulting Engineer
© 2015 Cisco and/or its affiliates. All rights reserved. BRKDCT-2218 Cisco Public
Abstract
• Network design for the data centre has evolved over time, yet there has
typically been a common requirement for networked connectivity to all
applications and their respective resources of physical and virtual compute,
storage and network services, as well as to other required services and
locations. Many of the technical design challenges are the same regardless of
the size of the organisation. This session will discuss example architectures for
small to medium data centres, starting from entry level and then illustrating
transition points to increase scale and capacity whilst providing support for
additional features and functionality. The Nexus switching product range will be
referenced in the examples, and guidance provided around optimisation of
features and protocols. Also included is a discussion on connecting to remote
data centres, as well as considerations for extending workloads to public clouds.
Cisco Live Melbourne Related Sessions
BRKDCT-2048 Deploying Virtual Port Channel (vPC) in NXOS
BRKDCT-2049 Data Centre Interconnect with Overlay Transport Virtualisation
BRKDCT-2334 Data Centre Deployments and Best Practices with NX-OS
BRKDCT-2404 VXLAN Deployment Models - A Practical Perspective
BRKDCT-2615 How to Achieve True Active-Active Data Centre Infrastructures
BRKDCT-3640 Nexus 9000 Architecture
BRKDCT-3641 Data Centre Fabric Design: Leveraging Network Programmability and Orchestration
BRKARC-3601 Nexus 7000/7700 Architecture and Design Flexibility for Evolving Data Centres
Cisco Live Melbourne Related Sessions
BRKACI-2000 Application Centric Infrastructure Fundamentals
BRKACI-2001 Integration and Interoperation of Existing Nexus Networks into an ACI Architecture
BRKACI-2006 Integration of Hypervisors and L4-7 Services into an ACI Fabric
BRKACI-2601 Real World ACI Deployment and Migration
BRKVIR-2044 Multi-Hypervisor Networking - Compare and Contrast
BRKVIR-2602 Comprehensive Data Centre & Cloud Management with UCS Director
BRKVIR-2603 Automating Cloud Network Services in Hybrid Physical and Virtual Environments
BRKVIR-2931 End-to-End Application-Centric Data Centre
BRKVIR-3601 Building the Hybrid Cloud with Intercloud Fabric - Design and Implementation
Start Small
…Then Grow… Then Evolve
Blade Runner, BrickWorld US
Juggling Many Pieces…
Which Pieces to Select?
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Typical Requirements
Minimum pair of dedicated DC Switches
Transition from collapsed core
Workloads mostly virtualised, some physical
Connect to network periphery
Scalable
Size for current needs
Reuse components in larger designs
Topology options: from single layer to spine-leaf
Design Options
Feature choice + priority = tradeoffs
Driving efficiency: SDN, Programmability, Orchestration, Automation
“Cloud with Control”
Designing Small to Medium Sized Data Centres
FC
FCoE
iSCSI / NAS
L3-----------
L2
Campus
Client Access
WAN / DCI
Design Goals
Image Credit: In speaker notes
Flexible
Practical
Agile
Direction will depend on where you draw the line:
Want to stay with existing toolsets for config & management?
Interested in new toolsets to buy some efficiency?
Capable of consuming a new set of tools?
New or traditional operational model?
What Are You Ready For?
Image Credit: In notes
Single-Tier, Dual-Tier, Spine/Leaf
Small Spine/Leaf
VXLAN
Dual Tier DC
Single Layer DC
Scalable Spine/Leaf DC Fabric
VXLAN
Compute Connectivity & Usage Needs Drive Design Choices
VM VMVM
FCoE
iSCSI
FC
NFS/
CIFS
VM VMVM
Hypervisor Network Virtualisation Requirements
– vSwitch: vSS/vDS, OVS, Hyper-V, Nexus 1000v/AVS
Automation/Orchestration
– Abstraction
– APIs/Programmability/Orchestration
– VMMs; Fabric
Connectivity Model
– 10 or 1-GigE Server ports
– NIC/HBA Interfaces per-server
– NIC Teaming models
Compute Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCS Managed)
Storage Protocols
– Fibre Channel (FC)
– FCoE
– IP (iSCSI, NAS)
Data Centre Fabric Needs
EAST – WEST TRAFFIC
NORTH – SOUTH TRAFFIC
FC
FCoE
iSCSI / NAS
Server/Compute
Site B
Enterprise Network
Public
Cloud
Internet
DATA CENTRE FABRIC
Mobile
Services
Storage
Orchestration/
Monitoring
Offsite DC
API
• “North-South”: end-users and external entities.
• “East-West”: clustered applications, workload mobility.
• High throughput, low latency.
• Increasing high availability requirements.
• Automation & Orchestration.
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Traditional Multi-Tier Hierarchical Design
…
core1 core2
agg1 agg2 aggX aggY
• Extremely wide customer-deployment footprint
• Scales well, but scoping of failure domains imposes some restrictions
– L3 Boundary
– VLAN extension / workload mobility options limited
– Default Gateway Placement
• Network Services repeated at every aggregation tier
• Discrete device management
L2
L3
L3
Core, Aggregation and Access
Spine-Leaf
Topology Selection: Single/Dual/Multi-Layer vs. Spine-Leaf
Data Centre “Fabric” Journey
STP/vPC → FabricPath → FabricPath/BGP → VXLAN (Flood & Learn) → VXLAN/EVPN
(each stage interconnected via the MAN/WAN)
Why Spine-Leaf Design? Pay as You Grow Model
Need more host ports?
Add a leaf
96 ports: 2x48 10G (960 Gbps total)
Need even more host ports?
Add another leaf
To speed up flow completion times, add more backplane and spread load across more spines.
Lower FCT = FASTER APPLICATIONS
* FCT = Flow Completion Time
144 ports: 3x48 10G (1440 Gbps total)
192 ports: 4x48 10G (1920 Gbps total)
(Diagram: per-spine utilisation and per-flow FCT indicators for each design)
10G host ports
40G fabric ports
Host 1 – Host 7
Spine/Leaf DC Fabric ≅ Large Non-Blocking Switch
Host 1 – Host 7
Spine/Leaf DC Fabric ≅ Large Modular Switch
(Diagram: Hosts 1–7 attached to Line Cards, interconnected by Fabric Modules)
Impact of Link Speed – the Drive Past 10G Links
20×10Gbps
Downlinks
20×10Gbps
Uplinks
20×10Gbps
Downlinks
2×100Gbps
Uplinks
200G Aggregate Bandwidth (in each case)
20×10Gbps
Downlinks
5×40Gbps
Uplinks
• 40 and 100Gbps fabrics provide very similar performance for fabric links
• 40G provides performance, link redundancy, and low cost with BiDi
Statistical Probabilities of Efficient Forwarding
11×10Gbps flows (55% load) hashed across the fabric uplinks:
20×10Gbps Uplinks: Probability of 100% throughput ≅ 3%
5×40Gbps Uplinks: Probability of 100% throughput ≅ 75%
2×100Gbps Uplinks: Probability of 100% throughput ≅ 99%
Lower FCT is Better
Impact of Link Speed on Flow Completion Times
(Chart: average FCT, normalised to optimal, versus load (%) for large (10MB,∞) background flows; curves for OQ-Switch, 20x10Gbps, 5x40Gbps and 2x100Gbps fabrics)
Lower FCT is Better
Impact of Link Speed on Flow Completion Times
(Same chart as the previous slide)
• 40/100Gbps fabric: ~same FCT as a non-blocking switch
• 10Gbps fabric links: FCT up to 40% worse than 40/100G
Flow completion is dependent on queuing and latency.
40G is not just about faster ports and optics; it's about faster flow completion.
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
DC and Cloud Networking Portfolio – Nexus Family
Nexus 5000/5600
Nexus 7000/7700
Nexus 3548/3100
Nexus 2000/2300
Nexus 9000
Nexus 1000V/AVS
OPEN: APIs / Open Source / Application Policy Model
HIGH PERFORMANCE FABRIC
1/10/40/100 GE
SCALABLE SECURE SEGMENTATION
VXLAN, BGP-EVPN
ACI Ecosystem
Resilient, Scalable Fabric
Workload Mobility Within/ Across DCs
LAN/SAN Convergence
Operational Efficiency (P-V-C)
Architectural Flexibility
Nexus 6000
Decoding the Nexus Product Numbers
Decoding Nexus 5600 model numbers:
5672: ((32+16)×10G = 480G) + (6×40G = 240G) = 720G; 720/10 = 72
56128: ((48+24+24)×10G = 960G) + ((4+2+2)×40G = 320G) = 1280G; 1280/10 = 128
Decoding Nexus 9300 model numbers:
9396: (48×10G = 480G) + (12×40G = 480G) = 960G; 960/10 = 96
93128: (96×10G = 960G) + (8×40G = 320G) = 1280G; 1280/10 = 128
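The decoding rule above (total switching bandwidth in Gbps, divided by 10, gives the model's numeric suffix) can be expressed directly. A small sketch, using the port counts quoted on this slide:

```python
def nexus_model_digits(ten_gig_ports, forty_gig_ports):
    """Total bandwidth in 10G-port equivalents, which is the numeric
    suffix in Nexus 5600/9300 model names."""
    total_gbps = ten_gig_ports * 10 + forty_gig_ports * 40
    return total_gbps // 10

print(nexus_model_digits(32 + 16, 6))        # Nexus 5672:  72
print(nexus_model_digits(48 + 24 + 24, 8))   # Nexus 56128: 128
print(nexus_model_digits(48, 12))            # Nexus 9396:  96
print(nexus_model_digits(96, 8))             # Nexus 93128: 128
```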
FC
Single Layer Data Centre, Nexus 5500
• Dedicated Nexus 5500-based switch pair
FCoE
iSCSI / NAS
1Gig/100M
Servers
10 or 1-Gig attached
UCS C-Series
10-GigE
UCS C-Series
L3-----------
L2
Nexus 5500
Campus
Client Access
WAN / DCI
Nexus
2000
Positive
Unified Port on all ports –Max Flexibility
Can work as FC/FCOE access transition switch
Non-blocking, Line Rate 10Gbps L2
~2us Latency
Supports Fabric Path, DFA*
160G Layer-3 with L3 daughter card or GEM
Supports 24 FEX, A-FEX, VM-FEX
Most CVDs (e.g. FlexPod)
Negative
L3 card: 160G max, not cumulative
DFA “L2 ONLY Leaf”
No VXLAN HW support
No ACI support
No native DCI support
No VDC
ISSU not supported w/L3
FEX count lower w/L3
Q: 5500 or 5600?
Models:
Nexus 5548P; Nexus 5548UP; Nexus 5596UP; Nexus 5596T
Single Layer Data Centre, Nexus 5600
• Dedicated Nexus 5600-based switch pair
Positive
Low Price/Performance
Unified Ports – Good Flexibility (not all ports)
Supports VXLAN, Fabric Path, DFA
Non-blocking, Line Rate L2/L3
Native 40G/10G, breakout
~1us Latency
Supports 24 FEX, A-FEX, VM-FEX
Negative
No ACI support
No native DCI support
ISSU not supported w/L3
Q: 5500 or 5600?
FC
FCoE
iSCSI / NAS
1Gig/100M
Servers
10 or 1-Gig attached
UCS C-Series
10-GigE
UCS C-Series
L3-----------
L2
Nexus 5600
Campus
Client Access
WAN / DCI
Nexus
2000
Models:
Nexus 5624Q; Nexus 5648Q; Nexus 5696Q; Nexus 5672UP; Nexus 56128P
Single Layer Data Centre, Nexus 6000
• Positioned for rapid scalability and a 40-GigE Fabric
FC
FCoE
iSCSI / NAS
1Gig/100M
Servers
10 or 1-Gig attached
UCS C-Series
10-GigE
UCS C-Series
L3-----------
L2
Nexus 6004
Campus
Client Access
WAN / DCI
Nexus
2000
Positive
Unified Ports – Good Flexibility with expansion
Non-disruptive scale-up
96*40G or 384*10G
Supports VXLAN, Fabric Path, DFA
Non-blocking, Line Rate L2/L3
Native 100G/40G/10G, BiDi, breakout support
~1us Latency
Supports 48 L2 FEX, 24 L3 FEX, A-FEX, VM-FEX
Negative
No VXLAN support in HW in early models (need 6004-EF)
No ACI support
No native DCI support
FEX count Lower w/L3
ISSU not supported w/L3
Higher initial cost
Models:
Nexus 6001; Nexus 6004; Nexus 6004-EF
Single Layer Data Centre, Nexus 9300
• Dedicated Nexus 9300-based switch pair
iSCSI / NAS
1Gig/100M
Servers
10 or 1-Gig attached
UCS C-Series
10-GigE
UCS C-Series
L3-----------
L2
Nexus 9300
Campus
Client Access
WAN / DCI
Nexus
2000
Positive
Low Price/Performance
VXLAN Support in HW
ACI Leaf & Spine support
Standalone Leaf & Spine
Non-blocking, Line Rate L2/L3
Native 40G & 10G
<1us Latency
FEX Support - 16
FCoE Hardware Support*
Negative
No FC, Unified Ports
FCoE will require SW
No FP, DFA support
VXLAN control plane is multicast until EVPN
No native DCI support
Breakout on some 40G ports
ACI Spine <> ACI Leaf
Models:
Nexus 9372TX; Nexus 9396TX; Nexus 93120TX; Nexus 93128TX
Nexus 9372PX; Nexus 9396PX; Nexus 9332PQ ; Nexus 9336PQ (ACI Spine only)
Single Layer Data Centre, Nexus 7000/7700
• Highly Available Virtualised Chassis Access/Aggregation Model
L3-----------
L2
Nexus 7700
WAN / DCI
Campus
Client Access
iSCSI / NAS
1Gig/100M
Servers
10 or 1-Gig attached
UCS C-Series
10-GigE
UCS C-Series
Nexus
2000
FCoE
Positive
More feature rich platform
Modular, easy scale up
Flexible L2/L3 with ISSU
LISP*, OTV, FEX, FCoE, FP, VXLAN*
Native 100G, 40G & 10G, breakout
DFA Spine/Leaf
Supports 32 FEX
VDC, PBR, WCCP, MACSec
Different models (18-slot to 2-slot*)
Negative
Higher initial capital cost
No Unified Ports
VXLAN support in Future
No ACI Support
Physical Footprint
Models:
Chassis: Nexus 7004/7009/7010/7018; Nexus 7702*/7706/7710/7718
I/O Modules: M1 (10/100/1000GE; 1GE; 10GE), M2 (10GE; 40GE; 40/100GE), F2E (1/10GE), F3 (1/10GE; 40GE; 100GE)
Single Layer Data Centre, Nexus 9500
• Highly Available Chassis Access/Aggregation Model
iSCSI / NAS
10 or 1-Gig attached UCS C-Series
L3-----------
L2
Nexus 9500
WAN / DCI
Campus
Client Access
Positive
Modular, easy scale up
Flexible L2/L3 with ISSU*
FEX*, FCoE*, VXLAN*
Native 100G, 40G & 10G, breakout
Supports 32 FEX*
ACI Spine/Leaf Support*
Negative
Higher initial capital cost
No FC, Unified Ports
FEX, VXLAN, FCoE support in future
No DFA, FP Support
ISSU coming in future
VDC in future
No native DCI
Models:
Chassis: Nexus 9504; Nexus 9508; Nexus 9516
I/O Modules: 94xx (NX-OS) ; 95xx (NX-OS, ACI) ; 96xx (NX-OS) ; 97xx (ACI)
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Fabric Server Access Starter Pod
Two Racks, 96x10G ports (960Gbps)***
24x40G fabric ports needed for non-oversubscribed; 72x40G available
10G host ports
40G fabric ports
5600 starter: 4x5672UP
Full SW Bundle (including DCNM)
~250K US list
ACI starter: 2x9336PQ + 4x9396PX
3x APIC & 192-port Leaf licensing
~250K US list
*** Server/Rack density dependent on required load, available power and cooling (geo-diverse)
Scaling with Spine/Leaf:
Two Racks, 96x10G ports (960Gbps)
24x40G fabric ports needed for non-oversubscribed; 72x40G available
Three Racks, 144x10G ports (1440Gbps)
36x40G fabric ports needed for non-oversubscribed; 72x40G available
Four Racks, 192x10G ports (1920Gbps)
48x40G fabric ports needed for non-oversubscribed; 72x40G available
Five Racks, 240x10G ports (2400Gbps)
60x40G fabric ports needed for non-oversubscribed; 72x40G available
Six Racks, 288x10G ports (2880Gbps)
72x40G fabric ports needed for non-oversubscribed; 72x40G available
10G host ports***
40G fabric ports
*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/Rack density dependent on load, power, cooling (geo-diverse)
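The per-rack arithmetic above generalises: for a non-oversubscribed fabric, the aggregate 40G uplink bandwidth must equal the aggregate 10G host-port bandwidth. A minimal sketch of that calculation, assuming 48x10G host ports per rack to match the progression on this slide:

```python
import math

def fabric_ports_needed(host_ports, host_gbps=10, fabric_gbps=40):
    """40G fabric ports required so uplink bandwidth matches host bandwidth."""
    return math.ceil(host_ports * host_gbps / fabric_gbps)

SPINE_CAPACITY = 72  # 40G ports available across the 2x36-port spines

for racks in range(2, 7):
    host_ports = racks * 48  # 48x10G host ports per rack in this example
    needed = fabric_ports_needed(host_ports)
    fits = "fits" if needed <= SPINE_CAPACITY else "exceeds"
    print(f"{racks} racks: {host_ports}x10G -> {needed}x40G fabric ports "
          f"({fits} {SPINE_CAPACITY}x40G spine capacity)")
```

At six racks the fabric needs all 72 available 40G ports, which is exactly the point where the next slide asks when to add or upgrade spines.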
When Do You Add/Upgrade Spines?
Six Racks, 288x10G ports (2880Gbps)
72x40G fabric ports needed for non-oversubscribed; 72x40G available
After adding spines: 72x40G fabric ports needed; 144x40G now available, smaller failure impact
Eight Racks, 384x10G ports (3840Gbps)
96x40G fabric ports needed for non-oversubscribed; 144x40G available
10G host ports***
40G fabric ports
*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/Rack density dependent on load, power, cooling (geo-diverse)
When Do You Add/Upgrade Spines?
Eight Racks, 384x10G ports (3840Gbps)
96x40G fabric ports needed for non-oversubscribed; 140x40G available
With modular spines: 96x40G fabric ports needed; 2x36 in each modular spine, 280x40G, LC redundancy, Spine ISSU, etc.
10G host ports***
40G fabric ports
Q: Okay, I Have My Spine-Leaf Topology, Now What?
Choice for Fabric mode of operation:
L2 vPC (Traditional)
L2 Routed Fabric (FabricPath)
L3 ECMP with Overlay (Flood and Learn)
L3 ECMP with Overlay + Control Plane
Controllers
Integrated Overlays
Flexible Overlay Virtual Network
• Mobility – Track end-point attach at edges
• Segmentation
• Scale – Reduce core state
– Distribute and partition state to network edge
• Flexibility/Programmability
– Reduced number of touch points
Robust Underlay/Fabric
• High Capacity Resilient Fabric
• Intelligent Packet Handling
• Programmable & Manageable
Flexible Data Centre Fabrics
Hosts
VM
OS
VM
OS
Virtual
Physical
Create Virtual Networks on top of an efficient IP network
• Mobility
• Segmentation + Policy
• Scale
• Automated & Programmable
• Full Cross-Sectional BW
• L2 + L3 Connectivity
• Physical + Virtual
Use VXLAN to Create DC Fabrics
L3
L2/L3
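As a concrete illustration of the flood-and-learn variant of such a VXLAN fabric, a leaf-side NX-OS configuration sketch follows. The VLAN, VNI and multicast group values are placeholders, and exact syntax varies by platform and software release:

```
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100                        ! map VLAN 100 to VXLAN VNI 10100

interface nve1
  no shutdown
  source-interface loopback0              ! VTEP source address
  member vni 10100 mcast-group 239.1.1.1  ! BUM traffic flooded via multicast
```

Each leaf acting as a VTEP joins the same multicast group for the VNI, which is how broadcast, unknown-unicast and multicast traffic is flooded before a control plane such as EVPN takes over that role.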
SVI/VNI/VLAN Scoping and Provisioning
All VNIs/SVIs everywhere
• Umbrella catch-all provisioning
• Full ARP state on all Leaf Nodes
• Can be manually provisioned up-front
• Open to L2 Flooding everywhere
Orchestration leads to scale optimisation
VNIs/SVIs scoped as hosts attach
• Provision on host attach/policy
• ARP state only for local subnets
• Requires orchestration (e.g. ACI, VTS*)
• L2 Flooding is scoped
(Diagram: two L3 fabrics with an L3 gateway at each leaf; Mgmt)
Q: How Do I Integrate Spine-leaf To An Existing Classic Tiered Network?
Scaling a VPC-based DC Design
L3
L2
Access Layer
VLANs 100-150
Host Host Host
Scaling a VPC-based DC Design
Access Layer
VLANs 100-150
Host Host Host
Access Layer
VLANs 151-200
Host Host Host
L3
L2
DC Core
Layer
ACI Fabric
(VXLAN based)
Integrating Spine/Leaf with an Existing Network
Access Layer, VLANs 100-150
Access Layer, VLANs 151-200
Host Host
Agg Layer
Core Layer
ACI Border Leafs
Host
Spine Layer
ACI Pod / New DC
Data Row Upgrade
New Application
Access Layer, VLANs 201-250
L3
L2
L3
L2
ACI Fabric
(VXLAN based)
Integrating Spine/Leaf with an Existing Network
Access Layer, VLANs 100-150
Host Host
Agg Layer
Core Layer
ACI Border Leafs
Host
Spine Layer
ACI Pod / New DC
Data Row Upgrade
New Application
L3
L2
L3
L2
Access Layer, VLANs 151-200
ACI Leafs and Border Leafs
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Data Centre Interconnect Options
• Options for L2 Interconnect
L3-----------
L2
Campus
Client Access
WAN / DCI
VM VM VM VM VM VM
Virtualised Servers with Nexus
1000v, vPath, CSR 1000v
Virtual DC
Services in Software
L3-----------
L2
WAN / DCI
Campus
Client Access
VM VM VM VM VM VM
Virtualised Servers with Nexus
1000v, vPath, CSR 1000v
Virtual DC
Services in Software
CSR1000v
ASR1000
ASR1000
N7K
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Nexus Programmability
Protocols and Data Models (Nexus 7K | Nexus 5K/6K | Nexus 9K):
XMPP: Shipping | Shipping | Future
LDAP: Shipping | Shipping | Shipping
NetConf/XML: Shipping | Shipping | Shipping
NXAPI (JSON/XML): Future | Future | Shipping
YANG: Future | Future | Future
REST: Future | Future | Shipping
Provisioning & Orchestration (Nexus 7K | Nexus 5K/6K | Nexus 9K):
Puppet/Chef: Future | Shipping | Shipping
PoAP: Shipping | Shipping | Shipping
OpenStack: Shipping | Shipping | Shipping
Programmatic Interfaces (Nexus 7K | Nexus 5K/6K | Nexus 9K):
Native Python: Shipping | Shipping | Shipping
Integrated container: Coming | Future | Shipping
Guest Shell: Future | Future | Shipping
OnePK: Future | Shipping | Roadmap
OpenFlow: Future | Shipping | Shipping
OpFlex: Future | Future | Future
Programming for Many Boxes – GitHub Repository
https://github.com/datacenter/
Here’s an example that uses the NXAPI on the Nexus 9000. It can automate mundane configuration tasks: you launch it remotely (from your Mac/PC) and use it to get an inventory of the switch, configure new interfaces, and so on:
https://github.com/datacenter/nexus9000/blob/master/nx-os/nxapi/getting_started/nxapi_basics.py
Here’s another one that collects the output of several “show” commands and puts them together to create a “super command” with nice NX-OS-style formatting:
https://github.com/datacenter/nexus9000/blob/master/nx-os/python/samples/showtrans.py
There are a few others such as a CRC error check here:
https://github.com/datacenter/nexus7000/blob/master/crc_checker_n7k.py
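For scripts like the NXAPI examples above, the switch expects a JSON-RPC body posted to its management endpoint. A minimal sketch of framing such a request; the switch address and credentials are placeholders, and NXAPI must first be enabled on the switch (`feature nxapi`):

```python
import json

def nxapi_cli_payload(command, req_id=1):
    """Build the JSON-RPC body for running one CLI command via NXAPI."""
    return {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": req_id,
    }

# NXAPI accepts a list of such commands in one POST
body = json.dumps([nxapi_cli_payload("show version")])
print(body)

# Posting it (requires the third-party `requests` package and a reachable switch):
# import requests
# resp = requests.post("https://192.0.2.1/ins", data=body,
#                      headers={"content-type": "application/json-rpc"},
#                      auth=("admin", "password"), verify=False)
# print(resp.json())
```

The response comes back as structured JSON rather than screen-scraped text, which is what makes these scripts practical for inventory and bulk configuration.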
Programming Examples
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
UCS Manages Compute through Abstraction
LAN
SAN
Motherboard Firmware
BIOS Configuration
Adapter Firmware
Boot Order
RAID configuration
Maintenance Policy
LAN Connectivity Configuration
SAN Connectivity Configuration
Service Profile
ACI Manages Communications through Abstraction
(Diagram: an Application Network Profile abstracting ACLs, QoS, External Connectivity, FW Configuration, SLB Configuration, Host Connectivity and Network Path Forwarding)
Different Modes of Operation with Nexus 9000
NX-OS Working w/ multiple SDN controllers
(inclusive for NfV)
APIC data object / policy model integrated natively with NX-OS
running on Nexus 9000 switches (spines and leaves)
Loosely coupled integration (custom integration and open programmability)
Tightly coupled integration: an out-of-the-box ready system
Deploy for multiple topologies
Leaf/Spine, 2-Tier Aggregation, Full Mesh
Deployed as a well-known CLOS topology.
It’s a system approach.
Interoperable w/ 3rd Party ToR Switches
and WAN gear
Must be Nexus 9000 hardware for leaves and spines as well as ACI
Software (switch code and APIC controller)
1/10/40/100GE
Common Platform
Nexus 9000 Standalone (with Controller*) Application Centric Infrastructure (with APIC)
VTS
NCS
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
UCSD
Cisco InterCloud Architectural Details
Public
VM
InterCloud Director
InterCloud Switch
InterCloud Provider Enablement
Platform
VM Manager
Private
Cisco Global InterCloud
(or Partner White-Label)
IT Admins / End Users
VM VM
InterCloud Extender
InterCloud Services
VM
InterCloud Secure Fabric
Administrator installs
InterCloud Director
Installed and
configured through InterCloud Director
SP Admin deploys
ICPEP
Cisco Global Intercloud
Services
InterCloud Director
UCSD-based, separate interface
InterCloud Secure Fabric
N1Kv-based, doesn’t require a full N1Kv install
vNIC from the InterCloud connector into the vSwitch
Optional services integration with CSR1000v
InterCloud Provider Enablement Platform
ICF-Provider Edition implemented by Provider
InterCloud Components
Key Takeaways
Cisco has many options for building DC solutions
All solutions can start small and grow
Does not have to be a “rip and replace”
Spine-Leaf does not have to be expensive
Automated fabrics can provide new tools for simplified operations
Cloud technologies can expose new operational models
Key Takeaways
Q & A
Give us your feedback and receive a
Cisco Live 2015 T-Shirt!
Complete your Overall Event Survey and 5 Session
Evaluations.
• Directly from your mobile device on the Cisco Live
Mobile App
• By visiting the Cisco Live Mobile Site
http://showcase.genie-connect.com/clmelbourne2015
• Visit any Cisco Live Internet Station located
throughout the venue
T-Shirts can be collected in the World of Solutions
on Friday 20 March 12:00pm - 2:00pm
Complete Your Online Session Evaluation
Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Additional Resources
Follow up information for more details:
ACI home page on CCO: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
Promise Theory for Dummies (careful, adult language): https://www.socallinuxexpo.org/scale11x/presentations/promise-theory-dummies
Meta Data in the Software Defined Data Center: https://www.youtube.com/watch?v=e29hQ7kCcNs&list=PLinuRwpnsHaf7ePRWHZ4Jb5gvTSrxkwpw&index=5