Copyright 2014 Juniper Networks, Inc. 1
Juniper Networks SDN and NFV Products for Service Provider Networks
Evgeny Bugakov, Senior Systems Engineer, JNCIE-SP
21 April 2015, Moscow, Russia
AGENDA
1. Virtualization strategy and goals
2. vMX product overview and performance
3. vMX use cases and deployment models
4. vMX roadmap and licensing
5. NorthStar WAN SDN Controller
Virtualization strategy and goals
MX virtualization strategy (diagram)

- Enterprise edge/mobile edge (branch office, HQ carrier Ethernet switch, cell site router): vCPE, enterprise router
- Aggregation/metro/metro core (aggregation router/metro, mobile & packet GWs): virtual routing engine, virtual route reflector
- Service provider edge/core and EPC (DC/CO edge router, service edge router, core): virtual PE, virtual BNG/LNS, hardware virtualization
- Data center/central office: MX SDN gateway — vBNG, vPE, vCPE
- Control plane and OS: virtual JUNOS; forwarding plane: virtualized Trio
- Software and applications: leverage R&D effort and JUNOS feature velocity across all physical & virtualization initiatives
Physical vs. Virtual

Physical:
- High throughput, high density
- Guarantee of SLA
- Low power consumption per unit of throughput
- Scale up
- Higher entry cost and longer time to deploy
- Distributed or centralized model
- Well-developed network management system, OSS/BSS
- Variety of network interfaces for flexibility
- Excellent price-per-throughput ratio

Virtual:
- Flexibility to reach higher scale in the control plane and service plane
- Agile, quick to start
- Low power consumption per control-plane and service instance
- Scale out
- Lower entry cost and shorter time to deploy
- Optimal in centralized, cloud-centric deployments
- Same platform management as physical, plus the same VM management as any software on a server in the cloud
- Cloud-centric, Ethernet-only
- Ability to apply a pay-as-you-grow model

Each option has its own strengths, and each is created with a different focus.
Types of deployment with a virtual platform

- Traditional function, 1:1 form replacement: route reflector, services appliances, lab & POC, branch router, DC GW, CPE, PE, wireless LAN GW, mobile security GW, mobile GW
- New applications where physical is not feasible or ideal: Cloud CPE, cloud-based VPN, service chaining GW, virtual private cloud GW
- A whole new approach to a traditional concept: multi-function, multi-layer integration with routing as a plug-in; SDN GW
vMX Product Overview
vMX goals

- Agile and scalable: scale-out elasticity by spinning up new instances; faster time-to-market offering; ability to add new services via service chaining
- Orchestrated: vMX is treated similarly to a cloud-based application
- Leverage JUNOS and Trio: leverages the forwarding feature set of Trio and the control-plane features of JUNOS
Virtual and Physical MX

Both share the same Trio microcode (TRIO UCODE). On the physical MX, the control plane runs on JUNOS and the data plane runs on the Trio ASIC/hardware PFE; on vMX, the same microcode is cross-compiled to x86 instructions and runs in the VFP. Cross-compilation creates high leverage of features between virtual and physical with minimal re-work.
Virtualization techniques: deployment with hypervisors

Para-virtualization (VirtIO, VMXNET3):
- Guest VMs present applications with virtual NICs backed by VirtIO drivers and device emulation in the hypervisor (KVM, Xen, VMware ESXi), which owns the physical NICs
- Guest and hypervisor work together to make emulation efficient
- Offers flexibility for multi-tenancy, but with lower I/O performance
- The NIC resource is not tied to any one application and can be shared across multiple applications
- vMotion-like functionality is possible

PCI pass-through with SR-IOV:
- Device drivers exist in user space
- Direct I/O path between the NIC and the user-space application, bypassing the hypervisor
- Best I/O performance, but with a dependency on NIC type
- vMotion-like functionality is not possible
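The two attachment models above can be expressed as libvirt domain XML fragments; a sketch only — the bridge name and PCI address are placeholders, not taken from this deck:

```xml
<!-- Para-virtualized NIC: the guest sees a virtio device behind a host bridge -->
<interface type='bridge'>
  <source bridge='br-int'/>
  <model type='virtio'/>
</interface>

<!-- SR-IOV PCI pass-through: a virtual function is handed directly to the guest -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
```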
Virtualization techniques: deployment with containers

Containers (Docker, LXC):
- No hypervisor layer; much less memory and compute resource overhead
- Applications attach virtual NICs directly to the container engine, which owns the physical NICs
- No need for PCI pass-through or special NIC emulation
- Offers high I/O performance
- Offers flexibility for multi-tenancy
vMX overview: efficient separation of control and data plane

- Data packets are switched within vTRIO
- Multi-threaded SMP implementation allows core elasticity
- Only control packets are forwarded to JUNOS
- Feature parity with JUNOS (CLI, interface model, service configuration)
- NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)

Architecture: on x86 hardware under the hypervisor, the VFP guest OS (Linux, with Intel DPDK) runs the Virtual TRIO forwarding plane, while the VCP guest OS (JUNOS) runs the control-plane processes (CHAS, SISD, RPD, DCD, SNMP, LC kernel).
Virtual TRIO packet flow (diagram)

- VFP: physical NICs feed DPDK, which feeds the virtual NICs of vTRIO (VMXT, the microkernel) on vpfe0/vpfe1
- VCP: vre0/vre1 run rpd and chasd
- Internal bridge br-int (172.16.0.3) carries control traffic between vpfe eth0 (172.16.0.2) and VCP em1 (172.16.0.1); br-ext connects eth1 and fxp0 for external management
vMX orchestration

- The VCP guest VM (FreeBSD/JUNOS) and the VFP guest VM (Linux + DPDK) run on the KVM hypervisor, with dedicated cores and memory on the physical layer
- A bridge/vSwitch provides VFP-to-VCP communication (internal host path) and carries management traffic
- Optimized data path from physical NIC to virtual NIC via SR-IOV (Single Root I/O Virtualization)
- OpenStack/scripts for VM management
vMX Performance
vMX environment

Sample system configuration:
- CPU: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache
- NIC: Intel 82599 (for SR-IOV only)
- Memory: minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
- Storage: local or NAS

Sample configuration for number of CPUs:
- vMX with up to 100 Mbps performance: min 4 vCPUs (1 for VCP, 3 for VFP); min 2 cores (1 for VFP, 1 for VCP); min 8 GB memory; VirtIO NIC only
- vMX with up to 3 Gbps performance @ 512 bytes: min 4 vCPUs (1 for VCP, 3 for VFP); min 4 cores (2 for VFP, 1 for host, 1 for VCP); min 8 GB memory; VirtIO or SR-IOV NIC
- vMX with 10 Gbps and beyond (assuming min 2 × 10G ports): min 5 vCPUs (1 for VCP, 4 for VFP); min 5 cores (3 for VFP, 1 for host, 1 for VCP); min 8 GB memory; SR-IOV NIC only
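The sizing table above can be encoded as a small helper; a sketch only — the function name and return shape are mine, not a Juniper tool:

```python
def vmx_min_footprint(throughput_gbps):
    """Return (vcpus, cores, mem_gb, nic) for a target throughput,
    per the vMX sizing table (illustrative encoding)."""
    if throughput_gbps <= 0.1:
        return 4, 2, 8, "VirtIO"          # up to 100 Mbps
    if throughput_gbps <= 3:
        return 4, 4, 8, "VirtIO or SR-IOV"  # up to 3 Gbps @ 512B
    return 5, 5, 8, "SR-IOV"              # 10 Gbps and beyond
```

For example, `vmx_min_footprint(10)` returns the 5-vCPU/5-core SR-IOV tier.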
vMX baseline performance (Gbps)

2 × 10G ports — frame size (bytes) vs. # of cores for packet processing* (3 / 4 / 6 / 8 / 10):
- 256: 2 / 3.8 / 7.2 / 9.3 / 12.6
- 512: 3.7 / 7.3 / 13.5 / 18.4 / 19.8
- 1500: 10.7 / 20 / 20 / 20 / 20

4 × 10G ports (cores 3 / 4 / 6 / 8 / 10):
- 256: 2.1 / 4.2 / 6.8 / 9.6 / 13.3
- 512: 4.0 / 7.9 / 13.8 / 18.6 / 26
- 1500: 11.3 / 22.5 / 39.1 / 40 / 40

6 × 10G ports (cores 3 / 4 / 6 / 8 / 10):
- 256: 2.2 / 4.0 / 6.8 / 9.8 / —
- 512: 4.1 / 8.1 / 14 / 19.0 / 27.5
- 1500: 11.5 / 22.9 / 40 / 53.2 / 60

8 × 10G ports (single column of values as reported, at 12 cores):
- 66: 4.8; 128: 8.3; 256: 14.4; 512: 31; 1500: 78.5; IMIX: 35.3

*The number of cores includes cores for packet processing and associated host functionality. For each 10G port there is a dedicated core not included in this number.
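A quick sanity check on the tables above: throughput in Gbps follows directly from packet rate and frame size. This is my own arithmetic (raw frame bits only, ignoring preamble and inter-frame gap), not a figure from the deck:

```python
def gbps(mpps, frame_bytes):
    """Throughput in Gbps carried by `mpps` million packets/s of `frame_bytes`-byte frames."""
    return mpps * 1e6 * frame_bytes * 8 / 1e9
```

For instance, the 3-core, 512-byte cell (~3.7 Gbps) implies roughly 0.9 Mpps, since `gbps(0.9, 512)` is about 3.69.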
vMX performance improvement

The degree of Trio ASIC emulation determines how many x86 instructions are used to process a packet: the more emulation, the more instructions per packet. The number of instructions per packet has an inverse relation with packet performance. The vMX roadmap is to reduce the instructions utilized per packet by running some parts of the forwarding plane natively on x86, without emulation — "vMX at present" sits toward the more-emulation end of that curve, "vMX future" toward the less-emulation end.
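The inverse relation above is easy to sketch; the numbers here are illustrative, not Juniper measurements:

```python
def pps_per_core(clock_hz, instructions_per_packet, ipc=1.0):
    """Packets/s one core can process, assuming `ipc` instructions retired per cycle."""
    return clock_hz * ipc / instructions_per_packet

# Halving the instructions per packet doubles the packet rate:
base = pps_per_core(3.3e9, 3000)    # ~1.1 Mpps on a 3.3 GHz core
improved = pps_per_core(3.3e9, 1500)
```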
vMX use cases and deployment models
Service provider vMX use case: virtual PE (vPE)

Diagram: branch-office and SMB CPEs connect via pseudowire, L3VPN, or IPsec/overlay technology through L2/L3 PEs in the provider MPLS cloud to the DC/CO gateway and DC/CO fabric hosting the vPE; peering and Internet are reached through the same cloud.

Market requirement:
- Scale-out deployment scenarios
- Low-bandwidth, high control-plane-scale customers
- Dedicated PE for new services and faster time-to-market

vMX value proposition:
- vMX is a virtual extension of a physical MX PE
- Orchestration and management capabilities inherent to any virtualized application apply
vMX as a DC gateway: virtual USGW

Diagram: in the data center/central office, virtualized servers host VMs on virtual networks A and B behind VTEPs; a ToR (L2) switch fronts the non-virtualized environment (L2) behind a VXLAN gateway (VTEP); vMX terminates the overlays into per-customer VRFs (VRF A, VRF B) and acts as the VPN gateway (L3VPN) into the MPLS cloud.

Market requirement:
- Service providers need a gateway router to connect the virtual networks to the physical network
- The gateway should be capable of supporting the different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN

vMX value proposition:
- vMX supports all the overlay, DCI and L2 technologies available on MX
- Scale-out control plane to scale up VRF instances and the number of VPN routes
vMX to offer managed/centralized CPE

Diagram: branch-office switches connect via L2 PEs across the provider MPLS cloud to the DC/CO gateway; behind the DC/CO fabric with Contrail overlay, vMX serves as vCPE (IPsec, NAT) and as vPE, with vSRX as the firewall, orchestrated by the Contrail controller; Internet breakout is via the PE.

Market requirement:
- Service providers want to offer a managed CPE service and centralize CPE functionality to avoid truck rolls
- Large enterprises want a centralized CPE offering to manage all their branch sites
- Both SPs and enterprises want the ability to offer new services without changing the CPE device

vMX value proposition:
- vMX with service chaining can offer best-of-breed routing and L4-L7 functionality
- Service chaining offers the flexibility to add new services in a scale-out manner
Reflection from the physical to the virtual world: proof-of-concept lab validation and software certification

A virtual router can perfectly mirror a carrier-grade physical platform, providing a reflection of an actual deployment in a virtual environment. Ideal to support:
- Proof-of-concept labs
- New service configuration/operation preparation
- Software release validation for an actual deployment
- Training labs for operational teams
- Troubleshooting environments for real network issues
- CAPEX and OPEX reduction for labs, and quick turnaround when lab network scale is required
Service agility: bring up a new service in a POP

In an SP network for VPN service (PEs serving L3 CPEs in a POP):
1. Install a new vMX to start offering a new service without impact to the existing platform.
2. Scale out the service with vMX quickly if the traffic profile fits the requirements.
3. Add the service directly to the physical MX GW, or add more physical MX, if the service is successful and there is more demand with significant traffic growth.
4. Integrate the new service into the existing PE when the service is mature.
vBNG: what is it?

- Runs on x86 inside virtual machines; two virtual machines are needed, one for forwarding and one for the control plane
- First iteration supports KVM as the hypervisor and OpenStack for orchestration; VMware support is planned
- Based on the same code base and architecture as Juniper's vMX; runs Junos
- Full featured and constantly improving; some features, scale and performance of vBNG will differ from pBNG
- Easy migration from pBNG; supports multiple BB models
- vLNS; BNG based on PPP, DHCP, C-VLAN and PWHT connection types
Virtual BNG cluster in a data center

A cluster of vMX instances acting as vBNG in the data center or CO can serve 10K-100K subscribers. Potentially the BNG function can be virtualized, with vMX forming a BNG cluster at the DC or CO (roadmap item, not at FRS). Suitable for heavy BNG control-plane load where little bandwidth is needed; pay-as-you-grow model; rapid deployment of a new BNG router when needed; scale-out works well thanks to the S-MPLS architecture, leveraging inter-domain L2VPN, L3VPN and VPLS.
vMX route reflector feature set

Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O — determined by CPU speed).
- Memory drives route-reflector scaling: larger memory means an RR can hold more RIB routes; with more memory, an RR can control larger network segments, so fewer RRs are required in the network.
- CPU speed drives BGP performance: a faster CPU clock means faster convergence, and faster RR CPUs allow larger network segments to be controlled by one RR — again lowering the number of RRs required in the network.

The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs and more memory on standard servers/appliances.
Juniper vRR development strategy

vRR development follows a three-pronged approach:
1. Evolve platform capabilities using virtualization technologies: allow instantiation of a Junos image on non-RE hardware (any Intel-architecture blade server/server).
2. Evolve Junos OS and RPD capabilities: 64-bit Junos kernel; 64-bit RPD improvements for increased scale; RPD modularity/multi-threading for better convergence performance.
3. Evolve Junos BGP capabilities for the RR application: BGP resilience and reliability improvements; BGP Monitoring Protocol; BGP-driven application control; DDoS prevention via FlowSpec.
vRR scaling results

Tested with a 32 GB vRR instance. The convergence numbers also improve with a higher-clock CPU.

Address family | Advertising peers | Active routes | Total routes | Memory util. (all routes received) | Time to receive all routes | Receiving peers | Time to advertise (mem. util.)
IPv4  | 600 | 4.2M | 42M (10 paths)  | 60% | 11 min | 600 | 20 min (62%)
IPv4  | 600 | 2M   | 20M (10 paths)  | 33% | 6 min  | 600 | 6 min (33%)
IPv6  | 600 | 4M   | 40M (10 paths)  | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2M   | 4M (2 paths)    | 13% | 3 min  | 600 | 3 min (13%)
VPNv4 | 600 | 4.2M | 8.4M (2 paths)  | 19% | 5 min  | 600 | 23 min (24%)
VPNv4 | 600 | 6M   | 12M (2 paths)   | 24% | 8 min  | 600 | 36 min (32%)
VPNv6 | 600 | 6M   | 12M (2 paths)   | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2M | 8.4M (2 paths)  | 22% | 8 min  | 600 | 8 min (22%)
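From the first IPv4 row one can back out an approximate per-route memory cost. This is my own arithmetic on the table above, not a Juniper figure:

```python
def bytes_per_route(mem_gb, mem_fraction, total_routes):
    """Approximate RIB memory cost per route for an RR instance."""
    return mem_gb * 2**30 * mem_fraction / total_routes

# 32 GB instance at 60% utilization holding 42M routes -> roughly 490 bytes/route
approx = bytes_per_route(32, 0.60, 42e6)
```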
Network-based virtual route reflector design

vRRs can be deployed in the same locations in the network as today's RRs, with the same connectivity paradigm between RRs and their clients (iBGP from clients 1 through n). vRR instantiation and connectivity (underlay) are provided by OpenStack; Junos vRR runs in VMs on standard servers.
Cloud-based virtual route reflector design: solving the best-path selection problem for a cloud virtual route reflector

vRR runs as an application hosted in a data center on the cloud backbone, with a cloud overlay built with Contrail or VMware. Regional networks 1 and 2 attach via GRE and the IGP, and clients speak iBGP. The GRE tunnel is originated from gre.X (a control-plane interface), so the vRR behaves as if it were locally attached to the regional router: VRR 1 selects paths based on R1's view and VRR 2 based on R2's view (this requires resolution RIB configuration).
There is an App for That

Evolving service delivery to bring cloud properties to managed business services: a 30 Mbps firewall, application acceleration, remote access for 40 employees, application reporting.
Cloud-based CPE with vMX

Typical CPE functions: DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3.

A simplified CPE removes CPE barriers to service innovation and lowers complexity and cost: the premises device is reduced to a simplified L2 CPE (modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3), while DHCP, firewall, routing/IP forwarding and NAT move to the BNG/PE in the SP network.

- In-network CPE functions: leverage and integrate with other network services; centralize and consolidate; integrate seamlessly with mobile and cloud-based services
- Direct connect: extend reach and visibility into the home; per-device awareness and state; simplified user experience

Summary: simplify the device required on the customer premises; centralize key CPE functions and integrate them into the network edge.
Cloud CPE scenario A: integrated v-branch router

On site, an Ethernet NID or a switch with smart SFP acts as the L2 CPE (optionally with L3 awareness for QoS and assurance), providing LAG, VRRP, OAM and L2 filters. In the edge router's Cloud CPE context, each vCPE instance is a VPN routing instance providing DHCP, routing, NAT, firewall, VPN, IDP, addressing, Internet & VPN access, QoS, and per-vCPE statistics and monitoring.

- Pros: simplest on-site CPE; limited investment; LAN extension; device visibility
- Cons: access-network impact; limited services; management impact
- Components: Juniper MX, JS Self-Care App, NID partners
Cloud CPE scenario B: overlay v-branch router

A lightweight L3 CPE connects over a (un)secure tunnel on L2 or L3 transport to the VPN. Each vCPE instance is a virtual router in a VM, and a VM can be shared across sites.

- Pros: no domain constraint; operational isolation; VM flexibility; transparent to the existing network
- Cons: prerequisites on the CPE; blindsided edge; virtualization tax
- Components: Juniper Firefly, Virtual Director
Broadband device visibility example: parental control based on device policies

In the home network, a laptop, a tablet and little Jimmy's desktop sit behind an L2 bridge. The service provides:
- Activity reporting: volumes and content (facebook.com, twitter.com, hulu.com, wikipedia.com, iwishiwere21.com) via a self-care & reporting portal/mobile app
- Content filter: "You have tried to access www.iwishiwere21.com. This site is filtered in order to protect you."
- Time-of-day control: "Internet access from this device is not permitted between 7pm and 7am. Try again tomorrow."
More use cases? The limit is our imagination

A virtual platform is one more tool for the network provider, and the use cases are up to users to define:
- VPC GW for private, public and hybrid cloud
- Virtual route reflector
- NFV plug-in for multi-function consolidation
- SW certification, lab validation, network planning & troubleshooting, proof of concept
- Distributed NFV service complex
- Virtual BNG cluster
- Virtual mobile service control GW
- Cloud-based VPN
- vGW for service chaining
- And more
vMX FRS features
vMX product family

Trial:
- Characteristics: up to 90-day trial; no limit on capacity; inclusive of all features
- Target customer: potential customers who want to try out vMX in their lab or qualify vMX
- Availability: early availability by end of Feb 2015

Lab simulation/education:
- Characteristics: no time limit enforced; forwarding plane limited to 50 Mbps; inclusive of all features
- Target customer: customers who want to simulate a production network in the lab; new customers gaining JUNOS and MX experience
- Availability: early availability by end of Feb 2015

GA product:
- Characteristics: bandwidth-driven licenses; two modes for features: BASE or ADVANCE/PREMIUM
- Target customer: production deployment
- Availability: vMX 14.1R6 (June 2015)
vMX FRS product

The official FRS target date for vMX phase 1 is Q1 2015, with JUNOS release 14.1R6.

High-level overview of the FRS product:
- DPDK integration; min 80G throughput per vMX instance
- OpenStack integration; 1:1 mapping between VFP and VCP
- Hypervisor support: KVM, VMware ESXi, Xen

High-level feature support for FRS:
- Full IP capabilities; MPLS: LDP, RSVP; MPLS applications: L3VPN, L2VPN, L2Circuit
- IP and MPLS multicast; tunneling: GRE, LT; OAM: BFD; QoS: Intel DPDK QoS feature set
vMX Roadmap
vMX with vRouter and orchestration

- vMX with vRouter integration: VirtIO utilized for para-virtualized drivers
- Contrail OpenStack for VM management and for setting up the overlay network
- NFV orchestrator (OpenStack Heat templates, template-based config) utilized to easily create and replicate vMX instances
Physical & virtual MX

Physical and virtual forwarding resources (VMX1, VMX2) are joined by an L2 interconnect under a virtual routing engine, with the Contrail controller and NFV orchestrator applying template-based config (bandwidth per instance, memory, number of WAN ports):
- Offers a scale-out model across both physical and virtual resources
- Depending on the type of customer and service offering, the NFV orchestrator decides whether to provision the customer on a physical or virtual resource
vMX roadmap (1H2015 → 2H2015 → 2016)

Features:
- 1H2015 — vMX FRS (JUNOS 14.1R6): full IP capabilities; MPLS applications: L3VPN, L2VPN, L2Circuit; IP and MPLS multicast; tunneling: GRE, LT; OAM: BFD; Intel DPDK QoS feature set
- 2H2015 — vMX post-FRS features (target release 15.1Rx): L2 bridging & IRB, VPLS, VXLAN, EVPN; inline services: jflow, IPFIX; VRR application
- 2016 — vMX live migration and HA architectures; vMX in CSPs (Amazon) as virtual private cloud gateway; inline site-to-site IPsec; L4-L7 feature integration: NAPT, dynamic NAT, NAT64, lw4o6, dynamic multipoint IPsec VPN

Hypervisor:
- KVM with SR-IOV and VirtIO; later Xen, Docker & LXC for the VFP, VMware ESXi with VMXNET3 and SR-IOV, Microsoft Hyper-V

Orchestration/management:
- vMX 90-day trial and lab test/simulation platform; vMX bring-up with OpenStack utilizing Heat templates; vMX Neutron L3 plugin; vMX working with Contrail vRouter and integration into Contrail OpenStack

Licensing:
- Enhanced license-management software with on-site server or call-home functionality for vMX license management

Performance:
- Max vanilla IP performance with 20 cores @ 1500 bytes: 80G; with IMIX: 36G
- Performance improvements for higher PPS per core (1.5-2 Mpps/core), vHypermode; vMX scale-out architectures
vMX Licensing
vMX pricing philosophy

- Value-based pricing: priced as a platform, not just on the cost of bandwidth. Each vMX instance is a router with its own control plane, data plane and administrative domain; the value lies in the ability to instantiate routers easily.
- Elastic pricing model: bandwidth-based pricing; pay-as-you-grow.
vMX license structure

- Three application packages — BASE: basic IP routing, no VPN capabilities; ADVANCED: same functionality as -IR mode MPCs; PREMIUM: same functionality as -R mode MPCs
- Capacity-based licensing: each application package offers capacity-based SKUs
- Per-instance license
- Payment options: licenses will have perpetual and subscription options
Application package functionality mapping

- BASE: IP routing with 32K IP routes in the FIB; basic L2 functionality (L2 bridging and switching); no VPN capabilities (no L2VPN, VPLS, EVPN or L3VPN). Use cases: low-end CPE or L3 gateway.
- ADVANCED (-IR): full IP FIB; full L2 capabilities including L2VPN, VPLS, L2Circuit; VXLAN; EVPN; IP multicast. Use cases: L2 vPE, full-IP vPE, virtual DC GW.
- PREMIUM (-R): the lower tiers plus L3VPN for IP and multicast. Use cases: L3VPN vPE, virtual private cloud GW.

Note: application packages exclude IPsec, BNG and VRR functionality.
Bandwidth license SKUs

Bandwidth-based licenses exist for each application package at the following processing-capacity limits: 100M, 250M, 500M, 1G, 5G, 10G, 40G. For 100M, 250M and 500M there is a single combined SKU with all applications included; at 1G, 5G, 10G and 40G each tier (BASE, ADVANCE, PREMIUM) has its own SKU. Application tiers are additive, i.e. the ADV tier encompasses BASE functionality.
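The SKU grid above can be expressed as a toy selector; the function and its naming are mine, for illustration only:

```python
def vmx_license_sku(gbps, tier="BASE"):
    """Pick the smallest vMX bandwidth SKU covering `gbps` (illustrative)."""
    if gbps <= 0.5:
        # 100M/250M/500M: one combined SKU, all application tiers included
        cap = next(c for c in (0.1, 0.25, 0.5) if gbps <= c)
        return f"VMX-{int(cap * 1000)}M"
    # 1G and above: per-tier SKUs (raises StopIteration above 40G)
    cap = next(c for c in (1, 5, 10, 40) if gbps <= c)
    return f"VMX-{tier}-{cap}G"
```

For example, a 3 Gbps ADVANCED deployment maps to `VMX-ADV-5G`.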
vMX software license SKUs

SKU | Description
VMX-100M | 100M perpetual license; includes all features at full scale
VMX-250M | 250M perpetual license; includes all features at full scale
VMX-500M | 500M perpetual license; includes all features at full scale
VMX-BASE-1G / -5G / -10G / -40G | Perpetual license at the stated capacity; limited IP FIB and basic L2 functionality; no VPN features
VMX-ADV-1G / -5G / -10G / -40G | Perpetual license at the stated capacity; full-scale L2/L2.5 and L3 features; includes EVPN and VXLAN; only 16 L3VPN instances
VMX-PRM-1G / -5G / -10G / -40G | Perpetual license at the stated capacity; all features of the lower tiers (L2/L2.5, L3, EVPN, VXLAN) plus full-scale L3VPN features
Juniper NorthStar Controller
Challenges with current networks: how to make the best use of the installed infrastructure?

1. How do I use my network resources efficiently?
2. How can I make my network application-aware?
3. How do I get complete & real-time visibility?
PCE architecture: a standards-based approach for carrier SDN

What is it? A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination.

What are the components?
- Path Computation Element (PCE): computes the path
- Path Computation Client (PCC): receives the path and applies it in the network; paths are still signaled with RSVP-TE
- PCE Protocol (PCEP): the protocol for PCE/PCC communication
PCE: an evolutionary approach — active stateful PCE extensions

- Real-time awareness of LSP & network state: the PCE dynamically learns the network topology; PCCs report LSP state to the PCE
- LSP attribute updates: via PCEP, the PCE can update LSP bandwidth & path attributes, if the LSP is *controlled*
- Create & tear down LSPs: the PCE can *create* LSPs on the PCC, ephemerally (no persistent configuration is present on the PCC)
- Harder problems offloaded from the network element: P2MP LSP path computation & P2MP tree diversity; disjoint SRC/DST LSP path diversity; multi-layer & multiple constraints
Active stateful PCE: a centralized network controller

The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures. More recently, there is a need for a more active and stateful PCE. NorthStar is an active stateful PCE, which fits well with the SDN paradigm of a centralized network controller.

What makes an active stateful PCE different:
- The PCE is synchronized, in real time, with the network via standard networking protocols: IGP, PCEP
- The PCE has visibility into network state: bandwidth availability, LSP attributes
- The PCE can take control and create state within the MPLS network: PCCs report LSP state, and the PCE creates LSP state
- The PCE dictates the order of operations network-wide
NorthStar components & workflow: software-driven policy

- Topology discovery: TED discovery via BGP-LS or IGP-TE; LSDB discovery (OSPF, ISIS); TE LSP discovery via PCEP
- Path computation: application-specific algorithms (analyze / optimize / virtualize), exposed through open APIs
- State installation: PCEP create/modify TE LSP, one session per LER (PCC); RSVP signaling in the network installs the paths
NorthStar major components

NorthStar consists of several major components:
- A JUNOS virtual machine (VM), used to collect the TE database & LSDB; a new JUNOS daemon, NTAD, is used to remote-flash the lsdist0 table to the PCS
- The Path Computation Server (PCS), which peers with each PCC using PCEP for LSP state collection & modification, and runs application-specific algorithms for computing LSP paths
- A topology server
- A REST server, the interface into the APIs

These components (the JUNOS VM with RPD and NTAD, the PCS, the topology server and the REST server) run on CentOS 6.5 under the KVM hypervisor, connected to the MPLS network of PCCs via BGP-LS/IGP and PCEP.
NorthStar as a black box

- The JunosVM peers with the network for topology acquisition using BGP-LS, a direct ISIS or OSPF adjacency, or an ISIS/OSPF adjacency over a GRE tunnel
- PCCs connect to the PCE server via PCEP for LSP reporting; PCEP sessions are established from each LSP head-end to the PCE server
- Internally, the PCS, REST server, web server and auth module front the JunosVM (RPD) toward 3rd-party applications and the user interface, over HTTP/TCP
NorthStar northbound API: integration with 3rd-party tools and custom applications

- Topology discovery (IGP-TE / BGP-LS, PCEP) is exposed through a topology API (REST)
- Path computation (PCEP, application-specific algorithms) is exposed through a path computation API (REST)
- Path installation (PCEP) is exposed through a path provisioning API (REST)
- NorthStar pre-packaged applications — bandwidth calendaring, path diversity, premium path, auto-bandwidth / TE++, etc. — sit alongside standard, custom & 3rd-party applications
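A client of such a REST API might look like the following sketch; the host, port and endpoint path here are placeholders of mine, not the documented NorthStar API:

```python
import urllib.request

BASE = "https://northstar.example.net:8443/NorthStar/API/v1"  # hypothetical base URL

def topology_request(token):
    """Build the GET request for a topology snapshot (endpoint path is illustrative)."""
    return urllib.request.Request(
        BASE + "/topology",
        headers={"Authorization": "Bearer " + token},
    )

# urllib.request.urlopen(topology_request(token)) would then return the JSON document
```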
NorthStar 1.0 high availability (HA): active/standby for delegated LSPs

NorthStar 1.0 supports a high-availability model only for delegated LSPs; controllers are not actively synced with each other. Active/standby PCE model with up to 16 backup controllers:
- PCE group: all PCEs belonging to the same group
- LSPs are delegated to the primary PCE; the primary PCE is the controller with the highest delegation priority
- Other controllers cannot make changes to the LSPs
- If a PCC loses its connection to the primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE
- ALL PCCs MUST use the same primary PCE

Example PCC configuration with two controllers, jnc1 (priority 100) and jnc2 (priority 50):

[configuration protocols pcep]
pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}
Topology acquisition: BGP-LS

Various deployment options are supported. Using BGP-LS allows an operator to tap into all of BGP's deployment & policy flexibility to support network architectures of all types:
- Supports various inter-area and inter-domain deployment options
- Allows for fewer topology-acquisition sessions with NorthStar
- BGP-LS session(s) run from NorthStar to a BGP-LS speaker/hierarchy or to the ASBRs/ABRs
TOPOLOGY ACQUISITION: IS-IS, OSPF & GRE TUNNELING
Native protocol topology acquisition. NorthStar can also be deployed where it peers with the network via its native IGP:
- IS-IS and OSPFv2 are supported
- GRE tunneling is also supported to increase deployment flexibility
- Multi-area, multi-level & multi-domain networks MAY require many IGP adjacencies & GRE tunnels
(Diagram: redundant IGP speakers with IGP adjacencies to NorthStar, or IGP adjacencies over GRE tunnels from ASBRs/ABRs)
cbarth@vrr-84# show interfaces gre
unit 0 {
    tunnel {
        source 84.105.199.2;
        destination 84.0.0.101;
    }
    family inet {
        address 2.2.2.2/30;
    }
    family iso;
    family mpls;
}

cbarth@vrr-84# show protocols isis
interface gre.0 {
    point-to-point;
    level 2 metric 50000;
}
interface lo0.0;
-
JUNOS PCE CLIENT IMPLEMENTATION: New JUNOS daemon, pccd
- Enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs
- PCCD is the relay/message translator between the PCE & RPD
- LSP parameters, such as the path & bandwidth, and LSP creation instructions received from the PCE are communicated to RPD via PCCD
- RPD then signals the LSP using RSVP-TE
(Diagram: the PCE speaks PCEP to PCCD on the router; PCCD relays to RPD over JUNOS IPC, and RPD signals the LSP into the MPLS network with RSVP-TE)
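A minimal Junos PCC configuration sketch for this setup (the PCE name, addresses, and LSP name are assumptions for illustration, not from the slides):

```
# PCEP session from the router (PCC) to the NorthStar PCE
set protocols pcep pce northstar local-address 10.1.1.1
set protocols pcep pce northstar destination-ipv4-address 10.0.0.100
set protocols pcep pce northstar destination-port 4189
set protocols pcep pce northstar pce-type active stateful
set protocols pcep pce northstar lsp-provisioning
# Allow PCE-initiated (ephemeral) LSPs via pccd
set protocols mpls lsp-external-controller pccd
# Delegate an existing, traditionally configured LSP to the PCE
set protocols mpls label-switched-path to-pe2 lsp-external-controller pccd
```

Port 4189 is the IANA-assigned PCEP port; `lsp-provisioning` permits the stateful PCE to create LSPs, while the per-LSP `lsp-external-controller pccd` statement delegates control of that LSP.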
-
NORTHSTAR SIMULATION MODE: NorthStar vs. IP/MPLSview

REAL-TIME NETWORK FUNCTIONS (NorthStar):
- Topology discovery, LSP control/modification
- Dynamic topology updates via BGP-LS / IGP-TE
- Dynamic LSP state updates via PCEP
- Real-time modification of LSP attributes via PCEP (ERO, B/W, pre-emption, ...)

MPLS LSP PLANNING & DESIGN (NorthStar Simulation):
- MPLS capacity planning
- Topology acquisition via the NorthStar REST API (snapshot)
- LSP provisioning via the REST API
- Exhaustive failure analysis & capacity planning for MPLS LSPs
- MPLS LSP design (P2MP, FRR, JUNOS configlet, ...)

OFFLINE NETWORK PLANNING & MANAGEMENT (IP/MPLSview):
- Full offline network planning, FCAPS (PM, CM, FM)
- Topology acquisition & equipment discovery via CLI, SNMP, NorthStar REST API
- Exhaustive failure analysis & capacity planning (IP & MPLS)
- Inventory, provisioning, & performance management
-
DIVERSE PATH COMPUTATION: Automated computation of end-to-end diverse paths
Network-wide visibility allows NorthStar to support end-to-end LSP path diversity:
- Wholly disjoint path computations; options for link, node and SRLG diversity
- Pair of diverse LSPs with the same endpoints or with different endpoints
- SRLG information learned dynamically from the IGP
- Supported for PCE-created LSPs (at provisioning time) and delegated LSPs (through manual creation of a diversity group)
(Diagram: primary and secondary links between CE pairs; a shared-risk warning is eliminated once NorthStar places the paths diversely)
-
PCE-CREATED SYMMETRIC LSPS: Local association of the LSP symmetry constraint
NorthStar supports creating symmetric LSPs:
- Does not leverage GMPLS extensions for co-routed or associated bi-directional LSPs
- Unidirectional LSPs (identical names) are created from nodeA to nodeZ & from nodeZ to nodeA
- The symmetry constraint is maintained locally on NorthStar (attribute: pair=)
(Diagram: symmetric LSP creation between two nodes)
-
MAINTENANCE-MODE RE-ROUTING: Automated path re-computation, re-signaling and restoration
Automate the re-routing of traffic before a scheduled maintenance window:
- Simplifies planning and preparation before and during a maintenance window
- Eliminates the risk that traffic is mistakenly affected when a node/link goes into maintenance mode
- Reduces the need for spare capacity through optimum use of the resources available during the maintenance window
- After the maintenance window finishes, paths are automatically restored to the (new) optimum path

1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available
2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break)
3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path
-
BANDWIDTH CALENDARING: Time-based LSP provisioning
Bandwidth calendaring allows network operators to schedule the creation/deletion/modification of an LSP:
- An LSP may be scheduled for creation or deletion at some point in the future
- An LSP may be scheduled for modification at some point in the future
- B/W calendaring is built into all the LSP add/modify UIs

Example:
1. The operator pre-provisions a calendar event, either through the calendaring function native to NorthStar or through the path-provisioning API
2. NorthStar schedules the LSP provisioning event
3. The LSP path is calculated at the scheduled point in time and the path is provisioned in the network
-
GLOBAL CONCURRENT OPTIMIZATION: Optimized LSP placement
NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters:
- CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement

Net Groom:
- Triggered on demand
- User can choose the LSPs to be optimized
- LSP priority is not taken into account
- No pre-emption

Path Optimization:
- Triggered on demand or at scheduled intervals (with the optimization timer)
- Global re-optimization toward all LSPs
- LSP priority is taken into account
- Pre-emption may happen

(Diagram: a bandwidth bottleneck causes a CSPF failure for a new path request; global re-optimization re-places the high- and low-priority LSPs)
-
MPLS AUTO-BANDWIDTH: Auto-bandwidth example
1. The JUNOS PCC collects auto-bandwidth LSP statistics
2. Every adjustment interval, the PCC sends a PCRpt message with an LSP bandwidth request
3. NorthStar computes a new ERO for the requested B/W
4. NorthStar sends a PCUpd message with the new ERO & bandwidth
(Diagram: over time, the PCC takes B/W samples each adjustment interval and sends PCRpt messages with b/w = 14m, 12m, 15m, 16m; NorthStar answers with a PCUpd message)
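A hedged sketch of the PCC-side auto-bandwidth configuration on Junos (the LSP name, file name, intervals, and bandwidth bounds are illustrative choices, not values from the slides):

```
# Collect LSP statistics for auto-bandwidth sampling
set protocols mpls statistics file auto-bw.stats
set protocols mpls statistics interval 300
set protocols mpls statistics auto-bandwidth
# Per-LSP auto-bandwidth: request an adjustment every 900 seconds,
# bounded between 10m and 100m
set protocols mpls label-switched-path to-pe2 auto-bandwidth adjust-interval 900
set protocols mpls label-switched-path to-pe2 auto-bandwidth minimum-bandwidth 10m
set protocols mpls label-switched-path to-pe2 auto-bandwidth maximum-bandwidth 100m
# Delegate the LSP so the bandwidth request goes to NorthStar via PCEP
set protocols mpls label-switched-path to-pe2 lsp-external-controller pccd
```

With the LSP delegated, the adjustment at each interval is carried in a PCRpt message to the controller instead of being re-signaled locally, matching steps 1-4 above.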
-
INTER-DOMAIN TRAFFIC ENGINEERING: Optimal path computation & LSP placement
- LSP delegation, creation, and optimization of inter-domain LSPs
- Single active PCE across domains; BGP-LS for topology acquisition
- JUNOS inter-AS requirements & constraints:
http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-for-lsps.html
(Diagram: inter-AS traffic engineering between AS 100 and AS 200; inter-area traffic engineering across Area 0 and Areas 1, 2, 3)
-
NORTHSTAR SIMULATION MODE: Offline network planning & modeling
NorthStar builds a near-real-time network model for visualization and offline planning through dynamic topology / LSP acquisition:
- Export of topology and LSP state to NorthStar simulation mode for offline MPLS network modeling
- Add/delete links/nodes/LSPs for future network planning
- Exhaustive failure analysis, P2MP LSP design/planning, LSP design/planning, FRR design/planning
- JUNOS LSP configlet generation
(Diagram: NorthStar-Simulation growth projections for Year 1, Year 3, Year 5, and Extension Year 1)
-
A REAL CUSTOMER EXAMPLE OF PCE VALUE: Centralized vs. distributed path computation

(Chart: link utilization (%) per link, comparing distributed CSPF against PCE centralized CSPF across ~170 links)

Distributed CSPF assumptions:
- TE-LSP operational routes are used for distributed CSPF
- RSVP-TE maximum reservable BW set to 92%
- Modeling was performed with the exact operational LSP paths

Centralized path calculation assumptions:
- Convert all TE-LSPs to EROs via the PCE design action
- The objective function is min-max link utilization
- Only primary EROs & online bypass LSPs
- Modeling was performed with 100% of TE-LSPs being computed by the PCE

Result: up to a 15% reduction in RSVP reserved B/W
-
NORTHSTAR 1.0 FRS DELIVERY
NorthStar FRS is targeted for March 23rd:
- (Beta) trials / evaluations already ongoing
- First customer wins in place

Target JUNOS releases: 14.2R3 Special*, 14.2R4*, 15.1R1*, 15.2R1*
Supported platforms at FRS: PTX (3K, 5K), MX (80, 104, 240/480/960, 2010/2020, vMX); additional platform support in NorthStar 2.0
* Pending TRD process

NorthStar packaging & platform:
- Bare-metal application only; no VM support at FRS
- Runs on any x86 64-bit machine that is supported by Red Hat 6 or CentOS 6
- Single hybrid ISO for installation
- Based on Juniper SCL 6.5R3.0

Recommended minimum hardware requirements:
- 64-bit dual x86 processor or dual 1.8 GHz Intel Xeon E5 family equivalent
- 32 GB RAM
- 1 TB storage
- 2 x 1G/10G network interfaces
-
NORTHSTAR 1.0 H/W REQUIREMENTS: Subscription-based pricing for NorthStar
There is no dependency on motherboard, NIC cards, etc., as we support CentOS 6.5 as the host OS; verify hardware against the CentOS 6.5 supported-hardware portal. No preference on vendor.

Small (1-50 nodes):
- CPU: 64-bit dual 1.8 GHz Intel Xeon E5 family equivalent
- RAM: 16 GB
- Hard drive: 250 GB
- Network port: 1/10GE (CSE2k matches this spec)

Medium (50-250 nodes):
- CPU: 64-bit quad Intel Xeon E5520 (2.26 GHz, 8 MB L3 cache) equivalent
- RAM: 64 GB
- Hard drive: 500 GB
- Network port: 1/10GE

Large (250+ nodes):
- CPU: 64-bit quad-core Intel Xeon X5570 (2.93 GHz, 8 MB L3 cache) equivalent
- RAM: 128 GB
- Hard drive: 1 TB
- Network port: 1/10GE
-
Thank You!