Considerations for Launching MSO Business Services
TRANSCRIPT
Networking Evolution in Cloud / Datacenters

Enterprise WAN today:
• Transport dependent
• Location dependent
• Hardware dependent
• Manual (time "dependent")

vs.

Cloud / Datacenter:
• Transport independent
• Location independent
• Hardware independent
• Automated (time "independent")
Virtualized Network Services (VNS)

Enterprise WAN with VNS:
• Policy-driven automation
• Inter-operable
• Flexible deployment models
• Automated (time "independent")
• Transport independent
• Location independent
• Hardware independent

[Diagram: branch/campus, headquarters, and cloud/datacenter sites connected over Internet, IP/MPLS, and 3G/LTE transports]
APPLYING CLOUD PRINCIPLES TO ENTERPRISE WAN
Software Defined VPNs use cases:
• Leverage the virtualized cloud to lower costs
• Expedite connectivity as they add and move branch offices
• Steer traffic into data centers for advanced service chaining
• Utilize cloud-based applications and third-party services
• Easily extend VPN connectivity over wireless links to employees
• Eliminate the costs of buying and upgrading proprietary CPE
Cloud Virtualization Dynamics
• A recent survey of IT managers ranked hardware spending first, followed by legacy system modernization, then on-premises data center consolidation and optimization.
• Growing reliance on cloud-friendly networks can maximize companies' ability to leverage private and public cloud technology.
• Virtualization of datacenters using SDN/NFV technology has reoriented spending across the business landscape.
• New developments in DC virtualization:
– The modularization of software requires far more frequent updates and upgrades
– The onset of big data analytics across multiple workflows is helping collaboration
– Businesses are seeking DC environments to exploit agility and hence improve time to market
Open Platforms
• Cloud technology has quickly evolved into a multi-vendor ecosystem commonly called a "cloud stack"
• Open interfaces and protocols have contributed to real multi-vendor solutions (e.g., OpenStack's Neutron project)
• Open source projects generally ensure interoperability between solutions and can shorten development cycles
• The cloud industry has benefited from the open source approach of OpenStack as a multi-vendor cloud orchestration platform
MANAGEMENT AND ORCHESTRATION
• NFV Orchestrator (NFVO): multi-VIM management, virtual resource management, VNF placement, VNF policy enforcement
• VNF Manager (VNFM): VNF deployment, VM-level monitoring (CPU/storage)
• Virtual Infrastructure Manager (VIM): virtual resource management at the hypervisor level (compute/storage/networking)
ORCHESTRATION & ASSURANCE ARCHITECTURE FOR VNF & PNF: Going beyond ETSI NFV

[Diagram: ETSI NFV stack (OSS/BSS, Element Management, VNFs/Apps on NFVI (VIM), NFV MANO (VNFM + NFVO), distributed cloud infrastructure) extended with dynamic E2E and cross-domain Service Orchestration & Assurance (SO plus other OSS), and the ETSI NFV EM extended to VNF & PNF domain management (FCAPS & VNFM-S) with domain-specific orchestration]
[Diagram: ETSI NFV stack (Element Management, VNFs/Apps on NFVI (VIM), NFV MANO (VNFM-G + NFVO), distributed cloud infrastructure) complemented with NSO-level domain management (FCAPS + VNFM-S), resource orchestration (RO), and PNFs]
• Complement the ETSI NFV Orchestrator (NFVO) with cross-domain Service Orchestration
• E2E service orchestration across VNFs & PNFs
• Full service assurance

SO = Service Orchestration; NSO = Network Service Orchestration; RO = Resource Orchestration; VNFM = VNF Manager; FCAPS = Fault, Configuration, Accounting, Performance, Security
(*) Domain = network/application technology (IMS, EPC / IP, Transport / Cloud, SDN, etc.)
TMForum Management/Control Continuum
Service Assurance & Security
• MANO's role in service assurance:
– Collects status information and alarms related to infrastructure resources
– Helps restore service during catastrophic failures through automated deployment
• MANO and the security of the solution:
– Implements role-based access control
– Needs to adhere to national security requirements
– Allows for the definition of security zones, security groups, and appliances
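The role-based access control bullet above can be sketched in a few lines; the roles, permissions, and function here are purely illustrative, not an actual MANO API:

```python
# Minimal role-based access control sketch (hypothetical roles and
# permissions): each role maps to a set of allowed operations.
ROLE_PERMISSIONS = {
    "operator": {"view_alarms", "view_inventory"},
    "admin":    {"view_alarms", "view_inventory", "deploy_vnf", "scale_vnf"},
    "security": {"view_alarms", "define_security_zone"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Return True if the given role may perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

A real deployment would back this with an identity system and per-tenant security zones, but the lookup shape is the same.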
Model Based Orchestration
• A model- and policy-based approach to orchestration needs to be implemented
• VNFs and VNF infrastructure are defined in domain-specific languages
• Different engines can use the same model for their respective purposes: deployment, placement, scaling, healing/RCA, upgrades, etc.
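The idea that different engines consume the same model can be sketched as follows; the descriptor fields and engine functions are illustrative, not a standard DSL such as TOSCA:

```python
# One declarative VNF model (hypothetical fields), consumed by two engines.
vnf_model = {
    "name": "vFirewall",
    "image": "vfw-1.2",
    "min_instances": 1,
    "max_instances": 4,
    "placement": {"zone": "edge"},
}

def deployment_plan(model):
    # Deployment engine: derive the initial instance names from the model.
    return [f"{model['name']}-{i}" for i in range(model["min_instances"])]

def scaling_headroom(model, running: int) -> int:
    # Scaling engine: how many more instances the same model allows.
    return model["max_instances"] - running
```

The point is that the model is the single source of truth: neither engine hard-codes VNF-specific logic.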
On-boarding of VNFs: a step-by-step approach
– Deployment: automatic creation of VNF instances
– Automated healing: a set of functions is executed automatically to restore service
– Automated scaling: automatic allocation of additional resources based on demand
– Automated full life-cycle management (the ultimate objective of NFV): auto update, auto upgrade, protection from human errors
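The automated-scaling step above can be sketched as a threshold policy that decides whether to add or remove instances; the thresholds and signature are assumptions for illustration:

```python
def scaling_decision(cpu_util: float, instances: int,
                     min_inst: int = 1, max_inst: int = 4,
                     high: float = 0.8, low: float = 0.2) -> str:
    """Return 'scale_out', 'scale_in', or 'hold' given average CPU utilization,
    respecting the instance bounds from the VNF model."""
    if cpu_util > high and instances < max_inst:
        return "scale_out"
    if cpu_util < low and instances > min_inst:
        return "scale_in"
    return "hold"
```

Real orchestrators add damping (cooldown timers, sustained-breach windows) so transient spikes do not cause oscillation.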
OSS AUTOMATION & SIMPLIFICATION WITH DYNAMIC SERVICE OPERATIONS READY FOR SERVICES ON HYBRID CURRENT AND NFV/SDN ENABLED NETWORKS
Future OSS Strategy: A new operational model
• Dedicated OSS stack per network technology
• Basic fulfillment automation
• Limited assurance automation
• No dynamicity due to static OSS systems
• No integration of Network & IT operations
FROM STATIC NETWORK MANAGEMENT TO DYNAMIC OPERATIONS
Service Models
Technology-independent Dynamic Service Operations:
• Maximize E2E dynamic service fulfillment automation
• Optimize E2E dynamic service assurance automation
• Abstract resources with innovative service models (SURE)
[Diagram: modular OSS solution with Customer Fulfillment OSS (dynamic fulfillment) and Customer Assurance OSS (dynamic assurance) replacing per-technology assurance OSS stacks]
Conclusion
• Opportunity for MSOs to bundle VAS with connectivity services
• Implement SDN principles to make the network consumable (inter-DC and intra-DC)
• Open platforms with open interfaces/protocols help multi-vendor interoperability
• The NFV platform needs to be model-based and address security
• Full NFV automation is a step-by-step process
• The current OSS needs to transform to dynamic operations
Orchestration
• Service Orchestration
– Term defined in the Software Defined Network architecture
– Responsible for managing the end-to-end network required for services that use that network
– Includes physical as well as virtual equipment and resources
• Network Function Virtualization Orchestration
– Term defined in the ETSI MANO framework
– Responsible for the management of the virtual network infrastructure required for any given service
• The Service Orchestrator, together with an NFV Orchestrator (NFVO), can implement both the end-to-end connectivity and the configuration of virtualized network functions and physical network equipment needed to fully realize a service that a service provider wishes to make available
Orchestration in Remote PHY
• Service Orchestrator manages the physical and logical infrastructure
• Virtualized infrastructure for the DOCSIS MAC domain and virtual OLT functions is managed by the NFVO
• Service Orchestrator works with the NFVO to enable new CCAP Cores and virtual OLTs, as well as managing physical network connections to on-board new Remote PHY devices
Remote PHY and Virtualization
• Service Orchestrator prepares DOCSIS services for new RPDs
• New DOCSIS or OLT services enabled with the SDN Controller and NFVO
The Gadget Manager issue:
How do we efficiently and quickly manage the Remote PHY, DOCSIS, and Aux Cores from initial install?

Gadget Management
• Manages aspects of the DOCSIS and PON virtualized infrastructure
– Provides the presentation-layer interface to the management and provisioning planes in the network
– Manages ancillary services like:
• Licensing
• Mapping core resources to remote physical resources
• Managing the legacy OSS interface to offload it from the core and remote devices
• Elastic resource:
– Scales as needed to allow remote devices to be as lightweight as possible
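The core-to-remote-resource mapping mentioned above can be sketched as a simple registry; the class and data model are hypothetical, chosen only to illustrate keeping state off the lightweight remote devices:

```python
# Hypothetical registry mapping Remote PHY devices (RPDs) to CCAP cores,
# held in the Gadget Manager so the RPDs themselves stay stateless.
class GadgetRegistry:
    def __init__(self):
        self._rpd_to_core = {}

    def register(self, rpd_id: str, core_id: str):
        """Record which core serves a given RPD."""
        self._rpd_to_core[rpd_id] = core_id

    def core_for(self, rpd_id: str):
        """Look up the core for an RPD, or None if unknown."""
        return self._rpd_to_core.get(rpd_id)

    def rpds_on_core(self, core_id: str):
        """List all RPDs currently mapped to a core."""
        return sorted(r for r, c in self._rpd_to_core.items() if c == core_id)
```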
An open software architecture for a remote physical access node
Alon Bernstein, Distinguished Engineer, Cisco Systems
Open Source
• A way for commercial companies to share the cost of development of common software
• Allows vendors to focus on differentiation rather than the common parts
CableLabs Open Remote PHY Device
• 49 developers
• 13 companies
• OpenRPD (community-restricted) access: https://community.cablelabs.com/wiki/display/C3/OpenRPD+Reference+Software+C3+Home
• C3 public access: https://community.cablelabs.com/wiki/display/C3FO/Common+Code+Community+-+Public
• CL blog on OpenRPD: http://www.cablelabs.com/new-open-source-initiative-at-cablelabs/
Conclusion
• More in the paper…
• Even better: read the code!
• Contact CableLabs to join the project
Towards SDN for Optical Access Networks
Eugene (Yuxin) Dai, Cox Communications
Spring Technical Forum, May 17, 2016, Boston, USA
Outline
• Current literature
• Optical access network data plane virtualization
• Optical access network control plane and data plane separation
• SDN for Passive Optical Networks
Current literature on SDN for GPON
• The current literature on SDN for GPON generally suggests:
– Move the DBA function to an external SDN controller
– Let an external SDN controller use OpenFlow to control GEM/XGEM Port-ID, Alloc-ID, T-CONT, etc.
– In parallel, for EPON this would mean moving LLID, MPCP, etc. to an external SDN controller
• There are several problems with those approaches:
– GEM/XGEM Port-ID, LLID, T-CONT, etc. are local resources with no global meaning, and they sit at a very low layer; letting OpenFlow control them could be problematic
– DBA is a closed protocol between the OLT and ONUs; it may not be feasible to move it to an external SDN controller
– Some of those parameters have stringent timing and delay requirements, for example MPCP
Back to the fundamentals: How to separate control plane from data-forwarding plane for PON?
Current Optical Access Network Architecture
• Core optical networks already have separated control and data planes
[Diagram: core network with an MPLS control plane and separated control/data planes at each node; edge router in the central office; Ethernet aggregation switch; access node (active or passive); customers; an SDN controller for centralized path computation]
• The MPLS control plane can be easily extended from the core network to edge routers
• An SDN controller is used for centralized path computation
• The control plane of the AON access network needs to be separated from its data plane
Optical Ethernet Data Plane Virtualization
• Ethernet physical-layer topologies: P2P or mesh
• 802.1D Ethernet topology: tree and branches
• Using 802.1Q and 802.1ad, the Ethernet data plane can be virtualized into VLANs
• Each VLAN has its own topology
• 802.1ak Multiple VLAN Registration Protocol (MVRP) creates and manages VLANs
[Diagram: physical topology vs. Ethernet topology vs. VLAN topology, with frame formats and VLAN control protocols:]
• 802.1D: DA | SA | Ethertype | Payload
• 802.1Q: DA | SA | 802.1Q tag (VID) | Ethertype | Payload
• 802.1ad: C-DA | C-SA | S-tag (S-VID) | C-tag (C-VID) | Ethertype | Payload
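The 802.1Q tag layout above can be checked in a few lines: the tag follows the source address, with TPID 0x8100 and the VID in the low 12 bits of the TCI. A sketch over a raw frame, assuming a single tag (no 802.1ad stacking):

```python
def vlan_id(frame: bytes):
    """Extract the 802.1Q VLAN ID from a raw Ethernet frame, or None if untagged."""
    # Bytes 0-5: DA, 6-11: SA, 12-13: TPID (0x8100 marks an 802.1Q tag).
    if len(frame) < 16 or frame[12:14] != b"\x81\x00":
        return None
    tci = int.from_bytes(frame[14:16], "big")
    return tci & 0x0FFF  # VID occupies the low 12 bits of the TCI

# Example: tagged frame with VID 100, followed by an IPv4 Ethertype (0x0800).
frame = bytes(6) + bytes(6) + b"\x81\x00" + (100).to_bytes(2, "big") + b"\x08\x00"
```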
Control plane separation
• Ethernet control plane: multiple VLAN control protocols
• An SDN controller can control both the MPLS and Ethernet control planes
[Diagram: SDN controller over the MPLS control plane (core) and the virtualized data plane with multiple VLAN control protocols; central office with edge router and aggregation switch; OLT and ONTs toward the home network, modeled as an N+1 port Ethernet switch with Ethernet interfaces]
• A PON can be virtualized as a distributed N+1 port VLAN switch
• A common SDN controller controls both the core network and the PON
SDN view of Core and PON Networks
[Diagram: SDN controller with applications (residential, business, IPTV, mobile backhaul, …) on top; the SDN control plane exposes network APIs upward, holds a global view with abstract network topologies, and drives switch APIs down to the data plane: core network, central-office edge router, and the PON modeled as an N+1 port VLAN switch (OLT and ONTs with Ethernet interfaces)]
• A unified SDN control plane controls both the core and PON networks
• In the global map abstraction, the PON access network is represented by VLANs
Conclusions
• VLANs can be used to virtualize Ethernet-based AON and PON
• The control plane and data plane of a PON can be separated using VLAN control protocols
• A unified SDN control plane can control both optical access networks and core networks
SDN – Up Our Ops Game
• Software defined networks are fluid
• Distributed applications built on top of them are increasingly complex
• Troubleshooting and capacity planning in silos is insufficient, and will become even more so
• Need a robust, scaled-out telemetry collection system that can collect data from the network and applications
Key Concerns
• Scale: tens of millions of telemetry points/second (eventually!)
• Ability to use data for many needs: operational, business intelligence, customer care, recommendations, and so on
• Collect as seamlessly as possible from an extremely heterogeneous environment
Key Technologies
• Apache Kafka: scalable distributed log, modern equivalent to a message bus
• Apache Avro: schema format that supports compatibility between schemas
• Apache NiFi: transformation and data cleansing
• Homegrown schema registry and search
• Homegrown HTTP ingest
• Heavily modified server agent (Heka)
Apache Kafka
• Distributed log: modern, fault-tolerant, highly scalable messaging system
• Ability to handle many consumers per topic, node failures, and different SLOs per topic
• Proven ability to scale to tens of millions of messages/second
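The property that lets one topic serve many consumers is that a topic is an append-only log and each consumer tracks its own read offset. A toy in-memory sketch of that model (this is an illustration of the semantics, not the Kafka API; real Kafka adds partitions, replication, and retention):

```python
# Toy append-only log with per-consumer offsets, mimicking Kafka's topic model.
class TopicLog:
    def __init__(self):
        self._log = []       # the append-only message log
        self._offsets = {}   # consumer id -> next offset to read

    def produce(self, msg):
        self._log.append(msg)

    def consume(self, consumer: str, max_msgs: int = 10):
        """Return the next batch for this consumer and advance its offset."""
        start = self._offsets.get(consumer, 0)
        msgs = self._log[start:start + max_msgs]
        self._offsets[consumer] = start + len(msgs)
        return msgs
```

Because offsets are per-consumer, a slow business-intelligence job and a real-time alerting pipeline can read the same telemetry stream independently.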
Apache Avro
• Schema and serialization format for structured data
• Specifies a schema format, as well as binary and JSON serialization
• Semantics to identify whether two schemas are compatible, helping to evolve data over the long term without complete chaos
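The compatibility idea can be illustrated with a simplified check in the spirit of Avro's schema resolution (this is not the Avro library's algorithm, and the field shape is a hypothetical simplification): a reader schema can handle a writer's records if every reader field was either written or carries a default.

```python
def can_read(reader_fields, writer_fields):
    """Simplified schema-compatibility check: each reader field must be
    present in the writer's schema or carry a default value.
    Fields are dicts like {"name": ..., "default": ...}."""
    written = {f["name"] for f in writer_fields}
    return all(f["name"] in written or "default" in f for f in reader_fields)

# Evolving a telemetry record: v2 adds a "unit" field with a default,
# so new readers can still consume old (v1) data.
v1 = [{"name": "host"}, {"name": "metric"}]
v2 = [{"name": "host"}, {"name": "metric"}, {"name": "unit", "default": "ms"}]
```

This is why the slide stresses defaults: they are what makes adding fields a backward-compatible change.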
Apache NiFi
• Data collection and manipulation system
• GUI makes it possible for non-programmers to do fairly sophisticated manipulations
• Ideal for data cleansing without needing a programmer for everything
Heka and HTTP Ingest
• Ingest for server-side and CPE components, respectively
• Integrates with the schema management system
• Avoids tons of custom integration work