
Multi-Service Backbone Design

Drivers behind Next Generation Networks

Vijay Gill <[email protected]> Jim Boyle <[email protected]>

Multi-Service Core

What is it?
Why multi-service?
Options for delivering multiple services

What

mul·ti·serv·ice adj.

Offering or involving a variety of services

IP, Voice, Private Line, VPNs, ATM, FR

Why

$700 billion voice, fax, and modem market
Telco companies' day job
Voice-based communications is still ~90% of global telco revenue
A voice bit is ~14x more expensive than a data bit

Ways to Deliver Multiple Services

Multiple backbones: one for each service (SONET, ATM/FR, IP)
Common backbone: layer multiple services on top of a common transport fabric

Multiple Backbones

Application aware: dedicated infrastructure used to implement each application (PSTN, FR, ATM, Private Line)
Discourages diversity: needs large market demand before it is cost effective to go out and build the supporting ATM/SONET infrastructure
More boxes, complex management issues
Hard to upgrade bandwidth in sync across all backbones

Common Backbone

Application unaware: characterized by the new breed of telcos
Rapid innovation
Clean separation between transport, service, and application
Allows new applications to be constructed without modification to the transport fabric
Less complex (overall)

Why A Common Backbone?

Spend once, use many
Easier capacity planning and implementation
Elastic demand: a 1% price drop results in a 2-3% rise in demand (Matt Bross, WCG)
An increase of N at the edge necessitates 3-4x N core growth (see the sketch below)
Flexibility in upgrading bandwidth allows you to drop pricing faster than rivals
Source: KPCB
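As a back-of-the-envelope illustration of the elasticity argument above, the short Python sketch below compounds the quoted "1% price cut gives a 2-3% demand rise" ratio over repeated cuts and applies the 3-4x edge-to-core multiplier. The midpoint values (2.5 and 3.5) and the ten-cut scenario are assumptions made purely for illustration, not figures from the slides.

```python
# Back-of-the-envelope sketch of the elasticity argument (illustrative values only).
# Assumes: each 1% price cut lifts demand 2-3% (midpoint 2.5 used), and edge growth
# of N drives roughly 3-4x N core growth (midpoint 3.5 used).

def demand_after_price_cuts(cuts_pct, elasticity=2.5):
    """Relative demand after a series of price cuts, each given in percent."""
    demand = 1.0
    for cut in cuts_pct:
        demand *= 1.0 + elasticity * (cut / 100.0)
    return demand

def implied_core_growth(edge_growth, multiplier=3.5):
    """Extra core capacity implied by a given amount of extra edge traffic."""
    return multiplier * edge_growth

if __name__ == "__main__":
    demand = demand_after_price_cuts([1.0] * 10)   # ten successive 1% cuts
    edge_growth = demand - 1.0                     # extra edge traffic created
    print(f"relative demand after cuts: {demand:.2f}x")             # ~1.28x
    print(f"implied extra core capacity: {implied_core_growth(edge_growth):.2f}x")
```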

[Chart: historical and forecast market price and unit cost of a transatlantic STM-1 circuit (on a 25-year IRU lease), 1996-2005; y-axis: price per STM-1 ($m); series: PRICE and COST]

Bandwidth Blender - Set on Frappe

Some Facts

There is no absolute way to measure any statistic regarding the growth of the Internet

The Internet is getting big, and it's happening fast

Source: Robert Orenstein

“Already, data dominates voice traffic on our networks”

- Fred Douglis, AT&T Labs

Solution

Leverage packet-based technology
Multi-service transport fabric
Optimize for the biggest consumer: IP
Provide a loosely coupled access point for service-specific networks (e.g. IP good, per-call signaling bad)

Solution

[Diagram: Internet (IP), VPN, Voice/Video, and CES services riding on a common Multi-Service IP Transport Fabric]

Requirements

Isolating inter-service routing impacts
Address space protection/isolation
Fast convergence (service restoration)
Providing CoS to services

Requirements

Support multiple services: Voice, VPN, Internet, Private Line
Improving service availability with stable approaches where possible
LSPs re-instantiated as p2p links in the IGP, e.g. an ATL-to-DEN LSP looks like a p2p link with metric XYZ (see the sketch below)
Run multiple instances of IGPs (TE and IP)
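To make the "LSP as a p2p IGP link" point above concrete, here is a minimal Python sketch: a toy SPF (Dijkstra) over a hypothetical topology, where advertising the ATL-to-DEN LSP as an ordinary link with a chosen metric changes the path selected. All node names and metrics are invented for illustration.

```python
import heapq

def spf(graph, src, dst):
    """Toy SPF: Dijkstra over {node: {neighbor: metric}}; returns (path, cost)."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in graph.get(node, {}).items():
            new_cost = cost + metric
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = new_cost, node
                heapq.heappush(heap, (new_cost, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical physical topology; metrics are made up for the example.
igp = {
    "ATL": {"DFW": 10, "IAD": 8},
    "DFW": {"ATL": 10, "DEN": 10},
    "IAD": {"ATL": 8, "CHI": 9},
    "CHI": {"IAD": 9, "DEN": 9},
    "DEN": {"DFW": 10, "CHI": 9},
}

print(spf(igp, "ATL", "DEN"))   # (['ATL', 'DFW', 'DEN'], 20) over physical links

# Re-instantiate the ATL-to-DEN TE LSP in the IGP as a p2p link with metric 15 ("XYZ").
igp["ATL"]["DEN"] = 15
igp["DEN"]["ATL"] = 15

print(spf(igp, "ATL", "DEN"))   # (['ATL', 'DEN'], 15): SPF now prefers the LSP "link"
```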

Stabilize The Edge

Stabilize The Core

Global instability propagated via BGP
Fate sharing with the global Internet

All decisions are made at the edge where the traffic comes in

Rethink functionality of BGP in the core

CoS

Mark service bits upon ingress
WRR on trunks; configure max time-in-queue (see the sketch below)
Avoid congestion, but when congested, monitor that traffic is delivered in line with objectives
Crude (compared to what?) but effective
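As a rough illustration of the CoS approach above, here is a minimal Python sketch of WRR over per-class queues on a trunk, with a crude max time-in-queue check. The class names, weights, and 50 ms objective are assumptions for the example, not values from the slides.

```python
import collections
import time

class WrrTrunk:
    """Toy weighted round robin over per-class queues on a single trunk."""

    def __init__(self, weights, max_time_in_queue=0.050):
        self.weights = weights                       # e.g. {"voice": 4, "vpn": 2, "best-effort": 1}
        self.queues = {cls: collections.deque() for cls in weights}
        self.max_time_in_queue = max_time_in_queue   # seconds; crude delay objective

    def enqueue(self, cls, packet):
        # In the real design, service bits are marked on ingress; here the class is given.
        self.queues[cls].append((time.monotonic(), packet))

    def dequeue_round(self):
        """One WRR round: each class may send up to `weight` packets."""
        sent, outside_objective = [], []
        for cls, weight in self.weights.items():
            for _ in range(weight):
                if not self.queues[cls]:
                    break
                enqueued_at, packet = self.queues[cls].popleft()
                if time.monotonic() - enqueued_at > self.max_time_in_queue:
                    outside_objective.append((cls, packet))   # delivered, but late
                sent.append((cls, packet))
        return sent, outside_objective

if __name__ == "__main__":
    trunk = WrrTrunk({"voice": 4, "vpn": 2, "best-effort": 1})
    for i in range(6):
        trunk.enqueue("voice", f"v{i}")
        trunk.enqueue("best-effort", f"be{i}")
    sent, late = trunk.dequeue_round()
    print(sent)   # four voice packets, then one best-effort packet
    print(late)   # whatever missed the max time-in-queue objective
```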

Implementation Approaches

Pure IP: Layer 2 tunneling (aka CCC, AToM), RFC 2547 (base and bis), merged IGP, multi-process IGP
IP + Optical: virtual fiber, mesh protection, GMPLS (UNI, NNI)

IP Only

[Layer diagram: IP / Routers running directly over DWDM / 3R over fiber]

• Removal of an entire layer of active optronics
• Directly running on DWDM
• Technology for Private Lines and Circuit Emulation isn't here yet
• Fate sharing with the Global Internet

LSP Distribution

LDP alongside RSVP: routers on the edge of the RSVP domain do the fan-out
Multiple levels of label stacking (see the sketch below)
Backup LSPs: primary and backup in the RSVP core
Speeds convergence; removes hold-down issues (signaling too fast in a bouncing network)
Protect path should be separate from the working path
There are other ways, including RSVP E2E

[Diagram: LDP at the service edge, RSVP-TE in the core]
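A minimal Python sketch of the label-stacking idea on this slide: the service edge pushes an LDP label toward the egress PE, and the edge of the RSVP domain pushes an RSVP-TE transport label on top, so the core swaps only the outer label. All router names and label values are invented for illustration.

```python
# Toy model of two-level label stacking: LDP (service) inside, RSVP-TE (transport) outside.
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: str
    label_stack: list = field(default_factory=list)   # top of stack = last element

    def push(self, label):
        self.label_stack.append(label)

    def pop(self):
        return self.label_stack.pop()

# Hypothetical bindings learned from LDP and RSVP-TE signaling.
LDP_LABEL_FOR_EGRESS_PE = {"PE-DEN": 100012}
RSVP_LABEL_FOR_LSP = {("P-ATL", "P-DEN"): 299776}

def service_edge_ingress(pkt, egress_pe):
    """Service edge (LDP): push the label bound to the egress PE."""
    pkt.push(LDP_LABEL_FOR_EGRESS_PE[egress_pe])

def rsvp_domain_ingress(pkt, head_end, tail_end):
    """RSVP domain edge (fan-out point): push the TE LSP's transport label on top."""
    pkt.push(RSVP_LABEL_FOR_LSP[(head_end, tail_end)])

pkt = Packet("customer frame")
service_edge_ingress(pkt, "PE-DEN")           # inner LDP label
rsvp_domain_ingress(pkt, "P-ATL", "P-DEN")    # outer RSVP-TE label
print(pkt.label_stack)                        # [100012, 299776]; core swaps only 299776
```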

IP + Optical

[Layer diagram: IP / Routers over Optical Switching over DWDM / 3R over fiber]

• Virtual fiber: embed an arbitrary fiber topology onto the physical fiber
• Mesh restoration
• Private Line
• Increased velocity of service provisioning
• Higher cost, added complexity

Peter’s “Ring of Fire”

[Diagram: metro collectors at the edge feeding a core of optical switches and DWDM terminals interconnected by backbone fiber]

IP + Optical Network

[Diagram: big flows bypassing intermediate router hops]

Out of port capacity or switching speed on routers? Bypass intermediate hops.

Dual Network Layers

Optical core (DWDM fronted by OXC): fast lightpath provisioning; attach metro collectors in mega PoPs via multiple OC-48/192 uplinks
Metro/sub-rate collectors: multiservice platforms and edge optical switches; groom into lightpaths or dense fiber; demux in the PoP (light or fiber)
Eat own dog food: utilize customer private line provisioning internally to run the IP network

Questions

Jim Boyle, Vijay “Route around the congestion, we must” Gill

Nota Bene – This is not a statement of direction for our companies!

Many thanks to Tellium (Bala Rajagopalan and Krishna Bala) for providing icons at short notice!