Project CO2: Technical Overview Inder Monga {imonga@nortelnetworks.com} Advanced Technology CTO Office Billerica MA


Page 1:

Project CO2: Technical Overview

• Inder Monga

• {[email protected]}

• Advanced Technology
• CTO Office
• Billerica MA

Page 2:

CO2 Interfaces

CO2 middleware provides applications a service interface offering a virtualized network view, dynamic provisioning, fault notification and performance monitoring

3rd party Service Creation and Mgmt

• Web Interface: HTTP
• Legacy IP/QoS: Classical RSVP
• (G)MPLS: CR-LDP, RSVP-TE
• UNI: ASTN UNI, MEF UNI
• Layer 2/RPR: SNMP, UNI, TL1
• CIM

PLUGINS

Service API

API to capture and notify events

CO2 Intelligence and Processing

Various Signaling Protocols Implemented

Policy Control and OAM

OAM API

Network provisioning

Events
• Manual
• Traffic inspection
• Application request

• Fault notification
• Abstracted network and performance view

Page 3:

Multiple Applications drive CO2

XML Messaging Service Bus

Storage Server Application

CO2 XML Application Interface

Core CO2 Service Intelligence

Signaling and Management Layer

Radiologist Workstation: requests data

Time reached to start backup

Network Elements/Control Plane (OM3500, PP8600)/ASTN, GMPLS

Medical Server Application

GUI Station

Page 4:

Inside the CO2 box—a notional view

CO2-to-Control Plane scope

Networking stack, packet filters

CO2-to-CO2 scope

3rd party applications

CO2-to-OAM&P scope

Policies

Filtered content feed

Page 5:

Programming Model: Sample API invocations

Register (Authentication Info)

Register (Handle, SupportedServices)

Request (ServiceLevel, XferAmnt, Time, Priority)

Response (Status)

NetPoliciesDownload(PolicySchema)

SLAViolation (Latency)

IncTime (DeltaT)

Alert (Port X down)

Release (Handle)

Application

CO2 Software

CO2 Features illustrated:

• Service Introspection

• Dynamic Policy control

• Abstracted service interface

• SLA Monitoring and Verification

• Error compartmentalization
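The Register → Request → Response → Release sequence above can be sketched in Java. The class and method names below are hypothetical (the deck does not show the actual CO2 service API), and the admission rule is a toy placeholder:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the CO2 service-API handshake shown on this slide.
public class Co2ApiSketch {
    private final Map<Integer, String> sessions = new HashMap<>();
    private int nextHandle = 1;

    // Register(AuthenticationInfo) -> returns a session handle
    public int register(String authInfo) {
        int handle = nextHandle++;
        sessions.put(handle, authInfo);
        return handle;
    }

    // Request(ServiceLevel, XferAmnt, Time, Priority) -> Response(Status)
    public String request(int handle, String serviceLevel, long xferBytes,
                          int timeSec, int priority) {
        if (!sessions.containsKey(handle)) return "DENIED: unknown handle";
        // Toy admission rule: high priority or small transfers are granted.
        return (priority >= 5 || xferBytes < 1_000_000_000L) ? "GRANTED" : "QUEUED";
    }

    // Release(Handle) ends the session.
    public boolean release(int handle) {
        return sessions.remove(handle) != null;
    }

    public static void main(String[] args) {
        Co2ApiSketch co2 = new Co2ApiSketch();
        int h = co2.register("app-credentials");
        System.out.println(co2.request(h, "Gold", 20_000_000_000L, 3600, 7));
        co2.release(h);
    }
}
```

The unknown-handle branch mirrors the "error compartmentalization" bullet: a bad session cannot affect other registered applications.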

Page 6:

CO2 Detailed Block Diagram

LEGEND

CO2 module

External Module

Internal API

External API

Cut-through Manager (CTM)

Advanced Resv and ToD Scheduler

Network Provisioning API

Service Function call API Event interpretation API

Policy and OAM I/FService Creation and

Control

Service Creation API Policy and OAM API

Acceleration Services

Client Registration

App Signaling/Messaging

UNI Session Manager

UNI Signalling

RPR Call Manager

SNMP

GMPLS Call Manager

GMPLS Signalling

Mechanisms

Application/Personality Smarts

QoS Manager

Application Policy Database

COPS-PR

NT Extensions

SLA Manager

Module repository/maintenance

Module Creation

Module Functioning

Content Forwarding

Content Redirection

ACP Services

ACP Resource Manager (ARM)

VPN Controller (VPNC)

CO2 Config Manager

GUI

Application AAA

Personality-Service API

Service-Resource API

Application Policy Decision Point

Service Policy Database

Resource Policy Decision Point

Service Policy Decision Point

Service Smarts

Resource Smarts

Meta Provisioning Service

Resource Policy Database

Resource Information Service

CO2 Inter-Domain Manager

CO2 RIS Database

Resource Usage Allocation Resource Usage Optimization

Resource Usage Feedback

TL1

Service AAA

Resource AAA

POLICY

API

Monitoring & Topology

Module Management

Module Control

Page 7:

A day in the life: Application-driven provisioning

Application Software

(AppSoft)

Policy TOD

Configuration State of network (bandwidth, VLANs)

Operational State of network (errors, latency)

Smart B/W Manager


1. AppSoft sends a request for bandwidth to CO2
2. After JMS and XML processing, the message is sent to the smart bandwidth management module (SBM)
3. SBM consults the policy engine with network policies to figure out the available bandwidth for this request
4. The policy engine looks at general and specific Time of Day (TOD) policies to calculate the allowable bandwidth for this request
5. SBM consults the configuration state block to figure out the total bandwidth already allocated for other ongoing requests, plus the bandwidth available for this node
6. Check the operational state of the network to ensure the proper performance level is met for the request
7. Send configuration commands with the proper attributes to meet the AppSoft request
8. The meta signaling API uses the right signaling blocks to accomplish dynamic provisioning.
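Steps 2-7 above can be collapsed into a single admission check: policy cap, then configured capacity, then operational health, then provision. The class name, units and thresholds below are illustrative assumptions, not the CO2 implementation:

```java
// Hypothetical sketch of the smart bandwidth management (SBM) decision:
// policy check, configuration-state check, operational-state check, then
// provisioning. Names and rules are illustrative, not the CO2 code.
public class SmartBandwidthManager {
    private final int todPolicyCapMbps;    // step 4: Time-of-Day policy limit
    private final int linkCapacityMbps;    // step 5: configuration state
    private int allocatedMbps = 0;         // step 5: ongoing allocations
    private boolean networkHealthy = true; // step 6: operational state

    public SmartBandwidthManager(int todPolicyCapMbps, int linkCapacityMbps) {
        this.todPolicyCapMbps = todPolicyCapMbps;
        this.linkCapacityMbps = linkCapacityMbps;
    }

    public void setNetworkHealthy(boolean healthy) { networkHealthy = healthy; }

    // Steps 3-7 collapsed into one admission decision.
    public boolean requestBandwidth(int mbps) {
        if (mbps > todPolicyCapMbps) return false;                 // policy says no
        if (allocatedMbps + mbps > linkCapacityMbps) return false; // no capacity left
        if (!networkHealthy) return false;                         // perf level unmet
        allocatedMbps += mbps;                                     // step 7: provision
        return true;
    }

    public int allocated() { return allocatedMbps; }

    public static void main(String[] args) {
        SmartBandwidthManager sbm = new SmartBandwidthManager(500, 1000);
        System.out.println(sbm.requestBandwidth(400)); // within cap and capacity
        System.out.println(sbm.requestBandwidth(600)); // exceeds the ToD cap
    }
}
```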

Indicates blocks that are ongoing activities regardless of Application messages

Meta Signaling Layer

JMS and XML API

Page 8:

A day in the life: Application-driven provisioning (contd.)

Page 9:

A day in the life: Error monitoring

1. FMon requests CO2 to monitor errors on VLAN 500
2. After JMS/XML processing, the message is delivered to CO2's Error Monitoring and Reporting module (EMR)
3. EMR reviews network topology, looks up the policy configuration for that VLAN and downloads the appropriate policies
4. EMR looks up Service Discovery information
5. EMR signals for SONET errors based on the discovered service capabilities
6. Network element responds with path and link errors
7. EMR requests Ethernet errors
8. Network elements respond with Ethernet errors
9. EMR updates the state of the network, constructs a JMS/XML message with the VLAN-specific errors and delivers it to CO2's JMS/XML module
10. CO2's JMS/XML module responds to FMon with the network VLAN errors that the application is interested in viewing

Fault Monitoring Application (FMon)

Policy TOD

Configuration State of network (bandwidth, VLANs)

Operational State of network (errors, latency)

Error Monitoring & Reporting

Indicates blocks that are ongoing activities regardless of Application messages

Meta Signaling Layer

JMS and XML API
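The EMR flow amounts to collecting per-layer errors from network elements and filtering them by VLAN for the requesting application. A minimal sketch with invented class names (the actual EMR module is not shown in the deck):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the EMR flow: record SONET and Ethernet errors
// reported by elements, then report only the errors on the VLAN the
// application asked about. Class names are illustrative.
public class ErrorMonitor {
    static class NetError {
        final int vlan; final String layer; final String detail;
        NetError(int vlan, String layer, String detail) {
            this.vlan = vlan; this.layer = layer; this.detail = detail;
        }
    }

    private final List<NetError> state = new ArrayList<>(); // network state

    // Steps 5-8: network elements respond with errors; EMR records them.
    public void recordError(int vlan, String layer, String detail) {
        state.add(new NetError(vlan, layer, detail));
    }

    // Steps 9-10: report only the VLAN-specific errors to the application.
    public List<NetError> errorsForVlan(int vlan) {
        List<NetError> out = new ArrayList<>();
        for (NetError e : state) if (e.vlan == vlan) out.add(e);
        return out;
    }

    public static void main(String[] args) {
        ErrorMonitor emr = new ErrorMonitor();
        emr.recordError(500, "SONET", "path error");
        emr.recordError(500, "Ethernet", "CRC errors");
        emr.recordError(200, "Ethernet", "link flap");
        System.out.println(emr.errorsForVlan(500).size()); // 2
    }
}
```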

Page 10:

A day in the life: Policy-based control

1. StorageApp and VideoApp request a service affecting network resources
2. After JMS/XML processing, the message is delivered to CO2's Application Message Handler module (AMH)
3. AMH authenticates/authorizes and downloads the appropriate administrative policies relevant to the application request
4. AMH uses the policy information to construct a detailed service request to the Smart BW Mgr
5. QoM queries the Policy module for service policies relevant to the application requests
6. QoM queries the operational state of the network and the configuration state of the network to get reachability and utilization information
7. QoM makes admission control decisions based on the applications' service requirements, policies and its resource knowledgebase
8. QoM provisions resources, if necessary, to meet StorageApp's request
9. QoM sends a negative response to VideoApp's request (lower priority than StorageApp)
10. Based on policies and network capabilities, CO2 can set up a new connection to grant VideoApp's request if current capabilities cannot satisfy it

StorageApp

VideoApp

QoS Manager

Application Message Handler

Policy

Configuration State of network (bandwidth, VLANs)

Operational State of network (utilization, errors, latency)

Meta Signaling Layer

JMS and XML API
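The admission decision in steps 7-9 can be sketched as priority-ordered admission against a fixed bandwidth pool, which is why StorageApp is granted and VideoApp refused. Names and numbers below are illustrative, not the QoS Manager's actual algorithm:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the QoS Manager (QoM) admission decision:
// admit requests in priority order until the resource pool runs out.
public class QosManager {
    static class Request {
        final String app; final int mbps; final int priority;
        Request(String app, int mbps, int priority) {
            this.app = app; this.mbps = mbps; this.priority = priority;
        }
    }

    // Admit in descending-priority order; return the granted apps.
    static List<String> admit(List<Request> reqs, int capacityMbps) {
        List<Request> sorted = new ArrayList<>(reqs);
        sorted.sort(Comparator.comparingInt((Request r) -> r.priority).reversed());
        List<String> granted = new ArrayList<>();
        int used = 0;
        for (Request r : sorted) {
            if (used + r.mbps <= capacityMbps) {
                used += r.mbps;
                granted.add(r.app);
            }
        }
        return granted;
    }

    public static void main(String[] args) {
        List<Request> reqs = List.of(
            new Request("VideoApp", 600, 2),
            new Request("StorageApp", 800, 5));
        System.out.println(admit(reqs, 1000)); // only StorageApp fits
    }
}
```

Step 10 corresponds to raising `capacityMbps` by provisioning a new connection, after which VideoApp would also be admitted.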

Page 11:

CO2 with CIM

• Management of CO2
  – CIM used opportunistically to manage CO2 configuration
  – Utilize existing CIM policy model
  – Extend existing models for CO2 smarts management

• Management of CO2 Services
  – Application interface to CO2 services currently via a proprietary XML API
  – Investigating CIM for modeling CO2 service interface
    • Requires extensive schema work to extend the CIM Core Model

3rd party Service Creation and Mgmt

API

Service API

Application API (CIM??)

CO2 Intelligence and Processing

Various Signaling Protocols

Policy Control and OAM

OAM API (CIM)

• Manual
• Traffic inspection
• Application request

• Event notification
• Virtualized network and performance view

• HTTP, IP/DiffServ, (G)MPLS, UNI, SNMP, TL1, CIM, …

Page 12:

CO2 Module Coding

• Module & Code
  – A module is an encapsulation of code providing one or more particular features/functions, e.g., SNMP, SLA, ToD, XML parsing
  – A CO2 module is implemented in Java + native code

• Java package
  – Interface class: .java
  – Implementation class: .java
  – Makefile
  – Module profile: .mp
  – Module policy: .my
  – Doc: README
  – Binaries: .class
  – Unsigned jar: .jar
  – Signed jar: .sjar

• Examples
  – Unsecured: examples/hello/server
  – Secured: examples/policy/source

Network

Module

Module

Client

Network signaling

API invocation

App messaging

CO2

GUI

Native access

Device
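The interface-class/implementation-class split listed above can be sketched as follows. The real CO2 SDK base types are not shown in the deck, so these names are hypothetical:

```java
// Hypothetical sketch of a CO2-style module: one interface class exposing
// the feature, one implementation class behind it. Illustrative only.
interface HelloModule {
    String serve(String client); // the feature this module exposes
}

class HelloModuleImpl implements HelloModule {
    @Override
    public String serve(String client) {
        return "hello, " + client;
    }
}

public class ModuleDemo {
    public static void main(String[] args) {
        HelloModule m = new HelloModuleImpl(); // clients see only the interface
        System.out.println(m.serve("app"));
    }
}
```

The interface/implementation split is what lets a signed jar replace a module's implementation without touching its clients.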

Page 13:

CO2 CVS Source Directory Tree

tests

storage

grid

client

gui

skeleton

profiles

services

slam

qos

vpn

native

omninet

pp8600

om3500

common

protocols

resource

mib

networks

metasignalling/drivers

runtime

jars

docs

contrib

tests

etc

common

appmgr

policy

examples

templates

service

network

application

others

CO2

co2/

snmp

tl1

uni

scripts

config

com/nortelnetworks/co2/

Page 14:

Standards

• Protocols:
  – Messaging
    • JMS 1.1 (J2EE 1.4.1)
    • XML Parser: Xerces 1.4.4
  – Signaling
    • RSVP, RFC 3209
    • OIF UNI 1.0
    • Alteon NAAP
    • MEF UNI *
    • GMPLS *
  – Policies
    • Policy Core Information Model -- Version 1 Specification (RFC 3060)
    • Policy Core Information Model (PCIM) Extensions (RFC 3460)

• Management:
  – SNMPv1, RFC 1157; SMIv1, RFC 1155
  – SNMPv2, RFC 1905; SMIv2, RFC 1902*
  – TL1
  – CIM*

• Grid:
  – OGSI
  – Globus/GT3

* In progress

Page 15:

Relevant Standards Bodies

• Global Grid Forum

• DMTF
  – CIM schemas for network devices and end-to-end services

• OIF
  – New UNIs

• IETF/IRTF
  – Policy, AAAs

• ITU
  – VPNs, (E)NNIs, GMPLS

• OASIS, W3C
  – Evolution of WS technologies

Page 16:

Related Work

• GARA, DUROC
  – Concept of resource co-allocation, scheduler, advanced reservations leveraged in our work
  – GPAN extends the reach of GARA/DUROC concepts
  – Job Manager in GPAN refers to GRAM2 and its instances

• WS-Agreement
  – Services and resource lifetime-management and policy-based negotiations between network domains

• GRAM/RSL/JSDL
  – Extend RSL2 to work with GPAN for network resources
  – JSDL is a new standard being discussed @GGF for job submissions

Page 17:

Engagements and Industry Validation

• Storage: EMC, McData
• End-Systems: IBM, HP-Labs
• Providers: Allstream, Verizon
• Medical: GE, Stryker, Cerner, McKesson
• Conference Demos: SuperComm, GGF9, Telecom 2003, SC2003, GlobusWorld 2004
• Testbed: Winnipeg Health Authority
• External Funding: DARPA

Page 18:

Demo 1: Application driven

Dynamic Bandwidth and QoS

In Billerica Lab!

Page 19:

Demo 2: A Globus-based Grid Infrastructure Negotiates

Ephemeral Optical Bandwidth Boost

Page 20:

Grid Proxy Architecture for Network Resources (GPAN)

• Enables Grid Resource Services to take advantage of existing network services

• The GPAN Grid middleware functionality includes:
  – Proxy for accepting Grid resource requirements
  – Provider of information regarding network resource availability/status
  – Co-existence and integration with GRAM2, MDS
  – Support for RSL2 extensions featuring network resource allocation capabilities
  – OGSI services providing network resource info & dynamic allocation capabilities
  – Abstract view of, and access to, base network services

Grid Applications

Network Elements

Grid Services

Network Services

GPAN

Page 21:

Grid Resources: general setup

[Figure: a Grid Virtual Organization spanning Campus A-E, with an MDS Index]

• A Grid VO utilizes grid resources in Campus A-E
• Service Providers (xSP) on MAN/WAN access networks peer together to provide the required network services to the Grid VO
• Index services collect resource information from computing and storage resources in Campus A-E and xSP
• The Broker/metascheduler performs resource lookups and allocations of all grid resources for applications

[Figure labels: Grid Overlay; Computing and Storage sites with GRAM2, MMJFS, MJS and RIPS resource managers; Network Service Overlay with Access networks and a Core Network (xSP); Broker/Metascheduler; GPAN Proxy/NIP; Hosting Environment; application]

Page 22:

Proxy architecture implements scalable resource services for networks

Computer

Network device

GPAN Grid Service

Network Services

• GPAN Grid Service
  – Provides a GRAM-2 instance in a network
  – Extends RSL2 for network resources
  – Supports resource discovery and info updates on the Grid
  – Supports dynamic resource provisioning and optimization
  – Resource Services such as GRAM talk to GPAN for network resource requests
  – Grid clients and services use the GPAN WSDL interface

Page 23:

GRAM2

Computing RM

GRAM2

VO Master CO2

GRAM2

Visual. RM

GRAM2

Storage RM

Broker/Metascheduler Application

Resource Management Flow

Network Service overlay

Network RM

MDS

Feeds not shown

Derived from © ANL Material

Page 24:

Network Resource Information using GPAN

[Figure: the same Grid VO topology as the general-setup slide, with the GPAN Proxy/NIP sitting between the Grid overlay and the network service overlay]

• GPAN provides network info to MDS/Index
  – Proxy for network resource allocation status and updates
• Network Info Provider (NIP) aggregates resource discovery and status updates
  – Based on virtual network topology related to the VO


Page 25:

Network Resource Allocation using GPAN

[Figure: the same Grid VO topology, with RSL2 requests flowing from the broker/metascheduler to the GPAN Proxy and the xSPs]

1) Application requests the broker/metascheduler for job services and resources

2) Broker/metascheduler generates RSL2 for resource allocation requests after consulting MDS/Index

3) xSPs coordinate to allocate the requested resources
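Step 2 can be sketched as the broker emitting an RSL2-style XML fragment for a network resource. The deck does not specify the actual RSL2 network extensions, so the element names here are invented for illustration:

```java
// Toy sketch of the broker/metascheduler turning a resource requirement
// into an RSL2-style XML fragment. The real RSL2 network extensions are
// not specified in this deck; all element names are invented.
public class Rsl2Sketch {
    static String networkRequest(String src, String dst, int mbps, int seconds) {
        return "<networkResource>"
             + "<endpoint role=\"source\">" + src + "</endpoint>"
             + "<endpoint role=\"destination\">" + dst + "</endpoint>"
             + "<bandwidth unit=\"Mbps\">" + mbps + "</bandwidth>"
             + "<duration unit=\"s\">" + seconds + "</duration>"
             + "</networkResource>";
    }

    public static void main(String[] args) {
        System.out.println(networkRequest("campusA", "campusE", 1000, 600));
    }
}
```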


Page 26:

CO2 Grid with OGSI

Grid Applications

Network Info Provider

CO2 Grid Proxy

XML messaging

App Messaging

CO2 Grid Handler

RIS

Service Creation

CO2 Grid Personality

CO2 Core Platform

CO2 Domain Mgr

Network Resources

CO2

OGSI

OGSA Grid Platform

OGSA Services

Network

CO2 Core Middleware

Page 27:

App. A Resources

Ethernet Switch

Ethernet Switch

Photonic switch

Ethernet Switch

Optical Control Plane

Network Provisioning Services

GPAN Proxy

App. B Resources

App. A Resources

App. B Resources

“A Globus-based Grid Infrastructure Negotiates Ephemeral Optical Bandwidth Boost”

Internet

Optical bypass

RSL2

• Two applications use Grid FTP for communicating large sets of data

• Grid FTP provides data movement requirements and constraints to GPAN

• GPAN Proxy module translates Grid requirements to appropriate network resource allocation

• GPAN Proxy module works with Network provisioning services to allocate optical by-pass as shown.

Page 28:

Demo 2: Default Provisioned Bandwidth for Clients

Page 29:

Demo 2: Dynamic provisioning of Ephemeral Optical Bypass Circuit

Page 30:

Demo 2: GPAN Information Pane

Page 31:

Demo 3: DWDM-RAM DARPA Sponsored Research for Data Intensive Service-on-Demand Advanced Optical Networks

Page 32:

The Data Intensive App Challenge: Emerging data-intensive applications in the fields of HEP, astrophysics, astronomy, bioinformatics, computational chemistry, etc., require extremely high performance and long-term data flows, scalability for huge data volumes, global reach, adjustability to unpredictable traffic behavior, and integration with multiple Grid resources.

Response: DWDM-RAM. An architecture for data-intensive Grids enabled by next-generation dynamic optical networks, incorporating new methods for lightpath provisioning. DWDM-RAM is designed to meet the networking challenges of extremely large-scale Grid applications; traditional network infrastructure cannot meet these demands, especially the requirements for intensive data flows.

Data-Intensive Applications

DWDM-RAM

Abundant Optical Bandwidth

PBs Storage

Tb/s on a single fiber strand

Optical Abundant Bandwidth Meets Grid

Page 33:

Optical Control Network

Network Service Request

Data Transmission Plane

OMNInet Control Plane

ODIN

UNI-N

ODIN

UNI-N

Connection Control

L3 router

L2 switch

Data storage switch

Data Path Control

Data Path Control

DATA GRID SERVICE PLANE


Data Path

Data Center

Service Control

Service Control

NETWORK SERVICE PLANE

GRID Service Request

Data Center

DWDM-RAM Service Control Architecture

Page 34:

Data Management Services: OGSA/OGSI compliant; capable of receiving and understanding application requests; complete knowledge of network resources; transmits signals to intelligent middleware; understands communications from the Grid infrastructure; adjusts to changing requirements; understands edge resources; on-demand or scheduled processing; supports various models for scheduling, priority setting and event synchronization

Intelligent Middleware for Adaptive Optical Networking: OGSA/OGSI compliant; integrated with Globus; receives requests from data services and applications; knowledgeable about Grid resources; complete understanding of dynamic lightpath provisioning; communicates with the optical network services layer; can be integrated with GRAM for co-management; the architecture is flexible and extensible

Dynamic Lightpath Provisioning Services: Optical Dynamic Intelligent Networking (ODIN); OGSA/OGSI compliant; receives requests from middleware services; knowledgeable about optical network resources; provides dynamic lightpath provisioning; communicates with the optical network protocol layer; precise wavelength control; intradomain as well as interdomain; contains mechanisms for extending lightpaths through E-Paths (electronic paths); incorporates specialized signaling; utilizes IETF GMPLS for provisioning; new photonic protocols

DWDM-RAM Components

Page 35:

Data Center

Data Center

Data-Intensive Applications

Dynamic Lambda, Optical Burst, etc., Grid services

Dynamic Optical Network OMNInet

Data Transfer Service

Basic Network Resource Service

Network Resource Scheduler

Network Resource Service

Data Handler Service

Information Service

Application Middleware Layer

Network Resource Middleware Layer

Connectivity and Fabric Layers

OGSI-ification API

NRS Grid Service API

DTS API

Optical path control

DWDM-RAM Architecture

Page 36:

4x10GE

Northwestern U

OpticalSwitchingPlatform

Passport8600

ApplicationCluster

• A four-node multi-site optical metro testbed network in Chicago -- the first 10GE service trial!
• A test bed for all-optical switching and advanced high-speed services
• OMNInet testbed partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL

ApplicationCluster

OpticalSwitchingPlatform

Passport8600

4x10GE

StarLight

OPTera Metro5200

ApplicationCluster

OpticalSwitchingPlatform

Passport8600

4x10GE8x1GE

UIC

CA*net3--Chicago

OpticalSwitchingPlatform

Passport8600

Closed loop

4x10GE8x1GE

8x1GE

8x1GELoop

OMNInet Core Nodes

Page 37:

The DWDM-RAM architecture identifies two distinct planes over the dynamic underlying optical network: 1) the Data Grid Plane, which speaks for the diverse requirements of a data-intensive application by providing generic data-intensive interfaces and services, and

2) the Network Grid Plane that marshals the raw bandwidth of the underlying optical network into network services, within the OGSI framework, and that matches the complex requirements specified by the Data Grid Plane.

At the application middleware layer, the Data Transfer Service (DTS) presents an interface between the system and an application. It receives high-level client requests, policy-and-access filtered, to transfer specific named blocks of data with specific advance scheduling constraints.

The network resource middleware layer consists of three services: the Data Handler Service (DHS), the Network Resource Service (NRS) and the Dynamic Lambda Grid Service (DLGS). Services of this layer initiate and control sharing of resources.

DWDM-RAM Architecture

Page 38:

Application

Fabric -- "Controlling things locally": access to, and control of, resources

Connectivity -- "Talking to things": communication (Internet protocols) and security

Resource -- "Sharing single resources": negotiating access, controlling use

Collective -- "Coordinating multiple resources": ubiquitous infrastructure services, app-specific distributed services

Data Transfer Service

Network Resource Service

Data Path Control Service

Layered DWDM-RAM

Layered Grid

Application

Optical Control Plane

Application Middleware Layer

Network Resource Middleware Layer

Connectivity & Fabric Layer

OGSI-ification API

NRS Grid Service API

DTS API

DWDM-RAM vs. Layered Grid Architecture

Page 39:

Network and Data Transfers scheduled
• The Data Management schedule coordinates network, retrieval, and sourcing services (using their schedulers)
• Scheduled data resource reservation service ("Provide 2 TB storage between 14:00 and 18:00 tomorrow")

Network Management has its own schedule
• Variety of request models:
  – Fixed: at a specific time, for a specific duration
  – Under-constrained: e.g. ASAP, or within a window

Auto-rescheduling for optimization
• Facilitated by under-constrained requests
• Data Management reschedules for its own requests or on request of Network Management

Design for Scheduling
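The two request models above differ only in how much latitude they leave the scheduler: a fixed request has none, while an under-constrained (window) request has slack the auto-rescheduler can exploit. A small sketch with illustrative names and times (minutes from midnight):

```java
// Sketch of the fixed vs under-constrained request models. A fixed
// request pins its start; a window request can be placed anywhere inside
// [earliest, latest]. Names and units are illustrative.
public class RequestModels {
    static class Request {
        final boolean pinned; final int start; final int windowEnd; final int duration;

        static Request fixed(int start, int duration) {
            return new Request(true, start, start + duration, duration);
        }
        static Request window(int earliest, int latest, int duration) {
            return new Request(false, earliest, latest, duration);
        }
        private Request(boolean pinned, int start, int windowEnd, int duration) {
            this.pinned = pinned; this.start = start;
            this.windowEnd = windowEnd; this.duration = duration;
        }

        // Latitude the scheduler has for placing this request.
        int slack() { return pinned ? 0 : (windowEnd - start) - duration; }
    }

    public static void main(String[] args) {
        System.out.println(Request.fixed(240, 30).slack());       // no latitude
        System.out.println(Request.window(210, 300, 60).slack()); // 30 min of play
    }
}
```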

Page 40:

• Request for 1/2 hour between 4:00 and 5:30 on Segment D granted to User W at 4:00

• New request from User X for same segment for 1 hour between 3:30 and 5:00

• Reschedule user W to 4:30; user X to 3:30. Everyone is happy.

Route allocated for a time slot; new request comes in; 1st route can be rescheduled for a later slot within window to accommodate new request


Example: Lightpath Scheduling
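The rescheduling example above can be checked in code: with X placed at 3:30-4:30, the earliest conflict-free start for W's half hour inside its 4:00-5:30 window is 4:30. A minimal single-segment placement routine, illustrative rather than the DWDM-RAM scheduler (times in minutes):

```java
// Sketch of the rescheduling example, in minutes from an arbitrary origin:
// W holds 4:00-4:30 (240-270) inside window [4:00, 5:30]; X asks for one
// hour inside [3:30, 5:00]. Placing X at 3:30 and sliding W to 4:30
// satisfies both requests on the same segment.
public class LightpathScheduler {
    // Earliest start in [earliest, latest-duration] avoiding [busyStart, busyEnd).
    static int place(int earliest, int latest, int duration,
                     int busyStart, int busyEnd) {
        for (int s = earliest; s + duration <= latest; s++) {
            if (s + duration <= busyStart || s >= busyEnd) return s;
        }
        return -1; // cannot fit within the window
    }

    public static void main(String[] args) {
        int xStart = 210;  // X takes 3:30-4:30 (210-270)
        // Reschedule W (30 min, window 4:00-5:30) around X:
        int wStart = place(240, 330, 30, xStart, xStart + 60);
        System.out.println(wStart); // 270, i.e. W moves to 4:30-5:00
    }
}
```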

Page 41:

End-to-end Transfer Time

[Timeline figure, phases of a 20 GB file transfer: file transfer request arrives (0.5s), ODIN server processing (3.6s), path allocation request and path ID returned (0.5s), network reconfiguration (25s), transport setup time (0.14s), data transfer of 20 GB (174s), file transfer done and path released (0.3s), path deallocation request with ODIN server processing (11s)]

20GB File Transfer
Set up: 29.7s
Transfer: 174s
Tear down: 11.3s
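The figures above can be cross-checked: the setup phases sum to about 29.7s and tear-down to 11.3s, giving roughly 215s end to end for the 20 GB transfer, an effective rate of about 0.74 Gb/s (the exact mapping of the two 0.5s intervals to phases is read off the timeline, so treat the breakdown as approximate):

```java
// Cross-checking the slide's numbers: setup = request arrival + ODIN
// processing + path allocation + network reconfiguration + transport
// setup; tear-down = path release + deallocation; payload moves in 174s.
public class TransferTime {
    static double setupSec()    { return 0.5 + 3.6 + 0.5 + 25.0 + 0.14; }
    static double transferSec() { return 174.0; }
    static double teardownSec() { return 0.3 + 11.0; }

    static double totalSec() { return setupSec() + transferSec() + teardownSec(); }

    // Effective end-to-end throughput in Gb/s for a 20 GB (160 Gb) file.
    static double effectiveGbps() { return 20.0 * 8 / totalSec(); }

    public static void main(String[] args) {
        System.out.printf("setup=%.2fs total=%.2fs eff=%.2f Gb/s%n",
                setupSec(), totalSec(), effectiveGbps());
    }
}
```

Note that provisioning overhead (about 41s) is amortized only for large transfers, which is the point of the data-intensive use case.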

[Slide note, sumit, 9/20/2003: Breakup of the end-to-end transfer time presented in the previous slide. Source: NWU]
Page 42:

20GB File Transfer

Page 43:

Conclusions

• Adaptivity requirements prompt us to re-define an application’s experience with the network

• We see the roll-out of on-demand solutions end-to-end as a multi-phase and multi-party concerted effort inclusive of network providers

• The CO2 framework is adaptivity middleware wherein we champion new contracts between applications and the network

Page 44:
Page 45:

Page 46:

A Strong Record in Dynamic Light-Path Provisioning

• OMNInet, the 1st wavelength-switched all-optical metro net, with end-to-end λ svcs established within seconds (2001)

– no SONET, no routers, no point-and-click

• SuperComm 2001, the 1st display of applications allocating lightpaths via OIF UNI

• GGF9 (Oct 2003), the 1st display of OGSI-fied λ services over a wavelength-switched all-optical network (“DWDM-RAM”)

• SC03, + time scheduling of under-specified client requests

• GlobusWORLD04, + integration with GT3, use by GT3 clients

• 3 journal papers submitted for publication over 2H03 (JSAC, CCGrid, JOGC)

Page 47:

Some key folks checking us out at our CO2+Grid booth, GlobusWORLD ‘04, Jan ‘04

Ian Foster and Carl Kesselman, co-inventors of the Grid (2nd and 5th from the left); Larry Smarr of OptIPuter fame (6th and last from the left)

Franco, Tal, and Inder (1st, 3rd, and 4th from the left)

Page 48:

CO2 Value Propositions

• Topology discovery
  – Automatic network discovery and storage-to-network linkage enable reduced management and outage costs
  – Logical topology discovery allows tracking network-related application resource utilization

• Compartmentalization of faults
  – Expedited fault isolation and resolution
  – Reduced TCO, leading to customer satisfaction and further investment in similar products

• Unified console for storage and network management
  – Collate application errors with network errors
  – Reduce IT management costs by managing the performance of applications and the corresponding network topology via one interface

• Performance monitoring
  – Performance measurement of latency, packet jitter, bit error rate and other network-related sensitivities as they relate to the application
  – Increased application uptime; response to potential problems before they become noticeable
  – The IT organization can devote less time to active network monitoring and problem resolution

• Cross-departmental charging
  – Pay per network service usage
  – The IT organization can identify user needs, justify expenses and plan for growth

• Absorbs churn from network upgrades
  – Seamless upgrade of network capabilities without affecting unified management

Page 49:

CO2 Value Propositions contd.

• Policy-based admission control
– Improve application response time by restricting access for lower-priority traffic during peak demand
– Translates application QoS requirements into network service provisioning

• Bandwidth efficiencies
– Bandwidth savings by consolidating and managing multiple streams of storage/LAN traffic over a common infrastructure
– Gives the IT department more efficient and longer use of current resources

• Configuration automation
– Provides a web services interface for data-center on-demand applications to access network services through CO2

• Access to layered network services
– Provides a choice of service access at Layer 1 (SONET), Layer 2 (Ethernet/ATM/FR) and/or Layer 3 (IP), depending on the SLA parameters requested

• Performance smoothing via congestion avoidance
– Eliminate erratic application performance during congestion through proactive bandwidth management
– Ensure applications receive a consistent quality of service

• Application-network QoS guarantees
– Translates application QoS requirements into network service provisioning
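The recurring point that CO2 "translates application QoS requirements into network service provisioning", together with the layered service choice above, can be sketched as a simple mapping layer. This is an illustrative sketch only; the class and function names below are hypothetical, not the actual CO2 API, and the thresholds are made up.

```python
# Hypothetical sketch: mapping application-level QoS requirements to a
# network service provisioning request, as the CO2 bullets describe.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QoSRequirement:
    max_latency_ms: float
    max_jitter_ms: float
    min_bandwidth_mbps: float

def pick_service_layer(req: QoSRequirement) -> str:
    """Choose an access layer (L1 SONET, L2 Ethernet, L3 IP) from SLA parameters."""
    if req.max_latency_ms < 5 and req.min_bandwidth_mbps >= 1000:
        return "L1-SONET"        # strictest guarantees: dedicated circuit
    if req.max_jitter_ms < 10:
        return "L2-Ethernet"     # jitter-sensitive: switched service
    return "L3-IP"               # best effort with DiffServ marking

def to_provisioning_request(app: str, req: QoSRequirement) -> dict:
    """Translate an application's QoS requirement into a provisioning request."""
    return {
        "application": app,
        "service_layer": pick_service_layer(req),
        "bandwidth_mbps": req.min_bandwidth_mbps,
        "sla": {"latency_ms": req.max_latency_ms, "jitter_ms": req.max_jitter_ms},
    }

print(to_provisioning_request("radiology-transfer",
                              QoSRequirement(4.0, 2.0, 2000.0)))
```

The point of the sketch is the shape of the translation: the application states what it needs, and the middleware decides which layer and how much bandwidth to provision.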

Page 50: Project CO2: Technical Overview

Is CO2 a good fit for my network?

START → Does the network administrator allow access to the network provisioning plane?
• No → limited value-add.
• Yes → Do I want to use the network's standard provisioning plane (RSVP, GMPLS, SNMP, TL1, UNI)?
– Yes → CO2 value-add: smart bandwidth management, dynamic VPNs, dynamic QoS.
– No → using CO2's frameworks, write the plug-in code for the particular signaling protocol; unmodified built-in CO2 services still work, and you can write new ones.

Does the network expose management information?
• Yes → CO2 value-add: error isolation, performance monitoring, topology summary, SLA monitoring and verification.

Does the network contain L2-L7 packet inspection boxes?
• Yes → CO2 value-add: accelerated processing, content insertion. → END

[Figure: value-add chart. An agile network with CO2 (Nortel) gives the most value-add; Brand X and Brand Y networks give limited value-add.]
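The "write the plug-in code for a particular signaling protocol" branch can be sketched as a minimal plug-in interface. CO2's real plug-in framework is not shown in these slides, so everything below is an assumption: the abstract base class, the method names, and the TL1-style command strings are all illustrative.

```python
# Hypothetical sketch of a CO2 signaling plug-in, assuming a simple
# abstract interface; the real CO2 plug-in framework may differ.
from abc import ABC, abstractmethod

class SignalingPlugin(ABC):
    """One plug-in per signaling protocol (RSVP, GMPLS, SNMP, TL1, UNI, ...)."""
    @abstractmethod
    def provision(self, src: str, dst: str, bandwidth_mbps: int) -> str: ...
    @abstractmethod
    def teardown(self, path_id: str) -> None: ...

class Tl1Plugin(SignalingPlugin):
    """Illustrative TL1-style plug-in that records the commands it would send."""
    def __init__(self):
        self.sent = []
    def provision(self, src, dst, bandwidth_mbps):
        path_id = f"{src}-{dst}"
        # TL1-style cross-connect command; exact syntax varies by element.
        self.sent.append(f"ENT-CRS::{src},{dst}:::{bandwidth_mbps}M;")
        return path_id
    def teardown(self, path_id):
        self.sent.append(f"DLT-CRS::{path_id};")

plugin = Tl1Plugin()
pid = plugin.provision("OM3500-A", "OM3500-B", 622)
plugin.teardown(pid)
print(plugin.sent)
```

The design point is that the built-in CO2 services talk to the abstract interface, so swapping protocols means writing only the concrete plug-in.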

Page 51: Project CO2: Technical Overview

Example: CO2 brokering λs for a Globus application - general setup

[Figure: a virtual organization spanning Amsterdam, Dwingelo, Utrecht and Eindhoven. A Grid overlay of computing and storage hosting environments (GRAM2/MMJFS front ends with MJFS/RIPS instances) runs above a CO2 overlay: a CO2 instance at each access network, connected across a CO2+DiffServ / CO2+MPLS core. The application reaches the resources through the VO's Meta Scheduler and the MDS Index.]

Acronyms:
• MMJFS: Master Managed Job Factory Service
• MJFS: Managed Job Factory Service
• MJS: Managed Job Service
• RIPS: Resource Info Provider Service
• MDS: Monitoring and Discovery Service
• GRAM: Grid Resource Allocation Management

Page 52: Project CO2: Technical Overview

Example: CO2 brokering λs for a Globus application - Resource discovery

[Figure: the same virtual-organization topology as the previous slide (Grid overlay above a CO2 overlay across Amsterdam, Dwingelo, Utrecht and Eindhoven), now adding the VO's Master CO2 alongside the VO's Meta Scheduler and MDS Index for the resource-discovery step. Acronyms as on the previous slide.]

Page 53: Project CO2: Technical Overview

Example: CO2 brokering λs for a Globus application - Resource allocation

[Figure: the same topology again; in this step the VO's Meta Scheduler and the VO's Master CO2 carry out the resource allocation. Acronyms as on the first slide of this sequence.]
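The three-slide sequence (general setup, resource discovery, resource allocation) can be sketched end to end. The GRAM/MDS interactions are heavily simplified here, and every class and method name is a hypothetical stand-in, not the Globus or CO2 API.

```python
# Hypothetical end-to-end sketch of the brokering sequence on these slides:
# the VO's meta-scheduler discovers resources via an MDS-like index, then
# asks a master CO2 to allocate a lambda (lightpath) between the chosen
# sites. All names are illustrative assumptions.

class MdsIndex:
    """Stands in for the MDS Index: site -> advertised resource info."""
    def __init__(self, sites):
        self.sites = sites
    def discover(self, kind):
        return [name for name, info in self.sites.items() if info["kind"] == kind]

class MasterCO2:
    """Stands in for the VO's Master CO2: brokers lambdas between sites."""
    def __init__(self):
        self.lambdas = []
    def allocate_lambda(self, a, b, gbps):
        self.lambdas.append((a, b, gbps))
        return len(self.lambdas) - 1   # lambda id

def schedule_job(index, co2):
    """Meta-scheduler: pick one computing and one storage site, connect them."""
    compute = index.discover("computing")[0]
    storage = index.discover("storage")[0]
    return co2.allocate_lambda(compute, storage, gbps=10)

index = MdsIndex({
    "Amsterdam": {"kind": "computing"},
    "Dwingelo":  {"kind": "storage"},
    "Utrecht":   {"kind": "computing"},
})
co2 = MasterCO2()
lam = schedule_job(index, co2)
print(co2.lambdas[lam])   # ('Amsterdam', 'Dwingelo', 10)
```

The sketch mirrors the slide order: discovery answers "where are the resources?", and allocation asks the network layer to connect them.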

Page 54: Project CO2: Technical Overview

Scheduling Example - Reroute

• A request for 1 hour between nodes A and B, anywhere between 7:00 and 8:30, is granted for 7:00 using Segment X (and other segments)
• A new request for 2 hours between nodes C and D, between 7:00 and 9:30, needs Segment E to be satisfied
• Reroute the first request onto another path through the topology to free up Segment E for the second request; both requests are satisfied

[Figure: four-node topology with nodes A, B, C and D, shown before and after the reroute; Segment X is reserved 7:00-8:00, and after the reroute an alternate path through Y carries one of the requests.]

A route is allocated; a new request arrives for a segment already in use; the first route can be moved to a different path so that the second request can also be serviced within its time window.
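The reroute logic above can be sketched as simple interval bookkeeping per segment. This is a toy model under stated assumptions, not the actual CO2 scheduler: reservations are tuples of segments plus a half-open hour interval, and a conflicting request tries to move the earlier booking onto a segment-disjoint alternate path.

```python
# Toy sketch of the reroute example: reservations are per-segment time
# intervals, and a conflicting request triggers a reroute of the earlier
# booking onto an alternate path. Not the actual CO2 scheduler.

def overlaps(a, b):
    """Do two [start, end) hour intervals overlap?"""
    return a[0] < b[1] and b[0] < a[1]

class Scheduler:
    def __init__(self):
        self.bookings = {}   # request id -> (path, interval)

    def reserve(self, req_id, path, interval, alternates=()):
        # Find any existing booking that shares a segment in this window.
        for other_id, (other_path, other_iv) in self.bookings.items():
            if set(path) & set(other_path) and overlaps(interval, other_iv):
                # Try to move the earlier booking onto a disjoint alternate.
                for alt in alternates:
                    if not set(alt) & set(path):
                        self.bookings[other_id] = (alt, other_iv)
                        break
                else:
                    return False   # no reroute possible, reject the request
        self.bookings[req_id] = (path, interval)
        return True

s = Scheduler()
# First request, A-B for 7:00-8:00, granted over segments X and E.
assert s.reserve("A-B", path=("X", "E"), interval=(7, 8))
# Second request, C-D for 7:00-9:00, needs segment E; reroute A-B via Y.
assert s.reserve("C-D", path=("E",), interval=(7, 9), alternates=(("X", "Y"),))
print(s.bookings["A-B"][0])   # ('X', 'Y')  -- A-B rerouted, E freed for C-D
```

As on the slide, both requests end up satisfied: the earlier booking keeps its time window but changes path, and the later one gets the contended segment.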