
Page 1:

CS744 Sub Area Introduction: GENI Facility (Testbed)

2007.10.22

Yu Yeongjae

[email protected]

Page 2:

Table of Contents

1. Need for Experimental Facility

2. GENI Architecture & Facility Design

3. Related Work
− PlanetLab & Virtual Network Infrastructure (VINI)
− User Controlled LightPath (UCLP) & Articulated Private Network (APN)

Reference

Appendix

Page 3:

1. Need for Experimental Facility

Page 4:

1.1 Need for an Experimental Facility [Peterson 06]

Goal: Seamless conception-to-deployment process

Page 5:

• Simulators
− ns

• Emulators
− Emulab
− WAIL

• Wireless Testbeds
− ORBIT
− Emulab

• Wide-Area Testbeds
− PlanetLab
− RON
− X-bone
− DETER

1.2 Existing Tools [Peterson 06]

Page 6:

• Simulation based on simple models
− topologies, admin policies, workloads, failures…

• Emulation (and “in lab” tests) are similarly limited
− only as good as the models

• Traditional testbeds are targeted
− often of limited reach
− often with limited programmability

• Testbed dilemma
− production: real users but incremental change
− research: radical change but no real users

1.3 Today’s Tools Have Limitations [Peterson 06]

Page 7:

• We need:

− Real implementation

− Real experience

− Real network conditions

− Real users

• Global Environment for Network Innovations

− Prototyping new architectures

− Realistic evaluation

− Controlled evaluation

− Shared facility

− Connecting to real users

− Enabling new services

1.4 Things We Need [Rexford 06]

Page 8:

2. GENI Architecture & Facility Design

Page 9:

2.1 What is GENI?
− GENI is an open, large-scale, realistic experimental facility that will revolutionize research in global communication networks [Peterson 06]

2.2 The Role of GENI
− GENI will allow researchers to experiment with alternative network architectures, services, and applications at scale and under real-world conditions [Clark 07]

2. GENI Architecture & Facility Design

* Reference: http://netseminar.stanford.edu/sessions/2006-11-16.ppt

GENI Network Virtualization

Page 10:

2.3 Three Levels of GENI Architecture [Peterson 07]

1) Physical Substrate
− At the bottom level, GENI provides a set of physical facilities (e.g., routers, processors, links, wireless devices)

2) User Services
− At the top level, GENI’s “user services” provide a rich array of user-visible support services intended to make the facility accessible and effective in meeting its research goals

3) GENI Management Core (GMC)
− Sitting between the “physical substrate” and the “user services” is the “GENI Management Core”, or “GMC”
− The purpose of the GMC is to define a stable, predictable, long-lived framework (a set of abstractions, interfaces, name spaces, and core services) to bind together the GENI architecture

2. GENI Architecture & Facility Design (Cont’d)

Page 11:

* Note that the GMC is not a management service or operations center. GMC only defines the framework within which such facilities can be constructed [Peterson 07]

2. GENI Architecture & Facility Design (Cont’d)

GENI Architecture

* GMC: GENI Management Core
* Reference: http://www.geni.net/docs/GENI.ppt


Page 12:

• GENI Names [Peterson 07]
− The GMC defines unambiguous identifiers, called GENI Global Identifiers (GGIDs), for the set of objects that make up GENI

− These objects include users, components, aggregates, and slices

− A GGID is represented as an X.509 certificate [X509, RFC3280] that binds a Universally Unique Identifier (UUID) to a public key

2. GENI Architecture & Facility Design (Cont’d)
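To make this concrete, here is a minimal sketch (our illustration, not GENI code) of an X.509 certificate that binds a freshly generated UUID to a public key, written with Python’s 'cryptography' package; all names are illustrative:

import datetime
import uuid

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The two halves of the binding: a UUID and a public key.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ggid_uuid = str(uuid.uuid4())

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, ggid_uuid)])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)                # carries the UUID
    .issuer_name(name)                 # self-signed for this sketch
    .public_key(key.public_key())      # the key the UUID is bound to
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())   # CN=<uuid>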

Page 13:

• GENI Abstractions [Peterson 07]
Three major abstractions that the GMC defines:

(1) Components

− The primary building block of GENI

− A component encapsulates a collection of resources

− Each component is controlled via a component manager (CM)

(2) Slices

− A slice is the set of substrate resources bound to a particular experiment [Clark 07]

− Users run their experiments in a slice of the GENI substrate

(3) Aggregates

− An “aggregate” is a GENI object that represents an unordered collection of components
− There also might be a “root” aggregate (e.g. researcher portal) that corresponds to all GENI components
− Aggregates coordinate resource allocation and manage a set of components

2. GENI Architecture & Facility Design (Cont’d)
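As a reading aid, the sketch below (hypothetical names, not the GMC API) models the three abstractions as plain data types: a component encapsulating resources, a slice holding the slivers bound to one experiment, and an aggregate coordinating allocation over an unordered set of components.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Component:
    """Primary building block: a collection of resources, controlled via a CM."""
    ggid: str
    resources: tuple  # e.g. (("cpu", 4), ("bandwidth_mbps", 100))

@dataclass
class Slice:
    """The substrate resources bound to a particular experiment."""
    ggid: str
    slivers: dict = field(default_factory=dict)  # component ggid -> share

@dataclass
class Aggregate:
    """An unordered collection of components."""
    ggid: str
    components: set = field(default_factory=set)

    def allocate(self, slc: Slice, comp: Component, share: float) -> None:
        # Coordinate resource allocation: bind a share of one component
        # into the experiment's slice.
        if comp not in self.components:
            raise ValueError("component not in this aggregate")
        slc.slivers[comp.ggid] = share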

Page 14:

2. GENI Architecture & Facility Design (Cont’d)

<Figure: Components & Aggregate — each component stacks substrate hardware, virtualization software, and a component manager (CM); an aggregate’s slice manager, resource controller, and auditing archive coordinate the components over node control (management: boot/monitor; coordination: slice control) and data interfaces>

* CM: Component Manager
* GMC: GENI Management Core
* Reference: http://www.geni.net/docs/GENI.ppt

Page 15:

• Management Aggregate (Backbone/Wireless WG)
− Operations & Maintenance Control Plane
➤ securely boot and update
➤ diagnose & debug failures

• Coordination Aggregate (Backbone/Wireless WG)
− Slice Control Plane
➤ coordinate slice embedding across a subnet

• Portal Aggregate (Services WG)
− slice embedding service
➤ resource discovery
➤ resource allocation
➤ end-to-end topology “stitching”
− experiment management service
➤ configuration management
➤ development tools
➤ diagnostics & monitoring
➤ data logging

2. GENI Architecture & Facility Design (Cont’d)

* A “portal” is an interface that defines an “entry point” through which users access GENI components
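A minimal sketch (toy classes and names we invented, not a real GENI service) of the slice embedding pipeline listed above: discover resources, allocate them per aggregate, then stitch an end-to-end topology.

from dataclasses import dataclass

@dataclass
class SliceSpec:
    resources: dict   # e.g. {"bandwidth_mbps": 50}
    topology: list    # e.g. [("pr", "ny1")]

class ToyPortal:
    """Stand-in for the portal's infrastructure services."""
    def __init__(self, aggregates):
        self.aggregates = aggregates  # aggregate name -> available resources

    def discover(self, needs):
        # Resource discovery: every aggregate that can satisfy the request.
        return [name for name, avail in self.aggregates.items()
                if all(avail.get(k, 0) >= v for k, v in needs.items())]

    def allocate(self, name, spec):
        # Resource allocation: debit the aggregate and return a "ticket".
        for k, v in spec.resources.items():
            self.aggregates[name][k] -= v
        return (name, dict(spec.resources))

    def stitch(self, tickets, topology):
        # End-to-end "stitching": pair the tickets with the requested links.
        return {"tickets": tickets, "links": topology}

portal = ToyPortal({"backbone": {"bandwidth_mbps": 100},
                    "wireless": {"bandwidth_mbps": 20}})
spec = SliceSpec({"bandwidth_mbps": 50}, [("pr", "ny1")])
tickets = [portal.allocate(n, spec) for n in portal.discover(spec.resources)]
print(portal.stitch(tickets, spec.topology))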

Page 16:

2. GENI Architecture & Facility Design (Cont’d) [Peterson 07]

(1) Researcher Portal
(2) Operator Portal

* Both portals serve as “front-ends” for a set of “infrastructure services” that researchers engage to help them manage their slices, and operators engage to help them monitor and diagnose the components

User Portal

Page 17:

3. Related Work

Page 18:

3.1.1 PlanetLab
− PlanetLab is a global overlay network for developing and accessing broad-coverage network services [Chun 03]
− PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab [Chun 03]
− PlanetLab serves as a prototype of GENI [Peterson 05]
➤ helps make the case that such a facility is feasible
− PlanetLab is limited to a set of commodity PCs running as an overlay [Chen 06]

3.1 PlanetLab & VINI

Page 19:

3.1.1 PlanetLab
− PlanetLab Central (PLC), a centralized front-end, acts as the trusted intermediary between PL users and node owners [Peterson 06]

3.1 PlanetLab & VINI (Cont’d)

* Reference: [Peterson 05] http://lsirwww.epfl.ch/PlanetLabEverywhere/slides/epfl.ppt

<PlanetLab Principals>

Page 20:

3.1.2 Virtual Network Infrastructure (VINI) [Bavier 06]
− VINI is a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions
− PL-VINI is a prototype of VINI that runs on the public PlanetLab. PL-VINI enables arbitrary virtual networks, consisting of software routers connected by tunnels, to be configured within a PlanetLab slice

− VINI is an early prototype of GENI [Rexford and Peterson 07]

3.1 PlanetLab & VINI (Cont’d)

Page 21:

<Deploying and Initializing the Virtual Network>

1. Make a slice on PlanetLab nodes
2. Change the configuration file
3. Generate the configuration files for all nodes
4. Copy everything (necessary scripts, configuration files, RPMs, etc.) into each PlanetLab node
5. Install any required RPMs on the boxes, and install the file system image that UML runs from
6. Start up the overlay
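Steps 4-6 are ordinary file copies and remote commands; a minimal sketch (hypothetical file names and start script, not the real VINI tooling) of driving them from Python:

import subprocess

SLICE = "princeton_iias"
NODES = ["vini1.princeton.vini-veritas.net",
         "vini1.newy.internet2.vini-veritas.net"]
FILES = ["nodes.conf", "scripts.tar", "uml-fs.img", "xorp.rpm"]  # assumed names

for node in NODES:
    # Step 4: copy scripts, configuration files, and RPMs into the node.
    subprocess.run(["scp", *FILES, f"{SLICE}@{node}:~/"], check=True)
    # Steps 5-6: install the RPMs and UML file system image, start the overlay.
    subprocess.run(["ssh", f"{SLICE}@{node}",
                    "sudo rpm -Uvh *.rpm && ./start-overlay.sh"], check=True)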

Page 22:

• User Controlled LightPath (UCLPv2) [Lemay 06]
− UCLP is a network virtualization management tool built using web services
ex) XC-WS (Cross Connect Web Service) for SONET, SDH, and Lambda cross connects
− Users can create several parallel application-specific networks from a single physical network through UCLP

• Articulated Private Network (APN)
− An aggregate mix of resources [St.Arnaud 07]

3.2 UCLP & APN
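The core idea, several parallel application-specific networks carved out of one physical network, fits in a minimal sketch (toy classes, not the real UCLP web services):

from dataclasses import dataclass, field

@dataclass
class Lightpath:
    capacity_mbps: int
    children: list = field(default_factory=list)

    def carve(self, mbps: int) -> "Lightpath":
        """Carve a child lightpath out of this parent's remaining capacity."""
        used = sum(c.capacity_mbps for c in self.children)
        if used + mbps > self.capacity_mbps:
            raise ValueError("insufficient capacity on parent lightpath")
        child = Lightpath(mbps)
        self.children.append(child)
        return child

# One 10 Gb/s physical lightpath shared by two application-specific networks.
parent = Lightpath(10_000)
apn_a = parent.carve(4_000)   # APN for experiment A
apn_b = parent.carve(4_000)   # APN for experiment B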

<Figure: UCLP Network Virtualization — an APN composed of web services (Parent Lightpath WS, Child Lightpath WS, Timeslice WS, GMPLS Daemon WS, Virtual Router WS, Instrument WS) over substrate routers, substrate switches, and a wireless sensor network; a child lightpath may run over IP, Ethernet, MPLS, etc.>

Page 23:

* Reference: [Grasa 07], http://tnc2007.terena.org/core/getfile.php?file_id=474

<UCLP High-Level Architecture: a User Access Layer (GUI Client) over a Resource Management Layer (LP-WS, ITF-WS) over a Resource Virtualization Layer (XC-WS, Ethernet WS, Router-WS)>

Page 24:

• Limitation of the original User Controlled LightPath (UCLPv2)
− It supports virtualization only for SONET-based network elements such as the ONS 15454
− As a result, it is limited to building link-level APNs and cannot support experiments that need routers

3.2 UCLP & APN (Cont’d)

Page 25:

3.3 Relationship between GENI and Related Work

<Relationship among PlanetLab, PL-VINI, UCLP and GENI>

Page 26:

• GENI is an experimental facility intended to enable fundamental innovations in networking and distributed systems [Peterson 07]

• GENI will allow researchers to experiment with alternative network architectures, services, and applications at scale and under real-world conditions [Clark 07]

• Prototyping of GENI is needed to aggressively drive down GENI construction risk [Elliott 07]

• Related work on GENI includes PlanetLab, VINI, and UCLP

• PlanetLab serves as a prototype of GENI [Peterson 05]

• VINI is an early prototype of GENI [Rexford and Peterson 07]

• We will try to build a prototype of GENI based on UCLP deployed on a lambda network

4. Summary

Page 27:

[Peterson 07] Larry Peterson, John Wroclawski, "Overview of the GENI Architecture", GDD-06-11, January 2007

[Peterson 07] Larry Peterson, Tom Anderson, Dan Blumenthal, "GENI Facility Design", GDD-07-44, March 2007

[Turner 06] Jonathan Turner, "A Proposed Architecture for the GENI Backbone Platform", GDD-06-09, March 2006

[Blumenthal 06] Dan Blumenthal, Nick McKeown, "Backbone Node: Requirements and Architecture", GDD-06-26, November 2006

[Peterson 06] Larry Peterson, Steve Muir, Timothy Roscoe, Aaron Klingaman, "PlanetLab Architecture: An Overview", PDN-06-031, May 2006

[Bavier 06] Andy Bavier, Nick Feamster, Mark Huang, Larry Peterson, Jennifer Rexford, "In VINI Veritas: Realistic and Controlled Network Experimentation", SIGCOMM, October 2006

[St.Arnaud 06] Bill St.Arnaud, "UCLP Roadmap for creating User Controlled and Architected Networks using Service Oriented Architecture", January 2006

Reference

Page 28:

Appendix A. PlanetLab

1. Join Request: PI submits Consortium paperwork and requests to join
2. PI Activated: PLC verifies PI, activates account, enables site (logged)
3. User Activated: Users create accounts with keys, PI activates accounts (logged)
4. Slice Created: PI creates slice and assigns users to it (logged)
5. Nodes Added to Slices: Users add nodes to their slice (logged)
6. Slice Traffic Logged: Experiments run on nodes and generate traffic (logged by Netflow)
7. Traffic Logs Centrally Stored: PLC periodically pulls traffic logs from nodes

A.1 Chain of Responsibility [Peterson 05]

* PI: Principal Investigator
* PLC: PlanetLab Central

Page 29:

Appendix A. PlanetLab

<Figure: the PI calls SliceCreate( ) and SliceUsersAdd( ) at PLC (SA); the user calls SliceNodesAdd( ), SliceAttributeSet( ), and SliceInstantiate( ); on each node, the NM calls SliceGetAll( ) to fetch slices.xml from PLC and instantiates the slice's VMs on the VMM>

A.2 Slice Creation Mechanism (1)

* NM: Node Manager
* VM: Virtual Machine
* VMM: Virtual Machine Monitor

Page 30:

Appendix A. PlanetLab

<Figure: the PI calls SliceCreate( ) and SliceUsersAdd( ) at PLC (SA); the user calls SliceAttributeSet( ) and SliceGetTicket( ), then distributes the ticket to a slice creation service, which calls SliverCreate(ticket) on each node's NM to instantiate the slice's VMs on the VMM>

A.2 Slice Creation Mechanism (2)

* NM: Node Manager
* VM: Virtual Machine
* VMM: Virtual Machine Monitor
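A minimal sketch of mechanism (2) using the call names from the figure, with an assumed XML-RPC transport, endpoint URLs, and argument shapes (the real PlanetLab API differs):

import xmlrpc.client

# Assumed endpoints: PLC (SA) issues the ticket, each node's NM redeems it.
plc = xmlrpc.client.ServerProxy("https://plc.example.org/API/")
plc.SliceCreate("princeton_iias")
plc.SliceUsersAdd("princeton_iias", ["[email protected]"])
plc.SliceAttributeSet("princeton_iias", "initscript", "vini")
ticket = plc.SliceGetTicket("princeton_iias")

# The slice creation service distributes the ticket to each node's NM.
for node in ["vini1.princeton.vini-veritas.net"]:
    nm = xmlrpc.client.ServerProxy(f"https://{node}:812/")
    nm.SliverCreate(ticket)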

Page 31:

Appendix B. Virtual Network Infrastructure

Virtual Network Infrastructure (VINI)

• Configuring a Virtual Network
A virtual network's topology and routing protocols are specified by means of a configuration file. The basic idea is to define a Node object for each router in your experiment, and a Link object between two Node objects for each virtual link. The next slide shows an example configuration file.

Page 32:

### Specify global defaults

### Slices
$iias = Slice.new(13654, 4801, 'princeton_iias', true, 'XORP')

### Nodes
$pr = Node.new('vini1.princeton.vini-veritas.net', $iias, 'pr')
$ny1 = Node.new('vini1.newy.internet2.vini-veritas.net', $iias, 'ny1')
$ch1 = Node.new('vini1.chic.internet2.vini-veritas.net', $iias, 'ch1')
$ny2 = Node.new('vini2.newy.internet2.vini-veritas.net', $iias, 'ny2')
$ch2 = Node.new('vini2.chic.internet2.vini-veritas.net', $iias, 'ch2')

### Links
$l1 = Link.new($pr, $ny1, 50)
$l2 = Link.new($pr, $ny2, 50)
$l3 = Link.new($ny1, $ch1, 700)
$l4 = Link.new($ny2, $ch2, 700)
$l5 = Link.new($ny1, $ny2, 1)
$l6 = Link.new($ch1, $ch2, 1)

### External destinations
$ch1.add_nat_dests(['66.158.71.0/24'])
$ch2.add_nat_dests(['66.158.71.0/24'])

### OpenVPN servers
$pr.openvpn(true)

### Additional node configuration
### Specifying host info means we don't need to ssh to node
$pr.hostinfo("128.112.139.43", "10.0.1.1", "00:FF:0A:00:01:01")
$ny1.hostinfo("64.57.18.82", "10.0.20.1", "00:FF:0A:00:14:01")
$ch1.hostinfo("64.57.18.18", "10.0.18.1", "00:FF:0A:00:12:01")
$ny2.hostinfo("64.57.18.83", "10.0.21.1", "00:FF:0A:00:15:01")
$ch2.hostinfo("64.57.18.19", "10.0.19.1", "00:FF:0A:00:13:01")

<Example configuration file>

Page 33:

• This is what is frequently referred to as the Programmable Router [GDD-06-09]

• It can process packets in any way it chooses, at layers higher than we traditionally think of as routing

− Transcode packets from one format to another
− Do deep application-level packet inspection
− Terminate flows as an end-point

Appendix C. GENI Programmable Router
C.1 Packet Processor

[Blumenthal 06]
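A minimal sketch (toy framework, not the design in [Blumenthal 06]) of a per-slice handler that goes beyond forwarding: it can rewrite a packet, inspect its payload, or consume it entirely.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    dst_port: int
    payload: bytes

def process(pkt: Packet) -> Optional[Packet]:
    if pkt.dst_port == 8080:
        pkt.payload = pkt.payload.upper()   # "transcode" one format to another
        return pkt
    if b"attack" in pkt.payload:            # toy deep packet inspection
        return None                         # terminate/drop the flow
    return pkt                              # default: forward unchanged

print(process(Packet(8080, b"hello")))      # Packet(dst_port=8080, payload=b'HELLO')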

Page 34:

Here, we categorize different experimenters and how they might use GENI.

• Type 1
Network researchers who want a stable network of standard routers (e.g. IPv4, IPv6 routers with standard features) operating over a topology of their choosing, but using links that are statistically shared with other users and experiments.

• Type 2
Network researchers who want a network of stable, standard routers (e.g. IPv4, IPv6 routers with standard features) operating over a topology with dedicated and private bandwidth (e.g. a topology of circuits), with stable (possibly standard/default) framing.

• Type 3
Network researchers who want to deploy their own packet processing elements and protocols in a private, or shared, slice, running over shared or dedicated bandwidth links within a topology. The experimenter has complete control of how data passes over the network (including framing and packet format).

=> Type 1-3 researchers are the networking research community who will need a stable substrate over which to perform their experiments

Appendix C. GENI Programmable Router
C.2 Summary of Requirements to Support Multiple Layers of Research

[Blumenthal 06]

Page 35:

• Type 4
Network researchers who want specific bandwidths on demand within a topology, e.g. a topology with precise bandwidths between nodes, where bandwidth can be set up and removed dynamically.

• Type 5
Researchers who want access to raw optical wavelengths with no framing, protocol, or transport constraints.

• Type 6
Researchers who want access to raw fiber bandwidth, e.g. for new transmission, modulation, and coding formats.

=> Type 4-6 researchers are the networking physical layer research community who will invent and explore new ways to provide future stable substrates for experiments of Types 1-3

* It seems that the “GENI Programmable Router” should support type 1-3 researchers.
* Type 4-6 researchers may be supported at lower layers.

Appendix C. GENI Programmable Router
C.2 Summary of Requirements to Support Multiple Layers of Research

[Blumenthal 06]

Page 36:

• Criteria
1) Multiple independent routing and forwarding tables for concurrent experiments (for all types)
2) Dedicated bandwidth allocation (for type 2)
3) Programmability at the hardware and/or software level (for type 3)

Appendix C. GENI Programmable Router
C.3 Classification of Virtual Routers
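Criterion 1) is easy to picture with a minimal sketch (toy data structures): each slice gets its own forwarding table, so concurrent experiments can reuse the same prefixes without colliding.

import ipaddress
from typing import Optional

# One independent forwarding table per slice (criterion 1).
tables = {
    "slice_a": {ipaddress.ip_network("10.0.0.0/8"): "eth1"},
    "slice_b": {ipaddress.ip_network("10.0.0.0/8"): "eth2"},  # same prefix, no clash
}

def lookup(slice_id: str, dst: str) -> Optional[str]:
    for prefix, nexthop in tables[slice_id].items():
        if ipaddress.ip_address(dst) in prefix:
            return nexthop
    return None

assert lookup("slice_a", "10.1.2.3") == "eth1"
assert lookup("slice_b", "10.1.2.3") == "eth2"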

<Figure: Classification of virtual routers — the Linux Virtual Router, VINI Overlay Router, Juniper Logical Router, NetFPGA Router, and Open Network Laboratory Extensible Router are positioned by which of criteria 1), 2), and 3) they satisfy, relative to the GENI Programmable Router>

& 3)