
Page 1

ESnet Status and Plans
Internet2 All Hands Meeting, Sept. 28, 2004

William E. Johnston, ESnet Dept. Head and Senior Scientist

R. P. Singh, Federal Project Manager

Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads

Gizella Kapus, Resource Manager

and the ESnet Team

Lawrence Berkeley National Laboratory

Page 2

[Slide map, "ESnet mid-2004": ESnet Provides Full Internet Service to DOE Facilities and Collaborators with High-Speed Access to Major Science Collaborators.

42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), other sponsored (NSF LIGO, NOAA). Sites include TWC, JGI, SNLL, LBNL, SLAC, Yucca Mt, Bechtel, PNNL, LIGO, INEEL, LANL, SNLA, AlliedSignal, Pantex, ARM, KCP, NOAA, OSTI, ORAU, SRS, ORNL, JLab, PPPL, MIT, ANL, BNL, FNAL, Ames, NERSC, NREL, LLNL, GA, SDSC, DOE-ALB, GTN & NNSA, and the DC offices (ANL-DC, INEEL-DC, ORAU-DC, LLNL/LANL-DC, 4xLAB-DC).

ESnet core: Packet-over-SONET optical ring and hubs (SEA, SNV, CHI, NYC, DC, ATL, ELP, ALB), with a Qwest ATM segment, commercial peering points (MAE-E, MAE-W, FIX-W, PAIX-W, PAIX-E, NY-NAP, Equinix), and Abilene high-speed peering points.

International connections: CERN (DOE link), GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), KDDI (Japan), Japan – Russia (BINP), CA*net4, MREN, Netherlands, Russia, StarTap, StarLight/Chi NAP, Taiwan (ASCC, TANet2), Singaren, Australia, France, Switzerland.

Link legend: International (high speed); OC192 (10 Gb/s); OC48 (2.5 Gb/s); Gigabit Ethernet (1 Gb/s); OC12 ATM (622 Mb/s); OC3 (155 Mb/s); T3 (45 Mb/s); T1 (1.5 Mb/s).]

Page 3

ESnet's Peering Infrastructure Connects the DOE Community With its Collaborators

[Slide map: ESnet peering (connections to other networks). Exchange points and peers shown: PAIX-W (26 peers), the NYC hubs (22 peers), SEA hub, SNV hub, ATL hub, MAE-W, FIX-W, MAE-E, NY-NAP, PAIX-E, MAX GPOP, CHI NAP (Distributed 6TAP, 19 peers), EQX-ASH, EQX-SJ, CENIC/SDSC, PNW-GPOP, CalREN2, LANL, TECHnet, Abilene plus 7 universities, GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), KEK, Japan – Russia (BINP), KDDI (Japan), France, CA*net4, CERN, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC, TANet2), Singaren, Australia; peer counts at the remaining points range from 1 to 39. Peer categories: university, international, commercial.]

ESnet provides access to all of the Internet by managing the full complement of Global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work, and is very visible, but it provides full access for DOE.
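Carrying the full complement of global routes (about 150,000 prefixes in 2004) means every forwarding decision is a longest-prefix match against that table. A toy sketch of the lookup, with hypothetical prefixes and next hops, and a linear scan rather than the trie/TCAM a real router uses:

```python
# Toy longest-prefix-match lookup: the operation a router performs
# against its table of global routes. Prefixes and next-hop names are
# hypothetical, for illustration only.
import ipaddress

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "commercial-peer",       # default
    ipaddress.ip_network("198.51.100.0/24"): "abilene-xconnect",
    ipaddress.ip_network("198.51.100.128/25"): "geant-peer",
}

def next_hop(addr: str) -> str:
    """Return the next hop of the most specific matching prefix."""
    ip = ipaddress.ip_address(addr)
    best = max((n for n in ROUTES if ip in n), key=lambda n: n.prefixlen)
    return ROUTES[best]

print(next_hop("198.51.100.200"))  # geant-peer (the /25 beats the /24)
print(next_hop("203.0.113.7"))     # commercial-peer (default route)
```

The point of the sketch is only the selection rule: among all prefixes containing the destination, the longest one wins, so more-specific peering routes override the commercial default.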

Page 4

Major ESnet Changes in FY04

• Dramatic increase in international traffic as major large-scale science experiments start to ramp up

• CERNlink connected at 10 Gb/s

• GEANT (the main European R&E network, analogous to Abilene and ESnet) connected at 2.5 Gb/s

• Abilene-ESnet high-speed cross-connects (2.5 Gb/s and 10 Gb/s)

• In order to meet the Office of Science program needs, a new architectural approach has been developed
  o Science Data Network (a second core network for high-volume traffic)
  o Metropolitan Area Networks (MANs)

Page 5

Organized by Office of Science
Mary Anne Scott, Chair; Dave Bader; Steve Eckstrand; Marvin Frazier; Dale Koelling; Vicky White

Workshop Panel Chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett

August 13-15, 2002

Predictive Drivers for Change

• Focused on science requirements that drive
  o Advanced Network Infrastructure
  o Middleware Research
  o Network Research
  o Network Governance Model

• The requirements for DOE science were developed by the OSC science community representing major DOE science disciplines
  o Climate
  o Spallation Neutron Source
  o Macromolecular Crystallography
  o High Energy Physics
  o Magnetic Fusion Energy Sciences
  o Chemical Sciences
  o Bioinformatics

Available at www.es.net/#research

Page 6

Evolving Quantitative Science Requirements for Networks

Science Area                 | Today End2End Throughput     | 5 years End2End Throughput          | 5-10 Years End2End Throughput        | Remarks
High Energy Physics          | 0.5 Gb/s                     | 100 Gb/s                            | 1000 Gb/s                            | high bulk throughput
Climate (Data & Computation) | 0.5 Gb/s                     | 160-200 Gb/s                        | N x 1000 Gb/s                        | high bulk throughput
SNS NanoScience              | Not yet started              | 1 Gb/s                              | 1000 Gb/s + QoS for control channel  | remote control and time critical throughput
Fusion Energy                | 0.066 Gb/s (500 MB/s burst)  | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s                        | time critical throughput
Astrophysics                 | 0.013 Gb/s (1 TBy/week)      | N*N multicast                       | 1000 Gb/s                            | computational steering and collaborations
Genomics Data & Computation  | 0.091 Gb/s (1 TBy/day)       | 100s of users                       | 1000 Gb/s + QoS for control channel  | high throughput and steering
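The "Today" figures in the table are just sustained data volumes converted to average bit rates. A minimal sketch of that arithmetic, assuming decimal units (1 TBy = 10^12 bytes), which reproduces the table's numbers to within rounding:

```python
# Convert the table's data volumes into average throughput, to show
# where figures like "0.013 Gb/s (1 TBy/week)" come from.

def avg_gbps(bytes_moved: float, seconds: float) -> float:
    """Average throughput in Gb/s for a volume moved over a period."""
    return bytes_moved * 8 / seconds / 1e9

DAY = 86_400
WEEK = 7 * DAY

astro = avg_gbps(1e12, WEEK)     # Astrophysics: 1 TBy/week -> ~0.013 Gb/s
genomics = avg_gbps(1e12, DAY)   # Genomics: 1 TBy/day -> ~0.093 Gb/s
fusion = avg_gbps(500e6, 20)     # Fusion: 500 MB / 20 s burst -> 0.2 Gb/s

print(f"{astro:.3f} {genomics:.3f} {fusion:.3f}")  # 0.013 0.093 0.200
```

(The slide quotes 0.091 Gb/s for genomics and 0.198 Gb/s for fusion, consistent with these values after its own rounding conventions.)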

Page 7

Observed Drivers for Change: ESnet Inter-Sector Traffic Summary, Jan 2003 / Feb 2004: 1.7X overall traffic increase, 1.9X OSC increase. (The international traffic is increasing due to BaBar at SLAC and the LHC tier 1 centers at FNAL and BNL.)

[Diagram: traffic flows between the DOE sites and the commercial, R&E (mostly universities), and international (almost entirely R&E) sectors via the peering points. Green = traffic coming into ESnet, blue = traffic leaving ESnet; each arrow is annotated with its percentage of total ingress or egress traffic (values shown include 21/14%, 17/10%, 9/26%, 14/12%, 10/13%, 4/6%, ~25/18%, and, for DOE collaborator traffic including data, 72/68% and 53/49%).]

Note that more than 90% of the ESnet traffic is OSC traffic.

ESnet Appropriate Use Policy (AUP): All ESnet traffic must originate and/or terminate on an ESnet site (no transit traffic is allowed).

DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.

Page 8

ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20

[Chart: flow endpoints (scale mark: 1 Terabyte/day); legible pairs:
Fermilab (US) – CERN
SLAC (US) – IN2P3 (FR)
SLAC (US) – INFN Padua (IT)
Fermilab (US) – U. Chicago (US)
CEBAF (US) – IN2P3 (FR)
INFN Padua (IT) – SLAC (US)
U. Toronto (CA) – Fermilab (US)
Helmholtz-Karlsruhe (DE) – SLAC (US)
DOE Lab – DOE Lab
DOE Lab – DOE Lab
SLAC (US) – JANET (UK)
Fermilab (US) – JANET (UK)
Argonne (US) – Level3 (US)
Argonne – SURFnet (NL)
IN2P3 (FR) – SLAC (US)
Fermilab (US) – INFN Padua (IT)]

A small number of science users account for a significant fraction of all ESnet traffic.

Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo). As LHC data starts to move, this will increase a lot (200-2000 times).
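The ">50%" concentration claim follows directly from the quoted volumes; a one-line sketch of the arithmetic, using the slide's own figures (~130 of 250 TBy/mo):

```python
# Share of monthly ESnet traffic carried by the top 20 flows,
# using the figures quoted on the slide.

def top_flow_share(top_tby: float, total_tby: float) -> float:
    """Fraction of total monthly traffic carried by the top flows."""
    return top_tby / total_tby

share = top_flow_share(130, 250)
print(f"{share:.0%}")  # 52%
```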

Page 9

ESnet Top 10 Data Flows, 1 week avg., 2004-07-01

[Chart: flow endpoints and weekly volumes:
FNAL (US) – IN2P3 (FR): 2.2 Terabytes
SLAC (US) – INFN Padua (IT): 5.9 Terabytes
U. Toronto (CA) – Fermilab (US): 0.9 Terabytes
SLAC (US) – Helmholtz-Karlsruhe (DE): 0.9 Terabytes
SLAC (US) – IN2P3 (FR): 5.3 Terabytes
CERN – FNAL (US): 1.3 Terabytes
FNAL (US) – U. Nijmegen (NL): 1.0 Terabytes
FNAL (US) – Helmholtz-Karlsruhe (DE): 0.6 Terabytes
FNAL (US) – SDSC (US): 0.6 Terabytes
U. Wisc. (US) – FNAL (US): 0.6 Terabytes]

The traffic is not transient: daily and weekly averages are about the same.

Page 10

ESnet and Abilene

• Abilene and ESnet together provide most of the nation’s transit networking for science

• Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs)

• ESnet connects the DOE Labs

• ESnet and Abilene have recently established high-speed interconnects and cross-network routing

• The goal is that DOE Lab ↔ Univ. connectivity should be as good as Lab ↔ Lab and Univ. ↔ Univ. connectivity; constant monitoring is the key

Page 11

Monitoring DOE Lab ↔ University Connectivity

• Current monitor infrastructure (red) and target infrastructure
• Uniform distribution around ESnet and around Abilene
• Need to set up similar infrastructure with GEANT

[Map: the ESnet and Abilene rings with hubs (SEA, SNV, LA, SDG, ALB, ELP, DEN, HOU, KC, CHI, IND, ATL, NYC, DC) and high-speed ESnet ↔ Internet2/Abilene cross-connects, plus links to Japan, AsiaPac, and CERN/Europe. DOE Labs and universities with monitors are marked; ORNL is shown on both networks. Initial site monitors: SDSC, LBNL, FNAL, NCSU, BNL, OSU.]

Page 12

Initial Monitoring is with OWAMP One-Way Delay Tests

• These measurements are very sensitive – e.g., an NCSU metro DWDM reroute of about 350 microseconds is easily visible

[Plot: one-way delay vs. time on a 41.5-42.0 ms scale, showing the step at the fiber re-route.]

Page 13

Initial Monitor Results (http://measurement.es.net)

Page 14

ESnet, GEANT, and CERNlink

• GEANT plays a role in Europe similar to Abilene and ESnet in the US – it interconnects the European National Research and Education networks, to which the European R&E sites connect

• GEANT currently carries essentially all ESnet international traffic (LHC use of CERNlink to DOE labs is still ramping up)

• GN2 is the second phase of the GEANT project
  o The architecture of GN2 is remarkably similar to the new ESnet Science Data Network + IP core network model

• CERNlink will be the main CERN-to-US LHC data path
  o Both US LHC tier 1 centers are on ESnet (FNAL and BNL)
  o ESnet connects directly to CERNlink at 10 Gb/s
  o The new ESnet architecture (Science Data Network) will accommodate the anticipated 40 Gb/s from LHC to US

Page 15

GEANT and CERNlink

• A recent meeting between ESnet and GEANT produced proposals in a number of areas designed to ensure robust and reliable science data networking between ESnet and Europe

o A US-EU joint engineering task force (“ITechs”) should be formed to coordinate US-EU science data networking

- Will include, e.g., ESnet, Abilene, GEANT, CERN

- Will develop joint operational procedures

o ESnet will collaborate in GEANT development activities to ensure some level of compatibility

- Bandwidth-on-demand (dynamic circuit setup)

- Performance measurement and authentication

- End-to-end QoS and performance enhancement

- Security

o 10 Gb/s connectivity between GEANT and ESnet will be established by mid-2005, and a backup 2.5 Gb/s circuit will be added

Page 16

New ESnet Architecture Needed to Accommodate OSC

• The essential DOE Office of Science requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet

• The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth

[Diagram: the ESnet core ring with hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP); the DOE sites hang off the hubs on individual tail circuits.]

Page 17

A New ESnet Architecture

• Goals
  o full redundant connectivity for every site
  o high-speed access for every site (at least 10 Gb/s)

• Three part strategy
  1) MAN rings provide dual site connectivity and much higher site-to-core bandwidth
  2) A Science Data Network core for
     - multiply connected MAN rings for protection against hub failure
     - expanded capacity for science data
     - a platform for provisioned, guaranteed bandwidth circuits
     - alternate path for production IP traffic
     - carrier circuit and fiber access neutral hubs
  3) An IP core (e.g. the current ESnet core) for high reliability
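The "provisioned, guaranteed bandwidth circuits" in the strategy imply admission control: a circuit request is granted only if every link along its path still has the requested capacity, which is then set aside. A minimal sketch of that bookkeeping, with hypothetical hub names and capacities; this is not ESnet's actual provisioning system (the slides name that work OSCARS):

```python
# Minimal admission-control sketch for guaranteed-bandwidth circuits:
# grant a reservation only if every link on the path has enough spare
# capacity, then subtract it. Hypothetical topology, for illustration.

class CircuitBroker:
    def __init__(self, link_capacity_gbps):
        # Remaining capacity per directed link, e.g. {("SNV","CHI"): 10.0}
        self.free = dict(link_capacity_gbps)

    def reserve(self, path, gbps):
        """path: list of hub names; returns True iff the circuit is granted."""
        links = list(zip(path, path[1:]))
        if any(self.free.get(link, 0.0) < gbps for link in links):
            return False          # some link lacks capacity: reject
        for link in links:
            self.free[link] -= gbps
        return True

# Two hypothetical 10 Gb/s SDN core links.
broker = CircuitBroker({("SNV", "CHI"): 10.0, ("CHI", "NYC"): 10.0})
print(broker.reserve(["SNV", "CHI", "NYC"], 6.0))  # True
print(broker.reserve(["SNV", "CHI", "NYC"], 6.0))  # False: only 4 Gb/s left
```

The design point is that the guarantee is enforced at reservation time, so accepted circuits never compete with each other for link capacity.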

Page 18

A New ESnet Architecture: Science Data Network + IP Core

[Diagram: the ESnet IP core (hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), Atlanta (ATL), and El Paso (ELP)) paired with the ESnet Science Data Network (2nd core). Existing, new, and possible new hubs are marked; Metropolitan Area Rings connect the DOE/OSC Labs, and external links run to GEANT (Europe), CERN, and Asia-Pacific.]

Page 19

ESnet Long-Term Architecture

[Diagram: ESnet Metropolitan Area Networks feed the ESnet SDN core ring and the ESnet IP core ring over one or more independent fiber pairs, with a typical site and ESnet hub shown. Production IP and provisioned circuits are carried over optical channels / lambdas on the SDN; provisioned circuits are tunneled through the IP core via MPLS. Management domains: optical channel (λ) equipment – carrier; 10 Gigabit Ethernet switch(es), core routers, and management/monitoring equipment (with monitors at hubs and sites) – ESnet; site router – site.]

Page 20

ESnet New Architecture, Part 1: MANs

• The MAN architecture is designed to provide
  o At least one redundant path from sites to the ESnet hub
  o Scalable bandwidth options from sites to the ESnet hub
  o The first step in point-to-point provisioned circuits
    - With endpoint authentication, these are private and intrusion-resistant circuits, so they should be able to bypass site firewalls if the endpoints trust each other
    - End-to-end provisioning will initially be provided by a combination of Ethernet switch management of λ paths in the MAN and MPLS paths in the ESnet POS backbone (OSCARS project)
    - Provisioning will initially be done by manual circuit configuration, and on-demand in the future (OSCARS)
  o Cost savings over two or three years, when including the future site needs in increased bandwidth
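The endpoint authentication that justifies bypassing site firewalls can be illustrated with a shared-key challenge-response: each end of a circuit proves it holds that circuit's key before traffic is admitted. This is a hedged sketch only; the per-circuit key, the challenge exchange, and the protocol shape are all hypothetical, since the slides do not specify the actual mechanism.

```python
# Illustrative endpoint-authentication sketch for a provisioned
# circuit: both ends share a per-circuit key and answer a random
# challenge with an HMAC. Hypothetical protocol detail, not a real
# ESnet/OSCARS mechanism.
import hmac
import hashlib
import os

def respond(circuit_key: bytes, challenge: bytes) -> bytes:
    """Endpoint's proof that it holds the circuit key."""
    return hmac.new(circuit_key, challenge, hashlib.sha256).digest()

def verify(circuit_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time check of the peer's response."""
    return hmac.compare_digest(respond(circuit_key, challenge), response)

key = os.urandom(32)        # provisioned out of band, per circuit
challenge = os.urandom(16)  # fresh per connection attempt
print(verify(key, challenge, respond(key, challenge)))             # True
print(verify(os.urandom(32), challenge, respond(key, challenge)))  # False
```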

Page 21

ESnet MAN Architecture – logical (Chicago, e.g.)

[Diagram: the ANL and FNAL site gateway routers, site equipment, and site LANs connect into a MAN ring carrying both the ESnet production IP service and ESnet-managed λ / circuit services (tunneled through the IP backbone). The ring reaches the Qwest hub (ESnet IP core) and StarLight (T320 routers, the ESnet SDN core, international peerings, and the DOE-funded CERN link), with ESnet management and monitoring at the sites.]

Page 22

ESnet Metropolitan Area Network rings (MANs)

• In the near term, MAN rings will be built in the San Francisco and Chicago areas

• In the long term there will likely be MAN rings on Long Island, in the Newport News, VA area, in No. New Mexico, in Idaho-Wyoming, etc.

• San Francisco Bay Area MAN ring progress
  o Feasibility has been demonstrated with an engineering study from CENIC
  o A competitive bid and "best value source selection" methodology will select the ring provider within two months

Page 23

SF Bay Area MAN

[Diagram: an SF Bay Area MAN ring connecting LBNL, NERSC, LLNL, SNLL, SLAC, and the Joint Genome Institute, with a Qwest/ESnet hub and a Level 3 hub. The ring ties into the ESnet IP core ring (toward Chicago and El Paso), the ESnet Science Data Network core, and NLR / UltraScienceNet (toward Seattle and Chicago, and toward LA and San Diego).]

Page 24

Proposed Chicago MAN

• ESnet CHI-HUB: Qwest – NBC Bldg, 455 N Cityfront Plaza Dr, Chicago, IL 60611
• ANL: 9700 S Cass Ave, Lemont, IL 60439
• FNAL: Feynman Computing Center, Batavia, IL 60510
• StarLight: 910 N Lake Shore Dr, Chicago, IL 60611

Page 25

ESnet New Architecture – Part 2: Science Data Network

SDN (second core): Rationale

• Add major points of presence in carrier circuit and fiber access neutral facilities at Sunnyvale, Seattle, San Diego, and Chicago
  o Enable UltraSciNet cross-connect with ESnet
  o Provide access to NLR and other fiber-based networks
  o Allow for more competition in acquiring circuits

• Initial steps toward Science Data Network (SDN)
  o Provide a second, independent path between major northern route hubs
    - Alternate route for ESnet core IP traffic
  o Provide for high-speed paths on the West Coast to reach PNNL, GA, and AsiaPac peering
  o Increase ESnet connectivity to other R&E networks

Page 26

ESnet New Architecture Goal FY05: Science Data Network Phase 1 and SF BA MAN

[Map: the existing ESnet core plus the new SDN core (Phase 1) across the northern route; current and new ESnet hubs (SEA, SNV, SDG, DEN, ALB, ELP, CHI, ATL, DC, NYC); MANs; high-speed cross-connects with Internet2/Abilene; major DOE Office of Science sites. Legend: ESnet IP core (Qwest), ESnet SDN core, lab-supplied, major international (CERN 2x10 Gb/s, Europe, Japan, AsiaPac, UltraSciNet), 2.5 Gb/s, 10 Gb/s, future phases.]

Page 27

ESnet New Architecture Goal FY06: Science Data Network Phase 2 and Chicago MAN

[Map: the FY05 layout with the SDN core extended (Phase 2) and the Chicago MAN added; hubs SEA, SNV, SDG, DEN, ALB, ELP, CHI, ATL, DC, NYC; MANs; high-speed cross-connects with Internet2/Abilene; major DOE Office of Science sites. Legend: ESnet IP core (Qwest), ESnet SDN core, lab-supplied, major international (CERN 3x10 Gb/s, Europe, Japan, AsiaPac, UltraSciNet), 2.5 Gb/s, 10 Gb/s, future phases.]

Page 28

ESnet Beyond FY07

[Map: the production IP ESnet core plus a high-impact science core reaching 40 Gb/s (other links at 10 Gb/s and 30 Gb/s); ESnet IP core (Qwest) hubs and ESnet SDN core hubs (SEA, SNV, SDG, DEN, ALB, ELP, CHI, ATL, DC, NYC); MANs; high-speed cross-connects with Internet2/Abilene; major DOE Office of Science sites. Legend: high-impact science core, lab-supplied, major international (CERN, Europe, Japan, AsiaPac), 2.5 Gb/s, 10 Gb/s, future phases.]