
Page 1

Nuclear Physics Network Requirements Workshop

Washington, DC

Eli Dart, Network Engineer, ESnet Network Engineering Group

May 6, 2008

Energy Sciences Network, Lawrence Berkeley National Laboratory

Networking for the Future of Science

Page 2

Overview

• Logistics

• Network Requirements

  – Sources, Workshop context

• Case Study Example

  – Large Hadron Collider

• Today’s Workshop

  – Structure and Goals

Page 3

Logistics

• Mid-morning break, lunch, afternoon break

• Self-organization for dinner

• Agenda on workshop web page

– http://workshops.es.net/2008/np-net-req/

• Round-table introductions

Page 4

Network Requirements

• Requirements are primary drivers for ESnet – science focused

• Sources of Requirements

  – Office of Science (SC) Program Managers

  – Direct gathering through interaction with science users of the network

    • Examples of recent case studies: Climate Modeling, Large Hadron Collider (LHC), Spallation Neutron Source at ORNL

  – Observation of the network

  – Other Sources (e.g. Laboratory CIOs)

Page 5

Program Office Network Requirements Workshops

• Two workshops per year

• One workshop per program office every 3 years

• Workshop Goals

– Accurately characterize current and future network requirements for Program Office science portfolio

– Collect network requirements from scientists and Program Office

• Workshop structure

– Modeled after the 2002 High Performance Network Planning Workshop conducted by the DOE Office of Science

– Elicit information from managers, scientists and network users regarding usage patterns, science process, instruments and facilities – codify in “Case Studies”

– Synthesize network requirements from the Case Studies

Page 6

Large Hadron Collider at CERN

Page 7

LHC Requirements – Instruments and Facilities

• Large Hadron Collider at CERN

  – Networking requirements of two experiments have been characterized – CMS and Atlas

  – Petabytes of data per year to be distributed (a back-of-the-envelope rate calculation follows this list)

• LHC networking and data volume requirements are unique to date

  – First in a series of DOE science projects with requirements of unprecedented scale

  – Driving ESnet’s near-term bandwidth and architecture requirements

  – These requirements are shared by other very-large-scale projects that are coming online soon (e.g. ITER)

• Tiered data distribution model

  – Tier0 center at CERN processes raw data into event data

  – Tier1 centers receive event data from CERN

    • FNAL is the CMS Tier1 center for the US

    • BNL is the Atlas Tier1 center for the US

    • CERN to US Tier1 data rates: 10 Gbps in 2007, 30-40 Gbps by 2010/11

  – Tier2 and Tier3 sites receive data from Tier1 centers

    • Tier2 and Tier3 sites are end-user analysis facilities

    • Analysis results are sent back to Tier1 and Tier0 centers

    • Tier2 and Tier3 sites are largely universities in the US and Europe
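
The "petabytes per year" scale maps directly onto sustained circuit rates. The sketch below shows the arithmetic; the 10 PB/year volume and the 4x headroom factor are illustrative assumptions, not figures from the LHC case study itself.

```python
# Convert an annual data volume into the average sustained rate needed to move
# it, then add headroom for bursts and catch-up after outages.
# The 10 PB/year volume and 4x headroom factor are illustrative assumptions.

SECONDS_PER_YEAR = 365 * 24 * 3600
BITS_PER_PETABYTE = 8 * 10**15  # using 10^15 bytes per PB

def sustained_gbps(petabytes_per_year: float) -> float:
    """Average rate (Gb/s) needed to move the given annual volume."""
    return petabytes_per_year * BITS_PER_PETABYTE / SECONDS_PER_YEAR / 1e9

volume_pb = 10  # assumed annual volume, PB
avg = sustained_gbps(volume_pb)
print(f"{volume_pb} PB/year -> {avg:.2f} Gb/s average sustained rate")
print(f"with 4x headroom -> {4 * avg:.1f} Gb/s of provisioned capacity")
# ~2.5 Gb/s average; with headroom this lands in the neighborhood of a full
# 10 Gb/s lambda, which is the scale of the CERN-to-Tier1 circuits noted above.
```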

Page 8

LHC Requirements – Process of Science

• Strictly tiered data distribution model is only part of the picture

  – Some Tier2 scientists will require data not available from their local Tier1 center

  – This will generate additional traffic outside the strict tiered data distribution tree

  – CMS Tier2 sites will fetch data from all Tier1 centers in the general case

• Network reliability is critical for the LHC

  – Data rates are so large that buffering capacity is limited

  – If an outage is more than a few hours in duration, the analysis could fall permanently behind

    • Analysis capability is already maximized – little extra headroom

• CMS/Atlas require DOE federated trust for credentials and federation with LCG

• Service guarantees will play a key role

  – Traffic isolation for unfriendly data transport protocols

  – Bandwidth guarantees for deadline scheduling (a short worked example follows this list)

• Several unknowns will require ESnet to be nimble and flexible

  – Tier1 to Tier1, Tier2 to Tier1, and Tier2 to Tier0 data rates could add significant additional requirements for international bandwidth

  – Bandwidth will need to be added once requirements are clarified

  – Drives architectural requirements for scalability, modularity
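
To make the "bandwidth guarantees for deadline scheduling" point concrete, here is a minimal sketch of the underlying arithmetic; the 50 TB dataset and 12-hour deadline are assumed example values, not figures from the LHC case study.

```python
# Minimal deadline-scheduling arithmetic: to move size_tb terabytes before a
# deadline deadline_hours away, the guaranteed rate must be at least
# (size in bits) / (deadline in seconds). Example values are assumptions.

def required_gbps(size_tb: float, deadline_hours: float) -> float:
    """Guaranteed rate in Gb/s needed to move size_tb TB within deadline_hours."""
    bits = size_tb * 1e12 * 8
    seconds = deadline_hours * 3600
    return bits / seconds / 1e9

# e.g. a 50 TB dataset that must arrive within 12 hours:
print(f"{required_gbps(50, 12):.1f} Gb/s guaranteed")  # ~9.3 Gb/s
```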

Page 9

LHC Ongoing Requirements Gathering Process

• ESnet has been an active participant in LHC network planning and operation

  – Active in the LHC network operations working group since its creation

– Jointly organized the US CMS Tier2 networking requirements workshop with Internet2

– Participated in the US Atlas Tier2 networking requirements workshop

– Participated in US Tier3 networking workshops

Page 10

LHC Requirements Identified To Date

• 10 Gbps “light paths” from FNAL and BNL to CERN

  – CERN / USLHCnet will provide 10 Gbps circuits to Starlight, to 32 AoA, NYC (MAN LAN), and between Starlight and NYC

  – 10 Gbps each in the near term, additional lambdas over time (3-4 lambdas each by 2010)

• BNL must communicate with TRIUMF in Vancouver

– This is an example of Tier1 to Tier1 traffic – 1 Gbps in near term

– Circuit is currently up and running

• Additional bandwidth requirements between US Tier1s and European Tier2s

– Served by USLHCnet circuit between New York and Amsterdam

• Reliability

  – 99.95%+ uptime (a small number of hours of downtime per year – see the worked example after this list)

  – Secondary backup paths

  – Tertiary backup paths – virtual circuits through the ESnet, Internet2, and GEANT production networks, and possibly GLIF (Global Lambda Integrated Facility) for transatlantic links

• Tier2 site connectivity

  – 1 to 10 Gbps required

  – Many large Tier2 sites require direct connections to the Tier1 sites – this drives bandwidth and virtual circuit deployment (e.g. UCSD)

• Ability to add bandwidth as additional requirements are clarified
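
As a quick sanity check on the "99.95%+ uptime" target above, a short sketch of the downtime arithmetic (purely illustrative):

```python
# How many hours of downtime per year do common availability targets allow?
# Illustrative arithmetic only; 99.95% is the figure quoted above.
HOURS_PER_YEAR = 365 * 24  # 8760

for availability in (0.999, 0.9995, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} uptime -> {downtime_hours:.1f} hours of downtime per year")

# 99.95% availability allows roughly 4.4 hours of downtime per year, consistent
# with "a small number of hours" and with the concern that an outage longer
# than a few hours could put the LHC analysis permanently behind.
```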

Page 11

Identified US Tier2 Sites

• Atlas (BNL Clients)

– Boston University

– Harvard University

– Indiana University Bloomington

– Langston University

– University of Chicago

– University of New Mexico Alb.

– University of Oklahoma Norman

– University of Texas at Arlington

• Calibration site

– University of Michigan

• CMS (FNAL Clients)

– Caltech

– MIT

– Purdue University

– University of California San Diego

– University of Florida at Gainesville

– University of Nebraska at Lincoln

– University of Wisconsin at Madison

Page 12

LHC ATLAS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth

CERN | BNL | AofA (NYC) | BNL | 10 Gbps | 20-40 Gbps

BNL | U. of Michigan (Calibration) | BNL (LIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

BNL | Boston University and Harvard University (Northeastern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps

BNL | Indiana U. at Bloomington and U. of Chicago (Midwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps

BNL | Langston University, U. Oklahoma Norman, and U. of Texas Arlington (Southwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps

BNL | Tier3 Aggregate | BNL (LIMAN) | Internet2 / NLR Peerings | 5 Gbps | 20 Gbps

BNL | TRIUMF (Canadian ATLAS Tier1) | BNL (LIMAN) | Seattle | 1 Gbps | 5 Gbps

Page 13

LHC CMS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth

CERN | FNAL | Starlight (CHIMAN) | FNAL (CHIMAN) | 10 Gbps | 20-40 Gbps

FNAL | U. of Michigan (Calibration) | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

FNAL | Caltech | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

FNAL | MIT | FNAL (CHIMAN) | AofA (NYC) / Boston | 3 Gbps | 10 Gbps

FNAL | Purdue University | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

FNAL | U. of California at San Diego | FNAL (CHIMAN) | San Diego | 3 Gbps | 10 Gbps

FNAL | U. of Florida at Gainesville | FNAL (CHIMAN) | SOX | 3 Gbps | 10 Gbps

FNAL | U. of Nebraska at Lincoln | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

FNAL | U. of Wisconsin at Madison | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps

FNAL | Tier3 Aggregate | FNAL (CHIMAN) | Internet2 / NLR Peerings | 5 Gbps | 20 Gbps
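
Summing the rows gives a feel for the aggregate requirement at the FNAL end. The sketch below simply re-enters the table as data and totals it; the aggregation (and the choice of the upper end of the 20-40 Gbps CERN range) is an illustration, not part of the original slide.

```python
# Re-enter the CMS bandwidth matrix above as data and sum the per-path
# requirements into an aggregate figure for FNAL.

cms_paths = [
    # (remote site, 2007 Gbps, 2010 Gbps)
    ("CERN",                          10, 40),  # upper end of the 20-40 Gbps range
    ("U. of Michigan (Calibration)",   3, 10),
    ("Caltech",                        3, 10),
    ("MIT",                            3, 10),
    ("Purdue University",              3, 10),
    ("U. of California at San Diego",  3, 10),
    ("U. of Florida at Gainesville",   3, 10),
    ("U. of Nebraska at Lincoln",      3, 10),
    ("U. of Wisconsin at Madison",     3, 10),
    ("Tier3 Aggregate",                5, 20),
]

total_2007 = sum(bw_2007 for _, bw_2007, _ in cms_paths)
total_2010 = sum(bw_2010 for _, _, bw_2010 in cms_paths)
print(f"FNAL aggregate requirement: {total_2007} Gb/s (2007), up to {total_2010} Gb/s (2010)")
# -> 39 Gb/s in 2007 and up to 140 Gb/s by 2010
```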

Page 14

Estimated Aggregate Link Loadings, 2007-08

[Figure: map of the ESnet4 national footprint showing estimated aggregate link loadings for 2007-08. The legend distinguishes the ESnet IP core, ESnet Science Data Network (SDN) core, SDN core / NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, and existing site-supplied circuits, along with IP switch hubs, IP switch/router hubs, SDN switch hubs, and lab sites. Committed bandwidths are labeled in Gb/s (values in the 2.5-13 Gb/s range); unlabeled links are 10 Gb/s.]

Page 15

ESnet4 2007-8 Estimated Bandwidth Commitments

[Figure: ESnet4 network map of estimated bandwidth commitments for 2007-08. The legend distinguishes the ESnet IP core, ESnet Science Data Network (SDN) core, SDN core / NLR links, lab-supplied links, LHC-related links, MAN links, and international IP connections, along with IP switch hubs, IP switch/router hubs, SDN switch hubs, and lab sites. Detail insets show the West Chicago MAN (FNAL, 600 W. Chicago, Starlight, ANL, and USLHCNet circuits to CERN), the Long Island MAN (BNL, 32 AoA NYC, and USLHCNet circuits to CERN), the San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC), and the Newport News area (JLab, ELITE, ODU, MATP). Committed bandwidths are labeled in Gb/s; all circuits and unlabeled links are 10 Gb/s.]

Page 16

Estimated Aggregate Link Loadings, 2010-11

[Figure: map of the ESnet4 national footprint showing estimated aggregate link loadings for 2010-11, using the same legend as the 2007-08 map. Committed bandwidths and link capacities are labeled in Gb/s, with core link capacities in the 30-50 Gb/s range; unlabeled links are 10 Gb/s.]

Page 17

ESnet4 2010-11 Estimated Bandwidth Commitments

[Figure: ESnet4 network map of estimated bandwidth commitments for 2010-11, with Internet2 circuit numbers marked. Detail insets show the West Chicago MAN (FNAL, 600 W. Chicago, Starlight, ANL, and USLHCNet circuits to CERN) and the Long Island MAN (BNL, 32 AoA NYC, and USLHCNet circuits to CERN). Committed bandwidths are labeled in Gb/s; unlabeled links are 10 Gb/s.]

Page 18

2008 NP Workshop

• Goals

– Accurately characterize the current and future network requirements for the NP Program Office’s science portfolio

– Codify the requirements in a document

• The document will contain the case studies and summary matrices

• Structure

– Discussion of ESnet4 architecture and deployment

– NP Science portfolio

– Internet2 (I2) perspective

– Round table discussions of case study documents

• Ensure that networking folks understand the science process, instruments and facilities, collaborations, etc. outlined in case studies

• Provide an opportunity for discussions of synergy, common strategies, etc.

• Interactive discussion rather than formal PowerPoint presentations

– Collaboration services discussion – Wednesday morning

Page 19

Questions?

• Thanks!