TransPAC
High-performance connectivity between the US and the Asia-Pacific region

James Williams <williams@iu.edu>
TransPAC Executive Investigator
Indiana University

February 7, 2002

The TransPAC Project is funded by the US National Science Foundation and the Japan Science and Technology Corporation

Topics to be discussed

• TransPAC background

• Network-enabled science

• TransPAC technical overview

Background

The TransPAC Project provides high-performance network connectivity between the Asia-Pacific region and the United States for the purpose of encouraging educational and scientific collaboration among scientists and researchers in these respective areas.

Specifically, TransPAC connects the Asia-Pacific Advanced Network (APAN) to the US high-performance infrastructure (Abilene, the vBNS and “Fednets”) and to other international high-performance networks (CANARIE and EU networks).

Background 2

Indiana University provides technical and administrative support for TransPAC in the US. KDDI provides similar support for TransPAC in Japan.

The TransPAC Project is jointly funded by the US National Science Foundation and the Japan Science and Technology Corporation.

Network-enabled science and research in the 21st century

• Science and research are becoming progressively more global, with network-enabled worldwide collaborative communities rapidly forming in a broad range of areas

• Many are based around a few expensive – sometimes unique – instruments or distributed complexes of sensors that produce vast amounts of data

• These global communities will carry out research based on this data

Network-enabled science and research in the 21st century

• This data will be analyzed by supercomputers and large computer clusters, visualized with advanced 3-D display technology, and stored in massive data storage systems – all of this will be distributed globally

• Note the tight interaction between computation, storage and networking

Some examples of global science

• NSF-funded Grid Physics Network’s (GriPhyN) need for petascale virtual data grids (i.e., capable of analyzing petabyte datasets) (http://www.griphyn.org/)

• The Large Hadron Collider (LHC) located at CERN (http://lhc.web.cern.ch/lhc/)

• Earthscope Geological and Seismic Collaboratory (http://www.earthscope.org)

• Sloan Digital Sky Survey (SDSS) (http://www.sdss.org/)

Earthscope Geological and Seismic Collaboratory

• Earthscope applies the latest observational, analytic and telecommunications technologies to investigate the structure and evolution of the North American continent and the physical processes controlling earthquakes and volcanic eruptions

• Four components of a network-based instrument collaboratory

– USArray - continental scale seismic array to provide a coherent 3-D image of the lithosphere and deeper Earth

– SAFOD - San Andreas Fault Observatory at Depth
– PBO - Plate Boundary Observatory
– InSAR - synthetic aperture radar images of tectonically active regions

Earthscope - International Connections

• “The U.S. scientific community is poised to implement the Earthscope initiative that would provide urgently needed observations on a global scale.”1

• Some project funding from the International Continental Scientific Drilling Program (ICDP); members include Canada, China, Germany, Japan, Mexico, Poland, and the US

• Array extensions in Canada and Mexico
• Large ground motion sensor array in Japan
• Taiwan sensor array modeled after US efforts

1 Testimony before Congress, 3/21/2001, by M. Miller, U. Central Washington

Data distribution from the Large Hadron Collider (LHC) at CERN

[Tiered data-distribution diagram: the experiment feeds the online system at ~PByte/sec; the online system feeds the offline farm at the CERN Computer Center (Tier 0+1, ~25 TIPS) at ~100 MBytes/sec; Tier 1 centers (FNAL, IN2P3, INFN, RAL) connect at ~2.5 Gbits/sec; Tier 2 centers at ~0.6-2.5 Gbps; Tier 3 institutes (~0.25 TIPS each) at 100-1000 Mbits/sec; Tier 4 workstations sit at the edge.]

Source: Harvey Newman

GriPhyN iVDGL Map, circa 2002-2003 (US, UK, Italy, France, Japan, Australia)

[Map legend: Tier0/1 facility, Tier2 facility, Tier3 facility; 10 Gbps link, 2.5 Gbps link, 622 Mbps link, other link]

International Virtual-Data Grid Laboratory
• Conduct Data Grid tests “at scale”
• Develop common Grid infrastructure
• National, international scale Data Grid tests, operations (GGOC)
• Components: Tier1, selected Tier2 and Tier3 sites; Distributed Terascale Facility (DTF); 0.6-10 Gbps networks: US, Europe, transoceanic

http://www.ivdgl.org and http://igoc.iu.edu
Source: H. Newman

Data rates for some selected projects

  Project                                        Daily volume               Sustained rate
  Large Hadron Collider (CERN)                   8.4 TB/day                 100 MB/sec
  Atacama Large Millimeter Array (Chile)         520 GB/day                 6 MB/sec
  LIGO1 (Hanford, WA & Livingston, LA)           500 GB/day - 2.6 TB/day    30 MB/sec (raw); 6 MB/sec (processed)
  Amanda II Neutrino Detector (South Pole)       6-8 GB/day                 70-93 KB/sec
  IceCube Neutrino Detector (South Pole)         200-500 GB/day             2.3-6 MB/sec

1 Data rates are for these two instruments only. A minimum of three are required for spatial resolution.
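The daily volumes in this table follow directly from the sustained rates. The short Python sketch below (not part of the original presentation) roughly reproduces them and also estimates how long one day of each data stream would take to move over a single 622 Mbps TransPAC circuit. The rates and the 622 Mbps figure come from these slides; the unit conventions (decimal TB, no protocol overhead) and all names in the code are our own assumptions.

```python
# Rough, illustrative check of the table above: daily volumes implied by the
# sustained rates, and how long one day's data would take over a single
# 622 Mbps TransPAC circuit (decimal units, protocol overhead ignored).
# Rates and the 622 Mbps figure are from the slides; everything else is assumed.

SECONDS_PER_DAY = 86_400

# Sustained rates in MB/sec, from the table (single values chosen for ranges).
projects_mb_per_sec = {
    "Large Hadron Collider (CERN)": 100,
    "Atacama Large Millimeter Array": 6,
    "LIGO (raw)": 30,
    "IceCube (upper bound)": 6,
}

TRANSPAC_LINK_MBPS = 622                   # one OC-12 circuit, in megabits/sec
LINK_MB_PER_SEC = TRANSPAC_LINK_MBPS / 8   # about 77.75 MB/sec

for name, rate in projects_mb_per_sec.items():
    daily_tb = rate * SECONDS_PER_DAY / 1_000_000          # MB/day -> TB/day
    hours_on_oc12 = rate * SECONDS_PER_DAY / LINK_MB_PER_SEC / 3600
    print(f"{name:32s} ~{daily_tb:6.2f} TB/day; "
          f"~{hours_on_oc12:5.1f} h per day's data on one 622 Mbps link")
```

As a rough calculation, a 100 MB/sec instrument stream is about 800 Mbits/sec, already more than a single 622 Mbps circuit can carry, which is exactly the challenge stated on the next slide.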

Our challenge is to design, build and manage the reliable, stable networks needed for scientists to collect and analyze their data globally.

TransPAC Technical Overview

• What did the network look like before October 2001?
• What does the network look like after October 2001?
   – New OC-12 POS circuit from Tokyo to Seattle
   – New OC-12 ATM circuit from Tokyo to Chicago
• PVC Configuration of Southern ATM Route
• BGP relationship and traffic engineering

What did the network look like before October 2001?

Prior to 15 October 2001, the TransPAC network consisted of 155Mbps ATM service from Tokyo to the STAR TAP in Chicago

What does the network look like after October 2001?

On 15 October 2001, TransPAC was upgraded to 1.244Gbps. The new TransPAC network has dual 622Mbps connections from Tokyo to Seattle (Pacific Wave Connection Point) [POS] and to Chicago (StarLight Connection Point) [ATM].

The Tokyo-Seattle link is supplied by Teleglobe.

The Tokyo-Chicago link is supplied by KDDI.

New OC-12 POS circuit from Tokyo to Seattle

• Trans-Pacific and west-coast circuit provided by Teleglobe
• Terminates into a Juniper M10 at the Pacific Northwest Gigapop

Weekly traffic graph from 1/7/02 – 1/14/02

New OC-12 ATM circuit from Tokyo to Chicago

• Trans-Pacific link provided by KDDI
• Contains multiple PVCs to provide direct peering with US HPRENs

Weekly traffic graph from 1/7/02 – 1/14/02

PVC Configuration of Southern ATM Route

Abilene BGP relationship and traffic engineering

• Abilene receives APAN routes from three sources (in order of preference):
   – From the colocated TransPAC router at PNWG to the Seattle core node
   – From the direct ATM PVC to the Indianapolis core node
   – From the Chicago node’s peering with STARTAP
• APAN advertises community 11537:40 on the direct PVC and through STARTAP
• Abilene’s route maps, in turn, localpref these routes at 40
• The default localpref for ITN peers will remain at 100 in Seattle
• APAN localprefs Abilene routes from Seattle higher than the other two connections

(A simplified sketch of this community/localpref scheme follows below.)
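To make the preference ordering above concrete, here is a minimal sketch of the community-to-localpref logic, written in Python rather than router configuration. The community value 11537:40 and the localpref values 40 and 100 come from the slide; the prefix, the function names and the data structures are illustrative placeholders, not Abilene's actual configuration.

```python
# Simplified model of the community-driven localpref scheme described above.
# Not Abilene's actual router configuration, only an illustration of the logic.

from dataclasses import dataclass, field

DEFAULT_LOCALPREF = 100                    # default for ITN peers (per the slide)
COMMUNITY_TO_LOCALPREF = {"11537:40": 40}  # routes tagged by APAN get localpref 40

@dataclass
class Route:
    prefix: str
    learned_from: str                      # which Abilene node heard the route
    communities: set = field(default_factory=set)
    localpref: int = DEFAULT_LOCALPREF

def apply_route_map(route: Route) -> Route:
    """Lower localpref when a matching community is attached; otherwise keep default."""
    for community, pref in COMMUNITY_TO_LOCALPREF.items():
        if community in route.communities:
            route.localpref = pref
    return route

def best_path(routes):
    """Prefer the highest localpref (all other BGP tie-breakers omitted)."""
    return max(routes, key=lambda r: r.localpref)

# One hypothetical APAN prefix (documentation range) heard over the three paths.
candidates = [apply_route_map(r) for r in (
    Route("198.51.100.0/24", "Seattle core node (TransPAC router at PNWG)"),
    Route("198.51.100.0/24", "Indianapolis core node (direct ATM PVC)", {"11537:40"}),
    Route("198.51.100.0/24", "Chicago node (STARTAP peering)", {"11537:40"}),
)]

chosen = best_path(candidates)
print(f"Preferred path: {chosen.learned_from} (localpref {chosen.localpref})")
# -> the Seattle path, which keeps the default localpref of 100
```

In practice the same effect is achieved with BGP route maps that match on the community and set local-preference, as the slide describes; the sketch simply makes the resulting path ordering explicit.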

Questions and Comments

Useful Links:

Main TransPAC Web page: http://www.transpac.org

TransPAC NOC Web page: http://noc.transpac.org

TransPAC traffic graphs: http://loadrunner.uits.iu.edu/mrtg-monitors/transpac

TransPAC router proxy: http://loadrunner.uits.iu.edu/~routerproxy/transpac

Contact Information:
• Administrative: Jim Williams <william@indiana.edu>
• Technical: Chris Robb <chrobb@indiana.edu>
• 24x7 NOC: (317) 278-6630 <noc@transpac.org>
