
Page 1

A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging End-User Laboratories to Data-Intensive Sources

Presentation by Larry Smarr to the NSF Campus Bridging Workshop
April 7, 2010
University Place Conference Center, Indianapolis, IN

Philip Papadopoulos, SDSC
Larry Smarr, Calit2
University of California, San Diego

Page 2

Academic Research “OptIPlatform” Cyberinfrastructure: An End-to-End 10Gbps Lightpath Cloud

[Diagram] An end-to-end 10Gbps lightpath cloud linking: National LambdaRail; Campus Optical Switch; Data Repositories & Clusters; HPC; HD/4k Video Images; HD/4k Video Cams; End User OptIPortal; 10G Lightpaths; HD/4k Telepresence; Instruments.
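As an illustration of the end-to-end idea, here is a minimal Python sketch that models the slide's components as a reachability graph; the node names mirror the diagram labels, but the specific wiring is an assumption for illustration only:

    # Minimal sketch of the OptIPlatform end-to-end lightpath idea.
    # Node names mirror the diagram labels; the edges are assumed
    # for illustration and do not reflect the actual wiring.
    from collections import deque

    TOPOLOGY = {
        "End User OptIPortal": ["Campus Optical Switch"],
        "HD/4k Video Cams": ["Campus Optical Switch"],
        "HD/4k Telepresence": ["Campus Optical Switch"],
        "Instruments": ["Campus Optical Switch"],
        "Campus Optical Switch": ["National LambdaRail"],
        "National LambdaRail": ["Data Repositories & Clusters", "HPC"],
    }

    def reachable(src: str, dst: str) -> bool:
        """Breadth-first search over the 10G lightpath graph."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in TOPOLOGY.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # An end-user lab should reach HPC over the lightpath cloud.
    assert reachable("End User OptIPortal", "HPC")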

Page 3

“Blueprint for the Digital University”: Report of the UCSD Research Cyberinfrastructure Design Team

• Focus on Data Storage and Data Curation
– These Become the Centralized Components
– Other Common Elements “Plug In” (see the sketch below)

research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf

April 24, 2009
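As a rough illustration of the “plug in” pattern the report describes (centralized storage and curation, with other elements attaching to it), here is a minimal Python sketch; all class and method names are hypothetical, not taken from the report:

    # Hypothetical sketch of the RCIDT "plug in" pattern: storage and
    # curation form the centralized core; other campus elements
    # register against it. Names are illustrative only.
    class CentralDataStore:
        def __init__(self):
            self.plugins = {}

        def register(self, name, service):
            # Compute, visualization, instruments, etc. plug in here.
            self.plugins[name] = service

        def curate(self, dataset):
            # The core drives curation; each plug-in processes the data.
            for service in self.plugins.values():
                service.process(dataset)

    class ArchivePlugin:
        def process(self, dataset):
            print(f"archiving {dataset}")

    store = CentralDataStore()
    store.register("archive", ArchivePlugin())
    store.curate("instrument_run_001")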

Page 4

Campus Bridging Preparations Needed to Accept CENIC CalREN Handoff to Campus

Source: Jim Dolgonas, CENIC

Page 5

Current UCSD Prototype Optical Core: Bridging End-Users to CENIC L1, L2, L3 Services

Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)

Quartzite Network MRI #CNS-0421555; OptIPuter #ANI-0225642

[Diagram] Hybrid switching core built from Lucent, Glimmerglass, and Force10 switches.

Endpoints:

>= 60 endpoints at 10 GigE

>= 32 Packet switched

>= 32 Switched wavelengths

>= 300 Connected endpoints

Approximately 0.5 Tbit/s arrives at the “optical” center of campus. Switching is a hybrid of packet, lambda, and circuit (OOO and packet switches).
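A quick back-of-the-envelope check of that aggregate figure, a minimal Python sketch using the endpoint counts from this slide:

    # Sanity check: 60 endpoints at 10 GigE each.
    endpoints = 60          # ">= 60 endpoints at 10 GigE"
    gbps_each = 10
    total_tbps = endpoints * gbps_each / 1000
    print(total_tbps, "Tbit/s")   # 0.6 Tbit/s, consistent with ~0.5 Tbit/s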

Page 6

Calit2 Sunlight Optical Exchange Contains Quartzite 3-Level Switch

10:45 am, Feb. 21, 2008

Page 7

UCSD Campus Investment in Fiber and Networks Enables High Performance Campus Bridging CI

DataOasis (Central) Storage

OptIPortal Tile Display Wall

Campus Lab Cluster

Digital Data Collections

Triton – Petadata Analysis

Gordon – HPC System

Cluster Condo

Scientific Instruments

N x 10GbE to CENIC, NLR, I2 DCN

Source: Philip Papadopoulos, SDSC, UCSD

Page 8

Rapid Evolution of 10GbE Port Prices Makes Campus-Scale 10Gbps CI Affordable

2005: $80K/port, Chiaro (60 max)
2007: $5K/port, Force10 (40 max)
2009: $500/port, Arista (48 ports); ~$1,000/port (300+ max)
2010: $400/port, Arista (48 ports)

• Port Pricing is Falling
• Density is Rising – Dramatically
• Cost of 10GbE Approaching Cluster HPC Interconnects

Source: Philip Papadopoulos, SDSC, UCSD
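To quantify the trend, a minimal back-of-the-envelope Python sketch (using the per-port prices from the list above) of the compound annual rate of price decline:

    # Compound annual decline of 10GbE port prices, 2005-2010,
    # using the per-port figures from the list above.
    prices = {2005: 80_000, 2010: 400}   # dollars per port
    years = 2010 - 2005
    cagr = (prices[2010] / prices[2005]) ** (1 / years) - 1
    print(f"{cagr:.0%} per year")        # about -65% per year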