The OptIPuter Project – Removing Bandwidth as an Obstacle
In Data Intensive Sciences
Opening Remarks
OptIPuter Team Meeting
University of California, San Diego
February 6, 2003
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technologies
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
The Move to Data-Intensive Science & Engineering: e-Science Community Resources
ATLAS
Sloan Digital Sky Survey
LHC
ALMA
Why Optical Networks Are Emerging as the 21st Century Driver for the Grid
Scientific American, January 2001
Parallel Lambdas Will Drive This Decade the Way Parallel Processors Drove the 1990s
[Diagram: apps and middleware on top of a control plane linking clusters, dynamically allocated lightpaths, switch fabrics, and physical monitoring]
A LambdaGrid Will Be the Backbone for an e-Science Network
Source: Joe Mambretti, NU
The Biomedical Informatics Research Network: a Multi-Scale Brain Imaging Federated Repository
BIRN Test-beds: Multiscale Mouse Models of Disease, Human Brain Morphometrics, and FIRST BIRN (10-Site Project for fMRIs of Schizophrenics)
NIH Plans to Expand to Other Organs
and Many Laboratories
GEON’s Data Grid Team Has Strong Overlap with BIRN and OptIPuter
• Learning From the BIRN Project – The GEON Grid:
– Heterogeneous Networks, Compute Nodes, Storage
– Deploy Grid and Cluster Software Across GEON
• Peer-to-Peer Information Fabric for Sharing:
– Data, Tools, and Compute Resources
Source: Chaitan Baru, SDSC, Cal-(IT)2
Two Science “Testbeds”: Broad Range of Geoscience Data Sets
NSF ITR Grant $11.25M
2002-2007
Data Intensive Scientific Applications Require Experimental Optical Networks
• Large Data Challenges in Neuro and Earth Sciences
– Each Data Object Is 3D and Gigabytes
– Data Are Generated and Stored in Distributed Archives
– Research Is Carried Out on a Federated Repository
• Requirements
– Computing: PC Clusters
– Communications: Dedicated Lambdas Over Fiber
– Data: Large Peer-to-Peer Lambda-Attached Storage
– Visualization: Collaborative Volume Algorithms
• Response
– The OptIPuter Research Project
[Diagram: node design with VLIW/RISC cores (40 GFLOPS at 10 GHz), 8 MB second-level caches, 240 GB/s (24-byte-wide) paths per core, a crossbar to highly interleaved DRAM at 640 GB/s, cache coherence hardware, and a multi-lambda optical network interface]
OptIPuter Inspiration – Node of a 2009 PetaFLOPS Supercomputer
Updated From Steve Wallach, Supercomputing 2000 Keynote
Global Architecture of a 2009 COTS PetaFLOPS System
[Diagram: multi-die, multi-processor boxes (128 die per box, 4 CPUs per die), numbered 1–64, linked through an all-optical switch to LAN/WAN I/O at 5 Terabits/s; 10 meters of fiber = 50 nanosec delay; systems become GRID enabled]
Source: Steve Wallach, Supercomputing 2000 Keynote
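The slide's "10 meters = 50 nanosec delay" figure follows directly from the signal velocity in optical fiber, roughly 2/3 the vacuum speed of light. A quick check (the function name and the refractive-index value of 1.5 are illustrative assumptions, not from the talk):

```python
# Propagation delay over optical fiber, assuming refractive index n ~ 1.5.
C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
N_FIBER = 1.5             # typical refractive index of silica fiber
v = C_VACUUM / N_FIBER    # signal velocity in fiber: 2e8 m/s

def fiber_delay_ns(meters: float) -> float:
    """One-way propagation delay in nanoseconds."""
    return meters / v * 1e9

print(fiber_delay_ns(10))            # 10 m inside a machine room -> 50.0 ns
print(fiber_delay_ns(44 * 1609.34))  # metro-scale fiber run (~44 miles)
```

At metro scale the same arithmetic gives delays in the hundreds of microseconds, which is why the talk treats campus and metro lightpaths so differently from machine-room links.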
Systems Become GRID Enabled
From SuperComputers to SuperNetworks – Changing the Grid Design Point
• The TeraGrid Is Optimized for Computing
– 1024-Node IA-64 Linux Cluster
– Assume 1 GigE per Node = 1 Terabit/s I/O
– Grid Optical Connection 4x10Gig Lambdas = 40 Gigabit/s
– Optical Connections Are Only 4% Bisection Bandwidth
• The OptIPuter Is Optimized for Bandwidth
– 32-Node IA-64 Linux Cluster
– Assume 1 GigE per Processor = 32 Gigabit/s I/O
– Grid Optical Connection 4x10GigE = 40 Gigabit/s
– Optical Connections Are Over 100% Bisection Bandwidth
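The contrast above is simple arithmetic. A sketch with the slide's own numbers (the function name is illustrative):

```python
# Bisection-bandwidth comparison from the slide (all figures in Gb/s).
def bisection_fraction(nodes: int, nic_gbps: float, wan_lambdas: int,
                       lambda_gbps: float = 10.0) -> float:
    """Ratio of WAN optical capacity to aggregate cluster I/O."""
    cluster_io = nodes * nic_gbps      # total I/O the cluster can source
    wan = wan_lambdas * lambda_gbps    # dedicated lambdas to the wide area
    return wan / cluster_io

# TeraGrid: 1024 nodes x 1 GigE against 4 x 10 Gb lambdas
print(bisection_fraction(1024, 1.0, 4))   # 0.0390625, i.e. ~4%
# OptIPuter: 32 nodes x 1 GigE against the same 4 x 10 Gb lambdas
print(bisection_fraction(32, 1.0, 4))     # 1.25, i.e. over 100%
```

The OptIPuter design point simply shrinks the cluster until the wide-area lambdas exceed the cluster's entire I/O capacity.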
Convergence of Networking Fabrics
• Today's Computer Room
– Router for External Communications (WAN)
– Ethernet Switch for Internal Networking (LAN)
– Fibre Channel for Internal Networked Storage (SAN)
• Tomorrow's Grid Room
– A Unified Architecture of LAN/WAN/SAN Switching
– More Cost Effective: One Network Element vs. Many
– One Sphere of Scalability
– ALL Resources Are GRID Enabled
– Layer 3 Switching and Addressing Throughout
Source: Steve Wallach, Chiaro Networks
The UCSD OptIPuter Deployment
[Campus map, ½-mile scale: Phase I (Fall '02) and Phase II (2003) sites, including SIO, SDSC, the SDSC Annex (collocation point), CRCA, Physical Sciences/Keck, the School of Medicine, JSOE, the Preuss School, Sixth College, and Node M]
The OptIPuter Experimental UCSD Campus Optical Network
[Campus map: Phase I (Fall '02) and Phase II (2003) fiber joining Earth Sciences, SDSC, the SDSC Annex (collocation point with the Chiaro Router), Arts, Chemistry, Medicine, Engineering, a High School, and an Undergraduate College, with a Production Router link to CENIC]
Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2
Metro Optically Linked Visualization Walls with Industrial Partners Set Stage for Federal Grant
• Driven by SensorNets Data
– Real-Time Seismic
– Environmental Monitoring
– Distributed Collaboration
– Emergency Response
• Linked Control Rooms at UCSD and SDSU over 44 Miles of Cox Fiber
– Dedication March 4, 2002
– Partners: Cox, Panoram, SAIC, SGI, IBM, TeraBurst Networks, SD Telecom Council
National Light Rail – Serving Very High-End Experimental and Research Applications
• Extension of the CalREN-XD Dark Fiber Network
– Serves Network Researchers in California Research Institutions
– Four UC Institutes, USC/ISI, Stanford, and Caltech
– 10Gb Wavelengths (OC-192c or 10G LAN PHY) – Dark Fiber
– Point-to-Point, Point-to-MultiPoint 1G Ethernet Possible
• NLR Is a Dark Fiber National Footprint
– 4 10Gb Wavelengths Initially
– Capable of 40 10Gb Wavelengths at Build-Out
– Partnership Model
Source: John Silvester, Dave Reese, Tom West – CENIC
National Light Rail Footprint Layer 1 Topology
[Map of NLR sites: SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OGD, DEN, SAC, KAN, DAL, STR, STH, NAS, ATL, JAC, CHI, CLE, PIT, WAL, WDC, NYC, BOS, RAL]
15808 Terminal, Regen or OADM site (OpAmp sites not shown)
Fiber route
John Silvester, Dave Reese, Tom West-CENIC
Calient Lambda Switches Now Installed at StarLight and NetherLight
[Diagram (GigE = Gigabit Ethernet): at StarLight, an 8-processor cluster (8 GigE) and a 16-processor cluster (16 GigE) feed a switch/router and a 128x128 MEMS optical switch; a “groomer” at StarLight aggregates 8 GigE and 2 GigE onto an OC-192 (10 Gbps) link to NetherLight, where a 64x64 MEMS optical switch, a “groomer,” and a switch/router serve a 16-processor cluster; control plane and data plane are shown separately]
Source: Maxine Brown
Amplified Collaboration Environments
[Photos: a collaborative tiled display, AccessGrid multisite video conferencing, a collaborative passive stereo display, a collaborative touch-screen whiteboard, and wireless laptops & Tablet PCs to steer the displays]
Source: Jason Leigh
OptIPuter Software Research
• Near-Term Goals: Build Software to Support Applications with Traditional Models
– High-Speed IP Protocol Variations (RBUDP, SABUL, …)
– Switch Control Software for DWDM Management and Dynamic Setup
– Distributed Configuration Management for OptIPuter Systems
• Long-Term Goals
– A System Model Supporting Grid, Single-System, and Multi-System Views
– Architectures That Can Harness High-Speed DWDM and Exploit Flexible Dispersion of Data and Computation
– New Communication Abstractions & Data Services: Make Lambda-Based Communication Easily Usable; Use DWDM to Enable a Uniform Performance View of Storage
Source: Andrew Chien, UCSD
Photonic Data Services & OptIPuter
1. Physical
2. Photonic Path Services – ODIN, THOR, … (NW)
3. IP
4. Transport – TCP, UDP, SABUL, … (USC, UIC)
5a. Storage (UCSD)
5b. Data Services – SOAP, DWTP (UIC/LAC)
6. Data-Intensive Applications (UCI)
Source: Robert Grossman, UIC/LAC
OptIPuter is Exploring Quanta as a High Performance Middleware
• Quanta Is a High-Performance Networking Toolkit / API
• Quanta Uses Reliable Blast UDP:
– Assumes an Over-Provisioned or Dedicated Network
– Excellent for Photonic Networks – Don't Try This on the Commodity Internet!
– It Is Fast and Very Predictable – We Provide an Equation to Predict Performance
– It Is Most Suited for Transferring Very Large Payloads
• RBUDP, SABUL, and Tsunami Are All Similar Protocols That Use UDP for Bulk Data Transfer
Source: Jason Leigh, UIC
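The Reliable Blast UDP idea described above can be reduced to a loop: blast fixed-size UDP datagrams as fast as possible, then exchange an error map of missing sequence numbers over a reliable channel and re-blast until nothing is missing. A minimal loopback sketch of that pattern follows; the port handling, packet format, and function names are illustrative assumptions, not the Quanta API:

```python
# Sketch of the RBUDP pattern: UDP blast, then retransmit from an error
# map until the receiver has every packet. Loopback only; in RBUDP the
# error map travels over a separate TCP control channel.
import socket

PAYLOAD = 1024   # bytes of data per datagram (illustrative)
NPKTS = 64       # datagrams per "blast"

def blast(sock, addr, data, wanted):
    """Send the requested sequence numbers, each tagged with a 4-byte seq."""
    for seq in wanted:
        chunk = data[seq * PAYLOAD:(seq + 1) * PAYLOAD]
        sock.sendto(seq.to_bytes(4, "big") + chunk, addr)

def rbudp_transfer(data):
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
    recv.bind(("127.0.0.1", 0))
    recv.settimeout(0.2)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = recv.getsockname()

    got = {}
    missing = list(range(NPKTS))
    while missing:
        blast(send, addr, data, missing)        # UDP blast of missing packets
        try:
            while len(got) < NPKTS:
                pkt, _ = recv.recvfrom(4 + PAYLOAD)
                got[int.from_bytes(pkt[:4], "big")] = pkt[4:]
        except socket.timeout:
            pass                                 # compute the error map below
        missing = [s for s in range(NPKTS) if s not in got]
    send.close()
    recv.close()
    return b"".join(got[s] for s in range(NPKTS))

data = bytes(range(256)) * (PAYLOAD * NPKTS // 256)
restored = rbudp_transfer(data)
assert restored == data
```

Note there is no congestion control at all, which is exactly why the slide warns against running this on the commodity Internet: the sender fills the pipe on the assumption that the lightpath is dedicated.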
XCP Is a New Congestion Control Scheme Which Is Good for Gigabit Flows
• Better Than TCP
– Almost Never Drops Packets
– Converges to Available Bandwidth Very Quickly, in ~1 Round-Trip Time
– Fair over Large Variations in Flow Bandwidth and RTT
• Supports Existing TCP Semantics
– Replaces Only Congestion Control; Reliability Unchanged
– No Change to the Application/Network API
• Status
– To Date: Simulations and SIGCOMM Paper (MIT)
– See Dina Katabi, Mark Handley, and Charles Rohrs, "Congestion Control for High Bandwidth-Delay Product Networks," ACM SIGCOMM 2002, August 2002. http://ana.lcs.mit.edu/dina/XCP/
– Current: Developing a Protocol Implementation; Extending Simulations (ISI)
Source: Aaron Falk, Joe Bannister, ISI USC
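The fast convergence claimed above comes from XCP's router-side efficiency controller, which each control interval computes aggregate feedback proportional to the spare bandwidth minus a penalty for standing queue, and hands it to the flows. A toy single-flow illustration (constants from the Katabi et al. paper; the fairness controller and per-packet feedback mechanics are omitted, so this is a sketch, not the protocol):

```python
# Toy XCP efficiency controller: phi = alpha * d * S - beta * Q, where
# S is spare bandwidth, Q the persistent queue, d the mean RTT.
ALPHA, BETA = 0.4, 0.226      # stability constants from the XCP paper
CAPACITY = 10_000.0           # link capacity, packets per second
RTT = 0.1                     # control interval = mean RTT, seconds

rate, queue = 100.0, 0.0      # one flow, starting far below capacity
for step in range(5):
    spare = CAPACITY - rate                    # S: unused bandwidth
    phi = ALPHA * RTT * spare - BETA * queue   # aggregate feedback (packets)
    rate += phi / RTT                          # single flow absorbs all of it
    print(f"after RTT {step + 1}: rate = {rate:.0f} pkt/s")
```

With an empty queue the gap to capacity shrinks by a factor of 0.6 every RTT, so the flow climbs from 100 pkt/s to over 90% of the 10,000 pkt/s capacity in five RTTs, with no packet loss needed as a signal.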
Multi-Lambda Security Research
• Security Is Frequently Defined Through Three Measures:
– Integrity, Confidentiality, and Reliability (“Uptime”)
• Can These Measures Be Enhanced by Routing Transmissions over Multiple Lambdas of Light?
• Can Confidentiality Be Improved By Dividing The Transmission Over Multiple Lambdas And Using “Cheap” Encryption?
• Can Integrity Be Ensured Or Reliability Be Improved Through Sending Redundant Transmissions And Comparing?
Source: Goodrich, Karin
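One concrete reading of the confidentiality question above: if a message is XOR-split into n shares and each share rides its own lambda, an eavesdropper tapping any single wavelength sees only uniform random bytes. A sketch of that striping idea (not the project's actual scheme; function names are illustrative):

```python
# XOR secret sharing across n "lambdas": any n-1 shares are uniformly
# random; only all n together reconstruct the message.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(message: bytes, n: int) -> list[bytes]:
    """n-1 random pads plus one share that XORs back to the message."""
    pads = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = message
    for p in pads:
        last = xor_bytes(last, p)
    return pads + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

msg = b"federated brain imaging volume"
shares = split(msg, 4)          # one share per lambda
assert combine(shares) == msg   # all four lambdas needed to recover it
```

The "cheap encryption" angle is that the per-share operation is a single XOR; the redundant-transmission questions map similarly to sending copies on several lambdas and voting.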
Research on Developing an Integrated Control Plane
[Diagram: an integrated control plane over multiple user data planes – optical lambda switching, logical label switching, optical burst switching, and lambda inverse multiplexing – carrying megabit, gigabit, bursty, and tera/peta streams]
Source: Oliver Yu, UIC
3D Applications: Fast Polygon and Volume Rendering with Stereographics
– GeoWall, Earth Science: GeoFusion GeoMatrix Toolkit
– Underground Earth Science: Rob Mellors and Eric Frost, SDSU
– SDSC Volume Explorer, Neuroscience/Anatomy: Dave Nadeau, SDSC, BIRN
– Visible Human Project: NLM, Brooks AFB, SDSC Volume Explorer
OptIPuter Transforms Individual Laboratory Visualization, Computation, & Analysis Facilities
The Preuss School UCSD OptIPuter Facility