
Page 1:

www.openfabrics.org

Distributed Visualization and Data Resources Enabling Remote Interaction and Analysis

Scott A. Friedman

UCLA Institute for Digital Research and Education

Page 2:

Motivating Challenge

Remote access to visualization cluster
• Limited resources ($) for development
• Re-use as much existing IB (verbs) code as possible (iWarp)
• Leverage existing 10G infrastructure (UCLA, CENIC)
• Use layer 3
• Pacing critical for consistent user interaction

Remote access to researcher's data
• Same limitations as above
• High-performance retrieval of time step data
• Synchronized to a modulo of the rendering rate
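Pacing here means delivering frames at a steady cadence rather than in bursts, so interaction stays smooth. A minimal sketch of one common approach, a sleep-until-deadline pacing loop (the function and names are illustrative, not from the original system):

```python
import time

def paced_send(frames, rate_hz, send):
    """Send frames at a fixed cadence by sleeping until each absolute
    deadline. Sleeping to a deadline (rather than for a fixed interval)
    keeps per-frame jitter from accumulating into long-term drift."""
    interval = 1.0 / rate_hz
    next_deadline = time.monotonic()
    for frame in frames:
        send(frame)
        next_deadline += interval
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# Record send times for 5 frames paced at 100 Hz (10 ms apart).
sent_at = []
paced_send(range(5), rate_hz=100, send=lambda f: sent_at.append(time.monotonic()))
```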

Page 3:

Visualization Cluster

Hydra - InfiniBand based
• 24 rendering nodes (48 NVIDIA 9800GTX+ GPUs)
• 1 high-definition display node
• 3 visualization center projection nodes
• 1 remote visualization bridge node*

Research system - primarily
• High-performance interactive rendering (60Hz)
• Dynamic (on-line) load balancing

System requires both low latency and high bandwidth
• Perfect for InfiniBand!

* We will return to this in a minute

Page 4:

Network Diagram

[Diagram: the visualization cluster and local viz nodes connect over IB to a bridge node, which acts like a display to the cluster. From the bridge node, iWarp traffic crosses the WAN (UCLA → CENIC → NLR → SCinet) to a remote viz node and a remote disk node. Pixel data and input flow between the cluster and the remote viz node; command and data flow between the cluster and the remote disk node.]

Page 5:

Remote Visualization Bridge Node

Bridges from IB to iWarp/10G Ethernet
Appears like a display node to the cluster
• No change to existing rendering system

Pixel data arrives over IB and is sent using iWarp to a remote display node, using the same RDMA protocol
• Same buffer used for receive and send (bounce)
• Very simple in principle - sort of…
• TCP settings can be a challenge

Time step data also passes through (opposite direction)
• Distributed to appropriate rendering nodes
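The bounce-buffer idea, receive into one buffer and send back out from the same memory, can be sketched with plain sockets standing in for the IB verbs and iWarp endpoints (the names `ib_side` / `iwarp_side` are illustrative; the real bridge would use registered RDMA buffers, not sockets):

```python
import socket
import threading

def bounce_relay(ib_side, iwarp_side, bufsize=1 << 16):
    """Relay data using a single 'bounce' buffer: each chunk is received
    into the buffer and forwarded from that same memory, mirroring the
    bridge node's receive-and-send-in-place design."""
    buf = bytearray(bufsize)
    view = memoryview(buf)
    while True:
        n = ib_side.recv_into(view)
        if n == 0:          # EOF from the cluster side: stop relaying
            break
        iwarp_side.sendall(view[:n])

# Loopback demo: two socket pairs stand in for the IB and iWarp links.
cluster, ib_side = socket.socketpair()
iwarp_side, remote = socket.socketpair()
t = threading.Thread(target=bounce_relay, args=(ib_side, iwarp_side))
t.start()

cluster.sendall(b"frame-0 pixels")
cluster.close()            # closing the cluster side ends the relay loop

received = b""
while len(received) < 14:  # read until the whole 14-byte message arrives
    chunk = remote.recv(64)
    if not chunk:
        break
    received += chunk
t.join()
```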

Page 6:

Data flow

[Diagram: the remote disk and remote display sit across the WAN (10GE, iWarp) from the bridge node, which connects over IB (SDR/DDR) through an IB switch to the dual-GPU rendering nodes of the Hydra visualization cluster. Time step data flows inbound from the remote disk; frame data flows outbound to the remote display; interaction and frame-rate control flow back from the display.]

Data set size and rate determine throughput. Example data is 1000 16MB time steps. Rates from 10-60Hz give data flows from 1.25-7.5Gbps.

Display resolution and update rate determine throughput. An example resolution of 2560x1600 is 11.75MB per frame. Rates from 10-60Hz give data flows from 0.9 to 5.5Gbps.
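The arithmetic behind those two captions is easy to check. A quick sketch (assuming 24-bit RGB frames and decimal megabytes; the slide's quoted figures look slightly rounded down):

```python
def flow_gbps(nbytes, rate_hz):
    """Throughput in Gbps for items of nbytes delivered rate_hz times per second."""
    return nbytes * rate_hz * 8 / 1e9

frame_bytes = 2560 * 1600 * 3   # 24-bit RGB frame ≈ 11.7 MB, close to the slide's 11.75MB
step_bytes = 16 * 10**6         # one 16 MB time step

frame_lo, frame_hi = flow_gbps(frame_bytes, 10), flow_gbps(frame_bytes, 60)
step_lo, step_hi = flow_gbps(step_bytes, 10), flow_gbps(step_bytes, 60)
# frames: ~0.98 to ~5.9 Gbps; time steps: ~1.28 to ~7.68 Gbps
```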

Page 7:

Example data set

Uncompressed video stream
• 20-60Hz at up to 2560x1600 ~ 1.8Gbps - 5.5Gbps

Time steps
• Time step size and rate determine flow
• 10000 steps at 20-60Hz at 16MB/step ~ 2.4Gbps - 7.5Gbps

Actual visualization
• Interactive 100M-particle n-body simulation
• 11 unique processes

Performance measurements are internal
• Summaries are displayed at completion of a run

Page 8:

Running Demo

Page 9:

Future Work

Latency is a non-issue at UCLA (which is our main focus)
• ~60 usec across campus

Longer latencies / distances are an issue
• UCLA to UC Davis ~20ms rtt (CENIC)
• UCLA to SC07 (Reno) ~14ms rtt (CENIC/NLR)
• UCLA to SC08 (Austin) ~30ms rtt (CENIC/NLR)

Frames and time step data completely in-flight (at high frame rates)
• Not presently compensating for this

Minimizing latency important for interaction
• Pipelining transfers helps hide latency: lat' = (lat / chunks) × (chunks + 1)
• Normalized to lat = 1: chunks = 1 gives lat' = 2; chunks = 2 gives lat' = 1.5
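The pipelining arithmetic above can be sketched directly: splitting a transfer into chunks overlaps the transmission of one chunk with the delivery of the next, so more chunks drive the effective latency toward the un-pipelined transfer time.

```python
def pipelined_latency(lat, chunks):
    """Effective end-to-end time for a transfer split into `chunks` pieces:
    each piece takes lat/chunks to send, and the final piece needs one
    extra chunk-time to drain, giving (lat / chunks) * (chunks + 1)."""
    return (lat / chunks) * (chunks + 1)

# Normalized to lat = 1: chunks=1 → 2.0, chunks=2 → 1.5,
# and larger chunk counts approach 1.0 asymptotically.
results = [pipelined_latency(1.0, c) for c in (1, 2, 4, 8)]
```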

Page 10:

Lessons Learned

TCP nature of iWarp (some layer 3 related)
• Congestion control wreaks havoc on interaction / latencies
• Reliability
• Not needed for video frames
• Maybe needed for data (depends on application)
• Buffer tuning - a headache
• Layer 9 issues

Better to use UDP?
• Is there an unreliable iWarp?

Dedicated/dynamic circuits?
• Obsidian Longbow IB extenders (haven't tried this)
• Understand there may be an IP version out there? (not iWarp, but…)

Page 11:

Thank you

[email protected]