
Page 1: Software Libraries and Middleware for Exascale Systems

Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems

Dhabaleswar K. (DK) Panda

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

Keynote Talk at HPCAC Stanford Conference (Feb '18)

Page 2:

High-End Computing (HEC): Towards Exascale

Expected to have an ExaFlop system in 2019-2020!

100 PFlops in 2016; 1 EFlops in 2019-2020?

Page 3:

Big Data (Hadoop, Spark, HBase, Memcached, etc.)

Deep Learning (Caffe, TensorFlow, BigDL, etc.)

HPC (MPI, RDMA, Lustre, etc.)

Increasing Usage of HPC, Big Data and Deep Learning

Convergence of HPC, Big Data, and Deep Learning!

Increasing Need to Run these applications on the Cloud!!

Page 4:

• Traditional HPC

– Message Passing Interface (MPI), including MPI + OpenMP

– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)

– Exploiting Accelerators

• Deep Learning

– Caffe and CNTK

• Cloud for HPC

– Virtualization with SR-IOV and Containers

Focus of Today’s Talk: HPC, Deep Learning, and Cloud

Focus of Tomorrow’s Talk: HPC + Big Data, Deep Learning, and Cloud

Page 5:

• Big Data/Enterprise/Commercial Computing

– Spark and Hadoop (HDFS, HBase, MapReduce)

– Memcached for Web 2.0

– Kafka for Stream Processing

– Swift

• Deep Learning

– TensorFlow with gRPC

• Cloud for Big Data Stacks

– Virtualization with SR-IOV and Containers

Focus of Tomorrow's Talk: HPC, Big Data, Deep Learning, and Cloud

Page 6:

Parallel Programming Models Overview

Shared Memory Model (SHMEM, DSM): processes P1, P2, P3 over a single shared memory

Distributed Memory Model (MPI - Message Passing Interface): P1, P2, P3, each with its own memory

Partitioned Global Address Space (PGAS) Model (Global Arrays, UPC, Chapel, X10, CAF, …): private memories plus a logical shared memory

• Programming models provide abstract machine models

• Models can be mapped on different types of systems

– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and Hybrid MPI+PGAS models are gradually gaining importance

Page 7:

Partitioned Global Address Space (PGAS) Models

• Key features

- Simple shared memory abstractions

- Light weight one-sided communication

- Easier to express irregular communication

• Different approaches to PGAS

- Languages

• Unified Parallel C (UPC)

• Co-Array Fortran (CAF)

• X10

• Chapel

- Libraries

• OpenSHMEM

• UPC++

• Global Arrays

Page 8:

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS

based on communication characteristics

• Benefits:

– Best of Distributed Computing Model

– Best of Shared Memory Computing Model

Diagram: an HPC application composed of Kernel 1 … Kernel N; kernels are implemented in MPI, with selected kernels (e.g., Kernel 2 and Kernel N) re-written in PGAS based on their communication characteristics.

Page 9:

Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

Application Kernels/Applications

Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.

Communication Library or Runtime for Programming Models: Point-to-point Communication, Collective Communication, Energy-Awareness, Synchronization and Locks, I/O and File Systems, Fault Tolerance

Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, Accelerators (GPU and FPGA)

Middleware Co-Design Opportunities and Challenges across Various Layers: Performance, Scalability, Resilience

Page 10:

• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Scalable job start-up

• Scalable Collective communication
– Offload

– Non-blocking

– Topology-aware

• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node

• Support for efficient multi-threading

• Integrated Support for GPGPUs and Accelerators

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI+UPC++, CAF, …)

• Virtualization

• Energy-Awareness

Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
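Several of the collective-related challenges above (offload, non-blocking) come down to overlapping a collective with useful work. A minimal MPI-3 sketch of that pattern, using only standard MPI calls:

```c
/* iallreduce_overlap.c - overlap an allreduce with independent computation
 * using MPI-3 non-blocking collectives. */
#include <mpi.h>
#include <stdio.h>

static void independent_work(double *x, int n)
{
    for (int i = 0; i < n; i++)      /* any computation that does not */
        x[i] = x[i] * 0.5 + 1.0;     /* depend on the reduction result */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0, work[1024] = {0};
    MPI_Request req;

    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);     /* start the collective */
    independent_work(work, 1024);             /* overlap window */
    MPI_Wait(&req, MPI_STATUS_IGNORE);        /* complete the collective */

    if (rank == 0) printf("sum = %f\n", global);
    MPI_Finalize();
    return 0;
}
```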

Page 11:

• Extreme Low Memory Footprint
– Memory per core continues to decrease

• D-L-A Framework

– Discover

• Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job

• Node architecture, Health of network and node

– Learn

• Impact on performance and scalability

• Potential for failure

– Adapt

• Internal protocols and algorithms

• Process mapping

• Fault-tolerance solutions

– Low overhead techniques while delivering performance, scalability and fault-tolerance

Additional Challenges for Designing Exascale Software Libraries

Page 12:

Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,875 organizations in 85 countries

– More than 445,000 (> 0.4 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Nov ‘17 ranking)

• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China

• 12th, 368,928-core (Stampede2) at TACC

• 17th, 241,108-core (Pleiades) at NASA

• 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->

– Sunway TaihuLight (1st in Jun’17, 10M cores, 100 PFlops)

Page 13:

Chart: MVAPICH2 Release Timeline and Downloads - cumulative downloads from the OSU site (0 to 500,000) plotted against time (Sep-04 to Jan-18), annotated with releases from MV 0.9.4 and MV2 0.9.0 through MV2 2.3rc1, including the MV2-GDR, MV2-MIC, MV2-X, and MV2-Virt branches.

Page 14:

Architecture of MVAPICH2 Software Family

High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)

High Performance and Scalable Communication Runtime with Diverse APIs and Mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis

Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) and Modern Multi-/Many-core Architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)

Transport protocols: RC, XRC, UD, DC. Transport mechanisms: Shared Memory, CMA, IVSHMEM. Modern features: UMR, ODP, SR-IOV, Multi-Rail, MCDRAM*, NVLink*, CAPI*, XPMEM* (* upcoming)

Page 15:

• Released on 02/19/2018

• Major Features and Enhancements

– Enhanced performance for Allreduce, Reduce_scatter_block, Allgather, Allgatherv through new algorithms

– Enhanced support for MPI_T PVARs and CVARs

– Improved job startup time for OFA-IB-CH3, PSM-CH3, and PSM2-CH3

– Support to automatically detect IP address of IB/RoCE interfaces when RDMA_CM is enabled without relying on mv2.conf file

– Enhanced HCA detection to handle cases where a node has both IB and RoCE HCAs

– Automatically detect and use maximum supported MTU by the HCA

– Added logic to detect heterogeneous CPU/HFI configurations in PSM-CH3 and PSM2-CH3 channels

– Enhanced intra-node and inter-node tuning for PSM-CH3 and PSM2-CH3 channels

– Enhanced HFI selection logic for systems with multiple Omni-Path HFIs

– Enhanced tuning and architecture detection for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems

– Added 'SPREAD', 'BUNCH', and 'SCATTER' binding options for hybrid CPU binding policy

– Rename MV2_THREADS_BINDING_POLICY to MV2_HYBRID_BINDING_POLICY

– Added support for MV2_SHOW_CPU_BINDING to display number of OMP threads

– Update to hwloc version 1.11.9

MVAPICH2 2.3rc1

Page 16:

• Scalability for million to billion processors
– Support for highly-efficient communication

– Scalable Start-up

– Optimized Collectives using SHArP and Multi-Leaders

– Optimized CMA-based Collectives

– Optimized XPMEM-based Collectives

– Dynamic and Adaptive Communication and Tag Matching

– Performance Engineering with MPI-T Support

– Integrated Network Analysis and Monitoring

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• Optimized MVAPICH2 for OpenPower and ARM

• Application Scalability and Best Practices

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

Page 17:

One-way Latency: MPI over IB with MVAPICH2

Charts: Small Message Latency and Large Message Latency (latency in us vs. message size in bytes) for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path; small-message latencies range from 0.98 to 1.19 us (values shown: 1.11, 1.19, 0.98, 1.15, 1.04 us).

Test platforms: TrueScale-QDR and Omni-Path on 3.1 GHz Deca-core (Haswell) Intel, PCI Gen3, with IB/Omni-Path switch; ConnectX-3-FDR on 2.8 GHz Deca-core (IvyBridge) Intel, PCI Gen3, with IB switch; ConnectIB-Dual FDR and ConnectX-5-EDR on 3.1 GHz Deca-core (Haswell) Intel, PCI Gen3, with IB switch.

Page 18:

Bandwidth: MPI over IB with MVAPICH2
Charts: unidirectional bandwidth (MBytes/sec vs. message size) with peaks of 3,373; 6,356; 12,358; 12,366; and 12,590 MBytes/sec, and bidirectional bandwidth with peaks of 6,228; 12,161; 21,983; 22,564; and 24,136 MBytes/sec across the same five adapter/host configurations as on the previous slide.

Page 19:

Startup Performance on KNL + Omni-Path

Chart: MPI_Init & Hello World on Oakforest-PACS - time taken (seconds) vs. number of processes (64 to 64K), comparing Hello World (MVAPICH2-2.3a) and MPI_Init (MVAPICH2-2.3a).

• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 - full scale)
• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node

New designs available since MVAPICH2-2.3a and as patch for SLURM 15, 16, and 17
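For reference, a minimal sketch of the kind of "MPI_Init & Hello World" measurement reported above (assumed structure for illustration; the actual benchmark code is not part of the slides):

```c
/* hello_init_time.c - time MPI_Init and the first synchronized "hello". */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)                       /* wall clock in seconds */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);                   /* cost grows with job size */
    double t_init = now() - t0;

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Barrier(MPI_COMM_WORLD);              /* every rank reached "hello" */
    double t_hello = now() - t0;

    if (rank == 0)
        printf("MPI_Init: %.2f s, Hello World: %.2f s\n", t_init, t_hello);
    MPI_Finalize();
    return 0;
}
```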

Page 20:

Advanced Allreduce Collective Designs Using SHArP
Charts: Avg DDOT Allreduce time of HPCG and Mesh Refinement Time of MiniAMR - latency (seconds) vs. (number of nodes, PPN) = (4,28), (8,28), (16,28), comparing MVAPICH2 and MVAPICH2-SHArP; up to 12-13% improvement with SHArP.
SHArP support is available since MVAPICH2 2.3a.
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.

Page 21:

Performance of MPI_Allreduce On Stampede2 (10,240 Processes)

Charts: OSU Micro Benchmark, 64 PPN - MPI_Allreduce latency (us) vs. message size (4 bytes to 4 KB and 8 KB to 256 KB), comparing MVAPICH2, MVAPICH2-OPT, and IMPI; MVAPICH2-OPT is up to 2.4X faster.

• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17. Available in MVAPICH2-X 2.3b
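The latencies above come from an OSU-micro-benchmark-style timing loop. A hedged sketch of such a loop (illustrative only, not the OSU benchmark source):

```c
/* allreduce_latency.c - average per-call MPI_Allreduce latency for one size. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int count = 32 * 1024 / sizeof(float);   /* 32 KB message */
    const int skip = 10, iters = 100;

    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *sendbuf = malloc(count * sizeof(float));
    float *recvbuf = malloc(count * sizeof(float));
    for (int i = 0; i < count; i++) sendbuf[i] = 1.0f;

    double t_start = 0.0;
    for (int i = 0; i < skip + iters; i++) {
        if (i == skip) {                           /* discard warm-up rounds */
            MPI_Barrier(MPI_COMM_WORLD);
            t_start = MPI_Wtime();
        }
        MPI_Allreduce(sendbuf, recvbuf, count, MPI_FLOAT, MPI_SUM,
                      MPI_COMM_WORLD);
    }
    double avg_us = (MPI_Wtime() - t_start) * 1e6 / iters;

    if (rank == 0)
        printf("MPI_Allreduce %d bytes: %.2f us\n",
               (int)(count * sizeof(float)), avg_us);

    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}
```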

Page 22:

Optimized CMA-based Collectives for Large Messages

Charts: Performance of MPI_Gather on KNL (2 nodes/128 procs, 4 nodes/256 procs, 8 nodes/512 procs) - latency (us, log scale) vs. message size (1 KB to 4 MB), comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and the Tuned CMA design.

• Significant improvement over existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPower)

• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

(Charts annotated with ~2.5x, ~3.2x, ~4x, and ~17x improvements for the Tuned CMA design.)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention-Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist

Performance of MPI_Gather on KNL nodes (64PPN)

Available in MVAPICH2-X 2.3b
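CMA (Cross Memory Attach) lets one process read another process's buffers directly through a single kernel call, which is the primitive the kernel-assisted collectives above build on. A minimal sketch of that primitive (illustrative; the pid/address exchange a real MPI library performs during setup is omitted):

```c
/* cma_copy.c - copy data straight out of another process's address space
 * with process_vm_readv (no intermediate shared-memory staging buffer). */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>

/* Copy `len` bytes from `remote_addr` in process `pid` into `local_buf`.
 * The pid and remote address would be exchanged by the library beforehand. */
ssize_t cma_read(pid_t pid, void *local_buf, void *remote_addr, size_t len)
{
    struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
    struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
    return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}
```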

Page 23:

Shared Address Space (XPMEM)-based Collectives Design

Chart: OSU_Allreduce (Broadwell, 256 procs) - latency (us) vs. message size (16 KB to 4 MB), comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-Opt.

• "Shared Address Space"-based true zero-copy Reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4M AllReduce

Chart: OSU_Reduce (Broadwell, 256 procs) - same comparison; MVAPICH2-Opt delivers up to 4X (Reduce, 4 MB) and 1.8X (Allreduce, 4 MB) improvement.
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018. Will be available in a future release.

Page 24:

Application-Level Benefits of XPMEM-Based Collectives

MiniAMR (Broadwell, ppn=16)

• Up to 20% benefits over IMPI for CNTK DNN training using AllReduce

• Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for MiniAMR application kernel

Charts: CNTK AlexNet training (Broadwell, B.S=default, iteration=50, ppn=28) and MiniAMR (Broadwell, ppn=16) - execution time (s) vs. number of processes, comparing IMPI-2017v1.132, MVAPICH2-2.3b, and MVAPICH2-Opt; annotated gains of 20% and 9% (CNTK) and 27% and 15% (MiniAMR).

Page 25:

Dynamic and Adaptive MPI Point-to-point Communication Protocols

Diagram: eager threshold for an example communication pattern between process pairs 0-4, 1-5, 2-6, and 3-7 on two nodes under three designs: Default (16 KB for every pair), Manually Tuned (128 KB for every pair), and Dynamic + Adaptive (32 KB, 64 KB, 128 KB, and 32 KB respectively).

H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper

Charts: Execution Time of Amber (wall clock time, seconds) and Relative Memory Consumption of Amber vs. number of processes (128 to 1K), comparing Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold.

Design tradeoffs:
– Default: poor overlap, low memory requirement; low performance, high productivity
– Manually Tuned: good overlap, high memory requirement; high performance, low productivity
– Dynamic + Adaptive: good overlap, optimal memory requirement; high performance, high productivity

Desired Eager Threshold per process pair: 0-4: 32 KB; 1-5: 64 KB; 2-6: 128 KB; 3-7: 32 KB

Will be available in a future release

Page 26:

Dynamic and Adaptive Tag Matching

Charts: Normalized Total Tag Matching Time at 512 Processes (normalized to Default, lower is better) and Normalized Memory Overhead per Process at 512 Processes (compared to Default, lower is better).

Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]

Challenge: Tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.

Solution: A new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before hitting the required one.

Results: Better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption.

Will be available in future MVAPICH2 releases

Page 27:

● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables

● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU

● Control the runtime's behavior via MPI Control Variables (CVARs)

● Introduced support for new MPI_T based CVARs to MVAPICH2
○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE

● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications

Performance Engineering Applications using MVAPICH2 and TAU

Figures: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf.

S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI 2017 [Best Paper Award]
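For illustration, a small MPI_T sketch that locates one of the CVARs named above and reads its value (standard MPI-3 tools-interface calls; whether a given CVAR is actually exposed depends on the MVAPICH2 build):

```c
/* mpit_cvar.c - look up an MPI_T control variable by name and read it. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int provided, ncvar;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);  /* tools interface */
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);

        /* MPIR_CVAR_VBUF_POOL_SIZE is one of the MVAPICH2 CVARs listed above */
        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0 && dtype == MPI_INT) {
            MPI_T_cvar_handle handle;
            int count, value;
            MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
            MPI_T_cvar_read(handle, &value);
            printf("%s = %d\n", name, value);
            MPI_T_cvar_handle_free(&handle);
        }
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```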

Page 28:

Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime

– http://mvapich.cse.ohio-state.edu/tools/osu-inam/

• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes

• OSU INAM v0.9.2 released on 10/31/2017

• Significant enhancements to user interface to enable scaling to clusters with thousands of nodes

• Improve database insert times by using 'bulk inserts'

• Capability to look up list of nodes communicating through a network link

• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with

MVAPICH2-X 2.3b

• “Best practices “ guidelines for deploying OSU INAM on different clusters

• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication

– Point-to-Point, Collectives and RMA

• Ability to filter data based on type of counters using “drop down” list

• Remotely monitor various metrics of MPI processes at user specified granularity

• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X

• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes

Page 29:

OSU INAM Features

• Show network topology of large clusters

• Visualize traffic pattern on different links

• Quickly identify congested links/links in error state

• See the history unfold – play back historical state of the network

Comet@SDSC --- Clustered View

(1,879 nodes, 212 switches, 4,377 network links)

Finding Routes Between Nodes

Page 30:

OSU INAM Features (Cont.)

Visualizing a Job (5 Nodes)

• Job level view
• Show different network metrics (load, error, etc.) for any live job

• Play back historical data for completed jobs to identify bottlenecks

• Node level view - details per process or per node
• CPU utilization for each rank/node

• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)

• Network metrics (e.g. XmitDiscard, RcvError) per rank/node

Estimated Process Level Link Utilization

• Estimated Link Utilization view
• Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1
• Job level and
• Process level

Page 31:

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• Optimized MVAPICH2 for OpenPower and ARM

• Application Scalability and Best Practices

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

Page 32:

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed

– Consumes more network resource

• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF

– Available since 2012 (starting with MVAPICH2-X 1.9)

– http://mvapich.cse.ohio-state.edu
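A hedged sketch of the hybrid usage pattern a unified MPI+PGAS runtime is meant to support: OpenSHMEM one-sided operations and MPI collectives in the same program (illustrative only; see the MVAPICH2-X user guide for the exact initialization rules):

```c
/* hybrid_mpi_shmem.c - MPI and OpenSHMEM phases in one job. */
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();                          /* both models active in one job */

    int rank, npes = shmem_n_pes();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* PGAS-style one-sided phase: every PE deposits its ID on PE 0 */
    long *slots = (long *) shmem_malloc(npes * sizeof(long));
    long me = shmem_my_pe();
    for (int i = 0; i < npes; i++) slots[i] = -1;
    shmem_barrier_all();
    shmem_long_put(&slots[me], &me, 1, 0); /* remote write, no receive needed */
    shmem_barrier_all();

    /* MPI phase: a regular collective over the same processes */
    long one = 1, total = 0;
    MPI_Allreduce(&one, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("PE 0 saw %ld PEs via MPI, last SHMEM entry = %ld\n",
               total, slots[npes - 1]);

    shmem_free(slots);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```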

Page 33:

Application Level Performance with Graph500 and Sort
Graph500 Execution Time

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC'13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012

Chart: Graph500 execution time (s) vs. number of processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM) designs; annotated 7.6X and 13X improvements for the Hybrid design.

• Performance of Hybrid (MPI+OpenSHMEM) Graph500 Design
• 8,192 processes:
- 2.4X improvement over MPI-CSR
- 7.6X improvement over MPI-Simple
• 16,384 processes:
- 1.5X improvement over MPI-CSR
- 13X improvement over MPI-Simple

J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.

Chart: Sort Execution Time (seconds) vs. input data size and number of processes (500GB-512, 1TB-1K, 2TB-2K, 4TB-4K), comparing MPI and Hybrid designs; 51% improvement for the Hybrid design.

• Performance of Hybrid (MPI+OpenSHMEM) Sort Application
• 4,096 processes, 4 TB input size:
- MPI: 2408 sec; 0.16 TB/min
- Hybrid: 1172 sec; 0.36 TB/min
- 51% improvement over the MPI design

Page 34:

Optimized OpenSHMEM with AVX and MCDRAM: Application Kernels Evaluation

Heat Image Kernel

• On heat diffusion based kernels AVX-512 vectorization showed better performance

• MCDRAM showed significant benefits on Heat-Image kernel for all process counts. Combined with AVX-512 vectorization, it showed up to 4X improved performance

Charts: Heat Image kernel and Heat-2D kernel using the Jacobi method, time (s) vs. number of processes (16 to 128), comparing KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell.

Page 35:

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs
– CUDA-aware MPI

– Optimized GPUDirect RDMA (GDR) Support

– Support for Streaming Applications

• Optimized MVAPICH2 for OpenPower and ARM

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

Page 36:

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement is handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers

High Performance and High Productivity
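Putting the two calls above into a complete example (a hedged sketch: device pointers are handed straight to MPI and the library moves the data; it requires a CUDA-aware MPI build such as MVAPICH2-GDR):

```c
/* cuda_aware_pingpong.c - pass GPU device pointers directly to MPI. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int n = 1 << 20;               /* 1M floats (4 MB) */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *devbuf;                        /* buffer lives in GPU memory */
    cudaMalloc((void **)&devbuf, n * sizeof(float));
    cudaMemset(devbuf, 0, n * sizeof(float));

    if (rank == 0)                        /* device buffers go into MPI calls */
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```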

Page 37:

CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory

• High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)

• High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)

• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node

• Optimized and tuned collectives for GPU device buffers

• MPI datatype support for point-to-point and collective communication from GPU device buffers

• Unified memory

Page 38:

Optimized MVAPICH2-GDR Design
Charts: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size (1 byte to 4 KB/8 KB), comparing MV2-(NO-GDR) and MV2-GDR-2.3a; 1.88 us small-message latency (11X better) and roughly 9x-10x higher bandwidth and bi-bandwidth.
Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA.

Page 39:

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)

• HoomdBlue Version 1.0.5

• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Application-Level Evaluation (HOOMD-blue)

Charts: average time steps per second (TPS) vs. number of processes (4 to 32) for 64K and 256K particles, comparing MV2 and MV2+GDR; about 2X improvement with GDR.

Page 40:

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

Charts: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4 to 32 GPUs) and the CSCS GPU cluster (16 to 96 GPUs), comparing Default, Callback-based, and Event-based designs.

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS'16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application.
Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

Page 41:

• Streaming applications on GPU clusters
– Using a pipeline of broadcast operations to move host-resident data from a single source (typically live) to multiple GPU-based computing sites
– Existing schemes require explicit data movement between Host and GPU memories, giving poor performance and breaking the pipeline
• IB hardware multicast + Scatter-List
– Efficient heterogeneous-buffer broadcast operation
• CUDA Inter-Process Communication (IPC)
– Efficient intra-node topology-aware broadcast operations for multi-GPU systems
• Available in MVAPICH2-GDR 2.3a!

High-Performance Heterogeneous Broadcast for Streaming Applications

Diagrams: the source node's CPU pushes host-resident data through the IB switch with hardware multicast plus a scatter-list (SL) step to the IB HCAs of nodes 1..N, and IPC-based cudaMemcpy (Device<->Device) then distributes the data among GPUs 0..N within each node.

Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters. C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh , B. Elton, and D. K. Panda, SBAC-PAD'16, Oct 2016.
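At the application level the streaming pattern reduces to a broadcast from a host-resident source into GPU destination buffers; the multicast and scatter-list machinery stays inside the library. A hedged sketch with a CUDA-aware MPI:

```c
/* stream_bcast.c - host source, GPU destinations, one broadcast per frame
 * (requires a CUDA-aware MPI such as MVAPICH2-GDR). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int n = 4 * 1024 * 1024;            /* 4 MB frame */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf;
    if (rank == 0) {
        buf = (char *) malloc(n);             /* live data lands in host memory */
        /* ... fill buf with the next frame ... */
    } else {
        cudaMalloc((void **)&buf, n);         /* consumers keep data on the GPU */
    }

    /* every frame is pushed to all GPU sites with one broadcast */
    MPI_Bcast(buf, n, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) free(buf); else cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```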

Page 42:

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• Optimized MVAPICH2 for OpenPower and ARM

• Application Scalability and Best Practices

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

Page 43:

Inter-node Point-to-Point Performance on OpenPower
Charts: small-message latency (us, 0 bytes to 4 KB), large-message latency (8 KB to 2 MB), bandwidth, and bi-directional bandwidth (MB/s, 1 byte to 1 MB) for MVAPICH2.
Platform: two OpenPOWER (Power8-ppc64le) nodes using a Mellanox EDR (MT4115) HCA.

Page 44:

Intra-node Point-to-point Performance on ARMv8
Charts: small-message latency (us, 0 bytes to 4 KB; 0.74 microsec at 4 bytes), large-message latency (8 KB to 2 MB), bandwidth, and bi-directional bandwidth (MB/s) for MVAPICH2.
Platform: ARMv8 (aarch64) dual-socket CPU with 96 cores (48 cores per socket).
Available since MVAPICH2 2.3a

Page 45:

MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
Charts: intra-node latency (small and large messages), intra-node bandwidth, inter-node latency (small and large messages), and inter-node bandwidth vs. message size, for intra-socket (NVLink) and inter-socket paths.
Intra-node bandwidth: 33.9 GB/sec (NVLink); intra-node latency: 14.6 us (without GPUDirectRDMA); inter-node latency: 23.8 us (without GPUDirectRDMA); inter-node bandwidth: 11.9 GB/sec (EDR).
Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and EDR InfiniBand interconnect.
Available in MVAPICH2-GDR 2.3a

Page 46:

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• Optimized MVAPICH2 for OpenPower and ARM

• Application Scalability and Best Practices

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

Page 47:

Application Scalability on Skylake and KNL
MiniFE (1300x1300x1300 ~ 910 GB)

Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K

Charts: MVAPICH2 execution time (s) vs. number of processes on KNL (64/68 ppn) and Skylake (48 ppn) for MiniFE, NEURON (YuEtAl2012), and Cloverleaf (bm64, MPI+OpenMP, NUM_OMP_THREADS = 2).
Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b

Page 48:

Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)
Chart: execution time (s) of milc, leslie3d, pop2, lammps, wrf2, tera_tf, and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3rc1; per-benchmark gains of 1% to 10%.
MVAPICH2 outperforms Intel MPI by up to 10% with 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn).

Page 49:

Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)
Chart: execution time (s) of milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf, and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3rc1; per-benchmark differences from -4% to 38%.
MVAPICH2 outperforms Intel MPI by up to 38% with 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn).

Page 50:

• MPI runtime has many parameters

• Tuning a set of parameters can help you to extract higher performance

• Compiled a list of such contributions through the MVAPICH Website
– http://mvapich.cse.ohio-state.edu/best_practices/

• Initial list of applications
– Amber

– HOOMD-blue

– HPCG

– Lulesh

– MILC

– Neuron

– SMG2000

• Soliciting additional contributions, send your results to mvapich-help at cse.ohio-state.edu.

• We will link these results with credits to you.

Compilation of Best Practices

Page 51:

• Traditional HPC

– Message Passing Interface (MPI), including MPI + OpenMP

– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)

– Exploiting Accelerators

• Deep Learning

– Caffe, CNTK, TensorFlow, and many more

• Cloud for HPC

– Virtualization with SR-IOV and Containers

HPC, Big Data, Deep Learning, and Cloud

Page 52:

• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers

• Existing State-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– CUDA-Aware MPI --> scale-out performance
• For small and medium message sizes only!

• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient Overlap of Computation and Communication
– Efficient Large-Message Communication (Reductions)
– What application co-designs are needed to exploit communication-runtime co-designs?

Deep Learning: New Challenges for MPI Runtimes

Diagram: scale-up performance vs. scale-out performance, positioning cuDNN, cuBLAS, and NCCL (scale-up), gRPC, Hadoop, and MPI (scale-out), and the proposed co-designs (both).

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
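The "efficient large-message reductions" called for above are, at the framework level, an allreduce over gradient buffers that live in GPU memory. A hedged sketch with a CUDA-aware MPI (illustrative only, not S-Caffe's actual code):

```c
/* dl_allreduce.c - sum a GPU-resident gradient buffer across all ranks
 * (requires a CUDA-aware MPI such as MVAPICH2-GDR). */
#include <mpi.h>
#include <cuda_runtime.h>

void allreduce_gradients(float *d_grad, int count, MPI_Comm comm)
{
    /* sum gradients across ranks directly from/into device memory;
     * MPI_IN_PLACE is used for brevity, a separate send buffer also works */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, count, MPI_FLOAT, MPI_SUM, comm);

    /* scaling to the average would normally follow on the GPU,
     * e.g. with a small CUDA kernel (omitted here). */
}

int main(int argc, char **argv)
{
    const int count = 64 * 1024 * 1024 / sizeof(float);  /* 64 MB of gradients */
    MPI_Init(&argc, &argv);

    float *d_grad;
    cudaMalloc((void **)&d_grad, count * sizeof(float));
    cudaMemset(d_grad, 0, count * sizeof(float));

    allreduce_gradients(d_grad, count, MPI_COMM_WORLD);

    cudaFree(d_grad);
    MPI_Finalize();
    return 0;
}
```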

Page 53:

• NCCL 1.x had some limitations

– Only worked for a single node; no scale-out on multiple nodes

– Degradation across IOH (socket) for scale-up (within a node)

• We propose optimized MPI_Bcast design that exploits NCCL [1]

– Communication of very large GPU buffers

– Scale-out on large number of dense multi-GPU nodes

Efficient Broadcast: MVAPICH2-GDR and NCCL

1. A. A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning. In Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016). [Best Paper Nominee]

2. A. A. Awan, C-H. Chu, H. Subramoni, and D. K. Panda. Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?, arXiv ’17 (https://arxiv.org/abs/1707.09414)

Diagram: hierarchical design with inter-node communication over a k-nomial tree and intra-node communication over an NCCL ring across GPUs behind PLX switches.
Chart: training time (seconds) vs. number of GPUs (2 to 128), comparing MV2-GDR-NCCL and MV2-GDR-Opt.

• Hierarchical Communication that efficiently exploits:
– CUDA-Aware MPI_Bcast in MV2-GDR
– NCCL Broadcast for intra-node transfers
• Can pure MPI-level designs be done that achieve similar or better performance than the NCCL-based approach? [2]

Chart: VGG Training with CNTK - training time (seconds) vs. number of GPUs (2 to 64), comparing MV2-GDR and MV2-GDR-NCCL.

Page 54:

Large Message Optimized Collectives for Deep Learning
Charts: latency (ms) of Reduce (192 GPUs; and 64 MB across 128-192 GPUs), Allreduce (64 GPUs; and 128 MB across 16-64 GPUs), and Bcast (64 GPUs; and 128 MB across 16-64 GPUs) for message sizes of 2 MB to 128 MB.
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large numbers of GPUs
• Available since MVAPICH2-GDR 2.2GA

Page 55:

MVAPICH2-GDR vs. Baidu-allreduce
Charts: Allreduce latency (us, log scale) vs. message size, comparing MV2-GDR-Tuned and Baidu-Allreduce; annotated gains of ~100X, ~10X, and ~20% in different message-size ranges.
• Initial evaluation shows promising performance gains for MVAPICH2-GDR 2.3a*
• 8 GPUs (4 nodes): MPI_Allreduce (MVAPICH2-GDR) vs. Baidu-Allreduce
*Available with MVAPICH2-GDR 2.3a

Page 56:

• Caffe: A flexible and layered Deep Learning framework.

• Benefits and Weaknesses
– Multi-GPU Training within a single node
– Performance degradation for GPUs across different sockets
– Limited Scale-out

• OSU-Caffe: MPI-based Parallel Training
– Enable Scale-up (within a node) and Scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training CIFAR-10 network on CIFAR-10 dataset
– Scale-out on 128 GPUs for training GoogLeNet network on ImageNet dataset

OSU-Caffe: Scalable Deep Learning

Chart: GoogLeNet (ImageNet) training time (seconds) on 128 GPUs vs. number of GPUs (8 to 128), comparing Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); part of the range is marked as an invalid use case.
OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/

Page 57:

• Traditional HPC

– Message Passing Interface (MPI), including MPI + OpenMP

– Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)

– Exploiting Accelerators

• Deep Learning

– Caffe, CNTK, TensorFlow, and many more

• Cloud for HPC

– Virtualization with SR-IOV and Containers

HPC, Big Data, Deep Learning, and Cloud

Page 58:

• Virtualization has many benefits
– Fault-tolerance

– Job migration

– Compaction

• It has not been very popular in HPC due to the overhead associated with virtualization

• New SR-IOV (Single Root – IO Virtualization) support available with Mellanox InfiniBand adapters changes the field

• Enhanced MVAPICH2 support for SR-IOV

• MVAPICH2-Virt 2.2 supports:
– OpenStack, Docker, and Singularity

Can HPC and Virtualization be Combined?

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC'14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid'15

Page 59:

Charts: SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) vs. problem size (scale, edgefactor), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native; annotated overheads of 1% and 9.5% (SPEC MPI2007) and 2% and 5% (Graph500) relative to native.

• 32 VMs, 6 Core/VM

• Compared to Native, 2-5% overhead for Graph500 with 128 Procs

• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs

Application-Level Performance on Chameleon

SPEC MPI2007 and Graph500

Page 60:

Chart: NAS Class D benchmarks (MG.D, FT.D, EP.D, LU.D, CG.D) execution time (s), comparing Container-Def, Container-Opt, and Native.

• 64 Containers across 16 nodes, pinning 4 cores per Container

• Compared to Container-Def, up to 11% and 73% of execution time reduction for NAS and Graph 500

• Compared to Native, less than 9 % and 5% overhead for NAS and Graph 500

Application-Level Performance on Docker with MVAPICH2

NAS and Graph 500

Chart: Graph 500 BFS execution time (ms) at scale/edgefactor (20,16) for 1Cont*16P, 2Conts*8P, and 4Conts*4P, comparing Container-Def, Container-Opt, and Native; up to 73% reduction with Container-Opt.

Page 61:

Charts: Graph500 BFS execution time (ms) vs. problem size (scale, edgefactor) and NPB Class D (CG, EP, FT, IS, LU, MG) execution time (s), comparing Singularity and Native.

• 512 Processes across 32 nodes

• Less than 7% and 6% overhead for NPB and Graph500, respectively

Application-Level Performance on Singularity with MVAPICH2


J. Zhang, X. Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC '17, Best Student Paper Award

Page 62:

High Performance SR-IOV enabled VM Migration Framework

J. Zhang, X. Lu, D. K. Panda, High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters, IPDPS 2017

• Consists of an SR-IOV enabled IB cluster and an external migration controller
• Detachment/re-attachment of virtualized devices: multiple parallel libraries coordinate the VM during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, report migration status)
• IB connection: the MPI runtime handles suspending and reactivating IB connections
• Proposes Progress Engine (PE) based and Migration Thread (MT) based designs to optimize VM migration and MPI application performance

Page 63:

Bcast (4VMs * 2Procs/VM)

• Migrate a VM from one machine to another while benchmark is running inside

• Proposed MT-based designs perform slightly worse than PE-based designs because of lock/unlock

• No benefit from MT because of NO computation involved

Performance Evaluation of VM Migration Framework
Pt2Pt Latency

Charts: Pt2Pt latency and Bcast (4 VMs * 2 procs/VM) latency (us) vs. message size (1 byte to 1 MB), comparing PE-IPoIB, PE-RDMA, MT-IPoIB, and MT-RDMA.

Will be available in an upcoming MVAPICH2-Virt release

Page 64:

• Next generation HPC systems need to be designed with a holistic view of Deep Learning and Cloud

• Presented some of the approaches and results along these directions

• Enable the Deep Learning and Cloud community to take advantage of modern HPC technologies

• Many other open issues need to be solved

Concluding Remarks

Page 65:

Funding Acknowledgments

Funding Support by

Equipment Support by

Page 66:

Personnel Acknowledgments
Current Students (Graduate)

– A. Awan (Ph.D.)

– R. Biswas (M.S.)

– M. Bayatpour (Ph.D.)

– S. Chakraborthy (Ph.D.)

– C.-H. Chu (Ph.D.)

– S. Guganani (Ph.D.)

Past Students

– A. Augustine (M.S.)

– P. Balaji (Ph.D.)

– S. Bhagvat (M.S.)

– A. Bhat (M.S.)

– D. Buntinas (Ph.D.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– R. Rajachandrasekar (Ph.D.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

Past Research Scientist

– K. Hamidouche

– S. Sur

Past Post-Docs

– D. Banerjee

– X. Besseron

– H.-W. Jin

– W. Huang (Ph.D.)

– W. Jiang (M.S.)

– J. Jose (Ph.D.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– K. Kulkarni (M.S.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– M. Li (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– A. Moody (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– S. Potluri (Ph.D.)

– J. Hashmi (Ph.D.)

– H. Javed (Ph.D.)

– P. Kousha (Ph.D.)

– D. Shankar (Ph.D.)

– H. Shi (Ph.D.)

– J. Zhang (Ph.D.)

– J. Lin

– M. Luo

– E. Mancini

Current Research Scientists

– X. Lu

– H. Subramoni

Past Programmers

– D. Bureddy

– J. Perkins

Current Research Specialist

– J. Smith

– M. Arnold

– S. Marcarelli

– J. Vienne

– H. Wang

Current Post-doc

– A. Ruhela

Current Students (Undergraduate)

– N. Sarkauskas (B.S.)

Page 67:

Thank You!

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/

[email protected]

The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/

The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/

The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/