OpenFOAM on a GPU-based Heterogeneous Cluster Rajat Phull, Srihari Cadambi, Nishkam Ravi and Srimat Chakradhar NEC Laboratories America Princeton, New Jersey, USA. www.nec-labs.com

TRANSCRIPT

Page 1:

OpenFOAM on a GPU-based Heterogeneous Cluster

Rajat Phull, Srihari Cadambi, Nishkam Ravi and Srimat Chakradhar

NEC Laboratories America

Princeton, New Jersey, USA.

www.nec-labs.com

Page 2:

OpenFOAM Overview

• OpenFOAM stands for 'Open Field Operations And Manipulation'

• Consists of a library of efficient CFD-related C++ modules

• These can be combined together to create:
  – solvers
  – utilities (for example pre/post-processing, mesh checking, manipulation, conversion, etc.)

Page 3:

OpenFOAM Application Domain: Examples

• Buoyancy-driven flow: temperature flow

• Fluid-structure interaction

Modeling capabilities are used by the aerospace, automotive, biomedical, energy and processing industries.

Page 4:

OpenFOAM on a CPU-based cluster

• Domain decomposition: the mesh and associated fields are decomposed.

• Partitioning via the Scotch partitioner.

Page 5:

Motivation for a GPU-based cluster

• Each node: quad-core 2.4 GHz processor and 48 GB RAM

• Performance degrades with increasing data size

[Figure: OpenFOAM solver on a CPU-based cluster — time (s), 0 to 12000, vs. problem size, 0 to 3,000,000, for 2 nodes (8 cores) and 3 nodes (12 cores)]

Page 6:

This Paper

• Ported a key OpenFOAM solver to CUDA.
  – Compared the performance of OpenFOAM solvers on CPU-based and GPU-based clusters.
  – Around 4x faster on the GPU-based cluster.

• Addressed the imbalance due to different GPU generations in the cluster.
  – A run-time analyzer dynamically load-balances the computation by repartitioning the input data.

Page 7:

How We Went About Designing the Framework

1. Profiled representative workloads to find the computational bottlenecks.
2. CUDA implementation for the clustered application.
3. Observed imbalance due to different generations of GPUs, or nodes without GPUs.
4. Load-balance the computation by repartitioning the input data.

Page 8:

InterFOAM Application Profiling

Profile (call graph): main() → PCG solver 80.81% → preconditioner 34.28%, matrix-vector multiply 23.94%. These are the computational bottlenecks.

• Porting only these kernels to the GPU adds data transfer on every iteration.

• To avoid per-iteration data transfer, port the entire solver to the GPU (coarser granularity).

Page 9:

PCG Solver

• Iterative algorithm for solving linear systems

• Solves Ax=b

• Each iteration updates the solution vector x and the residual r; r is checked for convergence.

x_0 = initial guess
r_0 = b − A·x_0
for i = 0, 1, 2, ...
    Solve for w_i in K·w_i = r_i          (preconditioning)
    ρ_i = r_i · w_i
    p_{i+1} = w_i + (ρ_i / ρ_{i−1}) · p_i    (p_1 = w_0)
    q_{i+1} = A · p_{i+1}
    α_i = ρ_i / (p_{i+1} · q_{i+1})
    x_{i+1} = x_i + α_i · p_{i+1}
    r_{i+1} = r_i − α_i · q_{i+1}
    if r_{i+1} is accurate enough then quit
end
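As a concrete illustration of the PCG loop above (not the paper's CUDA code), here is a minimal NumPy sketch using a Jacobi (diagonal) preconditioner K = diag(A); function and variable names are illustrative:

```python
import numpy as np

def pcg(A, b, x0, tol=1e-8, max_iter=1000):
    """Preconditioned Conjugate Gradient with a Jacobi (diagonal)
    preconditioner K = diag(A). A must be symmetric positive definite."""
    x = x0.copy()
    r = b - A @ x                      # initial residual r_0 = b - A x_0
    K_inv = 1.0 / np.diag(A)           # inverse of the diagonal preconditioner
    p = np.zeros_like(b)
    rho_prev = 1.0
    for i in range(max_iter):
        w = K_inv * r                  # solve K w = r (preconditioning)
        rho = r @ w
        if i == 0:
            p = w.copy()               # p_1 = w_0
        else:
            p = w + (rho / rho_prev) * p
        q = A @ p                      # q = A p
        alpha = rho / (p @ q)
        x = x + alpha * p
        r = r - alpha * q
        rho_prev = rho
        if np.linalg.norm(r) < tol:    # convergence check on the residual
            break
    return x

# Usage: solve a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, np.zeros(2))
```

On the GPU version described next, each of these vector and matrix operations maps to a CUBLAS or CUSPARSE call.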

Page 10:

InterFOAM on a GPU-based cluster

1. Convert the input matrix A from LDU to CSR format.
2. Transfer A, x0 and b to GPU memory.
3. Kernel for diagonal preconditioning.
4. CUBLAS APIs for linear algebra operations; CUSPARSE for matrix-vector multiplication.
5. Communication requires intermediate vectors in host memory; scatter and gather kernels reduce data transfer.
6. Converged? No → repeat from step 3. Yes → transfer the vector x to host memory.
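The first step above (LDU → CSR) can be sketched as follows. The array names mirror OpenFOAM's lduMatrix layout (diagonal, lower and upper coefficient arrays plus owner/neighbour addressing), but this is an illustrative reconstruction, not the paper's code:

```python
import numpy as np
from scipy.sparse import coo_matrix

def ldu_to_csr(diag, lower, upper, lower_addr, upper_addr):
    """Convert an LDU-format sparse matrix (OpenFOAM-style layout) to CSR.
    For each off-diagonal face f:
      upper[f] sits at (lower_addr[f], upper_addr[f]),
      lower[f] sits at (upper_addr[f], lower_addr[f])."""
    n = len(diag)
    rows = np.concatenate([np.arange(n), lower_addr, upper_addr])
    cols = np.concatenate([np.arange(n), upper_addr, lower_addr])
    vals = np.concatenate([diag, upper, lower])
    # COO assembly, then conversion to CSR for GPU SpMV
    return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# Usage: a 3x3 matrix with faces (0,1) and (1,2)
diag  = np.array([4.0, 4.0, 4.0])
lower = np.array([-1.0, -2.0])     # coefficients below the diagonal
upper = np.array([-1.0, -2.0])     # coefficients above the diagonal
lower_addr = np.array([0, 1])      # owner cell of each face
upper_addr = np.array([1, 2])      # neighbour cell of each face
A = ldu_to_csr(diag, lower, upper, lower_addr, upper_addr)
```

CSR is the natural target here because CUSPARSE's sparse matrix-vector routines operate on CSR arrays directly.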

Page 11:

Cluster with Identical GPUs: Experimental Results (I)

Time (s):

Problem Size | 1 Node (4 cores) | 2 Nodes (8 cores) | 3 Nodes (12 cores) | 1 Node (2 CPU cores + 2 GPUs) | 2 Nodes (4 CPU cores + 4 GPUs) | 3 Nodes (6 CPU cores + 6 GPUs)
159500  | 46    | 36    | 32    | 88   | 87   | 106
318500  | 153   | 85    | 70    | 146  | 142  | 165
637000  | 527   | 337   | 222   | 368  | 268  | 320
955000  | 1432  | 729   | 498   | 680  | 555  | 489
2852160 | 20319 | 11362 | 5890  | 4700 | 3192 | 2900
4074560 | 39198 | 19339 | 12773 | 7388 | 4407 | 4100

Node: a quad-core Xeon, 2.4 GHz, 48 GB RAM + 2x NVIDIA Fermi C2050 GPUs with 3 GB RAM each.

Page 12:

Cluster with Identical GPUs: Experimental Results (II)

[Figure, left: "3-node CPU cluster vs. 2 GPUs" — time (s), 0 to 16000, vs. problem size, 0 to 4,000,000, comparing 3 nodes (12 cores) against 1 node (2 CPU cores + 2 GPUs)]

[Figure, right: "4-GPU cluster is optimal" — time (s), 0 to 8000, vs. problem size, 0 to 4,000,000, for 1 node (2 CPU cores + 2 GPUs), 2 nodes (4 CPU cores + 4 GPUs) and 3 nodes (6 CPU cores + 6 GPUs)]

Page 13:

Cluster with Different GPUs

• OpenFOAM employs task parallelism, where the input data is partitioned and assigned to different MPI processes

• Some nodes do not have GPUs, or the GPUs have different compute capabilities

• Iterative algorithms: Uniform domain decomposition can lead to imbalance and suboptimal performance


Page 14:

Heterogeneous cluster: the case for suboptimal performance with iterative methods

• Iterative convergence algorithms create parallel tasks that communicate with each other.

• P0 and P1 have higher compute capability than P2 and P3.

• With equally partitioned data, performance is suboptimal: P0 and P1 complete their computations and sit idle waiting for P2 and P3 to finish.

Page 15:

Case for Dynamic Data Partitioning on Heterogeneous Clusters

Runtime analysis + repartitioning: the per-iteration time drops from T1 to T2, with T2 < T1.

Page 16:

Why not static partitioning based on compute power of nodes?

• Optimal data partitioning is hard to predict accurately, especially when GPUs differ in memory bandwidth, cache hierarchy and number of processing elements.

• Multi-tenancy makes the prediction even harder.

• A data-aware scheduling scheme (where the selection of computation to offload to the GPU is made at runtime) makes prediction even more complex.


Page 17:

How the data repartitioning system works

Page 18:

How the data repartitioning system works (continued)

Page 19:

Model for Imbalance Analysis: In the Context of OpenFOAM

Processes:    P[0]  P[1]  P[2]  P[3]
Data ratio:   x[0]  x[1]  x[2]  x[3]
Compute time: T[0]  T[1]  T[2]  T[3]

Communication overhead is low, and remains insignificant even with unequal partitions.

Weighted mean: t_w = ∑ (T[node] · x[node]) / ∑ x[node]

If T[node] < t_w: increase the partitioning ratio on P[node]
else: decrease the partitioning ratio on P[node]
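The rule above can be sketched as a small helper. The damping factor `step` is a hypothetical tuning parameter introduced for illustration, not a value from the paper:

```python
def repartition(times, ratios, step=0.05):
    """Adjust per-process data ratios from measured compute times.
    Weighted mean t_w = sum(T[n]*x[n]) / sum(x[n]); a process faster
    than t_w gets a larger share, a slower one a smaller share.
    `step` is an illustrative damping factor, not from the paper."""
    tw = sum(t * x for t, x in zip(times, ratios)) / sum(ratios)
    new = [x * (1 + step) if t < tw else x * (1 - step)
           for t, x in zip(times, ratios)]
    total = sum(new)
    return [x / total for x in new]   # renormalise so the ratios sum to 1

# Usage: P0/P1 (faster GPU nodes) vs. P2/P3 (slower nodes), starting equal
ratios = repartition([1.0, 1.0, 2.0, 2.0], [0.25, 0.25, 0.25, 0.25])
```

Applying this once per measurement interval nudges data toward the faster processes; the small step avoids oscillation when compute times are noisy.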

Page 20:

Data Repartitioning: Experimental Results

Average time per iteration (ms):

Problem Size | Workload equally balanced | Static partitioning | Dynamic repartitioning
159500  | 1.9   | 1.9  | 1.9
318500  | 2.4   | 2.2  | 2.2
637000  | 2.85  | 2.7  | 2.35
955000  | 5.9   | 3.15 | 2.75
2852160 | 13.05 | 6.1  | 5.8
4074560 | 25.5  | 8.2  | 7.2

Node 1: 2 CPU cores + 2 C2050 Fermi GPUs
Node 2: 4 CPU cores
Node 3: 2 CPU cores + 2 Tesla C1060 GPUs

Page 21:

Summary

• Ported an OpenFOAM solver to a GPU-based heterogeneous cluster. The lessons learned extend to other solvers with similar characteristics (domain decomposition, iterative, sparse computations).

• For large problem sizes, a speedup of around 4x on the GPU-based cluster.

• Identified the imbalance in GPU clusters caused by the fast evolution of GPUs, and proposed a run-time analyzer that dynamically load-balances the computation by repartitioning the input data.

Page 22:

Future Work

• Scale up to a larger cluster and perform experiments with multi-tenancy in the cluster

• Extend this work to incremental data repartitioning without restarting the application

• Introduce a more sophisticated model for imbalance analysis to support a larger subset of applications.


Page 23:


Thank You!