ZKI-Tagung Supercomputing, 10-11. Okt. 2013
Parallel Patterns for Composing and Nesting Parallelism in HPC Applications
Hans Pabst, Software and Services Group, Intel Corporation


Page 1

ZKI-Tagung Supercomputing, 10-11. Okt. 2013
Parallel Patterns for Composing and Nesting Parallelism in HPC Applications

Hans Pabst

Software and Services Group, Intel Corporation

Page 2

Copyright© 2013, Intel Corporation. All rights reserved. *Other brands and names are the property of their respective owners.

Agenda

Introduction

Parallel Patterns Today

What about HPC?

Summary

Page 3

What is a Parallel Pattern?

Design patterns

An encoded expertise to capture that “quality without a name” that distinguishes truly excellent designs

A small number of patterns can support a wide range of applications

Parallel pattern

A commonly occurring combination of task distribution and data sharing (data access).

Page 4

Parallel Patterns:

• Superscalar sequence
• Speculative selection
• Map
• Scan and Recurrence
• Pipeline
• Reduce
• Pack and Expand
• Nest
• Search and Match
• Stencil
• Partition
• Gather/scatter

* http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/2011Update/tbbxe/Design_Patterns.pdf

http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/People/guyb/papers/Ble90.pdf

Page 5

Why Parallel Patterns?

Pattern: “DSL for parallel programming”

• Ready to use “pieces” efficiently mapping concurrency to hardware parallelism

• Less error-prone (data races, etc.)

• More productive

… and what is the reality?
– Parallel programming is still difficult.
– Standard languages progress slowly.

Page 6

References

Details a pattern language for parallel algorithm design

Represents the author's hypothesis of how to think about parallel programming

Source code samples in MPI, OpenMP and Java

Patterns for Parallel Programming. Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill. Addison-Wesley, 2005, ISBN 0321228111.

Page 7

(c) 2012, publisher: Morgan Kaufmann

References (cont.)

Teaches parallel programming in a new, more effective manner.

It’s about effective parallel programming.

Not about any specific hardware.

Page 8

References (cont.)

• http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/People/guyb/papers/Ble90.pdf

• http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/2011Update/tbbxe/Design_Patterns.pdf

• Arch D. Robison: Composable Parallel Patterns with Intel Cilk Plus, IEEE, 2013

Page 9

Finding Concurrency

Practical approach: “optimizing” sequential code*

– Use profiler, such as OProfile or Intel VTune Amplifier XE

– Identify the hotspots of an application; then:
Check if the hotspots consist of independent tasks
Check if the hotspots can execute independently

Theoretical approach: design document

– Examine the design components, services, etc.

– Find components that contain independent operations

Other approach: parallel patterns (Practical?)

Find parallel pattern. Make the pattern explicit in the code.

– Use a library-based model, e.g., C++11 or Intel TBB

– Use a language primitive, e.g., C++11 or Intel Cilk Plus


* Parallelizing a sequential algorithm does not necessarily lead to the best known parallel algorithm.

Page 10

Intel Advisor

Tool for scalability and what-if analysis

• Modeling: use code annotations to introduce parallelism

• Evaluation: estimate speedup and check correctness

• GUI-driven assistant (5 steps)

Productivity and safety

• Parallel correctness is checked based on a correct program

• Non-intrusive API

It’s not auto-parallelization

It’s not modifying the code

Page 11

1. Survey the application (profiler)

2. Annotate the application (API)

3. Check suitability (estimation)

4. Check correctness (analysis)

5. Apply parallel model (user)

Idea: Correctly parallelized application

Intel Advisor: Five Steps

Page 12

Intel Advisor: Quicksort Example

template<typename I> void serial_qsort(I begin, I end) {
  typedef typename std::iterator_traits<I>::value_type T;
  if (begin != end) {
    const I pivot = end - 1;
    const I middle = std::partition(begin, pivot,
      std::bind2nd(std::less<T>(), *pivot));
    std::swap(*pivot, *middle);
    ANNOTATE_SITE_BEGIN(Parallel Region);
    ANNOTATE_TASK_BEGIN(Left Partition);
    serial_qsort(begin, middle);
    ANNOTATE_TASK_END(Left Partition);
    ANNOTATE_TASK_BEGIN(Right Partition);
    serial_qsort(middle + 1, end);
    ANNOTATE_TASK_END(Right Partition);
    ANNOTATE_SITE_END(Parallel Region);
  }
}

* Pure Quicksort (i.e., no attempt to avoid the tail recursion).

Page 13

Intel Advisor: Fortran w/ Annotations

subroutine sgemv(res, mat, vec, nrows, ncols)

implicit none

integer :: nrows, ncols

real(kind=4), intent(out), dimension(0:nrows-1) :: res

real(kind=4), intent(in), dimension(0:ncols-1) :: vec

real(kind=4), intent(in), dimension(0:nrows*ncols-1) :: mat

integer :: i, u, v

ANNOTATE_SITE_BEGIN("parallel region")

do i = 0, nrows - 1

ANNOTATE_ITERATION_TASK("I")

u = i * ncols

v = u + ncols - 1

res(i) = DOT_PRODUCT(mat(u:v), vec)

end do

ANNOTATE_SITE_END("parallel region")

end subroutine

Page 14

Intel Advisor: Fortran (cont.)

• Subset of Intel Advisor annotations
#define ANNOTATE_SITE_BEGIN(NAME) call annotate_site_begin(NAME)

#define ANNOTATE_SITE_END(NAME) call annotate_site_end()

#define ANNOTATE_ITERATION_TASK(NAME) call annotate_iteration_task(NAME)

#define ANNOTATE_TASK_BEGIN(NAME) call annotate_task_begin(NAME)

#define ANNOTATE_TASK_END(NAME) call annotate_task_end()

#define ANNOTATE_LOCK_ACQUIRE(ADDRESS) call annotate_lock_acquire(ADDRESS)

#define ANNOTATE_LOCK_RELEASE(ADDRESS) call annotate_lock_release(ADDRESS)

• Enable source code preprocessing
$ ifort -fpp source_code_with_macros.f90

• Advisor module (advisor_annotate) and library
-I$ADVISOR_XE_2013_DIR/include/intel64
-L$ADVISOR_XE_2013_DIR/lib64
-ladvisor

Page 15

Intel Advisor: Finding Concurrency

• Introduce “as-if” parallelism and evaluate it safely

• Works for C, C++, and Fortran

• Threading model agnostic

Page 16

Agenda

Introduction

Parallel Patterns Today

What about HPC?

Summary

Page 17

OpenCL*

• Elemental functions in particular, or kernel functions in general
– Similar to a vectorizable loop body

• Tiling serves multiple purposes
– Maps to the memory hierarchy
– ND-range “loop” blocking; N>1: nested loops
– Synchronization

Page 18

Fortran

• Standardization advances quickly (’03, ‘08, ‘13)

– Contrary to a “dying language”

• Cornerstones

– Array notation (“slices”)

– Elemental functions

– Fortran Co-array

Page 19

C++: Intel® Threading Building Blocks (Library-Based Approach)


Concurrent Containers
– Concurrent access, and a scalable alternative to containers that are externally locked for thread-safety

Generic Parallel Algorithms
– An efficient, scalable way to exploit the power of multicore without having to start from scratch

Task scheduler
– The engine that empowers parallel algorithms; it employs task-stealing to maximize concurrency

Synchronization Primitives
– User-level and OS wrappers for mutual exclusion, ranging from atomic operations to several flavors of mutexes and condition variables

Memory Allocation
– Per-thread scalable memory manager and false-sharing-free allocators

Threads
– OS API wrappers

Thread Local Storage
– Scalable implementation of thread-local data that supports an unlimited number of TLS slots

Miscellaneous
– Thread-safe timers

TBB Flow Graph


Page 21

Loop parallelization

parallel_for

parallel_reduce

- load balanced parallel execution

- fixed number of independent iterations

parallel_deterministic_reduce

- run-to-run reproducible results

parallel_scan

- computes parallel prefix

y[i] = y[i-1] op x[i]

Parallel Algorithms for Streams

parallel_do

- Use for unstructured stream or pile of work

- Can add additional work to pile while running

parallel_for_each

- parallel_do without an additional work feeder

pipeline / parallel_pipeline

- Linear pipeline of stages

- Each stage can be parallel, serial in-order, or serial out-of-order.

- Uses cache efficiently

Parallel function invocation

parallel_invoke

- Parallel execution of a number of user-specified functions

Parallel sorting

parallel_sort

Computational graph

flow::graph

- Implements dependencies between tasks

- Pass messages between tasks

C++: Intel® Threading Building Blocks (Generic Algorithms)

Page 22

C++11 a.k.a. “C++0B”

… it’s late but not too late.

• Core language: extended with a memory consistency model

• Standard library: parallelism can now rely on the memory consistency model

What about Pthreads? Not portable*.

– Lack of memory consistency model


* Remember, Intel Architecture supports sequentially consistent atomics efficiently (“strong memory consistency model”).

Page 23

C++11

Core language

• Lambda expressions, type inference, etc.

• Range-based “for” loops

• TLS, atomics

Standard Template Lib. (STL)

• std::thread, std::async

• Synchronization (atomics, locks, and conditions)

• No tasks!

struct background {

background()

: i(0), thread(work, this)

{}

~background() {

thread.join();

}

static

void work(background* self) {

++self->i; // do something

}

int i;

std::thread thread;

};

// work in the background

background task;

Page 24

Thread vs. Task

HW view “thread”
– Stream of instructions (along with a given state)
– Hardware resource, e.g., “core0”

OS view
– Stores the machine state (registers, etc.)
– At least one thread per process

User view “task”
– It is not so interesting who the actual “worker” will be.

User code
– Code that runs in its own process space (separate from the OS kernel)

[Figure: a scheduler maps M tasks onto N threads]

Page 25

Intel® Cilk™ Plus

• Tasking model with only three keywords

– Library based as far as Hyperobjects are concerned

– Really, the “Plus” does not mean C++...

– It is for C and C++ programmers

• Vectorization model

– Array notation and SIMD functions (“elemental”)

– Pragmas: almost all are promoted to OpenMP 4.0

Array Notation is accepted in GNU* GCC mainline!

http://software.intel.com/en-us/articles/cilk-plus-array-notation-for-c-accepted-into-gcc-mainline

Page 26

Intel® Cilk™ Plus: Writing Vector Code

Array Notation

A[:] = B[:] + C[:];

SIMD Directive

#pragma simd
for (int i = 0; i < N; ++i) {
  A[i] = B[i] + C[i];
}

Elemental Function

__declspec(vector)
float ef(float a, float b) {
  return a + b;
}

A[:] = ef(B[:], C[:]);

Auto-Vectorization

for (int i = 0; i < N; ++i) {
  A[i] = B[i] + C[i];
}

Page 27

Intel® Cilk™ Plus and OpenMP 4.0

The programmer (i.e. you!) is responsible for correctness

Available clauses (both OpenMP and Intel versions)

• PRIVATE |

• FIRSTPRIVATE |

• LASTPRIVATE | --- like OpenMP

• REDUCTION |

• COLLAPSE | (OpenMP 4.0; for nested loops)

• LINEAR (additional induction variables)

• SAFELEN (OpenMP 4.0)

• VECTORLENGTH (Intel only)

• ALIGNED (OpenMP 4.0)

• ASSERT (Intel only; “vectorize or die!”)

Page 28

Intel® Cilk™ Plus: Targeting Multicore

void qsort(int* begin, int* end)
{
  if (begin != end) {
    int* pivot = end - 1;
    int* middle = std::partition(begin, pivot,
      std::bind2nd(std::less<int>(), *pivot));
    std::swap(*pivot, *middle);
    cilk_spawn qsort(begin, middle);
    qsort(middle + 1, end);
    cilk_sync;
  }
}


(std::bind2nd is deprecated in C++11)

Page 29

Intel® Cilk™ Plus: Lock Free(Reducers or Hyperobjects)

int accumulate(const int* array, std::size_t size)
{
  reducer_opadd<int> result(0);
  cilk_for (std::size_t i = 0; i < size; ++i) {
    result += array[i];
  }
  return result.get_value();
}

Page 30

Intel® Cilk™ Plus: Array Notation

Correspond to vector processing (SIMD)

• Explicit construct to express vectorization

• Compiler assumes no aliasing of pointers

Synonyms

• array notation, array section, array slice, vector

Syntax

• [start:size], or

• [start:size:stride]

• [:] all elements*


* only works for array shapes known at compile-time

Page 31

Intel® Cilk™ Plus: Notation Example

Array notation:

y[0:10:10] = sin(x[20:10:2]);

Corresponding loop:

for (int i = 0, j = 0, k = 20;

i < 10; ++i, j += 10, k += 2)

{

y[j] = sin(x[k]);

}

Page 32

Intel® Cilk™ Plus: Array Operators

Most C/C++ operators work with array sections

• Element-wise operators a[0:10] * b[4:10] (rank and size must match)

• Scalar expansion a[10:10] * c

Assignment and evaluation

• Evaluation of the RHS before assignment a[1:8] = a[0:8] + 1

• A parallel assignment to an overlapping LHS (as above) requires a temporary!

Gather and scatter

a[idx[0:1024]] = 0

b[idx[0:1024]] = a[0:1024]

c[0:512] = a[idx[0:512:2]]

Page 33

Intel® Cilk™ Plus: Array Op. (cont.)

Index generation

a[:] = __sec_implicit_index(rank)

Shift operators

b[:] = __sec_shift (a[:], signed shift_val, fill_val)

b[:] = __sec_rotate(a[:], signed shift_val)

Cast operation (changes array dimensionality), e.g.,

float[100] to float[10][10]

Page 34

Intel® Cilk™ Plus: Array Reductions

Reductions

Built-in

__sec_reduce_add(a[:]), __sec_reduce_mul(a[:])
__sec_reduce_min(a[:]), __sec_reduce_max(a[:])
__sec_reduce_min_ind(a[:]), __sec_reduce_max_ind(a[:])
__sec_reduce_all_zero(a[:]), __sec_reduce_all_nonzero(a[:])
__sec_reduce_any_nonzero(a[:])

User-defined

result __sec_reduce (initial, a[:], fn-id)

void __sec_reduce_mutating(reduction, a[:], fn-id)

Page 35

Intel® Cilk™ Plus: SIMD Functions

__declspec(vector)
void kernel(int& result, int a, int b)
{
  result = a + b;
}

void sum(int* result, const int* a, const int* b,
         std::size_t size)
{
  cilk_for (std::size_t i = 0; i < size; i += 8) {
    const std::size_t n = std::min<std::size_t>(size - i, 8);
    kernel(result[i:n], a[i:n], b[i:n]);
  }
}


* For example, the remainder could also be handled separately (outside of the loop).

Page 36

Intel® Threading Runtimes: Summary

Open source’d

• Intel® Threading Building Blocks

• Intel® Cilk™ Plus

• Intel® OpenMP*

Page 37

Agenda

Introduction

Parallel Patterns Today

What about HPC?

Summary

Page 38

Intel TBB*: Affinity Partitioner
Applicable to parallel_for, parallel_reduce, and parallel_scan

void sum(int* result, const int* a, const int* b,
         std::size_t size)
{
  static affinity_partitioner partitioner;
  parallel_for(blocked_range<std::size_t>(0, size),
    [=](const blocked_range<std::size_t>& r) {
      for (std::size_t i = r.begin(); i != r.end(); ++i) {
        result[i] = a[i] + b[i];
      }
    },
    partitioner);
}


* “That’s not HPC!” Well, this is not a matter of religion: it is C++, and people do use it for HPC…

Page 39

OpenMP 4.0: proc_bind clause

Give the system more information about the mapping needed at each parallel region…

void fft_func(...)

{

nthr = mkl_domain_get_max_threads(MKL_FFT);

# pragma omp parallel num_threads(nthr), proc_bind(spread)

{

// do an FFT

}

}

void blas_func(...)

{

nthr = mkl_domain_get_max_threads(MKL_BLAS);

# pragma omp parallel num_threads(nthr), proc_bind(close)

{

// execute a BLAS routine

}

}

Summary
• Gives you the affinity you’re looking for
• Actual implementations may currently be expensive
• It is already supported in the Intel® Compiler 14.0, so you can play with it right away!

Page 40

Hybrid Parallelism: MPI + Threading

Pinning and Affinity

I_MPI_PIN_DOMAIN=socket

KMP_AFFINITY=compact,1

Intel® Xeon Phi

MPI + Offload

MPI symmetric model

Host/coprocessor only

mpiexec.hydra $* \
  -host $HOST -n 1 \
  -env I_MPI_PIN_DOMAIN=socket \
  -env OMP_NUM_THREADS=2 \
  -env KMP_AFFINITY=compact,1 \
  $ROOT/compile/linux-isc-xeon-mpi${OGL}/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
: \
  -host $HOST -n $RNKS \
  -env I_MPI_PIN_DOMAIN=socket \
  -env OMP_NUM_THREADS=30 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=compact,1 \
  $ROOT/compile/linux-isc-xeon-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
: \
  -host mic0 -n $MICR \
  -env LD_LIBRARY_PATH=$MIC_LD_LIBRARY_PATH \
  -env I_MPI_PIN_DOMAIN=node \
  -env OMP_NUM_THREADS=224 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=balanced \
  -env KMP_PLACE_THREADS=${MICT}T \
  $ROOT/compile/linux-isc-mic-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
: \
  -host mic1 -n $MICR \
  -env LD_LIBRARY_PATH=$MIC_LD_LIBRARY_PATH \
  -env I_MPI_PIN_DOMAIN=node \
  -env OMP_NUM_THREADS=224 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=balanced \
  -env KMP_PLACE_THREADS=${MICT}T \
  $ROOT/compile/linux-isc-mic-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
$NULL

Page 41

Agenda

Introduction

Parallel Patterns Today

What about HPC?

Summary

Page 42

Summary

Prerequisites for composing and nesting parallelism

• Tasks in addition to (or instead of) plain threads

• Runtime dynamic scheduler (nesting!)

Challenges

• Scheduling and load balancing

• Determinism and reproducibility


Page 43

Page 44

INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Copyright © 2013, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries.

Optimization Notice

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

Legal Disclaimer & Optimization Notice
