
Slide 1: P-Threads and Shared Memory Programming in PETSc

Kerry Stevens
APAM Research Conference
February 3, 2012

Slide 2: Software Needs To Utilize Resources

• Most HPC systems consist of many, many, many nodes networked together in a suitable topology
• A node most often consists of multiple processors (sockets) that each have multiple cores
• Using only 1 MPI process per node exercises only 1 core of 1 processor
• P-Threads/OpenMP are designed for shared memory parallel programming, coordinating work intra-node

(Illustrations: your basic desktop and your basic supercomputer.)

Slide 3: Communication Model

• P-Threads provides 2 routines to inform waiting threads that the state of some shared data structure has changed

  • pthread_cond_broadcast awakens all the waiting threads
    • Initiates a 'thundering herd' all contending for the lock
  • pthread_cond_signal awakens only 1 waiting thread
    • Can specify which thread is to be awoken by using a different condition variable for each thread
• pthread_cond_wait causes the thread to unlock the held mutex and sleep ATOMICALLY (see the sketch below)

(Cartoon: "WAKE UP EVERYONE!" from a broadcast rouses every sleeping thread; "WAKE UP!" from a signal rouses only one, while the rest keep sleeping.)
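A minimal sketch of how these routines cooperate (illustrative names, not PETSc's API): one mutex guards a shared flag, waiters sleep on a condition variable, and the updating thread chooses between broadcast and signal.

#include <pthread.h>

/* Illustrative shared state: one mutex, one condition variable, one flag. */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_changed = 0;

/* A waiting thread: sleeps until the shared data structure has changed. */
void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!data_changed)                 /* re-check: wake-ups can be spurious   */
        pthread_cond_wait(&ready, &lock); /* unlocks `lock` and sleeps atomically;
                                             returns holding `lock` again         */
    /* ... use the shared data structure while holding the lock ... */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* The updating thread: changes the state, then informs the waiters. */
void update(void)
{
    pthread_mutex_lock(&lock);
    data_changed = 1;
    pthread_cond_broadcast(&ready);  /* wake every waiter: the 'thundering herd' */
    /* pthread_cond_signal(&ready);     or wake exactly one waiter instead       */
    pthread_mutex_unlock(&lock);
}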

Slide 4: The Track Race Analogy

• Guy with the gun – the 'main' thread
• Runners – the POSIX threads
• The Event
  • All runners line up and get into position
  • The guy with the gun fires it off
  • Runners perform their task: run
• How can the above procedure be accomplished in software?

Slide 5: The Track Race Analogy Continued …

• Common track race events include the 100m, 200m, 400m, and 800m
• The track race events of software are what we term thread kernels: vector dot product, matrix-vector multiply, etc.
• Just as the 100m takes less time to complete than the 800m, different thread kernels take different amounts of time to complete

Slide 6: The Firing of the Starting Gun

• In real life the Starter fires the gun, the runners all hear it simultaneously, and everyone starts running
• The Starter firing the gun is analogous to the 'main' thread issuing a pthread_cond_broadcast call

Slide 7: Implicit Synchronization

The Starter

• Must ensure all the runners are lined up and in position
• Must ensure all the runners have finished the race before setting up a new race
• Must collect race results (sometimes)

Slide 8: Implicit Synchronization

The Runners

• In real life, unless running a relay, runners are completely independent of one another

• In software, a baton (a mutex) enters the picture
• Depending on the implementation, threads may coordinate solely with the 'main' thread or with multiple other threads
• The need for runner synchronization makes thread pool use much less appealing


Slide 9: Why can't all runners (threads) start simultaneously?

• Associated with pthread_cond_wait() are a condition variable AND a mutex

• Is there any way that wake-up & run can truly be simultaneous, not sequential?
• NO! Unlock-and-wait must be atomic, and each awakened thread must reacquire the mutex before returning from pthread_cond_wait, so wake-ups are serialized

Slide 10: Thread Pool & Use

• Collection of threads
• Often used in the producer/consumer model
  • Producer: adds a task to some shared data structure
  • Consumer: removes a task from the shared data structure and completes it
• A lock must protect the shared data structure (see the sketch below)

(Diagram: a producer thread adds tasks to a queue of tasks; a consumer thread removes them.)
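A minimal producer/consumer sketch of the idea (illustrative names and a plain linked-list queue, not PETSc's data structures): one lock protects the queue, and a condition variable lets idle consumers sleep until a task arrives.

#include <pthread.h>
#include <stdlib.h>

/* A task is just a function pointer plus an argument. */
typedef struct Task {
    void (*run)(void *);
    void *arg;
    struct Task *next;
} Task;

static Task *head = NULL, *tail = NULL;
static pthread_mutex_t queue_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_nonempty = PTHREAD_COND_INITIALIZER;

/* Producer: append a task and wake one sleeping consumer. */
void add_task(void (*run)(void *), void *arg)
{
    Task *t = malloc(sizeof(*t));
    t->run = run; t->arg = arg; t->next = NULL;

    pthread_mutex_lock(&queue_lock);
    if (tail) tail->next = t; else head = t;
    tail = t;
    pthread_cond_signal(&queue_nonempty);
    pthread_mutex_unlock(&queue_lock);
}

/* Consumer: remove a task (sleeping while the queue is empty) and run it. */
void *consumer(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (head == NULL)
            pthread_cond_wait(&queue_nonempty, &queue_lock);
        Task *t = head;
        head = t->next;
        if (head == NULL) tail = NULL;
        pthread_mutex_unlock(&queue_lock);

        t->run(t->arg);              /* complete the task outside the lock */
        free(t);
    }
    return NULL;
}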

Slide 11: Thread Pool Implementation in PETSc

• Thread pools are designed for programs comprised of many short tasks
• Run time combines computation time and P-Thread overhead time
• Small jobs can take longer using P-Threads due to the (comparatively) large amount of time required for thread creation
• Thread creation therefore occurs at startup and thread termination at shutdown, as sketched below
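A sketch of pool start-up and shutdown under stated assumptions (NTHREADS, worker_main, and the routine names are illustrative, not PETSc's actual interface): threads are created once and joined once, so individual kernels never pay the creation cost.

#include <pthread.h>

#define NTHREADS 4                     /* illustrative; the real count is configurable */

static pthread_t workers[NTHREADS];

extern void *worker_main(void *arg);   /* hypothetical wait-for-work loop run by each worker */

/* Create the pool once, at startup. */
void pool_startup(void)
{
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&workers[i], NULL, worker_main, (void *)i);
}

/* Tear the pool down once, at shutdown (after the workers have been told
   to exit their wait-for-work loops, e.g. via a broadcast). */
void pool_shutdown(void)
{
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(workers[i], NULL);
}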

Slide 12: Thread Pool Implementations

“True Pool”

(Diagram: Main broadcasts to the worker pool; the worker pool signals Main.)

• 1 mutex coordinates/synchronizes everyone
• “Thundering herd” for mutex control
• The last worker to finish signals Main (see the sketch below)
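A sketch of the “true pool” idea under illustrative names (not PETSc's internals): a single mutex/condition-variable pair, a broadcast as the starting gun, and a counter so the last worker to finish can wake Main. The completion wake-up also uses a broadcast, because Main and the workers share the one condition variable.

#include <pthread.h>

#define NWORKERS 4                      /* illustrative */

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  pool_cond = PTHREAD_COND_INITIALIZER;
static int job_id = 0;                  /* incremented by Main for each new kernel    */
static int busy   = 0;                  /* workers still running the current kernel   */

void *pool_worker(void *arg)
{
    int last_job = 0;
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&pool_lock);
        while (job_id == last_job)                  /* wait for Main's broadcast       */
            pthread_cond_wait(&pool_cond, &pool_lock);
        last_job = job_id;                          /* thundering herd: every worker   */
        pthread_mutex_unlock(&pool_lock);           /* contends for pool_lock on wake  */

        /* ... run this thread's portion of the kernel ... */

        pthread_mutex_lock(&pool_lock);
        if (--busy == 0)
            pthread_cond_broadcast(&pool_cond);     /* last worker wakes Main          */
        pthread_mutex_unlock(&pool_lock);
    }
    return NULL;
}

/* Main: announce a new kernel, then sleep until the last worker reports in. */
void run_kernel(void)
{
    pthread_mutex_lock(&pool_lock);
    busy = NWORKERS;
    job_id++;
    pthread_cond_broadcast(&pool_cond);             /* the "starting gun"              */
    while (busy > 0)
        pthread_cond_wait(&pool_cond, &pool_lock);
    pthread_mutex_unlock(&pool_lock);
}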

Slide 13: Thread Pool Implementations

“Chain”

• Explicit ordering of worker threads
• Many mutexes, 1 for each worker
• Sequentiality is explicit (see the sketch below)

(Diagram: Main and workers 0–6 in a chain, with signals “Get to Work”, “Wake Up”, “I'm Done”, and “All Done”.)
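A sketch of the chain idea under illustrative names (not PETSc's actual implementation): every worker owns a mutex/condition-variable pair, the “wake up” flag travels down the chain, and the “done” flag travels back up to Main.

#include <pthread.h>

#define NWORKERS 4                        /* illustrative worker count */

/* One link per worker: its own mutex, condition variable, and two flags. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int go, done;
} Link;

static Link chain[NWORKERS];

void chain_init(void)
{
    for (int i = 0; i < NWORKERS; i++) {
        pthread_mutex_init(&chain[i].lock, NULL);
        pthread_cond_init(&chain[i].cond, NULL);
        chain[i].go = chain[i].done = 0;
    }
}

static void link_wait(Link *l, int *flag)  /* block until *flag is set, then consume it */
{
    pthread_mutex_lock(&l->lock);
    while (!*flag) pthread_cond_wait(&l->cond, &l->lock);
    *flag = 0;
    pthread_mutex_unlock(&l->lock);
}

static void link_set(Link *l, int *flag)   /* set *flag and wake the single waiter */
{
    pthread_mutex_lock(&l->lock);
    *flag = 1;
    pthread_cond_signal(&l->cond);
    pthread_mutex_unlock(&l->lock);
}

/* Worker i: wake up, pass the wake-up down the chain, run its share of the
   kernel, wait for its successor to finish, then report "done" upward. */
void *chain_worker(void *arg)
{
    long i = (long)arg;
    link_wait(&chain[i], &chain[i].go);                          /* "wake up"              */
    if (i + 1 < NWORKERS) link_set(&chain[i + 1], &chain[i + 1].go);
    /* ... run this thread's portion of the kernel ... */
    if (i + 1 < NWORKERS) link_wait(&chain[i + 1], &chain[i + 1].done);
    link_set(&chain[i], &chain[i].done);                         /* "I'm done" / "all done" */
    return NULL;
}

/* Main: signal "get to work" to worker 0, then wait for worker 0's "all done". */
void run_chain_race(void)
{
    link_set(&chain[0], &chain[0].go);
    link_wait(&chain[0], &chain[0].done);
}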

Slide 14: Thread Pool Implementations

“Dictatorship”

• Many mutexes, 1 for each worker thread

• Main sends signals sequentially

• Each worker thread signals Main

• Workers truly independent

(Diagram: Main connected directly to each of workers 0–6.)

Slide 15: Thread Pool Implementations

“Corporate” or “Tree”

• Threads tell their subordinates to get to work

• Subordinates inform their bosses that they're done

• “Parallelization” of the wake-up procedure (see the sketch below)

(Diagram: a tree of workers 0–6 rooted at Main, each thread waking its subordinates.)
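A sketch of the tree wake-up, reusing the Link type and the link_wait/link_set helpers from the chain sketch above (still illustrative, not PETSc's code): worker i treats workers 2i+1 and 2i+2 as its subordinates.

static Link node[NWORKERS];            /* one link per worker, initialized like chain[] */

void *tree_worker(void *arg)
{
    long i = (long)arg, left = 2*i + 1, right = 2*i + 2;

    link_wait(&node[i], &node[i].go);                     /* boss says "get to work"   */
    if (left  < NWORKERS) link_set(&node[left],  &node[left].go);
    if (right < NWORKERS) link_set(&node[right], &node[right].go);

    /* ... run this thread's portion of the kernel ... */

    if (left  < NWORKERS) link_wait(&node[left],  &node[left].done);
    if (right < NWORKERS) link_wait(&node[right], &node[right].done);
    link_set(&node[i], &node[i].done);                    /* report "done" to the boss */
    return NULL;
}

/* Main wakes only worker 0 and waits only for worker 0's "done". */
void run_tree_race(void)
{
    link_set(&node[0], &node[0].go);
    link_wait(&node[0], &node[0].done);
}

With 7 workers the wake-up now propagates through roughly 3 levels of signals instead of 7 sequential ones.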

Slide 16: Threads And Core Affinity

• Which core does Main run on? Which cores do my threads run on?
• Process-to-processor and thread-to-core mappings greatly affect performance
• The “task mapping problem” is heavily researched (see the affinity sketch below)

(Schematics: example AMD and Intel node topologies.)
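One way to pin a thread to a core on Linux is the GNU extension pthread_setaffinity_np; a minimal sketch (the helper name and the choice of core are illustrative, and the best placement depends on the machine's topology):

#define _GNU_SOURCE            /* pthread_setaffinity_np is a GNU/Linux extension */
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core. */
int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

A worker created as thread i could call pin_to_core(i) at the top of its start routine so that repeated kernels keep reusing the same core's caches.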

Slide 17: Parallel Performance

• All tests conducted on MCS's PETSC machine (see schematic)
• What's the bottleneck?
  • Compute bound: program execution time is determined by the speed at which floating point operations can be retired
  • Memory bound: program execution time is determined by the speed at which cores can obtain data from memory
• Important questions to ask:
  • Can we double sequential performance by using 2 threads?
  • Can we quadruple sequential performance by using 4 threads?
  • Can we octuple sequential performance by using 8 threads?

(Schematic: the PETSC machine's processors and their cores.)

Slide 18: Sequential (Uni-core) Performance

Hardware Limited

• Argonne “PETSC” machine
  • 32KB L1 data & instruction caches
  • 6144KB unified L2 cache
• Working set
  • Amount of physical memory needed by the program
  • As its size increases, the L2 cannot hold it all and data must come from DRAM

Slide 19: Vector Dot Product Results

Vector of Size 100,000 Elements

• Sequential
  • Runs made using different cores
  • As expected, the particular core utilized did not make a difference
  • Results ranged from 1868 to 1983 Mflops/sec, with 1960 Mflops/sec as the median
• Thread Creation/Destruction (2 Threads)
  • Results varied widely, between 860 and 1300 Mflops/sec
  • Indicative of the current inability to control where threads are placed

Slide 20: Thread Pool Vector Dot Product Results

Vector of Size 100,000 Elements

• Performance shows how much data movement matters

(Plots for three placements: Thread 0 on Core 0, with Thread 1 on Core 4, on Core 1, or on Core 2.)

Slide 21: Vector Norm Results

Vector of Size 1,000,000 Elements

• Sequential
  • Median of 850 Mflops/sec
  • Working set overflow of the L2 (roughly 8 MB of double-precision data versus 6144 KB of L2) means the core has to wait for data to arrive from DRAM
• Thread Creation/Destruction (2 Threads)
  • Performance comparable to sequential

Slide 22: Thread Pool Vector Norm Results

Vector of Size 1,000,000 Elements

(Plots for three placements: Thread 0 on Core 0, with Thread 1 on Core 4, on Core 1, or on Core 2.)

Slide 23: Matrix-Vector Multiplication

SNES ex19, dmmg_nlevels = 6

• The sequential implementation sees 585 Mflops/sec

Slide 24: References

• Documentation: http://www.mcs.anl.gov/petsc/docs
  • PETSc Users Manual
  • Manual pages
  • Many hyperlinked examples
  • FAQ, troubleshooting info, installation info, etc.
• Publications: http://www.mcs.anl.gov/petsc/publications
  • Research and publications that make use of PETSc
• Programming with POSIX Threads, by David R. Butenhof