
Page 1:

How OpenMP* is Compiled

Acknowledgements: Slides here were also contributed by Chunhua Liao and Lei Huang from the HPCTools Group of the University of Houston.

Barbara Chapman, University of Houston

* The name “OpenMP” is the property of the OpenMP Architecture Review Board.

Page 2:

How Does OpenMP Enable Us to Exploit Threads?

• OpenMP provides a thread programming model at a “high level”.
  – The user does not need to specify all the details
    – Especially with respect to the assignment of work to threads
    – Creation of threads
• User makes strategic decisions
• Compiler figures out details
• Alternatives:
  – MPI
  – POSIX thread library is lower level
  – Automatic parallelization is even higher level (user does nothing)
    – But usually successful on simple codes only
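A minimal sketch of this division of labor (added here for illustration, not part of the original slides): the user writes a single directive, and the compiler and runtime decide how to create threads and how to hand loop iterations to them.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    double a[1000];

    /* The only parallelization detail the user supplies is this directive;
       thread creation and the assignment of iterations to threads are
       worked out by the compiler and the OpenMP runtime. */
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++)
        a[i] = 0.5 * i;

    printf("a[999] = %f\n", a[999]);
    return 0;
}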

Page 3:

OpenMP Parallel Computing Solution Stack

[Layer diagram, top to bottom:]
• End User
• Application
• Directives, Compiler | OpenMP library | Environment variables
• Runtime library
• OS/system support for shared memory

Page 4:

Recall Basic Idea: How OpenMP Works

• User must decide what is parallel in program
• Makes any changes needed to original source code
  – E.g. to remove any dependences in parts that should run in parallel
• User inserts directives telling compiler how statements are to be executed
  – what parts of the program are parallel
  – how to assign code in parallel regions to threads
  – what data is private (local) to threads
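A hypothetical illustration of such a change (added, not from the slides): a scratch variable that would otherwise be shared is declared private, and the directive states which loop is parallel. The names a, b, c, n and t are placeholders.

/* The scratch value t would create a dependence between iterations if it
   were shared; marking it private gives each thread its own copy. */
double t;
#pragma omp parallel for private(t)
for (int i = 0; i < n; i++) {
    t = 2.0 * b[i];
    a[i] = t + c[i];
}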

Page 5:

How the User Interacts with the Compiler

• Compiler generates explicit threaded code
  – shields user from many details of the multithreaded code
• Compiler figures out details of the code each thread needs to execute
• Compiler does not check that programmer directives are correct!
  – Programmer must be sure the required synchronization is inserted
• The result is a multithreaded object program
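For instance (an added illustration, not in the original slides), most compilers will happily translate the following even though it contains a data race that only the programmer can recognize and repair; a and n are placeholder names.

/* Syntactically valid, so the compiler generates parallel code; nothing
   warns that sum is updated by all threads at once. The fix is the
   programmer's job, e.g. reduction(+:sum) or an atomic update. */
double sum = 0.0;
#pragma omp parallel for
for (int i = 0; i < n; i++)
    sum += a[i];    /* data race */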

Page 6:

Recall Basic Idea of OpenMP

• The program generated by the compiler is executed by multiple threads
  – One thread per processor or core
• Each thread performs part of the work
  – Parallel parts executed by multiple threads
  – Sequential parts executed by a single thread
• Dependences in parallel parts require synchronization between threads
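One common way such a dependence is synchronized (an added sketch with placeholder names) is an atomic update, so that only the conflicting operation is serialized:

/* Many threads increment the shared counter; the atomic construct makes
   each increment indivisible while the rest of the loop stays parallel.
   matches() is a hypothetical predicate. */
int hits = 0;
#pragma omp parallel for
for (int i = 0; i < n; i++) {
    if (matches(i)) {
        #pragma omp atomic
        hits++;
    }
}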

Page 7:

OpenMP Implementation

[Diagram: a Fortran/C/C++ program with OpenMP directives (annotated source code) can be compiled two ways. Sequential compilation produces sequential object code; OpenMP compilation with an OpenMP Fortran/C/C++ compiler produces parallel object code containing calls to the runtime library.]

Page 8:

OpenMP Implementation

• If program is compiled sequentially
  – OpenMP comments and pragmas are ignored
• If code is compiled for parallel execution
  – comments and/or pragmas are read, and drive translation into a parallel program
• Ideally, one source for both sequential and parallel program (big maintenance plus) – see the sketch below

Usually this is accomplished by choosing a specific compiler option
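One standard mechanism that helps keep a single source (an added example, not from the slides) is the _OPENMP macro, which conforming compilers define only when the OpenMP option is enabled:

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void)
{
#ifdef _OPENMP
    /* OpenMP build: pragmas are honored and the runtime is available. */
    #pragma omp parallel
    printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
#else
    /* Sequential build: any OpenMP pragmas would simply be ignored. */
    printf("sequential build\n");
#endif
    return 0;
}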

Page 9:

How is OpenMP Invoked?

The user provides the required option or switch
• Sometimes this also needs a specific optimization level, so the manual should be consulted
• May also need to set threads’ stacksize explicitly

Examples of compiler options
• Commercial: -openmp (Intel, Sun, NEC), -mp (SGI, PathScale, PGI), --openmp (Lahey, Fujitsu), -qsmp=omp (IBM), /openmp flag (Microsoft Visual Studio 2005), etc.
• Freeware: Omni, OdinMP, OMPi, OpenUH, …

Check information at http://www.compunity.org

Page 10:

How Does OpenMP Really Work?

We have seen what the application programmer does
• States what is to be carried out in parallel by multiple threads
• Gives a strategy for assigning work to threads
• Arranges for threads to synchronize
• Specifies data sharing attributes: shared, private, firstprivate, threadprivate, … (illustrated below)
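As an added, hypothetical illustration of the last point, several data sharing attributes can appear on one directive; the variable names are placeholders:

/* x is per-thread scratch space (private); scale is copied into each
   thread with its value from before the region (firstprivate); the
   arrays a and b are visible to all threads (shared). */
double x, scale = 2.0;
#pragma omp parallel for private(x) firstprivate(scale) shared(a, b)
for (int i = 0; i < n; i++) {
    x = scale * b[i];
    a[i] = x + 1.0;
}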

Page 11:

Overview of OpenMP Translation Process

• Compiler processes directives and uses them to create explicitly multithreaded code
• Generated code makes calls to a runtime library
  – The runtime library also implements the OpenMP user-level run-time routines
• Details are different for each compiler, but strategies are similar
• Runtime library and details of memory management are also proprietary
• Fortunately the basic translation is not all that difficult

Page 12:

The OpenMP Implementation…

• Transforms OpenMP programs into multi-threaded code
• Figures out the details of the work to be performed by each thread
• Arranges storage for different data and performs their initializations: shared, private…
• Manages threads: creates, suspends, wakes up, terminates threads
• Implements thread synchronization

The details of how OpenMP is implemented vary from one compiler to another. We can only give an idea of how it is done here!!

Page 13:

Structure of a Compiler

[Diagram: Source code → Front End → Middle End → Back End → Target code]

• Front End:
  – Read in the source program, ensure that it is error-free, build the intermediate representation (IR)
• Middle End:
  – Analyze and optimize the program as much as possible. “Lower” IR to machine-like form
• Back End:
  – Determine layout of program data in memory. Generate object code for the target architecture and optimize it

Page 14:

Compiler Sets Up Memory Allocation

At run time, code and objects must have locations in memory. The compiler arranges for this.

(Not all programming languages need a heap: e.g. Fortran 77 doesn’t, C does.)

[Diagram: memory layout with object code, static/global data, stack and heap]
• Stack and heap grow and shrink over time
• Grow toward each other
• Very old strategy
• Code, data may be interleaved

But in a multithreaded program, each thread needs its own stack

Page 15:

OpenMP Compiler Front End

In addition to reading in the base language (Fortran, C or C++)
• Read (parse) OpenMP directives
• Check them for correctness
  – Is the directive in the right place? Is the information correct? Is the form of the for loop permitted? …
• Create an intermediate representation with OpenMP annotations for further handling

Nasty problem: an incorrect OpenMP sentinel means the directive may not be recognized. And there might be no error message!!
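An added C illustration of the sentinel problem (hypothetical names): because the sentinel itself is wrong, the compiler sees only an unknown pragma, possibly without issuing any diagnostic, and the loop quietly runs sequentially.

/* "openmp" is not the OpenMP sentinel ("omp"), so this directive is
   dropped as an unrecognized pragma and no parallel code is generated. */
#pragma openmp parallel for
for (int i = 0; i < n; i++)
    a[i] = 0.0;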

Page 16:

OpenMP Compiler Middle End

• Preprocess OpenMP constructs
  – Translate SECTIONs to DO/FOR constructs
  – Make implicit BARRIERs explicit
  – Apply even more correctness checks
• Apply some optimizations to code to ensure it performs well
  – Merge adjacent parallel regions
  – Merge adjacent barriers

OpenMP directives reduce the scope in which some optimizations can be applied. The compiler writer must work hard to avoid a negative impact on performance.

Page 17:

OpenMP Compiler: Rest of Processing

• Translate OpenMP constructs to multithreaded code
  – Sometimes simple
    – Replace certain OpenMP constructs by calls to runtime routines, e.g.: barrier, atomic, flush, etc.
  – Sometimes a little more complex
    – Implement the parallel construct by creating a separate task that contains the code in a parallel region
    – Compiler must modify the code somewhat, and insert calls to the runtime library to fork threads and pass work to them, perform required synchronization
    – Translation of worksharing constructs requires this too
• Implement variable data attributes, set up storage and arrange for their initialization
  – Allocate space for each thread’s private data
  – Implement private, reduction, etc.

Page 18:

OpenUH Compiler Infrastructure

[Diagram: source code with OpenMP directives enters the FRONTENDS (C/C++, Fortran 90, OpenMP) of the Open64 compiler infrastructure, then flows through IPA (Inter-Procedural Analyzer), OMP_PRELOWER (preprocess OpenMP), LNO (Loop Nest Optimizer), LOWER_MP (transformation of OpenMP) and WOPT (global scalar optimizer). The CG generates IA-64/IA-32/Opteron code directly, while WHIRL2C & WHIRL2F (IR-to-source for non-Itanium targets) emit source code with runtime library calls for a native compiler. The resulting object files are linked against a portable OpenMP runtime library to produce executables.]

Collaboration between University of Houston and Tsinghua University

Page 19:

Implementing a Parallel Region: Outlining

Compiler creates a new procedure containing the region enclosed by a parallel construct
• Each thread will execute this procedure
• Shared data passed as arguments
  – Referenced via their address in the routine
• Private data stored on the thread’s stack
  – Threadprivate may be on stack or heap

Outlining introduces a few overheads, but makes the translation straightforward.
It makes the scope of OpenMP data attributes explicit.

Page 20:

An Outlining Example: Hello world

• Original code:

#include <omp.h>
void main()
{
  #pragma omp parallel
  {
    int ID = omp_get_thread_num();
    printf("Hello world(%d)", ID);
  }
}

• Translated multi-threaded code with runtime library calls:

/* here is the outlined code */
void __ompregion_main1(…)
{
  int ID = ompc_get_thread_num();
  printf("Hello world(%d)", ID);
} /* end of __ompregion_main1 */

void main()
{
  …
  __ompc_fork(&__ompregion_main1, …);
  …
}

Page 21:

OpenMP Transformations – Do/For

• Transform original loop so each thread performs only its own portion
• Most of the scheduling calculations are usually hidden in the runtime
• Some extra work to handle firstprivate, lastprivate

• Original code:

#pragma omp for
for( i = 0; i < n; i++ )
{ … }

• Transformed code:

tid = ompc_get_thread_num();
ompc_static_init(tid, lower, upper, incr, …);
for( i = lower; i < upper; i += incr )
{ … }
// Implicit BARRIER
ompc_barrier();

Page 22:

OpenMP Transformations – Reduction

• Reduction variables can be translated into a two-step operation
• First, each thread performs its own reduction using a private variable
• Then the global sum is formed
• The compiler must ensure atomicity of the final reduction

• Original code:

#pragma omp parallel for \
        reduction(+:sum) private(x)
for( i = 1; i <= num_steps; i++ )
{ …
  sum = sum + x;
}

• Transformed code:

float local_sum;
ompc_static_init(tid, lower, upper, incr, …);
for( i = lower; i < upper; i += incr )
{ …
  local_sum = local_sum + x;
}
ompc_barrier();
ompc_critical();
sum = (sum + local_sum);
ompc_end_critical();

Page 23:

OpenMP Transformations – Single/Master

• Master thread has a thread id of 0, very easy to test for.
• The runtime function for the single construct might use a lock to test and set an internal flag, in order to ensure only one thread gets the work done

• Original code:

#pragma omp parallel
{
  #pragma omp master
  a = a + 1;
  #pragma omp single
  b = b + 1;
}

• Transformed code:

Is_master = ompc_master(tid);
if (Is_master == 1)
{ a = a + 1; }
Is_single = ompc_single(tid);
if (Is_single == 1)
{ b = b + 1; }
ompc_barrier();

Page 24:

OpenMP Transformations – Threadprivate

• Every threadprivate variable reference becomes an indirect reference through an auxiliary structure to the private copy
• Every thread needs to find its index into the auxiliary structure – this can be expensive
  – Some OSes (and codegen schemes) dedicate a register to identify the thread
  – Otherwise the OpenMP runtime has to do this

• Original code:

static int px;
int foo() {
  #pragma omp threadprivate(px)
  bar( &px );
}

• Transformed code:

static int px;
static int **thdprv_px;
int _ompregion_foo1() {
  int *local_px;
  …
  tid = ompc_get_thread_num();
  local_px = get_thdprv(tid, thdprv_px, &px);
  bar( local_px );
}

Page 25:

OpenMP Transformations – WORKSHARE

• WORKSHARE can be translated to OMP DO during the preprocessing phase
• If there are several different array statements involved, it requires a lot of work by the compiler to do a good job
• So there may be a performance penalty

• Original code:

REAL AA(N,N), BB(N,N)
!$OMP PARALLEL
!$OMP WORKSHARE
      AA = BB
!$OMP END WORKSHARE
!$OMP END PARALLEL

• Transformed code:

REAL AA(N,N), BB(N,N)
!$OMP PARALLEL
!$OMP DO
      DO J=1,N,1
        DO I=1,N,1
          AA(I,J) = BB(I,J)
        END DO
      END DO
!$OMP END PARALLEL

Page 26:

Runtime Memory Allocation

• Outlining creates a new scope: private data become local variables for the outlined routine.
• Local variables can be saved on the stack
  – Includes compiler-generated temporaries
  – Private variables, including firstprivate and lastprivate
  – Could be a lot of data
• Local variables in a procedure called within a parallel region are private by default
• Location of threadprivate data depends on the implementation
  – On heap
  – On local stack

[Diagram: one possible organization of memory — code (main(), __ompregion_main1(), …), global data, a heap holding threadprivate data, the main process stack, and per-thread stacks (Thread 0, Thread 1, …) each holding threadprivate data, local data, pointers to shared variables, arguments passed by value, registers and a program counter.]

Page 27:

Role of Runtime Library

• Thread management and work dispatch
  – Routines to create threads, suspend them and wake them up / spin them, destroy threads
  – Routines to schedule work to threads
    – Manage queue of work
    – Provide schedulers for static, dynamic and guided
• Maintain internal control variables
  – threadid, numthreads, dyn-var, nest-var, sched-var, etc.
• Implement library routines omp_..() and some simple constructs (e.g. barrier, atomic)

Some routines in the runtime library – e.g. to return the thread id – are heavily accessed, so they must be carefully implemented and tuned. The runtime library should avoid any unnecessary internal synchronization.

Page 28:

Synchronization

• Barrier is the main synchronization construct since it is often invoked implicitly. It in turn is often implemented using locks.

One simple way to implement a barrier:
• Each thread team maintains a barrier counter and a barrier flag.
• Each thread increments the barrier counter when it enters the barrier and waits for the barrier flag to be set by the last one.
• When the last thread enters the barrier and increments the counter, the counter equals the team size and the barrier flag is reset.
• All other waiting threads can then proceed.

void __ompc_barrier (omp_team_t *team)
{
  …
  pthread_mutex_lock(&(team->barrier_lock));
  team->barrier_count++;
  barrier_flag = team->barrier_flag;

  /* The last one resets the flags */
  if (team->barrier_count == team->team_size)
  {
    team->barrier_count = 0;
    team->barrier_flag = barrier_flag ^ 1;  /* Xor: toggle */
    pthread_mutex_unlock(&(team->barrier_lock));
    return;
  }
  pthread_mutex_unlock(&(team->barrier_lock));

  /* Wait for the last one to reset the barrier */
  OMPC_WAIT_WHILE(team->barrier_flag == barrier_flag);
}

Page 29:

Constructs That Use a Barrier

[Table: Synchronization Overheads (in cycles) on SGI Origin 2000*]

• Careful implementation can achieve modest overhead for most synchronization constructs.
• Parallel reduction is costly because it often uses a critical region to summarize variables at the end.

* Courtesy of J. M. Bull, "Measuring Synchronisation and Scheduling Overheads in OpenMP", EWOMP '99, Lund, Sep., 1999.

Page 30:

Static Scheduling: Under The Hood

• Most (if not all) OpenMP compilers choose static as the default method
• Number of threads and loop bounds are possibly unknown, so final details are usually deferred to runtime
• Two simple runtime library calls are enough to handle the static case: constant overhead

• The OpenMP code:

// possible unknown loop upper bound: n
// unknown number of threads to be used
#pragma omp for schedule(static)
for (i = 0; i < n; i++)
{
  do_sth();
}

• The outlined task for each thread:

/* Static even: static without specifying a chunk size; the scheduler
   divides loop iterations evenly onto each thread. */
_gtid_s1 = __ompc_get_thread_num();
temp_limit = n - 1;
__ompc_static_init(_gtid_s1, static, &_do_lower, &_do_upper, &_do_stride, …);
if (_do_upper > temp_limit)
{ _do_upper = temp_limit; }
for (_i = _do_lower; _i <= _do_upper; _i++)
{
  do_sth();
}

Page 31:

Dynamic Scheduling: Under The Hood

• Scheduling is performed during runtime
• A while loop grabs available loop iterations from a work queue
• STATIC with a chunk size and GUIDED scheduling are implemented in a similar way
• Average overhead = c1*(iteration space/chunksize) + c2

// Schedule(dynamic, chunksize)
_gtid_s1 = __ompc_get_thread_num();
temp_limit = n - 1;
_do_upper = temp_limit;
_do_lower = 0;
__ompc_scheduler_init(__ompv_gtid_s1, dynamic, _do_lower, _do_upper, stride, chunksize, …);
_i = _do_lower;
mpni_status = __ompc_schedule_next(_gtid_s1, &_do_lower, &_do_upper, &_do_stride);
while (mpni_status)
{
  if (_do_upper > temp_limit)
  { _do_upper = temp_limit; }
  for (_i = _do_lower; _i <= _do_upper; _i = _i + _do_stride)
  { do_sth(); }
  mpni_status = __ompc_schedule_next(_gtid_s1, &_do_lower, &_do_upper, &_do_stride);
}

Page 32:

Using OpenMP Scheduling Constructs

[Table: Scheduling Overheads (in cycles) on Sun HPC 3500*]

• Conclusion:
  – Use the default static scheduling when the workload is balanced and thread processing capability is constant.
  – Use dynamic/guided otherwise (see the sketch below)

* Courtesy of J. M. Bull, "Measuring Synchronisation and Scheduling Overheads in OpenMP", EWOMP '99, Lund, Sep., 1999.
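As an added illustration (not in the original slides), the choice is expressed through the schedule clause; the chunk size and the names are arbitrary:

/* Iterations cost roughly the same: default static scheduling. */
#pragma omp parallel for schedule(static)
for (int i = 0; i < n; i++)
    a[i] = b[i] + c[i];

/* Iteration cost varies: threads grab chunks of 8 iterations on demand.
   uneven_work() is a hypothetical function whose cost depends on i. */
#pragma omp parallel for schedule(dynamic, 8)
for (int i = 0; i < n; i++)
    a[i] = uneven_work(i);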

Page 33:

Implementation-Defined Issues

• OpenMP also leaves some issues to the implementation
  – Default number of threads
  – Default schedule and default for schedule(runtime)
  – Number of threads to execute nested parallel regions
  – Behavior in case of thread exhaustion
  – And many others…

Despite many similarities, each implementation is a little different from all others.
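An added sketch of how a program can pin these choices down itself rather than rely on implementation defaults, using standard OpenMP API routines (the helper name is hypothetical; the same effect is available through environment variables such as OMP_NUM_THREADS, OMP_SCHEDULE and OMP_NESTED):

#include <omp.h>

void configure_openmp(void)   /* hypothetical helper */
{
    omp_set_num_threads(4);                  /* default number of threads */
    omp_set_schedule(omp_sched_dynamic, 8);  /* what schedule(runtime) will mean */
    omp_set_nested(1);                       /* allow nested parallel regions */
}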

Page 34:

Recap

• An OpenMP-aware compiler uses directives to generate code for each thread
• It also arranges for the program’s data to be stored in memory
• To do this, it:
  – Creates a new procedure for each parallel region
  – Gets each thread to invoke this procedure with the required arguments
  – Has each thread compute its set of iterations for a parallel loop
  – Uses runtime routines to implement synchronization as well as many other details of parallel object code
• Get to “know” a compiler by running microbenchmarks to see overheads (visit http://www.epcc.ed.ac.uk/~jmbull for more)

Page 35:

Questions?