CS 591x – Cluster and Parallel Programming: Nonblocking Communications

Page 1: CS 591x – Cluster and Parallel Programming

CS 591x – Cluster and Parallel Programming

Nonblocking communications

Page 2: CS 591x – Cluster and Parallel Programming

Remember…

It’s about performance

Page 3: CS 591x – Cluster and Parallel Programming

Blocking vs. Nonblocking Communications

Recall that MPI_Send and MPI_Recv (and others) are blocking operations. In blocking communications, the communication must complete before program execution can proceed. Clearly, MPI_Recv must block:

MPI_Recv(a, 1, MPI_INT, next, tag, spcomm, &status)

When is a safe? … When MPI_Recv finishes

Page 4: CS 591x – Cluster and Parallel Programming

Blocking vs. Nonblocking Communications

MPI_Send also blocks. It blocks until the message is received or until the message is in a system buffer.

Consider…

MPI_Send(b, 1, MPI_INT, dest, tag, spcomm)

When is b safe to change? When MPI_Send is completed and program execution continues
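A minimal runnable sketch of these blocking semantics, assuming a two-rank job; MPI_COMM_WORLD stands in for the deck’s spcomm so the example is self-contained:

#include <mpi.h>

int main(int argc, char* argv[]) {
  int rank, a, b = 42, tag = 0;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    // blocks until the message is received or buffered;
    // afterwards b is safe to change
    MPI_Send(&b, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    b = 0;
  } else if (rank == 1) {
    // blocks until the message has arrived;
    // afterwards a is safe to use
    MPI_Recv(&a, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
  }

  MPI_Finalize();
  return 0;
}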

Page 5: CS 591x – Cluster and Parallel Programming

Blocking vs. Nonblocking Communications

Communication takes time. Time means compute cycles… compute cycles that might be used for some other computations… We get better performance in our application if we can initiate a communication… do something else useful while it is in progress… and check back when it is done.

Page 6: CS 591x – Cluster and Parallel Programming

Nonblocking communications

That’s the idea behind nonblocking communication…
Initiate a communications transaction
Do something else for a while… …but don’t mess with the variables involved in the transaction
Check to see if the transaction is finished
Proceed with computation using the results of the transaction

Page 7: CS 591x – Cluster and Parallel Programming

Nonblocking Communications

Type

MPI_Request request;

**This is used to keep track of the transaction

Page 8: CS 591x – Cluster and Parallel Programming

Nonblocking Send

int MPI_Isend(void* message, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm, MPI_Request* request)
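For example (a brief sketch; a, dest, tag, and mycomm follow the deck’s later examples):

MPI_Request request;
MPI_Isend(a, 1, MPI_INT, dest, tag, mycomm, &request);
// returns immediately; do not modify a until the request completes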

Page 9: CS 591x – Cluster and Parallel Programming

Nonblocking Recv

int MPI_Irecv(void* message, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Request* request)
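And correspondingly for posting a receive (same caveat; b and src follow the deck’s later examples):

MPI_Request request;
MPI_Irecv(b, 1, MPI_INT, src, tag, mycomm, &request);
// returns immediately; b holds no valid data until the request completes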

Page 10: CS 591x – Cluster and Parallel Programming

Nonblocking Send/Recv

Note: the arguments are the same as blocking Send/Recv… except for the inclusion of the request argument. The request argument is known as the transaction’s “handle”

Page 11: CS 591x – Cluster and Parallel Programming

Nonblocking Send/Recv

So how do we know when the transaction is complete?

Page 12: CS 591x – Cluster and Parallel Programming

MPI_Wait

int MPI_Wait(MPI_Request* request, MPI_Status* status)

Page 13: CS 591x – Cluster and Parallel Programming

MPI_Wait

MPI_Wait stops program execution until the communication transaction… identified by the request handle… is complete… then application execution proceeds

Page 14: CS 591x – Cluster and Parallel Programming

So something like this…

MPI_Request request1;
MPI_Status status;

MPI_Isend(a, 1, MPI_INT, dest, tag, mycomm, &request1);
… // do other stuff here
MPI_Wait(&request1, &status);
…

Page 15: CS 591x – Cluster and Parallel Programming

Or something like this

MPI_Request request1;
MPI_Request request2;
MPI_Status status;

MPI_Isend(a, 1, MPI_INT, dest, tag, comm1, &request1);
MPI_Irecv(b, 1, MPI_INT, src, tag, comm1, &request2);
… // other stuff …
MPI_Wait(&request1, &status);
MPI_Wait(&request2, &status);

Page 16: CS 591x – Cluster and Parallel Programming

MPI_Test

int MPI_Test(MPI_Request* request, int* flag, MPI_Status* status);

Page 17: CS 591x – Cluster and Parallel Programming

MPI_Test

Tests to determine if the transaction identified by the request handle has completed… Unlike MPI_Wait, it does not stop program execution

Page 18: CS 591x – Cluster and Parallel Programming

MPI_Test … something like this…

MPI_Request request1;
MPI_Status status;
int flag;

MPI_Isend(a, 1, MPI_INT, dest, tag, mycomm, &request1);
…
MPI_Test(&request1, &flag, &status);
if (flag == 1) {
  // code that executes when the transaction has completed
} else {
  // code that executes when the transaction has not completed
}
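A common follow-on pattern (a sketch, not from the slides) is to poll with MPI_Test, doing a unit of useful work between tests; do_other_work() is a hypothetical placeholder:

MPI_Test(&request1, &flag, &status);
while (!flag) {
  do_other_work(); // hypothetical: overlap useful computation here
  MPI_Test(&request1, &flag, &status);
}
// the transaction is complete; its buffer is now safe to use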

Page 19: CS 591x – Cluster and Parallel Programming

Let’s revisit Request Handles

You can store multiple request handles in an array…

MPI_Request req[4];

** which means you can treat them as a set

Page 20: CS 591x – Cluster and Parallel Programming

Request Handle Arrays

MPI_Request recreq[4];
MPI_Status status[4];
…
MPI_Irecv(&a[0][0], 4, MPI_INT, src0, 0, comm, &recreq[0]);
MPI_Irecv(&a[1][0], 4, MPI_INT, src1, 1, comm, &recreq[1]);
MPI_Irecv(&a[2][0], 4, MPI_INT, src2, 2, comm, &recreq[2]);
MPI_Irecv(&a[3][0], 4, MPI_INT, src3, 3, comm, &recreq[3]);
… // do other stuff …
MPI_Waitall(4, recreq, status);
… // continue execution

Page 21: CS 591x – Cluster and Parallel Programming

MPI_Wait…

int MPI_Waitall(int req_array_size, MPI_Request req_array[], MPI_Status stat_array[]);

*** Wait for all transactions in req_array to complete

Page 22: CS 591x – Cluster and Parallel Programming

MPI_Wait…

int MPI_Waitany(int array_size, MPI_Request req_array[], int* completed, MPI_Status* stat);

Waits for any one transaction in req_array to complete; completed returns the index of the transaction that finished
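A sketch of handling receives in completion order, reusing the recreq array from the earlier slide; MPI sets a completed handle to MPI_REQUEST_NULL, so each request is returned exactly once:

int completed;
MPI_Status stat;
for (int i = 0; i < 4; i++) {
  // blocks until any one pending request finishes;
  // completed holds the index of that request
  MPI_Waitany(4, recreq, &completed, &stat);
  // process row a[completed] here
}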

Page 23: CS 591x – Cluster and Parallel Programming

MPI_Wait…

int MPI_Waitsome(int array_size, MPI_Request req_array[], int* complete_count, int indices[], MPI_Status stat[])

Waits for at least one (can be more) of the transactions in req_array to complete; complete_count returns how many finished and indices[] identifies which ones
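A sketch of draining all outstanding requests with MPI_Waitsome, again reusing recreq; remaining is an illustrative counter of requests still pending:

int complete_count, indices[4], remaining = 4;
MPI_Status stat[4];

while (remaining > 0) {
  // returns as soon as one or more requests finish
  MPI_Waitsome(4, recreq, &complete_count, indices, stat);
  for (int i = 0; i < complete_count; i++) {
    // indices[i] names a finished request; process its data here
  }
  remaining -= complete_count;
}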

Page 24: CS 591x – Cluster and Parallel Programming

MPI_Test…

MPI_Testall – tests to see if all transactions in list[] have completed

MPI_Testany – tests to see if at least one transaction in list[] has completed

MPI_Testsome – tests to see which of the transactions in list[] have completed

**note: argument lists are similar to their MPI_Wait counterparts, but include a flag or flag[] variable
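For instance, MPI_Testall might be used like this (a sketch; per the MPI standard the signature is int MPI_Testall(int count, MPI_Request req_array[], int* flag, MPI_Status stat_array[])):

int flag;
MPI_Status stat[4];

// nonblocking: sets flag to 1 only if all four requests are done
MPI_Testall(4, recreq, &flag, stat);
if (!flag) {
  // not all complete yet; keep computing and test again later
}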
