Tutorial on MPI Experimental Environment for ECE5610/CSC6220

Page 2: Outline

• The WSU Grid cluster
• How to login to the Grid
• How to run your program on a single node
• How to run your program on multiple nodes


Page 3: WSU Grid Cluster

The WSU Grid Cluster is a high-performance computing system that hosts and manages research-related projects.

The Grid currently has the combined processing power of 4,568 cores (1,346 Intel cores and 3,222 AMD cores), with over 13.5 TB of RAM and 1.2 PB of disk space. The system is open to every researcher at WSU.

Page 4: Login to the Grid

Host name: grid.wayne.edu, Port: 22


Download putty.exe: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Page 5: Login to the Grid

• Use putty.exe to log in:
  Username: ab1234 (your AccessID)
  Password: your Pipeline password
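If you prefer a command-line client instead of PuTTY, a standard OpenSSH login works the same way (a sketch; ab1234 stands for your AccessID):

    ssh -p 22 ab1234@grid.wayne.edu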


Page 6: Login to the Grid

You can start writing an MPI program now!


Page 7: Start MPI Programming

• MPI environment
  Initialize and finalize
  Know who I am and my community
• Writing MPI programs
  Similar to writing a C program
  Call MPI functions
• Compiling and running MPI programs
  Compiler: mpicc
  Execution: mpiexec
• Example: copy hello.c to your home directory


Page 8: Initialize and Finalize the Environment

• Initialize the MPI environment before calling any other MPI function:
  int MPI_Init(int *argc, char ***argv)

• Finalize the MPI environment before terminating your program:
  int MPI_Finalize(void)

• The two functions should be called by all processes, and no other MPI calls are allowed before MPI_Init and after MPI_Finalize.
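A minimal skeleton showing where the two calls sit (a sketch; the hello.c program on the following slides fills in the body):

    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);    /* first MPI call */
        /* ... all other MPI calls go here ... */
        MPI_Finalize();            /* last MPI call */
        return 0;
    }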


Page 9: Finding out about the Environment

• Two important questions that arise early in a parallel program are:
  How many processes are participating in this computation?
  Who am I?
• MPI provides functions to answer these questions:
  MPI_Comm_size reports the number of processes.
  MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process.


Page 10: First Program hello.c: “Hello World!”


#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    printf("hello world, I am process %d of %d on %s\n", myid, numprocs, processor_name);
    MPI_Finalize();
    return 0;
}

Page 11: Compile and run your program

• Questions:
  Why is the rank order random?
  Can we serialize the rank order?
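For reference, a typical compile-and-run sequence with MPICH looks like this (a sketch, assuming mpicc and mpiexec are on your PATH):

    mpicc hello.c -o hello      # compile with the MPI wrapper compiler
    mpiexec -n 4 ./hello        # launch 4 processes on the current node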


Page 12: MPI Basic (Blocking) Send

MPI_Send(start, count, datatype, dest, tag, comm)

• The message buffer is described by (start, count, datatype).
• The target process is specified by dest, which is the rank of the target process in the communicator specified by comm.
• When this function returns, the data has been delivered to the system and the buffer can be reused. The message may not have been received by the target process.


Page 13: MPI Basic (Blocking) Receive

MPI_Recv(start, count, datatype, source, tag, comm, status)

• Waits until a matching (on source and tag) message is received from the system, and then the buffer can be used.
• source is the rank in the communicator specified by comm, or MPI_ANY_SOURCE.
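A minimal pairing of the two calls, run with at least 2 processes (a sketch; the tag 0 and the single int are illustrative):

    #include "mpi.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            /* send one int to rank 1 with tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* block until the matching message from rank 0 arrives */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }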


Page 14: Process Execution in Order

I. Process i sends a message to process i+1;
II. After receiving the message, process i+1 sends its message.


Page 15: The Program hello_order.c

#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    char message[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    if (0 == myid) {
        /* Rank 0 prints first, then passes the token to rank 1. */
        printf("hello world, I am process %d of %d on %s\n", myid, numprocs, processor_name);
        strcpy(message, "next");
        MPI_Send(message, strlen(message)+1, MPI_CHAR, myid+1, 99, MPI_COMM_WORLD);
    } else if (myid < numprocs-1) {
        /* Middle ranks wait for the token, print, then forward it. */
        MPI_Recv(message, 100, MPI_CHAR, myid-1, 99, MPI_COMM_WORLD, &status);
        printf("hello world, I am process %d of %d on %s\n", myid, numprocs, processor_name);
        MPI_Send(message, strlen(message)+1, MPI_CHAR, myid+1, 99, MPI_COMM_WORLD);
    } else {
        /* The last rank only receives and prints. */
        MPI_Recv(message, 100, MPI_CHAR, myid-1, 99, MPI_COMM_WORLD, &status);
        printf("hello world, I am process %d of %d on %s\n", myid, numprocs, processor_name);
    }
    MPI_Finalize();
    return 0;
}

Page 16: Results


Discuss:

Parallel programs use message communication to impose an ordering among processes and thus achieve determinism.

Page 17: Run programs on multiple nodes

#!/bin/bash
#PBS -l ncpus=4
#PBS -l nodes=2:ppn=2
#PBS -m ea
#PBS -q mtxq
#PBS -o grid.wayne.edu:~fb4032/tmp3/output_file.64
#PBS -e grid.wayne.edu:~fb4032/tmp3/error_file.64

/wsu/arch/x86_64/mpich/mpich-3.0.4-icc/bin/mpiexec \
    -machinefile $PBS_NODEFILE \
    -n 8 \
    /wsu/home/fb/fb40/fb4032/main

• Edit the job running script: job.sh
• This job requests 2 nodes with 2 processors each, and it is submitted to queue mtxq.

Page 18: Run programs on multiple nodes

• -l specifies the resource list:
  ncpus - number of CPUs
  nodes - number of nodes
• -o specifies the location of the output file
• -e specifies the location of the error file

Page 19: Execution on Multiple Nodes

• Make sure you change the permissions of job.sh before you submit it.
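For example, a common choice is to make the script executable (a sketch; your cluster's policy may differ):

    chmod u+x job.sh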


Page 20: Execution on Multiple Nodes

• Use “qsub job.sh” to submit the job.
• Use “qme” to check the status of the job.
• Use “qdel 971324.vpbs1” to delete the job if necessary (971324.vpbs1 is the job ID).
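Put together, a typical session looks like this (a sketch; the job ID shown is the example from the slide, and yours will differ):

    qsub job.sh          # submit; the scheduler prints a job ID such as 971324.vpbs1
    qme                  # check the status of your jobs
    qdel 971324.vpbs1    # delete the job if necessary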


Page 21: Execution on Multiple Servers

• The output will be copied to the location specified in job.sh.

• It is in ~/tmp3/output_file.64 in this case.
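Once the job finishes, the results can be viewed directly (a sketch, using the path from job.sh above):

    cat ~/tmp3/output_file.64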


Page 22: Useful Links

• Grid tutorial: http://www.grid.wayne.edu/resources/tutorials/index.html

• Job scheduling on grid: http://www.grid.wayne.edu/resources/tutorials/pbs.html

• Step by step to run jobs on the Grid: http://www.grid.wayne.edu/resources/tutorials/jobs/index.html
