HPCS Lab 5
Dept. of Computer Architecture
Faculty of ETI
Gdansk University of Technology
Paweł Czarnul
For this exercise, study the support for multithreading offered by MPI, i.e. the
possibility of calling MPI functions from multiple threads started within a process.
An MPI implementation may provide one of the following levels of thread support:
1. MPI_THREAD_SINGLE – only one thread will execute (no support for threads),
2. MPI_THREAD_FUNNELED – the process may be multithreaded, but only the thread that
initialized MPI will call MPI functions,
3. MPI_THREAD_SERIALIZED – multiple threads may call MPI functions, but only one at a
time,
4. MPI_THREAD_MULTIPLE – no restrictions: multiple threads may call MPI functions
concurrently.
Instead of MPI_Init, MPI_Init_thread should be called to initialize MPI together with
thread support. The program requests a certain level of thread support, and MPI returns
the level it actually provides, which may differ from the one requested.
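The function has the signature:

int MPI_Init_thread(int *argc, char ***argv, int required, int *provided);

As a minimal sketch (variable names are illustrative), requesting a level and checking
what was provided could look as follows; the thread-level constants are ordered, so a
simple comparison works:

    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        /* the implementation offers a lower level - abort or fall back */
        MPI_Finalize();
        return -1;
    }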
The application presented here is an extended version of the program from lab1. That is,
it computes pi in parallel using the Leibniz series, an old method from the 17th century:

pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
A similar method to lab1 was adopted; in this case, however, the program requests
MPI_THREAD_MULTIPLE from MPI. Each process calculates a partial sum over its own range
of the series, determined by its rank. Within a process, the partial sum is computed in
parallel by threads (their number is defined in THREADNUM). In effect, successive
elements of the series are assigned to the threads of process 0, then the threads of
process 1, ..., then the threads of process (n-1).
Compilation instructions:
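A typical command sequence for an MPI + OpenMP C program is shown below (the source
file name pi_mt.c and the process count are assumptions; adjust to your environment):

    mpicc -fopenmp pi_mt.c -o pi_mt
    mpirun -np 4 ./pi_mt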
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <omp.h>

#define THREADNUM 8
#define RESULT 1
#define ELEMCOUNT 120000000 /* assumed total element count - not shown in the handout */

double step; /* number of series elements per process, set in main() */

/* partial sum of the series for the given process, computed by THREADNUM threads */
double calculate(int rank) {
  double pi_part = 0, elem = 0;
  int i = 0, end = (rank + 1) * step;
  /* the process's block of elements is split among the threads */
  #pragma omp parallel for private(elem) reduction(+:pi_part)
  for (i = rank * step; i < end; i++) {
    if (i % 2) {
      elem = -1.0 / ((2 * i) + 1); /* odd-index elements are negative */
    } else {
      elem = 1.0 / ((2 * i) + 1);
    }
    pi_part = pi_part + elem;
  }
  return pi_part;
}

int main(int argc, char **argv) {
  int myrank, proccount, i, threadsupport;
  double pi_part, pi_final = 0, resulttemp;
  MPI_Status status;

  omp_set_num_threads(THREADNUM);

  /* Initialize MPI requesting full multithreading support */
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &threadsupport);
  if (threadsupport != MPI_THREAD_MULTIPLE) {
    printf("MPI_THREAD_MULTIPLE not supported, provided level=%d\n", threadsupport);
    MPI_Finalize();
    return -1;
  }

  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  MPI_Comm_size(MPI_COMM_WORLD, &proccount);
  step = ELEMCOUNT / proccount;

  pi_part = calculate(myrank);

  if (!myrank) { /* process 0 gathers partial sums from the other processes */
    pi_final = pi_part;
    for (i = 1; i < proccount; i++) {
      MPI_Recv(&resulttemp, 1, MPI_DOUBLE, i, RESULT, MPI_COMM_WORLD, &status);
      pi_final += resulttemp;
    }
  } else {
    MPI_Send(&pi_part, 1, MPI_DOUBLE, 0, RESULT, MPI_COMM_WORLD);
  }

  if (!myrank) {
    pi_final *= 4; /* the series sums to pi/4 */
    printf("pi=%.10f\n", pi_final);
  }

  MPI_Finalize();
  return 0;
}
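Note on the design: in the program above all MPI calls are issued by the main thread
outside of OpenMP parallel regions, so MPI_THREAD_FUNNELED would in principle suffice.
Requesting MPI_THREAD_MULTIPLE demonstrates the API and leaves room for extensions in
which the threads themselves send and receive messages.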