
Solutions Midterm 1, March 7, 2020


CSM015 Parallel Computing SOLUTIONS

Do not copy questions to the answer book. (Weight of each question is indicated.)

1, 2 (2+2 points). The following segment of code builds a data type for a column vector of a two-dimensional array. What is the expected output of this program?

The prototype of the function to build a vector is:


int MPI_Type_vector(int count,
    int blocklen, int stride,
    MPI_Datatype element_type,
    MPI_Datatype* new_mpi_type)

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int p;
    int my_rank;
    float A[10][10];
    MPI_Status status;
    MPI_Datatype column_mpi_t;
    int i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    // 1. COMPLETE the arguments in the following function call
    MPI_Type_vector(10, 1, 10, MPI_FLOAT, &column_mpi_t);
    MPI_Type_commit(&column_mpi_t);
    if (my_rank == 0) {
        for (i = 0; i < 10; i++)
            for (j = 0; j < 10; j++)
                A[i][j] = (float) i;
        MPI_Send(&(A[0][0]), 1, column_mpi_t, 1, 0, MPI_COMM_WORLD);
    } else { /* my_rank == 1 */
        for (i = 0; i < 10; i++)
            for (j = 0; j < 10; j++)
                A[i][j] = 0.0;
        MPI_Recv(&(A[0][0]), 10, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
        for (j = 0; j < 10; j++)
            printf("%3.1f ", A[0][j]);
        printf("\n");
    }
    MPI_Finalize();
    return 0;
} /* main */

3, 4, 5 (2+2+4 points). Complete the following segment of code, which uses the MPI_Type_indexed() function to create a data type for the upper triangular part of a matrix. The function prototype of MPI_Type_indexed() is:

int MPI_Type_indexed(int count, int block_lengths[],
    int displacements[], MPI_Datatype old_type,
    MPI_Datatype* new_mpi_type)

int rank, dest, i, p;
int disp[100], blocklen[100]; /* MPI_Type_indexed takes int arrays */
double a[100][100];
MPI_Datatype upper;
...
/* compute start and size of each row */
for (i = 0; i < 100; i++)
{
    disp[i] = 100*i + i;    // 3. Complete statement
    blocklen[i] = 100 - i;  // 4. Complete statement
}
/* create data type for upper triangular part */
// 5. Complete statement
MPI_Type_indexed(100, blocklen, disp, MPI_DOUBLE, &upper);
MPI_Type_commit(&upper);
/* send the upper array */
MPI_Send(a, 1, upper, dest, tag, MPI_COMM_WORLD);
MPI_Finalize();
6. (2+2 points) Two processes, with rank 0 and rank 1, are trying to communicate; however, the program deadlocks. Explain why the program deadlocks, and rewrite the code so that it does not deadlock.
...
if (rank == 0) {
    dest = 1; source = 1;
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
}
else if (rank == 1) {
    dest = 0; source = 0;
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
}
The program deadlocks because both the process with rank == 0 and the process with rank == 1 block in MPI_Send: each waits for the other to post a matching receive, so neither process can reach its MPI_Recv call.
Corrected code: swap the order of the MPI_Send and MPI_Recv calls for the process with rank == 1, so one side receives first. The resulting code is as follows.

if (rank == 0) {
    dest = 1; source = 1;
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
}
else if (rank == 1) {
    dest = 0; source = 0;
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}

7, 8. (4+4 points) The skeleton of a program is provided below, which computes into the array freq[] how often each element appears in an int array x[] (freq[0] holds the frequency of 0 in array x, freq[1] the frequency of 1, freq[2] the frequency of 2, etc.).
#define N 10000
#define P 4 // number of processors

int NOverP = N/P;
int *x = malloc(sizeof(int)*N);            // set up global int array x of size N
int *local_x = malloc(sizeof(int)*NOverP); // set up local int array local_x of size N/P
int freq[10];       // set up global frequency array
int local_freq[10]; // set up local frequency array

// Process with rank == 0 reads N int values from stdin into the int array x of size N.
...
// Scatter the array x among P processors from the root process, rank == 0 --- use MPI_Scatter()
// (prototype appears below); note that the send count is the number of elements sent to EACH process
MPI_Scatter(x, NOverP, MPI_INT, local_x, NOverP,
    MPI_INT, 0, MPI_COMM_WORLD);  // <<< 7. complete this call
// Each process sequentially computes the frequency of elements of
// local_x into the array local_freq[].
...
// Reduce the array local_freq[] from P processors to the root process
// with rank == 0 into freq[] --- use MPI_Reduce() (prototype appears below)
MPI_Reduce(local_freq, freq, 10, MPI_INT, MPI_SUM, 0,
    MPI_COMM_WORLD);  // <<< 8. complete this call
// Process with rank == 0 prints the resulting array freq[] to stdout.
...

The prototypes of the MPI_Scatter and MPI_Reduce operations are shown below.

MPI_Scatter(&send_data, send_count, send_type, &recv_data, recv_count, recv_type, root, comm);

MPI_Reduce(&sendbuf, &recvbuf, count, datatype, op, root, comm);

9-12 (2+2+2+4 points). The equivalent C structure for which we want to build a new MPI structure type ParticleStrType is as follows. The prototype of the function MPI_Type_struct() appears below.

MPI_Type_struct(count, blocklengths[], offsets[], Oldtypes[], &newtype);

struct ParticleStruct {
    char Cname[3]; // chemical name
    float AtmNu;   // atomic number
    float Mass;    // atomic mass
    int valcy;     // valency
};
MPI_Datatype ParticleStrType;

// 9. Set up the definition of arrays for address mapping

int blocklengths[4] = {4, 1, 1, 1}; // << 3,1,1,1 is also fine
MPI_Aint offsets[4] = {0, sizeof(float), 2*sizeof(float),
    3*sizeof(float)};
MPI_Datatype Oldtypes[4] = {MPI_CHAR, MPI_FLOAT,
    MPI_FLOAT, MPI_INT};

// 12. Complete the function call to build the MPI type for the structure

MPI_Type_struct(4, blocklengths, offsets, Oldtypes,
    &ParticleStrType);

13, 14 (2+4 points). Answer whether the following fragment of code always deadlocks or never deadlocks. If your answer is that it deadlocks, explain why and how you would fix it.

int buf1[10];
int buf2[20];
...
if (rank == 0) {
    MPI_Bcast(buf1, 10, MPI_INT, 0, MPI_COMM_WORLD);
    ...
    MPI_Bcast(buf2, 20, MPI_INT, 1, MPI_COMM_WORLD);
    ...
} else {
    MPI_Bcast(buf2, 20, MPI_INT, 1, MPI_COMM_WORLD);
    ...
    MPI_Bcast(buf1, 10, MPI_INT, 0, MPI_COMM_WORLD);
    ...
}

This code will ALWAYS deadlock. All processes in the communicator, irrespective of rank, must participate symmetrically in a collective operation such as MPI_Bcast, issuing the collective calls in the same order with the same root. Here, collective calls are matched by order, so rank 0's first MPI_Bcast (root 0) is paired with the other processes' first MPI_Bcast (root 1), and the mismatched broadcasts can never complete. Calls to collective communication functions must therefore never be placed inside an if statement conditioned on rank.

Solution: remove the if condition around the MPI_Bcast calls so every process executes them in the same order.

MPI_Bcast(buf1, 10, MPI_INT, 0, MPI_COMM_WORLD);
...
MPI_Bcast(buf2, 20, MPI_INT, 1, MPI_COMM_WORLD);
...
