


Lab # 7 – Parallel & Distributed Computing (CPE-421)

Lab Manual # 7

Objective:
To study advanced communication between MPI processes

Theory:
This lab session focuses on further details of sending and receiving in MPI,
such as sending arrays and performing simultaneous send and receive operations.

Key Points:

 Whenever you send and receive data, MPI assumes that you have provided non-
overlapping positions in memory. As discussed in the previous lab session,
MPI_COMM_WORLD is referred to as a communicator. In general, a communicator is a
collection of processes that can send messages to each other. MPI_COMM_WORLD is
pre-defined in all implementations of MPI, and it consists of all MPI processes
running after the initial execution of the program.
 In send/receive calls, we are required to use a tag. The tag is used to
distinguish, upon receipt, between two messages sent by the same process.
 The order of sending does not necessarily guarantee the order of receiving. Tags are
used to distinguish between messages. MPI allows the tag MPI_ANY_TAG, which can
be used by MPI_Recv to accept any valid tag from a sender, but you cannot use
MPI_ANY_TAG in the MPI_Send command.
 Similar to the MPI_ANY_TAG wildcard for tags, there is also an MPI_ANY_SOURCE
wildcard that can be used by MPI_Recv. By using it in an MPI_Recv, a process
is ready to receive from any sending process. Again, you cannot use
MPI_ANY_SOURCE in the MPI_Send command. There is no wildcard for the destination.
 When you pass an array to MPI_Send/MPI_Recv, it need not contain exactly the
number of items to be sent; it must contain at least that many. Suppose, for
example, that you have an array of 100 items but only want to send the first
ten. You can do so by passing the array to MPI_Send and stating that ten items
are to be sent, as shown in the sketch after this list.
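
The following minimal sketch illustrates both points above: process 0 sends only the
first ten entries of a 100-element array, and process 1 receives using the
MPI_ANY_SOURCE and MPI_ANY_TAG wildcards. The tag value (5) and the use of doubles
are illustrative assumptions, not part of the manual. Run with at least two
processes, e.g. mpirun -np 2.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int mynode;
    double array[100];          /* larger than the message actually sent */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);

    if (mynode == 0) {
        for (int i = 0; i < 100; i++)
            array[i] = (double)i;
        /* Only the first ten items are sent, even though the array holds 100. */
        MPI_Send(array, 10, MPI_DOUBLE, 1, 5, MPI_COMM_WORLD);
    } else if (mynode == 1) {
        /* Wildcards are legal only on the receive side. */
        MPI_Recv(array, 10, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        /* The status object reports which sender and tag actually matched. */
        printf("Received from rank %d with tag %d\n",
               status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}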

An Example Program:

In the following MPI code, an array is created on each process and initialized on
process 0. Once the array has been initialized on process 0, it is sent out to
every other process.
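
The original listing is not reproduced here; the following is a minimal sketch that
matches the key points below. The array length of 10 and the use of doubles are
assumptions made for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int mynode, totalnodes;
    const int N = 10;   /* array length: an illustrative assumption */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);

    /* An array is created on each process using dynamic memory allocation. */
    double *array = (double *)malloc(N * sizeof(double));

    if (mynode == 0) {
        /* On process 0 only, fill the array with ascending index values. */
        for (int i = 0; i < N; i++)
            array[i] = (double)i;
        /* (totalnodes-1) calls to MPI_Send, one per receiving process. */
        for (int dest = 1; dest < totalnodes; dest++)
            MPI_Send(array, N, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
    } else {
        /* Every process other than 0 receives the sent message. */
        MPI_Recv(array, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    }

    /* Each process prints the result of the send/receive pair. */
    printf("Process %d of %d: array[%d] = %.1f\n",
           mynode, totalnodes, N - 1, array[N - 1]);

    free(array);
    MPI_Finalize();
    return 0;
}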

Key Points:

 An array is created on each process using dynamic memory allocation.


 On process 0 only (i.e., mynode == 0), the array is initialized to contain
ascending index values.
 On process 0, the program proceeds with (totalnodes-1) calls to MPI_Send.
 On all processes other than 0, MPI_Recv is called to receive the sent message.
 On each individual process, the results of the send/receive pair are printed.

Simultaneous Send and Receive, MPI_Sendrecv:

The subroutine MPI_Sendrecv exchanges messages with another process. A send-receive
operation is useful for avoiding some kinds of unsafe interaction patterns and for
implementing remote procedure calls. A message sent by a send-receive operation can be
received by MPI_Recv, and a send-receive operation can receive a message sent by an
MPI_Send.

MPI_Sendrecv(&data_to_send, send_count, send_type,
             destination_ID, send_tag,
             &received_data, receive_count, receive_type,
             sender_ID, receive_tag, comm, &status)

Understanding the Argument Lists:

 data_to_send: variable of a C type that corresponds to the MPI send_type supplied
below
 send_count: number of data elements to send (int)
 send_type: datatype of the elements to send (one of the MPI datatype handles)

 destination_ID: process ID of the destination (int)
 send_tag: send tag (int)
 received_data: variable of a C type that corresponds to the MPI receive_type
supplied below
 receive_count: number of data elements to receive (int)
 receive_type: datatype of the elements to receive (one of the MPI datatype handles)
 sender_ID: process ID of the sender (int)
 receive_tag: receive tag (int)
 comm: communicator (handle)
 status: status object (MPI_Status)
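
As a usage illustration, the following minimal sketch performs a ring exchange:
each process sends its rank to its right neighbour while receiving from its left
neighbour in a single MPI_Sendrecv call. The ring pattern itself is an assumption
chosen for illustration, not part of the manual.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int mynode, totalnodes, send_val, recv_val;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);

    int next = (mynode + 1) % totalnodes;              /* right neighbour */
    int prev = (mynode + totalnodes - 1) % totalnodes; /* left neighbour  */
    send_val = mynode;

    /* Send to next and receive from prev in one call; because the send and
       receive are combined, the exchange cannot deadlock the way two
       blocking MPI_Send calls facing each other can. */
    MPI_Sendrecv(&send_val, 1, MPI_INT, next, 0,
                 &recv_val, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, &status);

    printf("Process %d received %d from process %d\n",
           mynode, recv_val, prev);

    MPI_Finalize();
    return 0;
}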

Conclusion:

Raheel Amjad 2018-CPE-07
