
SRI VIDYA COLLEGE OF ENGINEERING AND TECHNOLOGY

VIRUDHUNAGAR - 626 005

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CP4292-MULTICORE ARCHITECTURE AND

PROGRAMMING LABORATORY

M.E-FIRST YEAR – SECOND SEMESTER


SRI VIDYA COLLEGE OF ENGINEERING & TECHNOLOGY

VIRUDHUNAGAR - 626 005

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

BONAFIDE CERTIFICATE

STUDENT NAME :

REGISTER NO :

YEAR/BRANCH :

SEMESTER :

Bonafide Record of work done in ………………………………………………….

Laboratory of Sri Vidya College of Engineering & Technology, Virudhunagar during

the Year 2023 - 2024 (Even Sem).

Staff in Charge Head of the Department

Submitted for the practical Examination held on ……………………………

Internal Examiner External Examiner


INDEX

S.NO DATE NAME OF THE EXPERIMENT PAGE NO SIGNATURE

1   Write a simple Program to demonstrate an OpenMP Fork-Join Parallelism.

2   Create a program that computes a simple matrix-vector multiplication b=Ax, either in C/C++. Use OpenMP directives to make it run in parallel.

3   Create a program that computes the sum of all the elements in an array A (C/C++) or a program that finds the largest number in an array A. Use OpenMP directives to make it run in parallel.

4   Write a simple Program demonstrating Message-Passing logic using OpenMP.

5   Implement the All-Pairs Shortest-Path Problem (Floyd's Algorithm) Using OpenMP.

6   Implement a program Parallel Random Number Generators using Monte Carlo Methods in OpenMP.

7   Write a Program to demonstrate MPI-broadcast-and-collective-communication in C.

8   Write a Program to demonstrate MPI-scatter-gather-and-allgather in C.

9   Write a Program to demonstrate MPI-send-and-receive in C.

10  Write a Program to demonstrate performing-parallel-rank-with-MPI in C.
Ex.No:1                                                                Date:
Write a simple Program to demonstrate an OpenMP Fork-Join Parallelism.

Aim
To demonstrate OpenMP Fork-Join Parallelism.

Procedure
• In this program, the #pragma omp parallel directive marks the parallel region, where the program will be executed in parallel by multiple threads.
• The private(thread_id) clause ensures that each thread has its own copy of the thread_id variable.
• Within the parallel region, the omp_get_num_threads() function retrieves the total number of threads, and omp_get_thread_num() returns the ID of the current thread.
• These values are then used to display a message indicating the thread ID and the total number of threads.
Program
#include <stdio.h>
#include <omp.h>

int main() {
    int num_threads, thread_id;
    // Parallel region begins here
    #pragma omp parallel private(thread_id)
    {
        // Get the number of threads and the ID of each thread
        num_threads = omp_get_num_threads();
        thread_id = omp_get_thread_num();
        // Print the thread ID and the total number of threads
        printf("Hello from thread %d of %d\n", thread_id, num_threads);
    }
    return 0;
}
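The program above needs an OpenMP-capable compiler. On a typical Linux system with GCC it could be built and run as follows (the file name hello_omp.c and the thread count of 4 are only illustrative; the output below corresponds to a run with 4 threads):

gcc -fopenmp hello_omp.c -o hello_omp
export OMP_NUM_THREADS=4
./hello_omp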
Output
Hello from thread 0 of 4
Hello from thread 2 of 4
Hello from thread 1 of 4
Hello from thread 3 of 4
(The order of the lines varies from run to run because the threads execute concurrently.)
Result
Thus the program to demonstrate an OpenMP Fork-Join Parallelism was executed
successfully.
Ex.No:2                                                                Date:
Create a program that computes a simple matrix-vector multiplication b=Ax, either in C/C++. Use OpenMP directives to make it run in parallel.

Aim
To create a program that computes a simple matrix-vector multiplication b=Ax in C/C++, and to use OpenMP directives to make it run in parallel.

Program
/*
Create a program that computes a simple matrix vector multiplication
b=Ax, either in Fortran or C/C++. Use OpenMP directives to make
it run in parallel.
This is the parallel version.
*/
#include <stdio.h>
#include <omp.h>

int main() {
    float A[2][2] = {{1, 2}, {3, 4}};
    float b[] = {8, 10};
    float c[2];
    int i, j;
    // computes c = A*b; j must be private so the threads do not share the inner loop counter
    #pragma omp parallel for private(j)
    for (i = 0; i < 2; i++) {
        c[i] = 0;
        for (j = 0; j < 2; j++) {
            c[i] = c[i] + A[i][j] * b[j];
        }
    }
    // prints result
    for (i = 0; i < 2; i++) {
        printf("c[%i]=%f \n", i, c[i]);
    }
    return 0;
}

Output

c[0]=28.000000
c[1]=64.000000
(Check by hand: c[0] = 1*8 + 2*10 = 28 and c[1] = 3*8 + 4*10 = 64.)
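The 2x2 example is small enough to check by hand, and the same directive works unchanged for larger problems. The following is only an illustrative sketch (not part of the original exercise; the size N = 1000 and the test data are assumptions) of the same b = Ax product for an N x N matrix:

#include <stdio.h>
#include <omp.h>
#define N 1000

int main() {
    static double A[N][N], x[N], b[N];
    // Fill A and x with simple test data
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            A[i][j] = (double)(i + j);
    }
    // Rows of A are independent, so the outer loop can be divided among threads;
    // sum is declared inside the loop body and is therefore private to each iteration
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        b[i] = sum;
    }
    printf("b[0] = %f, b[N-1] = %f\n", b[0], b[N - 1]);
    return 0;
}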
Result
Thus the program that computes a simple matrix-vector multiplication b=Ax in C/C++ using OpenMP directives to make it run in parallel was executed successfully.
Ex.No:3                                                                Date:
Create a program that computes the sum of all the elements in an array A (C/C++) or a program that finds the largest number in an array A. Use OpenMP directives to make it run in parallel.

Aim
To create a program that computes the sum of all the elements in an array A and finds the largest number in the array A, using OpenMP directives to make it run in parallel (C/C++).

Procedure
• In this program, we have an array A of size ARRAY_SIZE that is initialized with consecutive numbers starting from 1.
• The parallel region is marked with the #pragma omp parallel for directive. This directive distributes the iterations of the following loop across multiple threads, allowing them to execute in parallel.
• The reduction(+:sum) clause ensures that each thread has its own private copy of the sum variable, and the results are correctly combined at the end.
• Similarly, the reduction(max:max_value) clause ensures that each thread has its own private copy of the max_value variable, and the maximum value is correctly determined.
• The loop calculates the sum of the elements in the array by adding each element to the sum variable using the reduction clause.
• Simultaneously, it finds the largest number in the array by comparing each element with the max_value variable.
• Once the parallel region is completed, we print the computed sum and the largest number.
Program 1:
#include <stdio.h>
#include <omp.h>
#define ARRAY_SIZE 1000

int main() {
    int A[ARRAY_SIZE];
    int sum = 0;
    int max_value = 0;
    // Initialize the array
    for (int i = 0; i < ARRAY_SIZE; i++) {
        A[i] = i + 1;
    }
    // Parallel region begins here
    #pragma omp parallel for reduction(+:sum) reduction(max:max_value)
    for (int i = 0; i < ARRAY_SIZE; i++) {
        sum += A[i];
        if (A[i] > max_value) {
            max_value = A[i];
        }
    }
    printf("Sum: %d\n", sum);
    printf("Largest number: %d\n", max_value);
    return 0;
}

Program 2:
#include <stdio.h>
#define ARRAY_SIZE 1000

int main() {
    int A[ARRAY_SIZE];
    int sum = 0;
    int max_value = 0;
    // Initialize the array
    for (int i = 0; i < ARRAY_SIZE; i++) {
        A[i] = i + 1;
    }
    // Compute the sum
    for (int i = 0; i < ARRAY_SIZE; i++) {
        sum += A[i];
    }
    // Find the largest number
    for (int i = 0; i < ARRAY_SIZE; i++) {
        if (A[i] > max_value) {
            max_value = A[i];
        }
    }
    printf("Sum: %d\n", sum);
    printf("Largest number: %d\n", max_value);
    return 0;
}

Output
Sum: 500500
Largest number: 1000
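The printed values can be checked by hand: the array holds the consecutive integers 1 to 1000, and the sum of the first N integers is N(N+1)/2, so for N = 1000 the expected sum is 1000*1001/2 = 500500; the largest element is simply the last value stored, 1000. Both match the output above.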
Result

Thus the program that computes the sum of all the elements in an array A and finds the largest number in the array (C/C++), using OpenMP directives to make it run in parallel, was executed successfully.
Ex.No:4                                                                Date:
Write a simple Program demonstrating Message-Passing logic using OpenMP

Aim
To demonstrate Message-Passing logic using OpenMP.

Procedure

• To demonstrate message-passing logic using OpenMP, you can use the OpenMP library functions omp_get_thread_num() to get the current thread ID and omp_get_num_threads() to get the total number of threads.
• The example program below showcases a basic message-passing logic using OpenMP.
• In this program, the #pragma omp parallel directive creates a parallel region where multiple threads can execute in parallel.
• Inside the parallel region, each thread retrieves its own thread ID using omp_get_thread_num() and the total number of threads using omp_get_num_threads().
• The master thread (thread with ID 0) then sends a message by printing a message to the console.
• Other threads receive the message by printing their own messages.

Program
#include <stdio.h>
#include <omp.h>

int main() {
    int num_threads, thread_id;
    #pragma omp parallel private(thread_id) shared(num_threads)
    {
        thread_id = omp_get_thread_num();
        num_threads = omp_get_num_threads();
        if (thread_id == 0) {
            printf("Total number of threads: %d\n", num_threads);
            printf("Master thread sending a message...\n");
        } else {
            printf("Thread %d received a message from the master thread.\n", thread_id);
        }
    }
    return 0;
}
Output

Total number of threads: 4
Master thread sending a message...
Thread 1 received a message from the master thread.
Thread 3 received a message from the master thread.
Thread 2 received a message from the master thread.
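The listing above only mimics message passing with print statements. As an illustrative sketch (not part of the original exercise; the variable name mailbox and the value 42 are assumptions), the master thread can pass an actual value to the other threads through a shared variable, with a barrier making the readers wait until the value has been written:

#include <stdio.h>
#include <omp.h>

int main() {
    int mailbox = 0;                      // shared "message" slot
    #pragma omp parallel shared(mailbox)
    {
        int id = omp_get_thread_num();
        if (id == 0) {
            mailbox = 42;                 // master thread "sends" the message
            printf("Master thread wrote %d to the mailbox\n", mailbox);
        }
        #pragma omp barrier               // the barrier (and its implied flush) makes the write visible
        if (id != 0) {
            printf("Thread %d read %d from the mailbox\n", id, mailbox);
        }
    }
    return 0;
}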
Result
Thus the program demonstrating Message-Passing logic using OpenMP
was executed successfully
Ex.No:5                                                                Date:
Implement the All-Pairs Shortest-Path Problem (Floyd's Algorithm) Using OpenMP

Aim
To Implement the All-Pairs Shortest-Path Problem (Floyd's Algorithm)
Using OpenMP

Procedure

• In this program, we have a graph represented by a 2D array graph where graph[i][j] represents the weight of the edge from vertex i to vertex j.
• If there is no edge between the vertices, we use the value INF to indicate an infinite distance.
• The floydWarshall() function implements Floyd's algorithm to find the shortest distances between every pair of vertices.
• The outermost loop over the intermediate vertices k must run sequentially, because each iteration builds on the distances produced by the previous one; inside it, the loop over the rows i is parallelized using the #pragma omp parallel for directive.
• The innermost loop over j is executed sequentially within each thread.
• After calculating the shortest distances, the program prints the resulting distance matrix.

Program
#include <stdio.h>
#include <omp.h>
#define INF 99999
#define N 4

void floydWarshall(int graph[N][N]) {
    int dist[N][N];
    // Initialize the distance matrix
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            dist[i][j] = graph[i][j];
        }
    }
    // The k loop stays sequential: iteration k depends on the results of iteration k-1
    for (int k = 0; k < N; k++) {
        // Parallelize the work inside each k iteration across the rows i
        #pragma omp parallel for shared(dist)
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                // If vertex k is on a shorter path from i to j, update the distance
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                }
            }
        }
    }
    // Print the shortest distances
    printf("Shortest distances between every pair of vertices:\n");
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            if (dist[i][j] == INF) {
                printf("%7s", "INF");
            } else {
                printf("%7d", dist[i][j]);
            }
        }
        printf("\n");
    }
}

int main() {
    int graph[N][N] = {
        {0, 5, INF, 10},
        {INF, 0, 3, INF},
        {INF, INF, 0, 1},
        {INF, INF, INF, 0}
    };
    floydWarshall(graph);
    return 0;
}
Output

Shortest distances between every pair of vertices:
      0      5      8      9
    INF      0      3      4
    INF    INF      0      1
    INF    INF    INF      0
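The table can be verified by hand from the input graph: the only finite edges are 0->1 (weight 5), 0->3 (weight 10), 1->2 (weight 3) and 2->3 (weight 1), so the shortest distance from 0 to 2 is 5+3 = 8, from 1 to 3 is 3+1 = 4, and from 0 to 3 the route through vertices 1 and 2 costs 5+3+1 = 9, which beats the direct edge of weight 10; these are exactly the values printed above.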
Result

Thus the implementation of the All-Pairs Shortest-Path Problem (Floyd's Algorithm) using OpenMP was executed successfully.
Ex.No:6                                                                Date:
Implement a program Parallel Random Number Generators using Monte Carlo Methods in OpenMP

Aim

To implement a program for Parallel Random Number Generators using Monte Carlo Methods in OpenMP.

Procedure
• In this program, we use the Monte Carlo method to estimate the value of pi by randomly generating points within a unit square and determining how many fall within the unit circle.
• The more points we generate, the more accurate our estimate will be.
• The program uses OpenMP to parallelize the generation of random points.
• The loop over the points sits inside a #pragma omp parallel region and its iterations are divided among the threads with a #pragma omp for directive, allowing the points to be generated in parallel.
• The reduction(+:points_in_circle) clause ensures that each thread has its own private copy of the points_in_circle variable, and the results are correctly combined at the end.
• The rand_r(&seed) function is used to generate random numbers in a thread-safe manner.
• Each thread initializes its own seed value inside the parallel region to ensure independent random number generation.
• The program prints the estimated value of pi: the points are uniform in the unit square, and the quarter of the unit circle inside that square has area pi/4, so the fraction of points falling inside the circle approaches pi/4 and pi is estimated as 4 times points_in_circle divided by total_points.

Program
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main() {
    int total_points = 10000000;
    int points_in_circle = 0;

    #pragma omp parallel num_threads(4) reduction(+:points_in_circle)
    {
        // Each thread sets its own seed so the random streams are independent
        unsigned int seed = omp_get_thread_num();
        double x, y;
        // Divide the iterations of the loop among the threads of the team
        #pragma omp for
        for (int i = 0; i < total_points; i++) {
            // Generate random x and y coordinates in [0, 1]
            x = (double)rand_r(&seed) / RAND_MAX;
            y = (double)rand_r(&seed) / RAND_MAX;
            // Check if the point is inside the unit circle
            if (x * x + y * y <= 1.0) {
                points_in_circle++;
            }
        }
    }
    // Estimate the value of pi
    double pi = 4.0 * points_in_circle / total_points;
    printf("Estimated value of pi: %f\n", pi);
    return 0;
}

Output

Estimated value of pi: 3.141763
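Because the points are random, the printed value differs slightly from run to run and from the true value of pi; the statistical error of a Monte Carlo estimate of this kind shrinks roughly in proportion to 1/sqrt(total_points), so increasing the number of points (or adding threads so more points can be generated in the same time) tightens the estimate.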


Result
Thus the program implementing Parallel Random Number Generators using Monte Carlo Methods in OpenMP was executed successfully.
Ex.No:7                                                                Date:
Write a Program to demonstrate MPI-broadcast-and-collective-communication in C

Aim
To demonstrate MPI-broadcast-and-collective-communication in C.

Procedure

• In this program, each process receives an input value from rank 0 and all processes compute a global sum of that value using MPI broadcast and collective communication.
• The MPI_Init function initializes the MPI environment, MPI_Comm_rank retrieves the rank of the current process, and MPI_Comm_size retrieves the total number of processes.
• Process 0 reads an input value from the user and broadcasts it to all other processes using MPI_Bcast. The broadcast is called by all processes with the same communicator (MPI_COMM_WORLD), and the value is sent from rank 0 to every other process.
• Each process prints the received data using printf.
• Next, MPI_Allreduce is used to perform the reduction operation, which calculates the sum of all myData values across all processes. The result is stored in globalSum, and the reduction uses the MPI_SUM operation.
• Finally, each process prints its rank and the value of globalSum using printf.

Program
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int rank, size;
    int myData = 0;
    int globalSum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        printf("Enter a value: ");
        scanf("%d", &myData);
    }
    // Broadcast myData from rank 0 to all other processes
    MPI_Bcast(&myData, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received data: %d\n", rank, myData);
    // Perform the reduction operation: sum of all myData values
    MPI_Allreduce(&myData, &globalSum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("Process %d global sum: %d\n", rank, globalSum);
    MPI_Finalize();
    return 0;
}
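MPI programs are compiled with the MPI compiler wrapper and started through a process launcher. On a typical MPICH or Open MPI installation the commands would look like the following (the file name bcast_demo.c and the process count of 4 are only illustrative; the output below corresponds to a run with 4 processes):

mpicc bcast_demo.c -o bcast_demo
mpirun -np 4 ./bcast_demo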
Output

Enter a value: 10
Process 0 received data: 10
Process 1 received data: 10
Process 2 received data: 10
Process 3 received data: 10
Process 0 global sum: 40
Process 1 global sum: 40
Process 2 global sum: 40
Process 3 global sum: 40
Result
Thus the program to demonstrate MPI-broadcast-and-collective-communication in C was executed successfully.
Ex.No:8                                                                Date:
Write a Program to demonstrate MPI-scatter-gather-and-allgather in C

Aim
To write a program to demonstrate MPI-scatter-gather-and-allgather in C.

Procedure

• In this program, each process receives part of an input array from rank 0 using MPI scatter, the received values are gathered back at rank 0 using MPI gather, and finally all processes exchange the values with each other using MPI allgather.
• The MPI_Init function initializes the MPI environment, MPI_Comm_rank retrieves the rank of the current process, and MPI_Comm_size retrieves the total number of processes.
• Process 0 reads an input array from the user and scatters it using MPI_Scatter. The scatter is called by all processes with the same communicator (MPI_COMM_WORLD); with a send count of 1, each process receives one element from rank 0.
• Each process prints the received value using printf.
• Next, MPI_Gather is used to gather the values from all processes to rank 0. The gathered values are stored in the sendbuf array at rank 0, using the MPI_INT datatype, and process 0 prints them with printf.
• After that, MPI_Allgather is used to exchange the values among all processes. The allgather operation collects one value from every process and distributes the whole set to all processes, storing the result in the sendbuf array.
• Each process prints its rank and the allgathered values using printf.

Program

#include <stdio.h>
#include <mpi.h>
#define ARRAY_SIZE 8

int main(int argc, char** argv) {
    int rank, size;
    int sendbuf[ARRAY_SIZE];
    int recvbuf[ARRAY_SIZE];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        printf("Enter %d values: ", ARRAY_SIZE);
        for (int i = 0; i < ARRAY_SIZE; i++) {
            scanf("%d", &sendbuf[i]);
        }
    }
    // Scatter one value from rank 0 to each process
    // (with "size" processes, only the first "size" elements of sendbuf are distributed)
    MPI_Scatter(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received value: %d\n", rank, recvbuf[0]);
    // Gather one value from each process back to rank 0
    MPI_Gather(recvbuf, 1, MPI_INT, sendbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("Gathered values at rank 0: ");
        for (int i = 0; i < size; i++) {
            printf("%d ", sendbuf[i]);
        }
        printf("\n");
    }
    // Allgather: every process receives one value from every other process
    MPI_Allgather(recvbuf, 1, MPI_INT, sendbuf, 1, MPI_INT, MPI_COMM_WORLD);
    printf("Allgathered values at process %d: ", rank);
    for (int i = 0; i < size; i++) {
        printf("%d ", sendbuf[i]);
    }
    printf("\n");
    MPI_Finalize();
    return 0;
}

Output
(Run with 4 processes; only the first 4 input values are scattered.)
Enter 8 values: 1 2 3 4 5 6 7 8
Process 0 received value: 1
Process 1 received value: 2
Process 2 received value: 3
Process 3 received value: 4
Gathered values at rank 0: 1 2 3 4
Allgathered values at process 0: 1 2 3 4
Allgathered values at process 1: 1 2 3 4
Allgathered values at process 2: 1 2 3 4
Allgathered values at process 3: 1 2 3 4
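With a send count of 1, each process receives only a single element, so with 4 processes the last four input values are never distributed. A common variant (an illustrative sketch, not part of the original exercise, assuming ARRAY_SIZE is a multiple of the number of processes) scatters a block of ARRAY_SIZE/size elements to each process so the whole array is used:

#include <stdio.h>
#include <mpi.h>
#define ARRAY_SIZE 8

int main(int argc, char** argv) {
    int rank, size;
    int sendbuf[ARRAY_SIZE];
    int recvbuf[ARRAY_SIZE];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int block = ARRAY_SIZE / size;   // elements per process
    if (rank == 0) {
        for (int i = 0; i < ARRAY_SIZE; i++) {
            sendbuf[i] = i + 1;
        }
    }
    // Each process receives its own contiguous block of the array
    MPI_Scatter(sendbuf, block, MPI_INT, recvbuf, block, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d received:", rank);
    for (int i = 0; i < block; i++) {
        printf(" %d", recvbuf[i]);
    }
    printf("\n");
    MPI_Finalize();
    return 0;
}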
Result
Thus the program to demonstrate MPI-scatter-gather-and-allgather in C was executed successfully.
Ex.No:9                                                                Date:
Write a Program to demonstrate MPI-send-and-receive in C

Aim
To write a program to demonstrate MPI-send-and-receive in C.

Procedure
• In this program, process 0 sends an integer value to process 1 using MPI_Send, and process 1 receives the value using MPI_Recv.
• The MPI_Init function initializes the MPI environment, MPI_Comm_rank retrieves the rank of the current process, and MPI_Comm_size retrieves the total number of processes.
• In process 0 (rank 0), an integer value data is set to 123.
• Then, MPI_Send is used to send the value to process 1 (rank 1) with a tag of 0. The message is sent using the communicator MPI_COMM_WORLD.
• In process 1 (rank 1), MPI_Recv is used to receive the value from process 0. The received value is stored in the variable receivedData.
• The receive operation specifies the source rank as 0 and the tag as 0; the status information is ignored by passing MPI_STATUS_IGNORE.
• Process 0 then prints a message indicating the sent data, and process 1 prints a message indicating the received data.

Program
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int rank, size;
    int data = 0;
    int receivedData;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        data = 123;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Process 0 sent data: %d\n", data);
    }
    else if (rank == 1) {
        MPI_Recv(&receivedData, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received data: %d\n", receivedData);
    }
    MPI_Finalize();
    return 0;
}
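Note that the program assumes at least two MPI processes, since rank 0 sends to rank 1. With an illustrative file name of send_recv.c it could be built and launched as:

mpicc send_recv.c -o send_recv
mpirun -np 2 ./send_recv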

Output
Process 0 sent data: 123
Process 1 received data: 123
Result
Thus the program to demonstrate MPI-send-and-receive in C was executed successfully
Ex.No:10                                                               Date:
Write a Program to demonstrate performing-parallel-rank-with-MPI in C

Aim
To write a program to demonstrate performing-parallel-rank-with-MPI in C.

Procedure

• In this program, each MPI process retrieves its rank using MPI_Comm_rank and the total number of processes using MPI_Comm_size.
• Then, each process prints its rank and the total number of processes.
• The MPI_Init function initializes the MPI environment, MPI_Comm_rank retrieves the rank of the current process, and MPI_Comm_size retrieves the total number of processes.
• Each process then prints a message using printf that includes its rank and the total number of processes.

Program

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Output

Hello from process 0 of 4
Hello from process 1 of 4
Hello from process 2 of 4
Hello from process 3 of 4
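The listing above prints each process's communicator rank. If the title is read in the sense of the classic "parallel rank" exercise (ranking each process's local value among the values held by all processes), a minimal sketch along those lines is shown below; it is only an assumption about the intended extension, and the value my_value derived from the rank is purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    // Each process owns one value; it is derived from the rank here for reproducibility
    int my_value = (rank * 37) % 11;
    // Collect every process's value on every process
    int *all = malloc(size * sizeof(int));
    MPI_Allgather(&my_value, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
    // The parallel rank of my_value is the number of strictly smaller values
    int my_order = 0;
    for (int i = 0; i < size; i++) {
        if (all[i] < my_value) my_order++;
    }
    printf("Process %d holds %d, which ranks %d among %d values\n", rank, my_value, my_order, size);
    free(all);
    MPI_Finalize();
    return 0;
}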
Result
Thus the program to demonstrate performing-parallel-rank-with-MPI in C was executed successfully.
