MPI/C – A Simple Send & Receive

For my take on what MPI is and what it does, please refer to this post.

MPI_Send & MPI_Recv

More often than not, a sequential (serial) program is parallelized, assuming resources are available for the conversion as well as for its execution, so that a problem can be solved faster. Implicit in this is some form of communication between the MASTER process and its WORKER processes. MPI_Send & MPI_Recv are commonly used to accomplish such communication, and the program below demonstrates a simple usage.
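
At its core, the exchange is a single matched pair of calls. Here is that pair, pulled out of the full listing that follows (same MASTER/WORKER ranks, same variable x, same status object):

  if (proc_id == MASTER) {
    /* Rank 0 sends one double, tagged 0, to rank 1 */
    MPI_Send(&x, 1, MPI_DOUBLE, WORKER, 0, MPI_COMM_WORLD);
  }

  if (proc_id == WORKER) {
    /* Rank 1 receives one double, tagged 0, from rank 0 */
    MPI_Recv(&x, 1, MPI_DOUBLE, MASTER, 0, MPI_COMM_WORLD, &status);
  }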

Program Listing

/* send_receive_simple.c
 * PARALLEL [MPI] C PROGRAM TO DEMONSTRATE MPI_Send & MPI_Recv FUNCTIONS.
 * MASTER [proc_id = 0] SENDS SOME DATA TO A WORKER [proc_id = 1].  
 * WORKER DISPLAYS THE DATA RECEIVED. 
 *
 * TESTED SUCCESSFULLY WITH MPICH2 (1.3.1) COMPILED AGAINST GCC (4.1.2) 
 * IN A LINUX BOX WITH QUAD CORE INTEL XEON PROCESSOR (1.86 GHz) & 4GB OF RAM.
 *
 * FIRST WRITTEN: GOWTHAM; Sat, 27 Nov 2010 16:01:01 -0500
 * LAST MODIFIED: GOWTHAM; Sat, 27 Nov 2010 16:49:11 -0500
 *
 * URL:
 * http://sgowtham.net/blog/2010/11/28/mpi-c-a-simple-send-receive/
 *
 * COMPILATION:
 * mpicc -g -Wall -lm send_receive_simple.c -o send_receive_simple.x
 *
 * EXECUTION:
 * mpirun -machinefile MACHINEFILE -np NPROC ./send_receive_simple.x
 *
 * NPROC       : NUMBER OF PROCESSORS ALLOCATED TO RUNNING THIS PROGRAM;
 *               MUST BE EQUAL TO 2
 * MACHINEFILE : FILE LISTING THE HOSTNAMES OF PROCESSORS ALLOCATED TO
 *               RUNNING THIS PROGRAM
 *
*/
 
/* STANDARD HEADERS AND DEFINITIONS 
 * REFERENCE: http://en.wikipedia.org/wiki/C_standard_library
*/
#include <stdio.h>  /* Core input/output operations                         */
#include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */
#include <math.h>   /* Common mathematical functions                        */
#include <time.h>   /* Converting between various date/time formats         */
#include <mpi.h>    /* MPI functionality                                    */
 
#define MASTER  0   /* Process ID for MASTER                                */
#define WORKER  1   /* Process ID for WORKER                                */
 
/* MAIN PROGRAM BEGINS */
int main(int argc, char **argv) {
 
  /* VARIABLE DECLARATION */
  int    proc_id,       /* Process identifier                    */
         n_procs;       /* Number of processors                  */
 
  double x;             /* Information sent from MASTER
                           Information received by WORKER        */ 
 
  MPI_Status status;    /* MPI structure containing return codes
                           for message passing operations        */
 
  /* INITIALIZE MPI */
  MPI_Init(&argc, &argv);
 
  /* GET THE PROCESS ID AND NUMBER OF PROCESSORS */
  MPI_Comm_rank(MPI_COMM_WORLD, &proc_id);
  MPI_Comm_size(MPI_COMM_WORLD, &n_procs);
 
  /* IF MASTER, THEN DO THE FOLLOWING:
   * INITIALIZE x
   * SEND IT TO THE WORKER 
  */
  if (proc_id == MASTER) {
 
    /* INITIALIZE x */
    x = 100.001;
 
    /* SEND x TO WORKER */
    printf("\n  Sending x to WORKER [proc_id = 1] with TAG = 0\n");
 
    /* MPI_Send syntax:
     * MPI_Send(buf, count, datatype, dest, tag, comm)
     * [IN buf]      initial address of send buffer (choice)
     * [IN count]    number of elements in send buffer (nonnegative integer)
     * [IN datatype] datatype of each send buffer element (handle)
     * [IN dest]     rank of destination (integer)
     * [IN tag]      message tag (integer)
 *               must match the tag in the corresponding Recv statement
     * [IN comm]     communicator (handle) 
    */
    MPI_Send(&x, 1, MPI_DOUBLE, WORKER, 0, MPI_COMM_WORLD);
 
  } /* MASTER BLOCK ENDS */
 
 
  /* IF WORKER, THEN DO THE FOLLOWING:
   * RECEIVE x FROM MASTER
   * DISPLAY IT 
  */
  if (proc_id == WORKER) {
 
    /* RECEIVE x FROM MASTER */
    printf("\n  Receiving x from MASTER [proc_id = 0] with TAG = 0\n");
 
    /* MPI_Recv syntax:
     * MPI_Recv(buf, count, datatype, source, tag, comm, status)
     * [OUT buf]     initial address of receive buffer (choice)
     * [IN count]    number of elements in receive buffer (integer)
     * [IN datatype] datatype of each receive buffer element (handle)
     * [IN source]   rank of source (integer)
     * [IN tag]      message tag (integer)
     *               must be same as the tag in Send statement
     * [IN comm]     communicator (handle)
     * [OUT status]  status object (Status) 
    */
    MPI_Recv(&x, 1, MPI_DOUBLE, MASTER, 0, MPI_COMM_WORLD, &status);
 
    /* DISPLAY x */
    printf("\n    x = %lf\n", x);
 
  } /* WORKER BLOCK ENDS */
 
  /* FINALIZE MPI */
  MPI_Finalize();
 
  /* INDICATE THE TERMINATION OF THE PROGRAM */
  return 0;
 
} /* MAIN PROGRAM ENDS */
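
The status object handed to MPI_Recv is declared but never examined in the listing above. If the WORKER also needed to know which rank sent the message, which tag it carried, or how many elements actually arrived, it could interrogate status right after the receive. A minimal sketch (the local variable count is introduced here purely for illustration):

    int count;

    /* QUERY THE STATUS OBJECT FILLED IN BY MPI_Recv */
    MPI_Get_count(&status, MPI_DOUBLE, &count);
    printf("\n    Received %d element(s) from rank %d with tag %d\n",
           count, status.MPI_SOURCE, status.MPI_TAG);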

Program Compilation & Execution

The machine where I run this calculation, dirac, has 4 processors and runs MPICH2 v1.3.1 compiled against GCC v4.1.2.

[guest@dirac mpi_samples]$ which mpicc
alias mpicc='mpicc -g -Wall -lm'
	~/mpich2/1.3.1/gcc/4.1.2/bin/mpicc
 
[guest@dirac mpi_samples]$ which mpirun
alias mpirun='mpirun -machinefile $HOME/machinefile'
	~/mpich2/1.3.1/gcc/4.1.2/bin/mpirun
 
[guest@dirac mpi_samples]$ mpicc send_receive_simple.c -o send_receive_simple.x
 
[guest@dirac mpi_samples]$ mpirun -np 2 ./send_receive_simple.x
 
  Sending x to WORKER [proc_id = 1] with TAG = 0
 
  Receiving x from MASTER [proc_id = 0] with TAG = 0
 
    x = 100.001000
 
[guest@dirac mpi_samples]$
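
As noted in the header comment, the program expects exactly 2 processes but never verifies it: with fewer than 2, the MPI_Send to rank 1 fails because that rank does not exist, and with more, the extra ranks simply idle until MPI_Finalize. A small guard placed right after the MPI_Comm_size call would make the requirement explicit; a sketch:

  /* ABORT UNLESS THE PROGRAM WAS STARTED WITH EXACTLY 2 PROCESSES */
  if (n_procs != 2) {
    if (proc_id == MASTER)
      fprintf(stderr, "\n  This program requires exactly 2 processes; got %d\n", n_procs);
    MPI_Abort(MPI_COMM_WORLD, 1);
  }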
