
MPI/C – An Advanced Send & Receive

For my understanding of what MPI is and does, please refer to this post.

MPI_Send & MPI_Recv

Rarely does the MASTER in a parallelized program send information to a WORKER solely for the purpose of having it displayed. Albeit simple, the program below demonstrates a more typical use of MPI_Send & MPI_Recv: the WORKER manipulates the information it receives from the MASTER, sends the new information back, and the MASTER displays it.
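
For quick reference, the two calls have the prototypes shown below, as declared in mpi.h for MPI-2 era implementations like the MPICH2 1.3.1 used here (MPI-3 later made the send buffer const). The send names a destination rank and a tag, and the matching receive must specify the same source rank and tag, or the wildcards MPI_ANY_SOURCE / MPI_ANY_TAG.

/* Blocking, standard-mode send: deliver 'count' elements of 'datatype'
 * starting at 'buf' to rank 'dest' in communicator 'comm', labeled 'tag'   */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);

/* Blocking receive: accept a matching message from rank 'source' with
 * label 'tag'; 'status' reports the actual source, tag and error code      */
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status);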

Program Listing

/* send_receive_advanced.c
 * PARALLEL [MPI] C PROGRAM TO DEMONSTRATE MPI_Send & MPI_Recv FUNCTIONS.
 * MASTER [proc_id = 0] SENDS SOME DATA TO A WORKER [proc_id = 1].
 * WORKER PERFORMS SOME MATHEMATICAL OPERATION AND RETURNS THE
 * 'NEW INFORMATION' TO MASTER.
 *
 * TESTED SUCCESSFULLY WITH MPICH2 (1.3.1) COMPILED AGAINST GCC (4.1.2) 
 * IN A LINUX BOX WITH QUAD CORE INTEL XEON PROCESSOR (1.86 GHz) & 4GB OF RAM.
 *
 * FIRST WRITTEN: GOWTHAM; Sat, 27 Nov 2010 17:10:10 -0500
 * LAST MODIFIED: GOWTHAM; Sat, 27 Nov 2010 18:15:23 -0500
 *
 * URL:
 * http://sgowtham.net/blog/2010/11/28/mpi-c-an-advanced-send-receive/
 *
 * COMPILATION:
 * mpicc -g -Wall -lm send_receive_advanced.c -o send_receive_advanced.x
 *
 * EXECUTION:
 * mpirun -machinefile MACHINEFILE -np NPROC ./send_receive_advanced.x
 *
 * NPROC       : NUMBER OF PROCESSORS ALLOCATED TO RUNNING THIS PROGRAM;
 *               MUST BE EQUAL TO 2
 * MACHINEFILE : FILE LISTING THE HOSTNAMES OF PROCESSORS ALLOCATED TO
 *               RUNNING THIS PROGRAM
 *
*/
 
/* STANDARD HEADERS AND DEFINITIONS 
 * REFERENCE: http://en.wikipedia.org/wiki/C_standard_library
*/
#include <stdio.h>  /* Core input/output operations                         */
#include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */
#include <math.h>   /* Common mathematical functions                        */
#include <time.h>   /* Converting between various date/time formats         */
#include <mpi.h>    /* MPI functionality                                    */
 
#define MASTER  0   /* Process ID for MASTER                                */
#define WORKER  1   /* Process ID for WORKER                                */
#define N      10   /* Array size                                           */
 
/* MAIN PROGRAM BEGINS */
int main(int argc, char **argv) {
 
  /* VARIABLE DECLARATION */
  int    proc_id,       /* Process identifier                    */
         n_procs,       /* Number of processors                  */
         i;             /* Dummy/Running index                   */
 
  double x[N],          /* 1D array of size N
                           Information sent from MASTER
                           Information received by WORKER        */
         y[N];          /* 1D array of size N
                           Information sent from WORKER
                           Information received by MASTER        */
 
  MPI_Status status;    /* MPI structure containing return codes
                           for message passing operations        */
 
  /* INITIALIZE MPI */
  MPI_Init(&argc, &argv);
 
  /* GET THE PROCESS ID AND NUMBER OF PROCESSORS */
  MPI_Comm_rank(MPI_COMM_WORLD, &proc_id);
  MPI_Comm_size(MPI_COMM_WORLD, &n_procs);
 
  /* IF MASTER, THEN DO THE FOLLOWING:
   * POPULATE x[N]
   * SEND x[N] TO WORKER 
   * RECEIVE y[N] FROM WORKER
   * DISPLAY y[N]
  */
  if (proc_id == MASTER) {
 
    /* POPULATE x[N] - EACH ARRAY ELEMENT IS JUST THE INDEX */
    for (i=0; i < N; i++) {
      x[i] = i;
    }
 
    /* SEND x[N] TO WORKER */
    /* MPI_Send(buf, count, datatype, dest, tag, comm) */
    printf("\n  Sending x[N] to WORKER [proc_id = 1] with TAG = 0\n");
    MPI_Send(x, N, MPI_DOUBLE, WORKER, 0, MPI_COMM_WORLD);
 
    /* RECEIVE y[N] FROM WORKER */
    /* MPI_Recv(buf, count, datatype, source, tag, comm, status) */
    printf("\n  Receiving y[N] from WORKER [proc_id = 1] with TAG = 1\n");
    MPI_Recv(y, N, MPI_DOUBLE, WORKER, 1, MPI_COMM_WORLD, &status);
 
    /* DISPLAY y[N] */
    for (i=0; i < N; i++) {
      printf("    y[%d] = %2.0lf\n", i, y[i]);
    }
 
  } /* MASTER LOOP ENDS */
 
 
  /* IF WORKER, THEN DO THE FOLLOWING:
   * RECEIVE x[N] FROM MASTER 
   * CREATE y[N]
   * SEND y[N] TO MASTER
  */
  if (proc_id == WORKER) {
 
    /* RECEIVE x[N] FROM MASTER */
    printf("\n  Receiving x[N] from MASTER [proc_id = 0] with TAG = 0\n");
    MPI_Recv(x, N, MPI_DOUBLE, MASTER, 0, MPI_COMM_WORLD, &status);
 
    /* CREATE A NEW ARRAY y[N]
     * A SIMPLE MATHEMATICAL MANIPULATION 
     * TAKE THE ELEMENTS OF x[N] AND SQUARE THEM
    */
    for (i=0; i < N; i++) {
      y[i] = x[i] * x[i];
    }
 
    /* SEND y[N] TO MASTER */
    printf("\n  Sending y[N] to MASTER [proc_id = 0] with TAG = 1\n");
    MPI_Send(y, N, MPI_DOUBLE, MASTER, 1, MPI_COMM_WORLD);
 
  } /* WORKER LOOP ENDS */
 
  /* FINALIZE MPI */
  MPI_Finalize();
 
  /* INDICATE THE TERMINATION OF THE PROGRAM */
  return 0;
 
} /* MAIN PROGRAM ENDS */
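
Since the send/receive pairing above is hard-wired for one MASTER and one WORKER, the program only makes sense when launched with exactly two processes (see the NPROC note in the header). One could enforce that at run time with a small guard placed right after the MPI_Comm_size call; a sketch of such a guard (the error code 1 passed to MPI_Abort is arbitrary) follows.

  /* SKETCH, NOT PART OF THE LISTING ABOVE: ABORT UNLESS EXACTLY TWO
   * PROCESSES WERE REQUESTED, SINCE THE SEND/RECEIVE PAIRING IS
   * HARD-WIRED FOR A SINGLE MASTER AND A SINGLE WORKER                      */
  if (n_procs != 2) {
    if (proc_id == MASTER) {
      printf("  This program requires exactly 2 processes; got %d\n", n_procs);
    }
    MPI_Abort(MPI_COMM_WORLD, 1);
  }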

Program Compilation & Execution

The machine where I run this calculation, dirac, has 4 processors and MPICH2 v1.3.1 compiled against GCC v4.1.2.
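
The mpirun alias below picks up $HOME/machinefile. For MPICH2, that file is simply a list of host names, one per line, optionally followed by :n to indicate how many processes may be placed on that host. A plausible machinefile for this single box (the entry is illustrative, not copied from the actual file) would contain one line; a multi-node run would list one such line per host.

dirac:4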

[guest@dirac mpi_samples]$ which mpicc
alias mpicc='mpicc -g -Wall -lm'
	~/mpich2/1.3.1/gcc/4.1.2/bin/mpicc
 
[guest@dirac mpi_samples]$ which mpirun
alias mpirun='mpirun -machinefile $HOME/machinefile'
	~/mpich2/1.3.1/gcc/4.1.2/bin/mpirun
 
[guest@dirac mpi_samples]$ mpicc send_receive_advanced.c -o send_receive_advanced.x
 
[guest@dirac mpi_samples]$ mpirun -np 2 ./send_receive_advanced.x
 
  Sending x[N] to WORKER [proc_id = 1] with TAG = 0
 
  Receiving y[N] from WORKER [proc_id = 1] with TAG = 1
    y[0] =  0
    y[1] =  1
    y[2] =  4
 
  Receiving x[N] from MASTER [proc_id = 0] with TAG = 0
 
  Sending y[N] to MASTER [proc_id = 0] with TAG = 1
    y[3] =  9
    y[4] = 16
    y[5] = 25
    y[6] = 36
    y[7] = 49
    y[8] = 64
    y[9] = 81
 
[guest@dirac mpi_samples]$



Often, the output is not synchronized: printf statements from the MASTER and the WORKER do not always show up in the logically expected order, because each process writes to stdout on its own and MPI makes no guarantee about how those streams are interleaved. One way to fix this issue is to remove all printf statements from the WORKERS.
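
A related refinement, sketched below, is to have the WORKER format its status line into a character buffer and ship it to the MASTER, so that only the MASTER ever touches stdout. The fragment is meant to slot into main() of the listing above (reusing proc_id, status, MASTER and WORKER); the tag 99 and the buffer size are arbitrary choices, not anything prescribed by MPI.

  char msg[128];  /* Message buffer; 128 bytes is an arbitrary size           */

  if (proc_id == WORKER) {
    /* FORMAT THE STATUS LINE AND SEND IT TO MASTER INSTEAD OF PRINTING IT    */
    snprintf(msg, sizeof(msg), "WORKER [proc_id = %d] finished squaring x[N]", proc_id);
    MPI_Send(msg, (int) sizeof(msg), MPI_CHAR, MASTER, 99, MPI_COMM_WORLD);
  }

  if (proc_id == MASTER) {
    /* ONLY MASTER WRITES TO stdout, SO THE ORDER OF OUTPUT IS UNDER ITS CONTROL */
    MPI_Recv(msg, (int) sizeof(msg), MPI_CHAR, WORKER, 99, MPI_COMM_WORLD, &status);
    printf("  %s\n", msg);
  }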

