MPI: how to receive dynamic arrays from slave nodes?

I am new to MPI. I want to send three ints to three slave nodes so that each creates a dynamic array, and each array is then sent back to the master. According to this post, I modified the code, and it is now close to the right answer. However, I hit a breakpoint in the receiver code when receiving the array from slave #3 (m == 3). Thank you in advance!

My code is as follows:

#include <mpi.h>
#include <iostream>
#include <stdlib.h>

int main(int argc, char** argv)
{
    int firstBreakPt, lateralBreakPt;
    //int reMatNum1, reMatNum2;
    int tmpN;

    int breakPt[3][2]={{3,5},{6,9},{4,7}};

    int myid, numprocs;
    MPI_Status status;

//  double *reMat1;
//  double *reMat2;


    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);

    tmpN = 15;

    if (myid==0)
    {
        // send three parameters to slaves;
        for (int i=1;i<numprocs;i++)
        {
            MPI_Send(&tmpN,1,MPI_INT,i,0,MPI_COMM_WORLD);

            firstBreakPt = breakPt[i-1][0];
            lateralBreakPt = breakPt[i-1][1];           

            //std::cout<<i<<" "<<breakPt[i-1][0] <<" "<<breakPt[i-1][1]<<std::endl;

            MPI_Send(&firstBreakPt,1,MPI_INT,i,1,MPI_COMM_WORLD);
            MPI_Send(&lateralBreakPt,1,MPI_INT,i,2,MPI_COMM_WORLD);
        }

        // receive arrays from slaves;
        for (int m =1; m<numprocs; m++)
        {
            MPI_Probe(m, 3, MPI_COMM_WORLD, &status);

            int nElems3, nElems4;
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems3);

            // Allocate buffer of appropriate size
            double *result3 = new double[nElems3];
            MPI_Recv(result3,nElems3,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);

            std::cout<<"Tag is 3, ID is "<<m<<std::endl;
            for (int ii=0;ii<nElems3;ii++)
            {
                std::cout<<result3[ii]<<std::endl;
            }

            MPI_Probe(m, 4, MPI_COMM_WORLD, &status);
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems4);

            // Allocate buffer of appropriate size
            double *result4 = new double[nElems4];
            MPI_Recv(result4,nElems4,MPI_DOUBLE,m,4,MPI_COMM_WORLD,&status);

            std::cout<<"Tag is 4, ID is "<<m<<std::endl;
            for (int ii=0;ii<nElems4;ii++)
            {
                std::cout<<result4[ii]<<std::endl;
            }
        }
    }
    else
    {
        // receive three paramters from master;
        MPI_Recv(&tmpN,1,MPI_INT,0,0,MPI_COMM_WORLD,&status);

        MPI_Recv(&firstBreakPt,1,MPI_INT,0,1,MPI_COMM_WORLD,&status);
        MPI_Recv(&lateralBreakPt,1,MPI_INT,0,2,MPI_COMM_WORLD,&status);

        // width
        int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
        int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;

        // create dynamic arrays
        double *reMat1 = new double[width1*width1];
        double *reMat2 = new double[width2*width2];

        for (int n=0;n<width1; n++)
        {
            for (int j=0;j<width1; j++)
            {
                reMat1[n*width1+j]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX); 
                //a[i*Width+j]=1.00;
            }
        }

        for (int k=0;k<width2; k++)
        {
            for (int h=0;h<width2; h++)
            {
                reMat2[k*width2+h]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX); 
                //a[i*Width+j]=1.00;
            }
        }

        // send it back to master
        MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);
        MPI_Send(reMat2,width2*width2,MPI_DOUBLE,0,4,MPI_COMM_WORLD);
    }

    MPI_Finalize();

    std::cin.get();

    return 0;
}

P.S. The code above is the corrected, working version.

Answers


Use collective MPI operations, as Zulan suggested. For example, the first thing your code does is have the root send the same value to all the slaves, which is a broadcast, i.e. MPI_Bcast(). Then the root sends a different value to each slave, which is a scatter, i.e. MPI_Scatter().
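A minimal sketch of what that could look like, assuming four processes and that rank 0 also receives a (dummy) breakpoint pair, which differs slightly from the original master/slave split; the variable names follow the question, but the wiring is only illustrative:

    // Sketch only: MPI_Bcast + MPI_Scatter instead of the send loop.
    // Assumes exactly 4 processes (row 0 of breakPt is a dummy for the root).
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        int myid, numprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

        int tmpN = 0;
        int breakPt[4][2] = {{0, 0}, {3, 5}, {6, 9}, {4, 7}}; // row 0: dummy for rank 0
        if (myid == 0)
            tmpN = 15;

        // Same value to every rank.
        MPI_Bcast(&tmpN, 1, MPI_INT, 0, MPI_COMM_WORLD);

        // A different pair of ints to every rank (2 ints per process).
        int myPair[2];
        MPI_Scatter(breakPt, 2, MPI_INT, myPair, 2, MPI_INT, 0, MPI_COMM_WORLD);

        std::cout << "rank " << myid << ": tmpN=" << tmpN
                  << " first=" << myPair[0] << " lateral=" << myPair[1] << std::endl;

        MPI_Finalize();
        return 0;
    }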

The last operation is that the slave processes send variably-sized data to the root, for which the MPI_Gatherv() function exists. However, to use this function, you need to:

  1. allocate the incoming buffer on the root (there is no allocation for reMat1 and reMat2 in the first if-branch of your code), which means the root needs to know the total element count,
  2. tell MPI_Gatherv() on the root how many elements will be received from each slave and where to put them.

This problem is easily solved with a so-called parallel prefix (scan); look at MPI_Scan() or MPI_Exscan().
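A sketch of how the counts, the displacements and MPI_Gatherv() could fit together, assuming the per-rank counts are collected on the root with MPI_Gather() and the displacements are built by a prefix sum there (MPI_Exscan() would instead give every rank its own offset directly); the variable names are placeholders, not code from the question:

    // Sketch: gather variably-sized blocks with MPI_Gatherv. Each rank
    // contributes width*width doubles; the root first learns the counts,
    // builds displacements by a prefix sum, then gathers the data.
    #include <mpi.h>
    #include <vector>
    #include <cstdlib>
    #include <iostream>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        int myid, numprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

        // Every rank decides its own amount of data (stand-in for width1/width2).
        srand(myid + 1);
        int width   = 2 + rand() % 4;
        int myCount = width * width;
        std::vector<double> sendBuf(myCount, (double)myid);

        // 1. The root learns how many elements each rank will send.
        std::vector<int> counts(numprocs), displs(numprocs);
        MPI_Gather(&myCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

        // 2. The root builds displacements (a prefix sum over the counts)
        //    and allocates the receive buffer before the data arrives.
        std::vector<double> recvBuf;
        if (myid == 0)
        {
            int total = 0;
            for (int i = 0; i < numprocs; ++i) { displs[i] = total; total += counts[i]; }
            recvBuf.resize(total);
        }

        MPI_Gatherv(sendBuf.data(), myCount, MPI_DOUBLE,
                    recvBuf.data(), counts.data(), displs.data(), MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        if (myid == 0)
            std::cout << "root received " << recvBuf.size() << " doubles" << std::endl;

        MPI_Finalize();
        return 0;
    }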


Here you create randomized widths

    int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
    int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;

which you later use to send the data back to process 0

    MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);

But process 0 expects a different number of elements:

    MPI_Recv(reMat1,firstBreakPt*tmpN*firstBreakPt*tmpN,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);

which causes problems. Process 0 does not know what sizes each slave process generated, so you have to send the sizes back to it, the same way you sent the sizes to the slaves.
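In the point-to-point style of the question, that could look roughly like the sketch below: the slave sends its width first (tag 5 is an arbitrary choice here), and the master uses it to allocate the buffer before receiving the data. The corrected code in the question reaches the same goal with MPI_Probe() and MPI_Get_elements() instead of an extra message.

    // Sketch: report the size before the data, so the master can allocate
    // exactly the right buffer. Widths are stand-ins for the random ones.
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        int myid, numprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
        MPI_Status status;

        if (myid == 0)
        {
            for (int m = 1; m < numprocs; ++m)
            {
                int n = 0;
                MPI_Recv(&n, 1, MPI_INT, m, 5, MPI_COMM_WORLD, &status);  // size first
                double* buf = new double[n * n];
                MPI_Recv(buf, n * n, MPI_DOUBLE, m, 3, MPI_COMM_WORLD, &status);
                std::cout << "got " << n * n << " doubles from rank " << m << std::endl;
                delete[] buf;
            }
        }
        else
        {
            int width1 = 3 + myid;                        // stand-in for the random width
            double* reMat1 = new double[width1 * width1];
            for (int i = 0; i < width1 * width1; ++i) reMat1[i] = (double)i;

            MPI_Send(&width1, 1, MPI_INT, 0, 5, MPI_COMM_WORLD);          // size first
            MPI_Send(reMat1, width1 * width1, MPI_DOUBLE, 0, 3, MPI_COMM_WORLD);
            delete[] reMat1;
        }

        MPI_Finalize();
        return 0;
    }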

