
c++ - Dynamic Memory Allocation in MPI

I am new to MPI. I wrote a simple program to display a matrix using multiple processes. Say I have an 8x8 matrix and launch the MPI program with 4 processes: the first 2 rows will be printed by the 1st process, the next 2 rows by the 2nd process, and so on, dividing the matrix equally among the processes.

#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <conio.h>   // for getch()

using namespace std;

#define S 8

MPI_Status status;

int main(int argc, char *argv[])
{
int numtasks, taskid;
int i, j, k = 0;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

int rows, offset, remainPart, orginalRows, height, width;
int **a;
//  int a[S][S];

if(taskid == 0)
{
    cout<<taskid<<endl;
    height = width = S;

    a = (int **)malloc(height*sizeof(int *));
    for(i=0; i<height; i++)
        a[i] =  (int *)malloc(width*sizeof(int));

    for(i=0; i<S; i++)
        for(j=0; j<S; j++)
            a[i][j] = ++k;

    rows = S/numtasks;
    offset = rows;
    remainPart = S%numtasks;

    cout<<"Num Rows : "<<rows<<endl;

    for(i=1; i<numtasks; i++)
        if(remainPart > 0)
        {
            orginalRows = rows;
            rows++;
            remainPart--;

            MPI_Send(&offset, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&rows, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&width, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&a[offset][0], rows*S, MPI_INT,i,1, MPI_COMM_WORLD);

            offset += rows;
            rows = orginalRows;
        }
        else
        {
            MPI_Send(&offset, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&rows, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&width, 1, MPI_INT, i, 1, MPI_COMM_WORLD);
            MPI_Send(&a[offset][0], rows*S, MPI_INT,i,1, MPI_COMM_WORLD);

            offset += rows;
        }

        //Processing
        rows = S/numtasks;
        for(i=0; i<rows; i++)
        {
            for(j=0; j<width; j++)
                cout<<a[i][j]<<"";
            cout<<endl;
        }
}else
{
    cout<<taskid<<endl;

    MPI_Recv(&offset, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Recv(&rows, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Recv(&width, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    a = (int **)malloc(rows*sizeof(int *));
    for(i=0; i<rows; i++)
        a[i] =  (int *)malloc(width*sizeof(int));
    MPI_Recv(&a, rows*width, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    cout<<"Offset : "<<offset<<"
Rows : "<<rows<<"
Width : "<<width<<endl;

    for(i=0; i<rows; i++)
    {
        for(j=0; j<width; j++)
            cout<<a[i][j]<<"";
        cout<<endl;
    }
}

getch();
MPI_Finalize();

return 0;
}

This is my complete code. Here I have allocated the memory for 'a' dynamically; while printing a[i][j] in the else part, I get a runtime error. If I change the dynamic memory allocation to static, i.e. change int **a to int a[S][S] and remove

    a = (int **)malloc(rows*sizeof(int *));
    for(i=0; i<rows; i++)
        a[i] =  (int *)malloc(width*sizeof(int));

it works perfectly.


1 Reply

There are at least two ways to dynamically allocate a 2D array.

The first one is the approach of @HRoid: each row is allocated one at a time, as sketched below.
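
A minimal sketch of that per-row approach (with hypothetical height/width variables): it is simple, but nothing guarantees that consecutive rows end up adjacent in memory.

    // per-row allocation: every row is its own malloc'd block
    int **a = (int **)malloc(height * sizeof(int *));
    for(int i = 0; i < height; i++)
        a[i] = (int *)malloc(width * sizeof(int));   // rows may land anywhere in the heap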

The second one is suggested by @Claris, and it ensures that the data is contiguous in memory. That is required by many MPI operations, and also by libraries like FFTW (2D fast Fourier transform) or LAPACK (dense matrices for linear algebra). Your program may fail at

MPI_Send(&a[offset][0], rows*S, MPI_INT,i,1, MPI_COMM_WORLD);

if rows > 1, because this call tries to send items that lie past the end of row number offset. That may trigger a segmentation fault or other undefined behavior.
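
As a quick sanity check (a hypothetical addition to your rank-0 code, not part of the original): with per-row allocation, the end of row offset is generally not the start of row offset+1, so a single send of rows*S ints starting at &a[offset][0] walks off the first row.

    // sketch: detect the gap between two per-row allocations (assumes offset+1 < height)
    if(a[offset] + width != a[offset + 1])
        cout << "rows " << offset << " and " << offset + 1 << " are not contiguous" << endl;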

You may allocate your array this way:

a = (int **)malloc(rows * sizeof(int *));
if(a == NULL){ fprintf(stderr, "out of memory...i will fail\n"); }
int *t = (int *)malloc(rows * width * sizeof(int));
if(t == NULL){ fprintf(stderr, "out of memory...i will fail\n"); }
for(i = 0; i < rows; ++i)
  a[i] = &t[i * width];   // every row points into one contiguous block

Watch out: malloc does not initialize memory to 0!
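
If you need the memory zeroed, a small variant of the allocation above is calloc, which does initialize the block to 0:

    // calloc zero-initializes the whole block, unlike malloc
    int *t = (int *)calloc(rows * width, sizeof(int));
    if(t == NULL){ fprintf(stderr, "out of memory...i will fail\n"); }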

It seems that you want to spread a 2D array over many processes. Have a look at MPI_Scatterv(), and at related questions on this topic.
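
For reference, the C prototype of MPI_Scatterv() (MPI-3 form) and the role of the arguments used below:

    int MPI_Scatterv(const void *sendbuf, const int sendcounts[], const int displs[],
                     MPI_Datatype sendtype, void *recvbuf, int recvcount,
                     MPI_Datatype recvtype, int root, MPI_Comm comm);
    // sendcounts[i] : number of elements sent to rank i (significant only at the root)
    // displs[i]     : offset, in elements, of rank i's block inside sendbuf (root only)
    // recvcount     : number of elements received by the calling rank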

If you want to know more about 2D arrays and MPI, look here.

You may find a basic example of MPI_Scatterv here.

I changed #define S 8 to #define SQUARE_SIZE 42; it is always better to give descriptive names.

And here is a working example using MPI_Scatterv():

#include <mpi.h>
#include <iostream> 
#include <cstdlib>

using namespace std;

#define SQUARE_SIZE 42

MPI_Status status;

int main(int argc, char *argv[])
{
    int numtasks, taskid;
    int i, j, k = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    int rows, offset, remainPart, orginalRows, height, width;
    int **a;

    height = width = SQUARE_SIZE;

    //on rank 0, let's build a big mat of int
    if(taskid == 0){ 
        a=new int*[height]; 
        int *t =new int[height * width];
        for(i = 0; i < height; ++i)
            a[i] = &t[i * width];
        for(i=0; i<height; i++)
            for(j=0; j<width; j++)
                a[i][j] = ++k;
    }

    //for everyone, let's compute the number of rows, the number of ints and the displacement for each rank. Only rank 0 uses these arrays for the scatter, but it is a convenient way to get `rows` everywhere.
    int nbrows[numtasks];
    int sendcounts[numtasks];
    int displs[numtasks];
    displs[0]=0;
    for(i=0;i<numtasks;i++){
        nbrows[i]=height/numtasks;
        if(i<height%numtasks){
            nbrows[i]=nbrows[i]+1;
        }
        sendcounts[i]=nbrows[i]*width;
        if(i>0){
            displs[i]=displs[i-1]+sendcounts[i-1];
        }
    }
    rows=nbrows[taskid];

    //scattering operation. 
    //The root is a special case: its block is already in place in the send buffer, hence the flag MPI_IN_PLACE on the receive side.
    if(taskid==0){
        MPI_Scatterv(&a[0][0],sendcounts,displs,MPI_INT,MPI_IN_PLACE,0,MPI_INT,0,MPI_COMM_WORLD);
    }else{
        //allocation of memory for the piece of mat on the other nodes.
        a=new int*[rows];
        int *t =new int[rows * width];
        for(i = 0; i < rows; ++i)
            a[i] = &t[i * width];

        MPI_Scatterv(NULL,sendcounts,displs,MPI_INT,&a[0][0],rows*width,MPI_INT,0,MPI_COMM_WORLD);
    }
    //printing, one proc at a time
    if(taskid>0){
        MPI_Status status;
        MPI_Recv(NULL,0,MPI_INT,taskid-1,0,MPI_COMM_WORLD,&status);
    }
    cout<<"rank"<< taskid<<" Rows : "<<rows<<" Width : "<<width<<endl;

    for(i=0; i<rows; i++)
    {
        for(j=0; j<width; j++)
            cout<<a[i][j]<<"";
        cout<<endl;
    }
    if(taskid<numtasks-1){
        MPI_Send(NULL,0,MPI_INT,taskid+1,0,MPI_COMM_WORLD);
    }

    //freeing the memory: a[0] is the contiguous data block, a is the array of row pointers

    delete[] a[0];
    delete[] a;

    MPI_Finalize();

    return 0;
}

To compile: mpiCC main.cpp -o main

To run: mpiexec -np 3 ./main

