
cuda - How can I add up two 2d (pitched) arrays using nested for loops?

I'm new to CUDA. I want to add two 2D arrays into a third array. I use the following code:

cudaMallocPitch((void**)&device_a, &pitch, 2*sizeof(int),2);
cudaMallocPitch((void**)&device_b, &pitch, 2*sizeof(int),2);
cudaMallocPitch((void**)&device_c, &pitch, 2*sizeof(int),2);

Now my problem is that I don't want to use these arrays as flattened 2-D arrays. In my kernel code I want to use two for loops and put the result in the third array, like this:

__global__ void add(int *dev_a, int *dev_b, int *dev_c)
{
    for (int i = 0; i < 2; i++)
    { 
      for (int j = 0; j < 2; j++)
      {
        dev_c[i][j] = dev_a[i][j] + dev_b[i][j];
      }
    }
}

How can I do this in CUDA? Please tell me how to use a 2-D array in this way.

What should the kernel call be for using a 2-D array? If possible, please explain using code samples.


1 Reply


The short answer is, you can't. The cudaMallocPitch() function does exactly what its name implies: it allocates pitched linear memory, where the pitch is chosen to be optimal for the GPU memory controller and texture hardware.
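
For reference, pitched memory allocated with cudaMallocPitch() is normally indexed inside a kernel by computing each row's base address as a byte offset, because the pitch is returned in bytes. The following is only a minimal sketch of that pattern (the kernel name add_pitched and the rows/cols parameters are illustrative, not from the question), assuming all three arrays were allocated with the same pitch:

__global__ void add_pitched(const int *dev_a, const int *dev_b, int *dev_c,
                            size_t pitch, int rows, int cols)
{
    // illustrative sketch, not the poster's code
    for (int i = 0; i < rows; i++)
    {
        // pitch is in bytes, so row base addresses are computed via char* casts
        const int *a_row = (const int *)((const char *)dev_a + i * pitch);
        const int *b_row = (const int *)((const char *)dev_b + i * pitch);
        int       *c_row = (int *)((char *)dev_c + i * pitch);
        for (int j = 0; j < cols; j++)
        {
            c_row[j] = a_row[j] + b_row[j];
        }
    }
}

On the host side, cudaMemcpy2D() is the matching call for copying data into and out of such pitched allocations.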

If you wanted to use arrays of pointers in the kernel, the kernel code would have to look like this:

__global__ void add(int *dev_a[], int *dev_b[], int *dev_c[])
{
    for (int i = 0; i < 2; i++) { 
      for (int j = 0; j < 2; j++) {
        dev_c[i][j] = dev_a[i][j] + dev_b[i][j];
      }
    }
}

and then you would need nested cudaMalloc calls on the host side to construct the array of pointers and copy it to device memory. For your rather trivial 2x2 example, the code to allocate a single array would look like this:

int ** h_a = (int **)malloc(2 * sizeof(int *)); // host-side copy of the pointer array
cudaMalloc((void**)&h_a[0], 2*sizeof(int));     // one device allocation per row
cudaMalloc((void**)&h_a[1], 2*sizeof(int));

int **d_a;
cudaMalloc((void ***)&d_a, 2 * sizeof(int *));  // device-side copy of the pointer array
cudaMemcpy(d_a, h_a, 2*sizeof(int *), cudaMemcpyHostToDevice);

This leaves the device-side array of pointers in d_a, which you would then pass to your kernel.

For code complexity and performance reasons, you really don't want to do that: using arrays of pointers in CUDA code is both harder and slower than the alternative using linear memory (see the linear-memory sketch after the full example below).


To show what folly using arrays of pointers is in CUDA, here is a complete working example of your sample problem that combines the two ideas above:

#include <cstdio>
__global__ void add(int * dev_a[], int * dev_b[], int * dev_c[])
{
    for(int i=0;i<2;i++)
    { 
        for(int j=0;j<2;j++)
        {
            dev_c[i][j]=dev_a[i][j]+dev_b[i][j];
        }
    }
}

inline void GPUassert(cudaError_t code, const char * file, int line, bool Abort=true)
{
    if (code != cudaSuccess) {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (Abort) exit(code);
    }       
}

#define GPUerrchk(ans) { GPUassert((ans), __FILE__, __LINE__); }

int main(void)
{
    const int aa[2][2]={{1,2},{3,4}};
    const int bb[2][2]={{5,6},{7,8}};
    int cc[2][2];

    int ** h_a = (int **)malloc(2 * sizeof(int *));
    for(int i=0; i<2;i++){
        GPUerrchk(cudaMalloc((void**)&h_a[i], 2*sizeof(int)));
        GPUerrchk(cudaMemcpy(h_a[i], &aa[i][0], 2*sizeof(int), cudaMemcpyHostToDevice));
    }

    int **d_a;
    GPUerrchk(cudaMalloc((void ***)&d_a, 2 * sizeof(int *)));
    GPUerrchk(cudaMemcpy(d_a, h_a, 2*sizeof(int *), cudaMemcpyHostToDevice));

    int ** h_b = (int **)malloc(2 * sizeof(int *));
    for(int i=0; i<2;i++){
        GPUerrchk(cudaMalloc((void**)&h_b[i], 2*sizeof(int)));
        GPUerrchk(cudaMemcpy(h_b[i], &bb[i][0], 2*sizeof(int), cudaMemcpyHostToDevice));
    }

    int ** d_b;
    GPUerrchk(cudaMalloc((void ***)&d_b, 2 * sizeof(int *)));
    GPUerrchk(cudaMemcpy(d_b, h_b, 2*sizeof(int *), cudaMemcpyHostToDevice));

    int ** h_c = (int **)malloc(2 * sizeof(int *));
    for(int i=0; i<2;i++){
        GPUerrchk(cudaMalloc((void**)&h_c[i], 2*sizeof(int)));
    }

    int ** d_c;
    GPUerrchk(cudaMalloc((void ***)&d_c, 2 * sizeof(int *)));
    GPUerrchk(cudaMemcpy(d_c, h_c, 2*sizeof(int *), cudaMemcpyHostToDevice));

    add<<<1,1>>>(d_a,d_b,d_c);
    GPUerrchk(cudaPeekAtLastError());

    for(int i=0; i<2;i++){
        GPUerrchk(cudaMemcpy(&cc[i][0], h_c[i], 2*sizeof(int), cudaMemcpyDeviceToHost));
    }

    for(int i=0;i<2;i++) {
        for(int j=0;j<2;j++) {
            printf("(%d,%d):%d
",i,j,cc[i][j]);
        }
    }

    return cudaThreadExit();
}

I recommend you study it until you understand what it does, and why it is such a poor idea compared to using linear memory.
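
For comparison, here is a sketch of the same 2x2 addition using ordinary linear memory: one flat allocation per matrix and row-major indexing in the kernel. The kernel name add_flat and the constant N are illustrative, not from the original post:

#include <cstdio>

#define N 2

// illustrative sketch of the linear-memory alternative
__global__ void add_flat(const int *a, const int *b, int *c)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i * N + j] = a[i * N + j] + b[i * N + j];  // row-major indexing
}

int main(void)
{
    const int aa[N][N] = {{1,2},{3,4}};
    const int bb[N][N] = {{5,6},{7,8}};
    int cc[N][N];

    int *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, N * N * sizeof(int));   // one allocation per matrix
    cudaMalloc((void **)&d_b, N * N * sizeof(int));
    cudaMalloc((void **)&d_c, N * N * sizeof(int));

    cudaMemcpy(d_a, aa, N * N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, bb, N * N * sizeof(int), cudaMemcpyHostToDevice);

    add_flat<<<1,1>>>(d_a, d_b, d_c);

    cudaMemcpy(cc, d_c, N * N * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            printf("(%d,%d):%d\n", i, j, cc[i][j]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}

A single cudaMalloc and cudaMemcpy per matrix replaces the per-row allocations and copies, and the kernel needs no pointer chasing through global memory.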

