
python - Clear memory allocated on the GPU with CUDA and Numba

I have a very large array that I would like to process on the GPU using a CUDA kernel implemented with Numba, so I split the processing into smaller steps like this:

from numba import cuda
from math import ceil
import numpy as np

my_array = np.zeros((1000000, 100))
step = 100

for ievt in range(0, my_array.shape[0], step):
    sub_array = my_array[ievt:ievt + step]
    output = np.zeros((sub_array.shape[0], 100, 100))
    TPB = (8, 8, 8)  # threads per block
    BPG = (ceil(output.shape[0] / TPB[0]),
           ceil(output.shape[1] / TPB[1]),
           ceil(output.shape[2] / TPB[2]))  # blocks per grid
    # my_kernel is a @cuda.jit kernel defined elsewhere
    my_kernel[BPG, TPB](sub_array, output)

This runs fine for the first N iterations, but after a while I get

numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY

which I don't understand: if there is enough memory for the first N iterations, why does it run out at iteration N+1? It looks like the memory for output is not being cleared between iterations. How do I clear it explicitly? Also, is there a smarter way to achieve what I am trying to do?
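
For reference, one workaround I am experimenting with is to allocate the device buffers once, outside the loop, and reuse them on every iteration, so that no new device allocation happens per step. A sketch of the idea (my_kernel is the same @cuda.jit kernel as above; this assumes step divides the array length evenly, as it does here):

from numba import cuda
from math import ceil
import numpy as np

my_array = np.zeros((1000000, 100))
step = 100

# Allocate the device buffers once; every iteration reuses them,
# so no new cuMemAlloc happens inside the loop.
d_sub = cuda.device_array((step, my_array.shape[1]))
d_output = cuda.device_array((step, 100, 100))

TPB = (8, 8, 8)
BPG = tuple(ceil(n / t) for n, t in zip(d_output.shape, TPB))

for ievt in range(0, my_array.shape[0], step):
    # Copy the current slice into the preallocated device buffer
    d_sub.copy_to_device(my_array[ievt:ievt + step])
    # Note: d_output is not re-zeroed between iterations; if my_kernel
    # assumes a zero-initialized output, reset it explicitly first
    my_kernel[BPG, TPB](d_sub, d_output)
    # Copy the result back; the device allocation itself stays alive
    output = d_output.copy_to_host()

I have also seen numba.cuda.defer_cleanup() and cuda.current_context().deallocations.clear() mentioned as ways to control when Numba actually releases device memory (Numba defers deallocations rather than freeing immediately), but I am not sure whether forcing deallocation or reusing buffers is the better approach here.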



1 Reply

Waiting for an expert to answer.
