I'll try and make this clear: I've got two classes, `GPU(Object)` for general access to GPU functionality, and `multifunc(threading.Thread)` for a particular function I'm trying to multi-device-ify. `GPU` contains most of the "first time" processing needed for all subsequent use cases, so `multifunc` gets called from `GPU` with its `self` instance passed as an `__init__` argument (along with the usual queues and such).
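For context, this is roughly how the two classes hang together. A minimal sketch only; the queue and attribute names are my shorthand for this question, not the real code:

```python
import threading
import Queue  # stdlib queue module on Python 2.7

class GPU(object):
    """Owns the one-time setup (CUDA init, kernel templating, etc.)."""
    def __init__(self):
        self.work_q = Queue.Queue()
        self.result_q = Queue.Queue()
        # ... pycuda initialisation and jinja2 templating happen here ...

    def launch(self):
        # the GPU instance hands itself to the worker thread
        worker = multifunc(self, self.work_q, self.result_q)
        worker.start()

class multifunc(threading.Thread):
    def __init__(self, gpu, work_q, result_q):
        threading.Thread.__init__(self)
        self.gpu = gpu            # reference back to the parent GPU object
        self.work_q = work_q
        self.result_q = result_q

    def run(self):
        # the per-device work (memallocs, kernel calls) happens here,
        # in a different thread from the one that built GPU
        pass
```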
Unfortunately, `multifunc` craps out with:
File "/home/bolster/workspace/project/gpu.py", line 438, in run
prepare(d_A,d_B,d_XTG,offset,grid=N_grid,block=N_block)
File "/usr/local/lib/python2.7/dist-packages/pycuda-0.94.2-py2.7-linux-x86_64.egg/pycuda/driver.py", line 158, in function_call
func.set_block_shape(*block)
LogicError: cuFuncSetBlockShape failed: invalid handle
First port of call was of course the block dimensions, but they are well within range (I get the same behaviour even if I force `block=(1,1,1)`, and likewise for grid).
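To show the launch configuration itself is legal, here is a self-contained single-threaded example with the same call shape and comparably sized block dimensions. The kernel body, array sizes, and names below are invented for illustration; only the call pattern matches my code:

```python
import numpy as np
import pycuda.autoinit                     # one thread, one context
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

# Stand-in kernel: the real 'prepare' signature differs, this just
# reproduces the argument/launch shape from the traceback.
mod = SourceModule("""
__global__ void prepare(float *a, float *b, float *xtg, int offset)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    a[i] = b[i] * xtg[i] + offset;
}
""")
prepare = mod.get_function("prepare")

N = 256
d_A = cuda.mem_alloc(N * np.float32().nbytes)
d_B = cuda.to_device(np.ones(N, dtype=np.float32))
d_XTG = cuda.to_device(np.ones(N, dtype=np.float32))
offset = np.int32(0)

N_grid, N_block = (1, 1), (N, 1, 1)        # well within device limits
prepare(d_A, d_B, d_XTG, offset, grid=N_grid, block=N_block)   # no error here
```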
Basically, within `multifunc` all of the usual CUDA memalloc etc. functions work fine (implying it's not a context problem), so the problem must be with the `SourceModule`-ing of the kernel function itself.
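In other words, inside the thread things like this behave normally (a sketch; the array and variable names are assumptions):

```python
# inside multifunc.run()
import numpy as np
import pycuda.driver as cuda

A = np.random.rand(1024).astype(np.float32)
d_A = cuda.mem_alloc(A.nbytes)           # fine
cuda.memcpy_htod(d_A, A)                 # fine
A_back = cuda.from_device_like(d_A, A)   # fine, round-trips correctly
# ...yet the kernel call shown in the traceback raises the LogicError.
```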
I have a kernel template containing all my CUDA code that's file-scoped, and templating is done with jinja2 in the `GPU` initialisation. Regardless of whether that templated object is converted to a `SourceModule` object in `GPU` and passed to `multifunc`, or whether it's converted in `multifunc` itself, the same thing happens.
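The templating step, in either placement, boils down to something like this (a sketch; the helper function and its parameters are invented here, not my actual code):

```python
from jinja2 import Template
from pycuda.compiler import SourceModule

def build_module(kernel_template, **params):
    """Render the file-scoped jinja2 kernel template and compile it."""
    kernel_source = Template(kernel_template).render(**params)
    return SourceModule(kernel_source)

# Variant A: called in GPU.__init__, the module passed into multifunc.
# Variant B: the rendered source passed to multifunc, compiled in run().
# Both variants fail identically when the kernel function is called.
```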
Google has been largely useless for this particular issue, but following the stack trace, I'm assuming the invalid handle being referred to is the kernel function handle rather than anything strange going on with the block dimensions.
I'm aware this is a very corner-case situation, but I'm sure someone can see a problem that I've missed.
See Question&Answers more detail:
os