First of all, I've read multiple forums, papers, and articles on the subject.
Until now I hadn't needed to use a GPU in my processes, but they have become more demanding. The situation is that I have a somewhat complex function written in Python, vectorized with NumPy and decorated with @jit (Numba) to make it run faster.
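A minimal sketch of that kind of setup, using a hypothetical stand-in for the real (more complex) function:

```python
import numpy as np
from numba import jit

@jit(nopython=True)
def my_function(params):
    # Hypothetical stand-in: the real function is more complex, but is
    # likewise vectorized with NumPy and compiled with @jit.
    x = np.arange(1000.0)
    return np.sum(np.sin(x * params[0]) + np.cos(x * params[1]))
```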
However, my GPU (AMD) shows no usage in the task manager (0%). I have seen PyOpenCL, but I want to know if there is something simpler than translating the code. The function itself is fast; the problem is that I need to iterate it 18 million times, which currently takes me about 5 hours. I know I can split the work across multiple processes with multiprocessing on the CPU, but I want to use my GPU. Is there an 'easy' way to split the task on the GPU?
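For reference, a rough sketch of the CPU multiprocessing route I already know about (not what I'm asking for), reusing the hypothetical my_function above; assume it lives in an importable module so the worker processes can pickle it:

```python
import numpy as np
from multiprocessing import Pool

from my_module import my_function  # hypothetical module holding the @jit function above

if __name__ == "__main__":
    # Hypothetical stand-in for the 18 million real parameter sets.
    param_sets = np.random.rand(18_000_000, 2)
    with Pool() as pool:
        # chunksize keeps the inter-process communication overhead manageable.
        results = pool.map(my_function, param_sets, chunksize=10_000)
```

What I'm looking for is something comparably simple that targets the (AMD) GPU instead.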
question from:
https://stackoverflow.com/questions/65924564/is-there-an-easy-way-of-parallel-processing-with-gpu-with-a-defined-python-funct