
Is there an easy way to do parallel processing on the GPU with a user-defined Python function?

First of all, I've read multiple forums, papers, and articles on the subject. I had never needed to use a GPU in my processes, but they have become more demanding. The problem is that I have a somewhat complex function written in Python, vectorized with NumPy and decorated with @jit to make it run faster.

However, my GPU (AMD) shows 0% usage in the task manager. I have looked at PyOpenCL, but I would like to know whether there is something simpler than translating the code. The function itself is fast; the problem is that I want to run it 18 million times, which currently takes me 5 hours even split across multiple processes. I know I can use multiprocessing on the CPU, but I want to use my GPU. Is there an 'easy' way to split the task across the GPU? For reference, the current setup looks roughly like the sketch below.
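A minimal sketch of the setup described (the names and the function body are hypothetical stand-ins; the real function is more complex):

import numpy as np
from numba import jit

@jit(nopython=True)
def my_function(x):
    # placeholder for the somewhat complex NumPy-vectorized computation
    return np.sqrt(x * x + 1.0)

xs = np.random.rand(1_000)                       # in reality ~18 million inputs
results = np.array([my_function(x) for x in xs])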

question from: https://stackoverflow.com/questions/65924564/is-there-an-easy-way-of-parallel-processing-with-gpu-with-a-defined-python-funct


1 Reply


There has been some discussion about whether Numba can compile code for the GPU automatically. I believe it used to be able to, but that approach is now deprecated. The other approach is to use @numba.cuda.jit and write the code in terms of CUDA blocks, threads, and so on. It works well, and with it you enter the big and fascinating (I am not joking) world of CUDA programming. For example, you could parallelize runs of your big function over different parameters; you might not even need to rewrite the function itself for this. A rough sketch of that pattern follows.
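A minimal sketch of the @numba.cuda.jit approach, with one parameter per GPU thread. The kernel body is a hypothetical stand-in for the big function, and note that @numba.cuda.jit targets NVIDIA GPUs through CUDA:

import numpy as np
from numba import cuda

@cuda.jit
def run_for_params(params, out):
    i = cuda.grid(1)                  # global thread index
    if i < params.shape[0]:           # guard threads past the end of the array
        x = params[i]
        out[i] = x * x + 1.0          # placeholder for the real computation

params = np.linspace(0.0, 1.0, 18_000_000).astype(np.float32)
out = np.empty_like(params)

threads_per_block = 256
blocks = (params.shape[0] + threads_per_block - 1) // threads_per_block
# host arrays are copied to the GPU automatically on launch
run_for_params[blocks, threads_per_block](params, out)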


