The Ultimate Way to Resolve This Problem
Because all the C / C++ files are compiled by the make command, and make has an option that specifies how many CPU cores should be used to compile the source code, we can play some tricks on make.
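For reference, this is the flag the trick builds on. Run directly, standard GNU make accepts either spelling below; 6 is just an example core count:

make --jobs=6
make -j6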
Back up your original make command:
sudo cp /usr/bin/make /usr/bin/make.bak
write a "fake" make
command, which will append --jobs=6
to its parameter list and pass them to the original make command make.bak
:
#!/bin/sh
exec make.bak --jobs=6 "$@"
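If you create the fake make as a brand-new file rather than overwriting the original in place, it also needs the executable bit; this is plain chmod, assuming the wrapper lives at /usr/bin/make:

sudo chmod +x /usr/bin/make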
After that, not only Python packages with C libs, but anything else containing C libs will compile on 6 cores. In fact, everything built with the make command will speed up. Good luck.
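If replacing /usr/bin/make feels too invasive, GNU make also reads flags from the MAKEFLAGS environment variable, so the same parallelism can be requested for a single command; this assumes the package's build really shells out to make:

MAKEFLAGS="-j6" pip3 install PyXXX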
Use: --install-option="--jobs=6" (pip docs).
pip3 install --install-option="--jobs=6" PyXXX
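The job count can also be taken from the machine instead of hard-coded; nproc is a standard coreutils command, and PyXXX is still a placeholder package name:

pip3 install --install-option="--jobs=$(nproc)" PyXXX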
I had the same need: using pip install to speed up the compile step. My target package was PySide. At first I used pip3 install pyside, and it took nearly 30 minutes (AMD 1055T 6 cores, 10 GB RAM), with only one core at 100% load.
There were no clues in pip3 --help, but I found lots of options like pip install -u pyXXX; I didn't know what '-u' was, and that parameter wasn't in pip --help either. I tried pip3 install --help, and there was the answer: --install-option.
I read PySide's setup code and found another clue: OPTION_JOBS = has_option('jobs'). I put ipdb.set_trace() there and finally understood how to compile on multiple cores through pip install. This time it took me about 6 minutes.
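Whether --jobs is honored at all depends on the package's setup.py, so it can be worth grepping the source first. A sketch of one way to do that, assuming the sdist unpacks to the pattern below:

pip3 download --no-deps --no-binary :all: pyside -d /tmp/pyside-src
cd /tmp/pyside-src && tar xzf PySide-*.tar.gz
grep -n "jobs" PySide-*/setup.py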
--------------------------update------------------------------
As suggested in the comments below, I finally used a trick like this:

cd /usr/bin
sudo mv make make.bak
sudo touch make

Then edit make (sudo vim make, or any other way you like) and put this in it:

#!/bin/sh
exec make.bak --jobs=6 "$@"

Finally, make the new file executable:

sudo chmod +x make
I'm not familiar with bash, so apologies if the code above isn't perfect; I'm writing this update on Windows. The key is to rename make to make.bak, then create a new make that calls make.bak with the added parameter --jobs=6.
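To undo the trick later and restore the stock make:

sudo mv /usr/bin/make.bak /usr/bin/make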
He who fights dragons for too long becomes a dragon himself; gaze too long into the abyss, and the abyss will gaze back into you…