I have n files to analyze, separately and independently of each other, with the same Python script analysis.py. In a wrapper script, wrapper.py, I loop over those files and call analysis.py as a separate process with subprocess.Popen:
import shlex
import subprocess

for a_file in all_files:
    # Launch one analysis per file and wait for it to finish
    # before starting the next, so everything runs serially.
    command = "python analysis.py %s" % a_file
    analysis_process = subprocess.Popen(
        shlex.split(command),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)
    analysis_process.wait()
Now I would like to use all k CPU cores of my machine to speed up the whole analysis. Is there a way to keep k-1 processes running at all times, as long as there are still files left to analyze?
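I imagine something along these lines might work: a thread pool used purely to throttle how many child processes run at once, since the real work happens in the subprocesses anyway. This is a rough sketch I have not settled on (it assumes Python 3.5+ for subprocess.run, and the file list here is a placeholder):

import os
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor

all_files = ["data1.txt", "data2.txt"]  # hypothetical file list

def run_analysis(a_file):
    # Each call blocks its worker thread until the child process exits.
    command = "python analysis.py %s" % a_file
    return subprocess.run(
        shlex.split(command),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)

k = os.cpu_count() or 1
# At most k-1 analyses run concurrently; a new one starts as soon as
# a worker frees up, until all_files is exhausted.
with ThreadPoolExecutor(max_workers=max(k - 1, 1)) as pool:
    results = list(pool.map(run_analysis, all_files))

Threads seem acceptable here despite the GIL, because each worker just waits on an external process. But I would like to know whether this is the right approach or whether there is a more standard pattern for it.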