I'm new to multiprocessing in Python and trying to figure out if I should use Pool or Process for calling two functions asynchronously. The two functions I have make curl calls and parse the information into two separate lists. Depending on the internet connection, each function could take about 4 seconds. I realize that the bottleneck is in the ISP connection and multiprocessing won't speed it up much, but it would be nice to have them both kick off asynchronously. Plus, this is a great learning experience for me to get into Python's multiprocessing because I will be using it more later.
I have read "Python multiprocessing.Pool: when to use apply, apply_async or map?" and it was useful, but I still had my own questions.
So one way I could do it is:
from multiprocessing import Process

def foo():
    pass

def bar():
    pass

p1 = Process(target=foo, args=())
p2 = Process(target=bar, args=())
p1.start()
p2.start()
p1.join()  # wait for p1 to exit
p2.join()  # wait for p2 to exit
The question I have for this implementation is:
1) Since join() blocks the calling process until the process it's called on finishes... does this mean p1 has to finish before p2 is kicked off? I always understood .join() to be the same as pool.apply() and pool.apply_async().get(), where the parent process can't launch another process (task) until the current one running is completed. (I've put a small timing sketch after this paragraph to test this.)
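To check this myself, here's a minimal timing sketch; the time.sleep(4) calls are just my stand-ins for the ~4 second curl calls:

import time
from multiprocessing import Process

def foo():
    time.sleep(4)  # stand-in for the curl-and-parse work

def bar():
    time.sleep(4)  # stand-in for the curl-and-parse work

if __name__ == '__main__':
    start = time.time()
    p1 = Process(target=foo)
    p2 = Process(target=bar)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    # roughly 4 seconds would mean concurrent, roughly 8 would mean serial
    print('elapsed:', time.time() - start)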
The other alternative would be something like:
from multiprocessing import Pool

def foo():
    pass

def bar():
    pass

pool = Pool(processes=2)
p1 = pool.apply_async(foo)
p2 = pool.apply_async(bar)
Questions I have for this implementation would be:
1) Do I need pool.close() and pool.join()?
2) Would pool.map() make them all complete before I could get results? And if so, are they still run asynchronously?
3) How would pool.apply_async() differ from calling each function with pool.apply()? (I've sketched the variants I mean after this list.)
4) How would this differ from the previous implementation with Process?
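For reference, here's the kind of side-by-side sketch I have in mind; foo and bar just return placeholder strings instead of doing the real curl work, and I'm going by the docs that apply_async returns an AsyncResult whose .get() blocks until the result is ready:

from multiprocessing import Pool

def foo():
    return 'foo result'  # placeholder for the real curl-and-parse work

def bar():
    return 'bar result'

if __name__ == '__main__':
    pool = Pool(processes=2)

    # apply_async: both tasks are submitted immediately and can run
    # concurrently; .get() blocks only when each result is requested
    r1 = pool.apply_async(foo)
    r2 = pool.apply_async(bar)
    print(r1.get(), r2.get())

    # apply: blocks until foo returns before bar is even submitted,
    # so the two calls run one after the other
    print(pool.apply(foo), pool.apply(bar))

    # map: one function over an iterable of arguments; it blocks until
    # every result is ready, although the workers run in parallel
    print(pool.map(len, ['spam', 'eggs']))

    pool.close()  # no more tasks will be submitted to the pool
    pool.join()   # wait for the worker processes to exit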