
python - Keep unified count during multiprocessing?

I have a Python program that runs a Monte Carlo simulation to answer probability questions. I am using multiprocessing; here it is in pseudocode:

import multiprocessing

iterations = 0  # my attempt at a global counter -- this does NOT work across processes

def runmycode(result_queue):
    global iterations
    print("Requested...")
    while True:
        iterations += 1
        if "result found (for example)":  # placeholder for the real check
            result_queue.put("result!")
            break
    print("Done")

processes = []
result_queue = multiprocessing.Queue()

for n in range(4):  # start 4 processes
    process = multiprocessing.Process(target=runmycode, args=(result_queue,))
    process.start()
    processes.append(process)

print("Waiting for result...")

result = result_queue.get()  # block until one process reports a result

for process in processes:  # then kill them all off
    process.terminate()

print("Got result:", result)

I'd like to extend this so that I can keep a unified count of the number of iterations that have been run across all processes. For example, if process 1 has run 100 iterations and process 2 has run 100, I want to print 200 total iterations to the console. I am referring to the iterations variable in runmycode. How can I make sure that ALL processes add to the same variable? I thought that using a global version of iterations would work, but it does not.


1 Reply


Normal global variables are not shared between processes the way they are shared between threads. You need a process-aware data structure. For your use case, a multiprocessing.Value should work fine:

import multiprocessing

def runmycode(result_queue, iterations):
    print("Requested...")
    while True:  # infinite loop; replace with your real termination logic
        with iterations.get_lock():  # need a lock because incrementing isn't atomic
            iterations.value += 1
        if "result found (for example)":  # placeholder for the real check
            result_queue.put("result!")
            break
    print("Done")


if __name__ == "__main__":
    processes = []
    result_queue = multiprocessing.Queue()

    iterations = multiprocessing.Value('i', 0)
    for n in range(4):  # start 4 processes
        process = multiprocessing.Process(target=runmycode, args=(result_queue, iterations))
        process.start()
        processes.append(process)

    print("Waiting for result...")

    result = result_queue.get()  # block until the first result arrives

    for process in processes:  # then kill them all off
        process.terminate()

    print("Got result: {}".format(result))
    print("Total iterations {}".format(iterations.value))

A few notes:

  1. I explicitly passed the Value to the children, to keep the code compatible with Windows, which can't share read/write global variables between parent and child processes.
  2. I protected the increment with a lock, because it's not an atomic operation and is susceptible to race conditions (see the sketch after this list).
  3. I added an if __name__ == "__main__": guard, again to help with Windows compatibility, and as a general best practice.
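
To make point 2 concrete, here is a minimal sketch of the lost-update problem (my own illustration; names like bump are not from the answer above). A multiprocessing.Value carries its own lock, but counter.value += 1 is still a separate read followed by a separate write, so concurrent increments are lost unless the lock is held across the whole operation:

import multiprocessing

def bump(counter, use_lock):
    for _ in range(100000):
        if use_lock:
            with counter.get_lock():
                counter.value += 1  # read-modify-write done atomically under the lock
        else:
            counter.value += 1  # the read and the write are separate steps, so updates can be lost

if __name__ == "__main__":
    for use_lock in (False, True):
        counter = multiprocessing.Value('i', 0)
        procs = [multiprocessing.Process(target=bump, args=(counter, use_lock))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # Expect 400000 with the lock; typically far less without it.
        print("lock={}: {}".format(use_lock, counter.value))

If taking the lock on every iteration becomes a bottleneck, a common variation is to count in a plain local variable inside each process and add it to the shared Value in batches (say, every few thousand iterations), trading a slightly stale total for much less lock contention.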
