This simply defines the maximum size of the internal training queue, which is used to "precache" samples from your generator. The parameter is used when the queue is built:
import queue
import threading
import time

def generator_queue(generator, max_q_size=10,
                    wait_time=0.05, nb_worker=1):
    '''Builds a threading queue out of a data generator.
    Used in `fit_generator`, `evaluate_generator`, `predict_generator`.
    '''
    q = queue.Queue()
    _stop = threading.Event()

    def data_generator_task():
        while not _stop.is_set():
            try:
                # only pull from the generator while the queue is not full
                if q.qsize() < max_q_size:
                    try:
                        generator_output = next(generator)
                    except ValueError:
                        continue
                    q.put(generator_output)
                else:
                    time.sleep(wait_time)
            except Exception:
                _stop.set()
                raise

    generator_threads = [threading.Thread(target=data_generator_task)
                         for _ in range(nb_worker)]
    for thread in generator_threads:
        thread.daemon = True
        thread.start()

    return q, _stop
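To get a feel for the interface, here is a minimal sketch of calling it directly; batch_generator is my own toy stand-in for a real data generator, not Keras code:

# a toy infinite generator standing in for a real data generator
def batch_generator():
    i = 0
    while True:
        yield ('X_batch_%d' % i, 'y_batch_%d' % i)
        i += 1

data_gen_queue, _stop = generator_queue(batch_generator(), max_q_size=10)

batch = data_gen_queue.get()   # consume one precached element
print(batch)                   # ('X_batch_0', 'y_batch_0')

_stop.set()                    # signal the worker thread to exit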
In other words, you have a worker thread filling the queue directly from your generator, up to the given maximum capacity, while the training routine (for example) consumes its elements, sometimes waiting for the queue to be refilled:
# consumer side, inside `fit_generator`: pull the next output
# from the queue, waiting if the workers have not produced one yet
while samples_seen < samples_per_epoch:
    generator_output = None
    while not _stop.is_set():
        if not data_gen_queue.empty():
            generator_output = data_gen_queue.get()
            break
        else:
            time.sleep(wait_time)
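To see the bounded prefetching in action, here is a small experiment of my own (not Keras code) with a deliberately slow generator; the queue fills up to max_q_size and then plateaus, because the worker sleeps whenever the queue is full:

import time

def slow_generator():
    # pretend each batch takes 0.01 s to produce
    while True:
        time.sleep(0.01)
        yield 'batch'

q, _stop = generator_queue(slow_generator(), max_q_size=10)

time.sleep(1.0)      # give the worker time to run ahead
print(q.qsize())     # ~10: the worker stopped at max_q_size

q.get()              # consuming one element frees a slot...
time.sleep(0.1)
print(q.qsize())     # ...which the worker promptly refills to ~10

_stop.set()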
And why a default of 10? No particular reason; like most defaults it is simply a sensible value, and you could use different values too.
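One practical consideration when picking a different value: each queue slot holds one full generator output (typically a whole batch), so the precache can hold up to max_q_size batches in memory at once. In the Keras version this code comes from, the value is exposed as the max_q_size argument of fit_generator; roughly like this, assuming model is a compiled Keras 1.x model and my_generator yields (X, y) batches:

# with batches of 256 samples, the queue can precache up to
# max_q_size * 256 samples at once, so a smaller queue can be
# a sensible trade-off when individual batches are very large
model.fit_generator(my_generator,
                    samples_per_epoch=50000,
                    nb_epoch=10,
                    max_q_size=5)   # precache at most 5 batches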
A construction like this suggests that the authors had expensive data generators in mind, ones that might take a long time to execute. Consider, for example, downloading data over a network in the generator call: then it makes sense to precache the next few batches, and to download the following ones in parallel, both for efficiency and for robustness to network errors.
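As an illustration of that scenario, here is a sketch of a generator whose next() call is dominated by network latency; the URL and the parse_batch helper are placeholders of my own, not a real endpoint or API. Fed through generator_queue, the download of upcoming batches overlaps with the training step on the current one:

import urllib.request

def remote_batch_generator(batch_urls):
    # each next() blocks on a network round trip, so running it in
    # a background thread lets training overlap with the I/O
    for url in batch_urls:
        with urllib.request.urlopen(url) as response:
            raw = response.read()
        yield parse_batch(raw)   # parse_batch: hypothetical decoder

urls = ['http://example.com/batches/%d' % i for i in range(100)]
q, _stop = generator_queue(remote_batch_generator(urls), max_q_size=10)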