python - Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads

Can somebody please explain the following TensorFlow terms

  1. inter_op_parallelism_threads

  2. intra_op_parallelism_threads

or point me to the right source for an explanation.

I have run a few tests with different values for these parameters, but the results have not been consistent enough to draw a conclusion.


1 Reply


The inter_op_parallelism_threads and intra_op_parallelism_threads options are documented in the source of the tf.ConfigProto protocol buffer. These options configure two thread pools that TensorFlow uses to parallelize execution, as the comments describe (note that the = 2 and = 5 below are protocol buffer field numbers, not default values):

// The execution of an individual op (for some op types) can be
// parallelized on a pool of intra_op_parallelism_threads.
// 0 means the system picks an appropriate number.
int32 intra_op_parallelism_threads = 2;

// Nodes that perform blocking operations are enqueued on a pool of
// inter_op_parallelism_threads available in each process.
//
// 0 means the system picks an appropriate number.
//
// Note that the first Session created in the process sets the
// number of threads for all future sessions unless use_per_session_threads is
// true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
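
For reference, here is a minimal sketch of how these fields are typically set from Python. It assumes the TF 1.x-style Session API (available as tf.compat.v1 in TensorFlow 2); the thread counts are arbitrary values chosen for illustration:

import tensorflow as tf

# Configure both thread pools explicitly; leaving either at 0 (the default)
# lets TensorFlow choose an appropriate size itself.
config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=4,  # threads available inside a single op
    inter_op_parallelism_threads=2)  # independent ops scheduled concurrently

sess = tf.compat.v1.Session(config=config)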

There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:

  • If you have an operation that can be parallelized internally, such as matrix multiplication (tf.matmul()) or a reduction (e.g. tf.reduce_sum()), TensorFlow will execute it by scheduling tasks in a thread pool with intra_op_parallelism_threads threads. This configuration option therefore controls the maximum parallel speedup for a single operation. Note that if you run multiple operations in parallel, these operations will share this thread pool.

  • If you have many operations that are independent in your TensorFlow graph (because there is no directed path between them in the dataflow graph), TensorFlow will attempt to run them concurrently, using a thread pool with inter_op_parallelism_threads threads. If those operations have a multithreaded implementation, they will (in most cases) share the same thread pool for intra-op parallelism (see the sketch after this list).
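
To make the two cases concrete, here is a small sketch (the matrix sizes and thread counts are arbitrary choices for illustration). The tensors c and d have no directed path between them, so the runtime may dispatch them concurrently on the inter-op pool, while each tf.matmul() can itself be split across the intra-op pool:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.random.uniform((2000, 2000))
b = tf.random.uniform((2000, 2000))
c = tf.matmul(a, a)  # independent of d
d = tf.matmul(b, b)  # independent of c

config = tf.compat.v1.ConfigProto(
    inter_op_parallelism_threads=2,  # c and d can be dispatched concurrently
    intra_op_parallelism_threads=4)  # each matmul may use up to 4 threads

with tf.compat.v1.Session(config=config) as sess:
    sess.run([c, d])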

Finally, both configuration options take a default value of 0, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.
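
If you are on TensorFlow 2.x and not using sessions at all, the same two knobs are exposed through tf.config.threading. This is a sketch under the assumption that it runs before any op has executed, since the thread pools cannot be changed once the runtime has initialized them:

import tensorflow as tf

# Must be called before the first op runs; otherwise TensorFlow may refuse
# to change the already-initialized thread pools.
tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(2)

print(tf.config.threading.get_intra_op_parallelism_threads())  # 4
print(tf.config.threading.get_inter_op_parallelism_threads())  # 2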

