I am operating on multiple graphs simultaneously. For example, I would like one graph to use the CPU and the other to use the GPU. How can I achieve this?
Current approach and its problems
When I create a tf.Session() with a tf.ConfigProto as follows, it does not work and the session still uses a GPU:

    import tensorflow as tf

    config = tf.ConfigProto(
        device_count={'GPU': 0}
    )
    sess = tf.Session(config=config)
Instead, I have to set the environment variable CUDA_VISIBLE_DEVICES to disable the GPU, and later call os.unsetenv() to remove the variable once that work is done.
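A minimal sketch of this workaround (assuming a value of '-1' hides all GPUs, which is what I rely on):

    import os
    import tensorflow as tf

    # Hide every GPU from CUDA before any session is created
    os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

    cpu_graph = tf.Graph()
    with cpu_graph.as_default():
        a = tf.constant([1.0, 2.0])
        b = a * 2.0

    with tf.Session(graph=cpu_graph) as sess:
        print(sess.run(b))  # runs on CPU because no GPU is visible

    # Remove the variable again before working with the GPU graph
    os.unsetenv('CUDA_VISIBLE_DEVICES')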
Neither solution works for me, because I want one graph to use the GPUs while the other does not, and setting os.environ affects both graphs since the environment variable applies to the whole process. How can I achieve per-graph device placement?
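To illustrate the goal (not a working solution), this is the kind of setup I have in mind, with both graphs in the same process; cpu_graph should stay off the GPU while gpu_graph is free to use it:

    import tensorflow as tf

    cpu_graph = tf.Graph()
    with cpu_graph.as_default():
        x = tf.constant(3.0)
        y = x + 1.0          # should run on CPU only

    gpu_graph = tf.Graph()
    with gpu_graph.as_default():
        u = tf.constant(3.0)
        v = u * 2.0          # should be free to use the GPU

    # Desired: some per-session (not process-wide) way to keep
    # cpu_graph off the GPU while gpu_graph still uses it.
    with tf.Session(graph=cpu_graph) as sess:
        print(sess.run(y))
    with tf.Session(graph=gpu_graph) as sess:
        print(sess.run(v))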