I have implemented a TensorFlow DNN model (2 hidden layers with elu activation functions trained on MNIST) as a Python class in order to wrap TF calls within another library with its own optimization routines and tools.
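The class looks roughly like this (a simplified sketch rather than my exact code; the MnistDNN name, layer sizes and variable names are just illustrative, the full version is in the gist linked at the end):

```
import tensorflow as tf

class MnistDNN(object):
    """Two hidden layers with elu activations; the session is driven by an
    external optimization library, so the graph only exposes loss/gradients."""

    def __init__(self, dtype=tf.float64, device='/gpu:0', n_hidden=256):
        self.graph = tf.Graph()
        with self.graph.as_default(), tf.device(device):
            self.x = tf.placeholder(dtype, [None, 784], name='x')
            self.y = tf.placeholder(dtype, [None, 10], name='y')

            w1 = tf.Variable(tf.truncated_normal([784, n_hidden], stddev=0.1, dtype=dtype))
            b1 = tf.Variable(tf.zeros([n_hidden], dtype=dtype))
            h1 = tf.nn.elu(tf.matmul(self.x, w1) + b1)

            w2 = tf.Variable(tf.truncated_normal([n_hidden, n_hidden], stddev=0.1, dtype=dtype))
            b2 = tf.Variable(tf.zeros([n_hidden], dtype=dtype))
            h2 = tf.nn.elu(tf.matmul(h1, w2) + b2)

            w3 = tf.Variable(tf.truncated_normal([n_hidden, 10], stddev=0.1, dtype=dtype))
            b3 = tf.Variable(tf.zeros([10], dtype=dtype))
            logits = tf.matmul(h2, w3) + b3

            # Mean over the batch -- this is the Mean op I see placed on the CPU
            cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
                logits=logits, labels=self.y)
            self.loss = tf.reduce_mean(cross_entropy)
            self.gradients = tf.gradients(self.loss, [w1, b1, w2, b2, w3, b3])
```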
When running some tests on a Tesla K20 I noticed that the GPU was being used at only 4% of its total capacity. I therefore looked more closely at the log-device-placement output and found that all the critical operations, such as MatMul, Sum, Add and Mean, were being assigned to the CPU.
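For reference, I am getting the placement log through the standard session config options (a minimal sketch of the setup I use):

```
import tensorflow as tf

# Print the device each op is assigned to; soft placement lets TF fall back
# to the CPU when an op has no kernel for the requested device/dtype.
config = tf.ConfigProto(log_device_placement=True,
                        allow_soft_placement=True)
sess = tf.Session(config=config)
```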
The first thing that came to mind was that this was because I was using dtype=float64, so I switched to dtype=float32. While many more operations were then assigned to the GPU, a good number were still assigned to the CPU, such as Mean, gradient/Mean_grad/Prod and gradient/Mean.
So here comes my first question (I link a working code example at the end):
1) Why would that be? I have written other TF models that consist of simple tensor multiplications and reductions, and they run fully on the GPU as long as I use single precision.
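For example, a toy graph along these lines (illustrative, not one of my actual models) gets placed entirely on the GPU when I use float32:

```
import numpy as np
import tensorflow as tf

dtype = tf.float32  # with tf.float64 some of these ops end up on the CPU
with tf.device('/gpu:0'):
    a = tf.placeholder(dtype, [None, 1000])
    w = tf.Variable(tf.random_normal([1000, 1000], dtype=dtype))
    prod = tf.matmul(a, w)        # MatMul
    total = tf.reduce_sum(prod)   # Sum

config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(total, feed_dict={a: np.random.rand(64, 1000).astype(np.float32)})
```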
So here comes the second question:
2) Why does TF assign the graph to different devices depending on the data type? I understand that not all kernels are implemented for the GPU, but I would have thought that operations like MatMul could run on the GPU for both single and double precision.
3) Could the fact that the model is wrapped in a Python class have an effect? I do not think so because, as I said, this did not happen for other models that were wrapped similarly but were simpler.
4) What sort of steps can I take to run the model fully on a GPU?
Here is a full working example of my code, isolated from my library:
https://gist.github.com/smcantab/8ecb679150a327738102
If you run it and look at the output, you'll see how the different parts of the graph have been assigned to different devices. To see how this changes with the data type and device, change dtype and device within main() at the end of the example. Note that if I set allow_soft_placement=False the graph fails to initialize.
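In main() the relevant knobs look roughly like this (a simplified sketch using the illustrative MnistDNN class from above; the gist has the real code):

```
import tensorflow as tf

def main():
    dtype = tf.float32    # or tf.float64
    device = '/gpu:0'     # or '/cpu:0'

    model = MnistDNN(dtype=dtype, device=device)
    with model.graph.as_default():
        init = tf.initialize_all_variables()

    # Note: with allow_soft_placement=False the graph fails to initialize.
    config = tf.ConfigProto(log_device_placement=True,
                            allow_soft_placement=True)
    with tf.Session(graph=model.graph, config=config) as sess:
        sess.run(init)

if __name__ == '__main__':
    main()
```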
Any word of advice would be really appreciated.