More specifically, I have a simple fprop (forward pass) that is a composition of tf operations. I want to override the TensorFlow gradient computation for that fprop with my own gradient, using RegisterGradient.
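For context, my understanding of the RegisterGradient / gradient_override_map mechanism is roughly the following (a minimal sketch; ZeroGrad and the variable names are my own placeholders):

import tensorflow as tf
from tensorflow.python.framework import ops

@ops.RegisterGradient("ZeroGrad")
def _zero_grad(op, grad):
    # Replace the incoming gradient with zeros shaped like the op's input.
    return tf.zeros_like(op.inputs[0])

g = tf.get_default_graph()
x = tf.constant(3.0)
with g.gradient_override_map({"Identity": "ZeroGrad"}):
    y = tf.identity(x)
dy_dx = tf.gradients(y, x)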
What's wrong with this code?
import tensorflow as tf
from tensorflow.python.framework import ops

@ops.RegisterGradient("MyopGrad")
def frop_grad(op, grad):
    x = op.inputs[0]
    return 0 * x  # zero out the gradient to see the difference

def fprop(x):
    x = tf.sqrt(x)
    out = tf.maximum(x, .2)
    return out

a = tf.Variable(tf.constant([5., 4., 3., 2., 1.], dtype=tf.float32))
h = fprop(a)
h = tf.identity(h, name="Myop")
grad = tf.gradients(h, a)

g = tf.get_default_graph()
with g.gradient_override_map({'Myop': 'MyopGrad'}):
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        result = sess.run(grad)
        print(result[0])
I expect to see all zeros in the printed result, but instead I get:
[ 0.2236068 0.25000003 0.28867513 0.35355341 0.5 ]
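For what it's worth, these numbers look like the unmodified default gradient of my fprop, i.e. d/dx sqrt(x) = 1/(2*sqrt(x)) (the maximum with .2 never binds here, since every sqrt value exceeds .2). A quick NumPy check confirms this (a hypothetical verification snippet, not part of my graph):

import numpy as np

x = np.array([5., 4., 3., 2., 1.])
print(1. / (2. * np.sqrt(x)))
# [0.2236068  0.25  0.28867513  0.35355339  0.5]
# matches the printed result up to float32 rounding

So the override apparently never takes effect.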