I have implemented a custom training loop for my LSTM model:
with tf.GradientTape() as tape:
    y_pred = model(x_batch, training=True)
    loss_value = loss_fn(y_true=y_batch, y_pred=y_pred)

# calculate gradients
gradients = tape.gradient(loss_value, model.trainable_variables)
# clip the gradients to a global norm of 5.0
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
# update the weights
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
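In case it helps to run the step in isolation, here is a self-contained sketch with a toy LSTM and random tensors standing in for my real model and data pipeline (those parts are placeholders for illustration, not my actual setup):

import tensorflow as tf

# Toy stand-ins so the training step above can be executed on its own.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(10, 4)),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()  # plain Adam here; the schedule question is below

x_batch = tf.random.normal([32, 10, 4])  # (batch, timesteps, features)
y_batch = tf.random.normal([32, 1])

with tf.GradientTape() as tape:
    y_pred = model(x_batch, training=True)
    loss_value = loss_fn(y_true=y_batch, y_pred=y_pred)
gradients = tape.gradient(loss_value, model.trainable_variables)
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))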
and I'm trying to use a learning rate schedule:
boundaries = [100000, 110000]
lr_values = [1.0, 0.5, 0.1]
learning_rate_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, lr_values)
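The snippet above only builds the schedule; my understanding is that it then has to be passed as the optimizer's learning_rate, roughly like this (Adam is an assumption on my part, chosen only because it is the class named in the traceback):

# Wiring the schedule defined above into the optimizer.
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate_fn)
# The schedule is evaluated against the optimizer's step counter on every
# apply_gradients() call: the rate is 1.0 for the first 100000 steps,
# then 0.5 until step 110000, and 0.1 afterwards.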
However, I get this error message when the code calls optimizer.apply_gradients(zip(gradients, model.trainable_variables)):
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function train_model at 0x2ba5bde90dc0>, found return value of type <class 'tensorflow.python.keras.optimizer_v2.adam.Adam'>, which is not a Tensor.
question from: https://stackoverflow.com/questions/65936476/tensorflow2-learning-rate-schedular