I am training a Generative Adversarial Network (GAN) in TensorFlow, where we have two different networks, each with its own optimizer:
self.G, self.layer = self.generator(self.inputCT, batch_size_tf)
self.D, self.D_logits = self.discriminator(self.GT_1hot)
...
self.g_optim = tf.train.MomentumOptimizer(self.learning_rate_tensor, 0.9).minimize(self.g_loss, global_step=self.global_step)
self.d_optim = tf.train.AdamOptimizer(self.learning_rate, beta1=0.5).minimize(self.d_loss, var_list=self.d_vars)
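(Here self.d_vars holds the discriminator's variables, so each optimizer only touches its own network. As a minimal sketch of how I collect such lists — assuming the networks are built under "generator" and "discriminator" variable scopes; the scope names are an assumption, not shown above:)

t_vars = tf.trainable_variables()
# Split trainable variables by name scope (assumed scope names),
# so each optimizer can be restricted to one network.
g_vars = [v for v in t_vars if v.name.startswith('generator')]
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]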
The problem is that I train one of the networks (g) first, and then I want to train g and d together. However, when I call the load function:
self.sess.run(tf.initialize_all_variables())
self.sess.graph.finalize()
self.load(self.checkpoint_dir)
def load(self, checkpoint_dir):
    print(" [*] Reading checkpoints...")
    ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
        ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
        self.saver.restore(self.sess, ckpt.model_checkpoint_path)
        return True
    else:
        return False
I get an error like this (with a lot more traceback):
Tensor name "beta2_power" not found in checkpoint files checkpoint/MR2CT.model-96000
I can restore the g network and keep training with that function, but when I want to start d from scratch and g from the stored model, I get that error.
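As far as I can tell, the missing tensor is one of the accumulator variables that AdamOptimizer adds to the graph (beta1_power / beta2_power): the checkpoint was written while only g was being trained, before d_optim existed, so a saver built over all variables cannot find them in the file. A minimal sketch of the restore I think should work instead — initialize everything, then overwrite only the generator's weights with a dedicated saver (g_vars is an assumed list of the generator's variables, as collected above):

self.sess.run(tf.initialize_all_variables())  # d and the Adam slots start fresh
# Saver restricted to the generator variables only, so restore()
# never looks for beta1_power/beta2_power in the old checkpoint.
g_saver = tf.train.Saver(var_list=self.g_vars)
ckpt = tf.train.get_checkpoint_state(self.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
    g_saver.restore(self.sess, ckpt.model_checkpoint_path)
self.sess.graph.finalize()  # finalize only after building the extra saver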