Well, I have not figured out a way to do exactly what I want, but I've found a way around the problem: instead of passing a new list of variables to the original optimizer, I defined a new optimizer with those variables passed to its minimize() method. The code is given below:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST the same way the original training script did
# (dataset path and one_hot setting assumed).
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

learning_rate = 0.0001
training_iters = 60000
batch_size = 64
display_step = 20
ImVecDim = 784  # The number of elements in an image vector (flattening a 28x28 2D image)
NumOfClasses = 10
dropout = 0.8

with tf.Session() as sess:
    LoadMod = tf.train.import_meta_graph('simple_mnist.ckpt.meta')  # This object loads the model
    LoadMod.restore(sess, tf.train.latest_checkpoint('./'))  # Loading weights, biases and other stuff into the model
    g = tf.get_default_graph()

    # Retraining:
    X = g.get_tensor_by_name('ImageIn:0')
    Y = g.get_tensor_by_name('LabelIn:0')
    KP = g.get_tensor_by_name('KeepProb:0')
    accuracy = g.get_tensor_by_name('NetAccuracy:0')
    cost = g.get_tensor_by_name('loss:0')

    ################ Producing a list and defining a new optimizer ################
    VarToTrain = g.get_collection_ref('trainable_variables')
    del VarToTrain[0]  # Removing a variable from the list
    del VarToTrain[5]  # Removing another variable from the list
    optimizer = tf.train.GradientDescentOptimizer(
        learning_rate=learning_rate).minimize(cost, var_list=VarToTrain)
    ################################################################################

    step = 1
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        sess.run(optimizer, feed_dict={X: batch_xs, Y: batch_ys, KP: dropout})
        if step % display_step == 0:
            acc = sess.run(accuracy, feed_dict={X: batch_xs, Y: batch_ys, KP: 1.})
            loss = sess.run(cost, feed_dict={X: batch_xs, Y: batch_ys, KP: 1.})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc))
        step += 1

    feed_dict = {X: mnist.test.images[:256], Y: mnist.test.labels[:256], KP: 1.0}
    ModelAccuracy = sess.run(accuracy, feed_dict)
    print('Retraining finished' + ', Test Accuracy = %f' % ModelAccuracy)
The code above did the job, but it has some issues. First, for some reason, I keep getting error messages every time I define an optimizer of the same type as the original one, tf.train.AdamOptimizer(); the only optimizer I can define without TF throwing errors at me is tf.train.GradientDescentOptimizer().
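My guess (unverified) is that the Adam errors come from the extra variables the new optimizer creates, which are not in the restored checkpoint and so are never initialized. If that is the cause, initializing only the newly created variables after defining the optimizer should work; a minimal sketch, reusing the session and graph from the code above:

    # Sketch (assumption): define the new Adam optimizer, then initialize only
    # the variables that are still uninitialized, leaving the restored weights alone.
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
        cost, var_list=VarToTrain)
    uninitialized = [v for v in tf.global_variables()
                     if not sess.run(tf.is_variable_initialized(v))]
    sess.run(tf.variables_initializer(uninitialized))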
The other issue with this solution is its inconvenience: in order to produce the list of variables I want to train, I first have to get a list of all trainable variables with VarToTrain = g.get_collection_ref('trainable_variables'), print them out, memorize the positions of the unwanted variables in the list, and then delete them one by one with del. There must be a more elegant way of doing that; what I have done works fine only for small networks.
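For instance, instead of deleting entries by index, the list could be filtered by variable name. A minimal sketch, where the excluded substrings 'conv1' and 'out' are placeholders that would have to match the names actually used when the network was built:

    # Sketch: keep every trainable variable except those whose names contain
    # one of the excluded substrings (placeholder names, adjust to your graph).
    exclude = ('conv1', 'out')
    VarToTrain = [v for v in g.get_collection('trainable_variables')
                  if not any(s in v.name for s in exclude)]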