I am trying to freeze the pretrained VGG16 layers ('conv_base' below) and add new layers on top of them for feature extraction.
I expect to get the same prediction results from 'conv_base' before (ret1) and after (ret2) fitting the model, but I do not.
Is this the wrong way to check weight freezing?
Loading VGG16 and setting it to untrainable:

import numpy as np
from keras import applications, layers, models

conv_base = applications.VGG16(weights='imagenet', include_top=False, input_shape=[150, 150, 3])
conv_base.trainable = False
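As a side check, the freeze can be confirmed by counting the base's trainable weight tensors before and after setting the flag. A minimal sketch, assuming the tensorflow.keras import path; weights=None and a small 32x32 input are used here only to keep the example light (the question itself uses weights='imagenet' and input_shape=[150, 150, 3]):

```python
from tensorflow.keras import applications

# weights=None skips the ImageNet download for this sketch.
conv_base = applications.VGG16(weights=None, include_top=False,
                               input_shape=[32, 32, 3])

n_before = len(conv_base.trainable_weights)  # 13 conv layers x (kernel + bias)
conv_base.trainable = False
n_after = len(conv_base.trainable_weights)   # drops to 0 once the base is frozen

print(n_before, n_after)  # 26 0
```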
Result before model fit:
ret1 = conv_base.predict(np.ones([1, 150, 150, 3]))
Adding layers on top of VGG16 and compiling the model:

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile('rmsprop', 'binary_crossentropy', ['accuracy'])
Fitting the model:

model.fit_generator(train_generator, 100, validation_data=validation_generator, validation_steps=50)
Result after model fit:
ret2 = conv_base.predict(np.ones([1, 150, 150, 3]))
I hoped this would be True, but it is not:

np.array_equal(ret1, ret2)
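A more direct way to verify freezing is to snapshot conv_base.get_weights() before training and compare the arrays afterwards, instead of comparing predictions. A minimal sketch, assuming the tensorflow.keras import path; weights=None, a 32x32 input, and a tiny random batch standing in for the real generators are assumptions made only to keep the example self-contained:

```python
import numpy as np
from tensorflow.keras import applications, layers, models

conv_base = applications.VGG16(weights=None, include_top=False,
                               input_shape=[32, 32, 3])
conv_base.trainable = False  # freeze BEFORE compiling

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile('rmsprop', 'binary_crossentropy', ['accuracy'])

# Snapshot the base's weights before training.
before = [w.copy() for w in conv_base.get_weights()]

# Tiny random batch in place of train_generator, just for this check.
x = np.random.rand(4, 32, 32, 3).astype('float32')
y = np.array([0., 1., 0., 1.], dtype='float32')
model.fit(x, y, epochs=1, verbose=0)

# Every base weight tensor should be unchanged if the freeze worked.
frozen = all(np.array_equal(b, a)
             for b, a in zip(before, conv_base.get_weights()))
print(frozen)  # True if the base really stayed frozen
```

Comparing weights avoids any ambiguity from the prediction path, since it tests exactly the thing the freeze is supposed to guarantee.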