You have to modify the VGG16 model because it was designed to classify 1000 classes. Setting include_top=False removes the top of the model, which contained the 1000-neuron classification layer. We then need to add a layer with 2 neurons. The code below accomplishes that. Note that in the parameters of the VGG16 model I set pooling='max'; this makes the output of the VGG16 model a vector that can be fed directly into a dense layer.
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

base_model=tf.keras.applications.VGG16(include_top=False, input_shape=(86,86,3),
                                       pooling='max', weights='imagenet')
x=base_model.output
output=Dense(2, activation='softmax')(x)
model=Model(inputs=base_model.input, outputs=output)
model.compile(Adam(learning_rate=.001), loss='categorical_crossentropy', metrics=['accuracy'])
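To sanity-check the surgery, you can build the model and inspect the shapes. A minimal sketch below uses weights=None just to skip the imagenet download while testing; use weights='imagenet' in practice:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# Build the base without the 1000-class top; pooling='max' collapses the
# final feature maps into a single 512-length vector per image.
base_model = tf.keras.applications.VGG16(include_top=False,
                                         input_shape=(86, 86, 3),
                                         pooling='max',
                                         weights=None)  # 'imagenet' in practice
x = base_model.output
output = Dense(2, activation='softmax')(x)   # new 2-neuron head
model = Model(inputs=base_model.input, outputs=output)

print(base_model.output_shape)  # (None, 512) -- a vector, ready for Dense
print(model.output_shape)       # (None, 2)
```

The (None, 512) base output is exactly why pooling='max' matters: without it you would get a 4-D feature map and would need a Flatten or GlobalMaxPooling2D layer before the Dense head.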
As an aside, I do not like to use VGG16. It has about 138 million parameters (roughly 15 million even without the top), so it is computationally expensive, resulting in long training times. I prefer the MobileNet model, which has only about 4 million trainable parameters and is about as accurate. To use MobileNet, just substitute the code below for the VGG16 code. Note I set input_shape to (128,128,3) because there is a version of the MobileNet weights trained on imagenet with 128 x 128 images that will download automatically and help the model converge faster, but you can use 86 x 86 if you choose. If you go with 128, set target_size=(128,128) in your train_generator. Also, in the ImageDataGenerator, replace preprocessing_function=preprocess_vgg16 with keras.applications.mobilenet.preprocess_input. The two are not interchangeable: the VGG16 function subtracts the ImageNet channel means (caffe-style), while the MobileNet function rescales the pixels to be between -1 and +1.
base_model=tf.keras.applications.mobilenet.MobileNet(include_top=False,
                                                     input_shape=(128,128,3), pooling='max',
                                                     weights='imagenet', dropout=.4)
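You can check both claims yourself, the preprocessing rescaling and the much smaller parameter count. A small sketch (again with weights=None only to avoid the download during testing):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet import preprocess_input

# MobileNet preprocessing is x / 127.5 - 1, mapping [0, 255] onto [-1, +1].
pixels = np.array([[0.0, 127.5, 255.0]])
print(preprocess_input(pixels))  # [[-1.  0.  1.]]

# MobileNet base: only a few million parameters, versus ~15 million for
# the VGG16 convolutional base (and ~138 million with its top included).
mnet = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                                                 input_shape=(128, 128, 3),
                                                 pooling='max',
                                                 weights=None,  # 'imagenet' in practice
                                                 dropout=.4)
print(mnet.count_params())
```

Note that with pooling='max' the MobileNet base outputs a 1024-length vector rather than VGG16's 512, so the same Dense(2, activation='softmax') head attaches in exactly the same way.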