I have two similar but separate CNN models, and with both I can obtain more than 80% classification accuracy on an imbalanced data set in which there is 1 sample of the first class for every 7 samples of the second class. After training, I save the weights of both models.

Then I use a different dataset with the exact same input shape. The nature of the input data is very similar to the previous dataset, since both are EEG signals with 8 channels; the only difference is that there are 2 samples of the first class for every 9 samples of the second class. I always pass this information via the class_weight parameter of model.fit. Then I keep some layers of the previously trained models trainable, train on part of the new dataset, and test on the remainder.

The interesting thing is that no matter which parameter I change, I get around 65% accuracy, both overall and per class, and for both models. In the first epoch the training and validation accuracies are around 60%, and they stay around that level until the last epoch, again for both models. One possibility would have been that this 60% accuracy equals the chance level, but as stated there are 2 samples of the first class for every 9 samples of the second class, so always predicting the majority class would already give about 82%, and I obtain this ~60% accuracy for the overall prediction and separately for each class.

Finally, I trained both models from scratch, without any pre-training on the previous dataset, using the new dataset for training, validation, and testing. Although the models were not pre-trained, they still start at about 60% accuracy.

A sketch of the fine-tuning setup and an example training log of one model are shown below. How could this be possible? Could you please help me with what the reason might be? Thanks.
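For reference, this is roughly what the fine-tuning setup looks like. It is only a minimal sketch: the architecture, the weights file path, and the names x_train/y_train/x_val/y_val are placeholders, not my actual code. The "balanced" inverse-frequency weights reproduce values close to the ones printed below for the ~2:9 class ratio.

import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

def build_model(input_shape):
    # Illustrative stand-in for the actual CNN; the real architecture differs.
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

model = build_model(input_shape=(256, 8))  # 8-channel EEG windows (placeholder length)
model.load_weights("pretrained_model.h5")  # weights saved after training on the first dataset

# Freeze everything except the last few layers; which layers stay trainable
# is one of the parameters I have been varying.
for layer in model.layers[:-2]:
    layer.trainable = False

# Recompile after changing `trainable` so the change takes effect.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Inverse-frequency ("balanced") class weights for the imbalanced labels.
weights = compute_class_weight("balanced", classes=np.unique(y_train), y=y_train)
class_weights = dict(enumerate(weights))

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=1000,
                    class_weight=class_weights,
                    verbose=2)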
Class weights = {0: 0.6081382810901892, 1: 2.8118547611413915}
Epoch 1/1000
1097/1097 - 9s - loss: 0.6688 - accuracy: 0.5946 - val_loss: 0.6730 - val_accuracy: 0.5912
Epoch 2/1000
1097/1097 - 4s - loss: 0.6654 - accuracy: 0.6060 - val_loss: 0.6770 - val_accuracy: 0.5802
Epoch 3/1000
1097/1097 - 4s - loss: 0.6652 - accuracy: 0.6074 - val_loss: 0.6687 - val_accuracy: 0.6012
Epoch 4/1000
1097/1097 - 4s - loss: 0.6646 - accuracy: 0.6096 - val_loss: 0.6699 - val_accuracy: 0.5990
Epoch 5/1000
1097/1097 - 4s - loss: 0.6651 - accuracy: 0.6083 - val_loss: 0.6683 - val_accuracy: 0.6024
... (epochs 6-70 omitted)
Epoch 71/1000
1097/1097 - 4s - loss: 0.6629 - accuracy: 0.6215 - val_loss: 0.6727 - val_accuracy: 0.6027
Epoch 72/1000
1097/1097 - 4s - loss: 0.6629 - accuracy: 0.6190 - val_loss: 0.6633 - val_accuracy: 0.6179
Epoch 73/1000
1097/1097 - 4s - loss: 0.6626 - accuracy: 0.6223 - val_loss: 0.6675 - val_accuracy: 0.6104
Epoch 74/1000
1097/1097 - 4s - loss: 0.6632 - accuracy: 0.6180 - val_loss: 0.6624 - val_accuracy: 0.6197
<tensorflow.python.keras.callbacks.History at 0x7ff2c510ceb8>
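To rule out the chance-level explanation, this is roughly how I checked the per-class accuracies mentioned above (again only a sketch; x_test/y_test are placeholder names):

import numpy as np
from sklearn.metrics import confusion_matrix

# Predicted labels from the sigmoid output, thresholded at 0.5.
y_pred = (model.predict(x_test).ravel() > 0.5).astype(int)

# Always predicting the majority class would give ~82% for a 2:9 split,
# so the observed ~60% is not the majority-class baseline.
majority_baseline = max(np.mean(y_test), 1.0 - np.mean(y_test))
print("majority-class baseline:", majority_baseline)

# Per-class accuracy from the confusion matrix; both classes sit near 60%.
cm = confusion_matrix(y_test, y_pred)
print("per-class accuracy:", cm.diagonal() / cm.sum(axis=1))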
question from: https://stackoverflow.com/questions/65901552/transfer-learning-model-does-not-learn