
tensorflow - Why does my CNN not predict labels as expected?

I am new to the concept of similarity learning. I am currently building a face recognition model using a Siamese neural network on the Labelled Faces in the Wild dataset.

Code for the Siamese network model (consider each code snippet below to be a cell in Colab):

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Input, Flatten

def return_inception_model():
  # Shared embedding sub-network: InceptionV3 pre-trained on ImageNet,
  # without its classification head, flattened to a feature vector.
  input_vector = Input((224, 224, 3))
  subnet = InceptionV3(include_top=False, weights="imagenet", input_tensor=input_vector)
  out = Flatten()(subnet.output)
  model = Model(subnet.input, out, name="SubConvNet")
  return model

import keras.backend as K

def euclidean_distance(vect):
  # Euclidean distance between the two embedding vectors of a pair.
  x, y = vect
  return K.sqrt(K.maximum(K.sum(K.square(x - y), axis=1, keepdims=True), K.epsilon()))

def contrastive_loss(y_true, y_pred):
  # Contrastive loss: pulls matching pairs (y_true = 1) together and
  # pushes non-matching pairs (y_true = 0) at least `margin` apart.
  margin = 1
  return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))

def accuracy(y_true, y_pred):
  # Custom metric: a pair is predicted "same" when the output falls below 0.5.
  return K.mean(K.equal(y_true, K.cast(y_pred < 0.5, y_true.dtype)))

from keras.layers import Lambda, Dense

base_model = return_inception_model()

left_input = Input((224, 224, 3))
right_input = Input((224, 224, 3))

# Both inputs pass through the same embedding network (shared weights).
feature_1 = base_model(left_input)
feature_2 = base_model(right_input)

# Distance between the two embeddings, squashed to (0, 1) by a sigmoid unit.
lambda_layer = Lambda(euclidean_distance)([feature_1, feature_2])
output = Dense(1, activation='sigmoid')(lambda_layer)

model = Model([left_input, right_input], output)
model.summary()

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
SubConvNet (Functional)         (None, 51200)        21802784    input_2[0][0]                    
                                                                 input_3[0][0]                    
__________________________________________________________________________________________________
lambda (Lambda)                 (None, 1)            0           SubConvNet[0][0]                 
                                                                 SubConvNet[1][0]                 
__________________________________________________________________________________________________
dense (Dense)                   (None, 1)            2           lambda[0][0]                     
==================================================================================================
Total params: 21,802,786
Trainable params: 21,768,354
Non-trainable params: 34,432

from keras.utils.vis_utils import plot_model

plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)

[Image: Siamese network plot]
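(The pair arrays used in the training cell below are not defined in the snippets above. For context, one hypothetical way to build such positive/negative pairs from a labelled face dataset is sketched here; make_pairs, train_images, train_ids, test_images, and test_ids are illustrative names, not from the original post.)

import numpy as np

def make_pairs(images, labels):
  # Hypothetical pair builder: for each image, emit one positive pair
  # (same identity, label 1) and one negative pair (different identity, label 0).
  idx_by_class = {c: np.where(labels == c)[0] for c in np.unique(labels)}
  classes = list(idx_by_class)
  pairs, pair_labels = [], []
  for i, c in enumerate(labels):
    j = np.random.choice(idx_by_class[c])        # another image of the same person
    pairs.append([images[i], images[j]])
    pair_labels.append(1)
    other = np.random.choice([k for k in classes if k != c])
    j = np.random.choice(idx_by_class[other])    # an image of a different person
    pairs.append([images[i], images[j]])
    pair_labels.append(0)
  return np.array(pairs), np.array(pair_labels)  # pairs shape: (N, 2, 224, 224, 3)

train_nparr_pairs, train_labels = make_pairs(train_images, train_ids)
test_nparr_pairs, test_labels = make_pairs(test_images, test_ids)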

from keras.optimizers import Adam

optimizer = Adam(lr=0.00001)
model.compile(loss=contrastive_loss, metrics=[accuracy], optimizer=optimizer)
model.fit(x=[train_nparr_pairs[:, 0], train_nparr_pairs[:, 1]], y=train_labels,
          validation_data=([test_nparr_pairs[:, 0], test_nparr_pairs[:, 1]], test_labels),
          epochs=64, use_multiprocessing=True)
Epoch 56/64
69/69 [==============================] - 8s 118ms/step - loss: 0.5132 - accuracy: 0.4868 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 57/64
69/69 [==============================] - 8s 118ms/step - loss: 0.5044 - accuracy: 0.4956 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 58/64
69/69 [==============================] - 8s 118ms/step - loss: 0.5064 - accuracy: 0.4936 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 59/64
69/69 [==============================] - 8s 118ms/step - loss: 0.4806 - accuracy: 0.5194 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 60/64
69/69 [==============================] - 8s 118ms/step - loss: 0.4843 - accuracy: 0.5157 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 61/64
69/69 [==============================] - 8s 117ms/step - loss: 0.5060 - accuracy: 0.4940 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 62/64
69/69 [==============================] - 8s 119ms/step - loss: 0.5048 - accuracy: 0.4952 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 63/64
69/69 [==============================] - 8s 119ms/step - loss: 0.5110 - accuracy: 0.4890 - val_loss: 0.5000 - val_accuracy: 0.4883
Epoch 64/64
69/69 [==============================] - 8s 119ms/step - loss: 0.5118 - accuracy: 0.4882 - val_loss: 0.5000 - val_accuracy: 0.4883

In the output, notice that the training loss and accuracy hover around 0.5 for the entire session, val_loss is stuck at exactly 0.5000, and val_accuracy stays constant at 0.4883. I have normalized the images, yet this still happens. Is there any reason behind this output?

Question from: https://stackoverflow.com/questions/65644501/why-does-my-cnn-not-predict-labels-as-expected

1 Reply

First of all, you can simply remove accuracy as a metric, since it is not meaningful for your case (at least not in the way Keras calculates accuracy).

In Siamese networks, accuracy should be calculated by selecting a threshold, say T, according to your task: if the two images belong to the same person and the similarity score clears T, count the prediction as correct; otherwise, do not increase the count. A minimal sketch of such a metric follows.
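Here is one way this could look in the Keras backend style, assuming (as in the question's custom metric) that the model output behaves like a distance where smaller means more similar, and that y_true = 1 marks a matching pair; threshold_accuracy is an illustrative name:

import keras.backend as K

def threshold_accuracy(y_true, y_pred, t=0.5):
  # t is a task-dependent threshold, ideally tuned on a validation set.
  # A pair is predicted "same" (label 1) when the output falls below t.
  pred_same = K.cast(y_pred < t, y_true.dtype)
  return K.mean(K.equal(y_true, pred_same))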

But this is different from what Keras does: its built-in binary accuracy simply treats a predicted value above 0.5 as class 1, which is suitable only for typical classification problems, so you can remove it altogether.

That's for the first part.

The second part is that your weights are not being updated appropriately. This is because you declared the optimizer like this: optimizer=Adam(lr=0.1). That learning rate is far too high here, particularly for transfer learning, which is what you are applying with the pre-trained InceptionV3.

In summary:

  1. Remove accuracy as a metric.
  2. Instantiate the optimizer with a much smaller learning rate: optimizer=Adam(lr=0.00001)
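
Put together, the corrected compile step could look like this, reusing the names from the question's code:

from keras.optimizers import Adam

optimizer = Adam(lr=0.00001)  # several orders of magnitude smaller than lr=0.1
model.compile(loss=contrastive_loss, optimizer=optimizer)  # accuracy metric removed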
