
python - why is my model's performance improving so slowly?

I have this CNN model with three VGG-style blocks:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
# Import everything from tensorflow.keras; mixing `keras` and `tensorflow.keras` can break
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.regularizers import l1, l2, l1_l2
from tensorflow.keras.optimizers import SGD, Adam, Adagrad, RMSprop
from tensorflow.keras.models import load_model, Model
from tensorflow.keras.utils import to_categorical
from IPython.display import FileLink  # notebook helper used at the end

# Load the data, already split into train and test sets
(train_images, train_labels),(test_images, test_labels) = datasets.cifar10.load_data()

#Normalize Data
train_images = train_images / 255.0
test_images = test_images / 255.0

# Convert labels to one-hot encoding
num_classes = 10
train_labels = to_categorical(train_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)

# Data Augmentation
datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

datagen.fit(train_images)  # only required for featurewise statistics, but harmless here

reg = None
num_filters = 32
ac = 'relu'
adm = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
sgd = SGD(learning_rate=0.01, momentum=0.9)
rms = RMSprop(learning_rate=0.0001, decay=1e-6)  # `decay` is deprecated in newer TF releases
agr = Adagrad(learning_rate=0.0001, initial_accumulator_value=0.1, epsilon=1e-08)
opt = adm  # switch the optimizer here
drop_dense = 0.5
drop_conv = 0.2

model = models.Sequential()

# Block 1
model.add(layers.Conv2D(num_filters, (3, 3), activation=ac, kernel_regularizer=reg,
                        input_shape=(32, 32, 3), padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(num_filters, (3, 3), activation=ac, kernel_regularizer=reg, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(drop_conv))

# Block 2
model.add(layers.Conv2D(2 * num_filters, (3, 3), activation=ac, kernel_regularizer=reg, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(2 * num_filters, (3, 3), activation=ac, kernel_regularizer=reg, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(2 * drop_conv))

# Block 3
model.add(layers.Conv2D(4 * num_filters, (3, 3), activation=ac, kernel_regularizer=reg, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.Conv2D(4 * num_filters, (3, 3), activation=ac, kernel_regularizer=reg, padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(3 * drop_conv))

# Classifier head
model.add(layers.Flatten())
model.add(layers.Dense(512, activation=ac, kernel_regularizer=reg))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(drop_dense))
model.add(layers.Dense(num_classes, activation='softmax'))


model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=opt)

model.summary()


# model.fit accepts generators directly in TF 2.x (fit_generator is deprecated)
history = model.fit(datagen.flow(train_images, train_labels, batch_size=256),
                    steps_per_epoch=len(train_images) // 256, epochs=200,
                    validation_data=(test_images, test_labels))

loss, accuracy = model.evaluate(test_images, test_labels)
print("Accuracy is:", accuracy * 100)
print("Loss is:", loss)


N = len(history.history["loss"])  # plot however many epochs actually ran
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), history.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), history.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), history.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), history.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
plt.show()

model.save("model_test_9.h5")  # serialize weights to HDF5
FileLink(r'model_test_9.h5')   # notebook link to download the saved model

# Run: Adam optimizer + dropout + data augmentation

Output:

Epoch 40/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4334 - accuracy: 0.8507 - val_loss: 0.5041 - val_accuracy: 0.8357
Epoch 41/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4289 - accuracy: 0.8522 - val_loss: 0.5354 - val_accuracy: 0.8284
Epoch 42/200
195/195 [==============================] - 21s 110ms/step - loss: 0.4333 - accuracy: 0.8490 - val_loss: 0.4560 - val_accuracy: 0.8499
Epoch 43/200
195/195 [==============================] - 21s 110ms/step - loss: 0.4198 - accuracy: 0.8555 - val_loss: 0.4817 - val_accuracy: 0.8429
Epoch 44/200
195/195 [==============================] - 21s 107ms/step - loss: 0.4130 - accuracy: 0.8556 - val_loss: 0.4768 - val_accuracy: 0.8407
Epoch 45/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4180 - accuracy: 0.8544 - val_loss: 0.4526 - val_accuracy: 0.8483
Epoch 46/200
195/195 [==============================] - 21s 108ms/step - loss: 0.4113 - accuracy: 0.8565 - val_loss: 0.4129 - val_accuracy: 0.8618
Epoch 47/200
195/195 [==============================] - 21s 108ms/step - loss: 0.4078 - accuracy: 0.8584 - val_loss: 0.4108 - val_accuracy: 0.8659
Epoch 48/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4184 - accuracy: 0.8538 - val_loss: 0.4370 - val_accuracy: 0.8557
Epoch 49/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3926 - accuracy: 0.8641 - val_loss: 0.3817 - val_accuracy: 0.8685
Epoch 50/200
195/195 [==============================] - 21s 109ms/step - loss: 0.4044 - accuracy: 0.8587 - val_loss: 0.4225 - val_accuracy: 0.8571
Epoch 51/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3919 - accuracy: 0.8640 - val_loss: 0.4101 - val_accuracy: 0.8625
Epoch 52/200
195/195 [==============================] - 21s 106ms/step - loss: 0.4035 - accuracy: 0.8623 - val_loss: 0.4341 - val_accuracy: 0.8561
Epoch 53/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3963 - accuracy: 0.8619 - val_loss: 0.4180 - val_accuracy: 0.8576
Epoch 54/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3901 - accuracy: 0.8635 - val_loss: 0.3744 - val_accuracy: 0.8712
Epoch 55/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3917 - accuracy: 0.8640 - val_loss: 0.3751 - val_accuracy: 0.8736
Epoch 56/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3795 - accuracy: 0.8679 - val_loss: 0.4697 - val_accuracy: 0.8445
Epoch 57/200
195/195 [==============================] - 22s 111ms/step - loss: 0.3844 - accuracy: 0.8656 - val_loss: 0.4058 - val_accuracy: 0.8620
Epoch 58/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3864 - accuracy: 0.8656 - val_loss: 0.4226 - val_accuracy: 0.8588
Epoch 59/200
195/195 [==============================] - 22s 110ms/step - loss: 0.3821 - accuracy: 0.8684 - val_loss: 0.3986 - val_accuracy: 0.8666
Epoch 60/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3728 - accuracy: 0.8708 - val_loss: 0.4196 - val_accuracy: 0.8638
Epoch 61/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3724 - accuracy: 0.8699 - val_loss: 0.3928 - val_accuracy: 0.8654
Epoch 62/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3724 - accuracy: 0.8712 - val_loss: 0.3615 - val_accuracy: 0.8782
Epoch 63/200
195/195 [==============================] - 22s 110ms/step - loss: 0.3758 - accuracy: 0.8691 - val_loss: 0.3976 - val_accuracy: 0.8707
Epoch 64/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3698 - accuracy: 0.8714 - val_loss: 0.4429 - val_accuracy: 0.8554
Epoch 65/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3570 - accuracy: 0.8750 - val_loss: 0.3702 - val_accuracy: 0.8740
Epoch 66/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3588 - accuracy: 0.8751 - val_loss: 0.3885 - val_accuracy: 0.8717
Epoch 67/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3597 - accuracy: 0.8749 - val_loss: 0.3781 - val_accuracy: 0.8777
Epoch 68/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3590 - accuracy: 0.8756 - val_loss: 0.4230 - val_accuracy: 0.8613
Epoch 69/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3540 - accuracy: 0.8756 - val_loss: 0.3972 - val_accuracy: 0.8694
Epoch 70/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3588 - accuracy: 0.8729 - val_loss: 0.4242 - val_accuracy: 0.8598
Epoch 71/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3608 - accuracy: 0.8748 - val_loss: 0.3887 - val_accuracy: 0.8683
Epoch 72/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3511 - accuracy: 0.8783 - val_loss: 0.3912 - val_accuracy: 0.8716
Epoch 73/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3516 - accuracy: 0.8769 - val_loss: 0.4673 - val_accuracy: 0.8515
Epoch 74/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3484 - accuracy: 0.8787 - val_loss: 0.3990 - val_accuracy: 0.8664
Epoch 75/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3506 - accuracy: 0.8780 - val_loss: 0.3869 - val_accuracy: 0.8666
Epoch 76/200
195/195 [==============================] - 20s 105ms/step - loss: 0.3484 - accuracy: 0.8795 - val_loss: 0.3447 - val_accuracy: 0.8853
Epoch 77/200
195/195 [==============================] - 21s 110ms/step - loss: 0.3493 - accuracy: 0.8774 - val_loss: 0.3644 - val_accuracy: 0.8794
Epoch 78/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3443 - accuracy: 0.8813 - val_loss: 0.4117 - val_accuracy: 0.8665
Epoch 79/200
195/195 [==============================] - 20s 104ms/step - loss: 0.3436 - accuracy: 0.8796 - val_loss: 0.3695 - val_accuracy: 0.8758
Epoch 80/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3487 - accuracy: 0.8788 - val_loss: 0.3583 - val_accuracy: 0.8789
Epoch 81/200
...
Epoch 92/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3320 - accuracy: 0.8834 - val_loss: 0.3658 - val_accuracy: 0.8794
Epoch 93/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3251 - accuracy: 0.8858 - val_loss: 0.4003 - val_accuracy: 0.8646
Epoch 94/200
195/195 [==============================] - 20s 103ms/step - loss: 0.3202 - accuracy: 0.8894 - val_loss: 0.3943 - val_accuracy: 0.8695
Epoch 95/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3238 - accuracy: 0.8887 - val_loss: 0.3232 - val_accuracy: 0.8931
Epoch 96/200
195/195 [==============================] - 21s 105ms/step - loss: 0.3236 - accuracy: 0.8881 - val_loss: 0.3659 - val_accuracy: 0.8777
Epoch 97/200
195/195 [==============================] - 21s 107ms/step - loss: 0.3116 - accuracy: 0.8912 - val_loss: 0.4218 - val_accuracy: 0.8634
Epoch 98/200
195/195 [==============================] - 21s 109ms/step - loss: 0.3189 - accuracy: 0.8893 - val_loss: 0.3783 - val_accuracy: 0.8740
Epoch 99/200
195/195 [==============================] - 21s 106ms/step - loss: 0.3260 - accuracy: 0.8845 - val_loss: 0.3418 - val_accuracy: 0.8875
Epoch 100/200
195/195 [==============================] - 21s 108ms/step - loss: 0.3143 - accuracy: 0.8893 - val_loss: 


1 Reply

  1. No, because the validation loss is not increasing.
  2. Your plots look fine. It is expected that training slows down as it progresses.
  3. Yes, but it doesn't make much sense. If you train almost any model indefinitely, its performance will keep improving by ever smaller margins; e.g., you could reach 89.5% accuracy (barely better than 89.48%) if you trained it for a year.
  4. Try decaying the learning rate with different schedules; see the sketch after this list.
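
For point 4, here is a minimal sketch of two common ways to decay the learning rate in tf.keras, reusing the model, datagen, and 256-sample batches from the question. The schedule numbers (decay_steps, factor, patience, min_lr) are illustrative guesses rather than tuned values, and the EarlyStopping callback is an extra suggestion beyond the answer itself:

import tensorflow as tf

# Option A: a built-in schedule attached to the optimizer. This halves the LR
# every ~20 epochs (195 steps per epoch, as in the log); values are illustrative.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=195 * 20,
    decay_rate=0.5,
    staircase=True,
)
# opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Option B: keep a fixed initial LR and halve it whenever val_loss plateaus.
# Use either A or B, not both: ReduceLROnPlateau cannot adjust an optimizer
# that was built with a schedule object.
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6, verbose=1)

# Related to point 3: stop once val_loss has stopped improving instead of
# always running the full 200 epochs, and keep the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=15, restore_best_weights=True)

model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=opt)
history = model.fit(datagen.flow(train_images, train_labels, batch_size=256),
                    steps_per_epoch=len(train_images) // 256, epochs=200,
                    validation_data=(test_images, test_labels),
                    callbacks=[reduce_lr, early_stop])

ReduceLROnPlateau is usually the easier starting point here, because it reacts to the same val_loss plateau visible in the log instead of requiring a hand-picked decay step.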
