The data consist of 10 videos; each video is split into 86 frames, and each frame has 28*28 pixels:
video_num = 10
frame_num = 86
pixel_num = 28*28
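For context, here is a minimal sketch of how I am assuming the data are arranged (the array names below are just placeholders of mine):

import numpy as np

# 10 videos, each split into 86 frames of 28*28 grayscale pixels (placeholder array)
videos = np.zeros((video_num, frame_num, 28, 28), dtype='float32')
# flatten each frame so that every time step carries 28*28 = 784 values
flat_videos = videos.reshape(video_num, frame_num, pixel_num)
print(flat_videos.shape)   # (10, 86, 784)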
I want to build the model with Conv2D + LSTM, and at each time step (= frame_num = 86) feed the pixel data (= INPUT_SIZE = 28*28) into the model. The following is my code for the model:
BATCH_SIZE = 2            # just a trial value
TIME_STEPS = frame_num    # 86
INPUT_SIZE = pixel_num    # 28*28
from keras.models import Sequential
from keras.layers import InputLayer, TimeDistributed, Conv2D, MaxPooling2D, Flatten, Dense, LSTM

model = Sequential()
model.add(InputLayer(batch_input_shape=(BATCH_SIZE, TIME_STEPS, INPUT_SIZE)))
print (model.output_shape)
model.add(TimeDistributed(Conv2D(64, (1, 3), strides=(1, 1), padding='same',
                                 data_format='channels_last')))  # the error always occurs here
print (model.output_shape)
model.add(TimeDistributed(MaxPooling2D(pool_size=(2,2),padding='same')))
print (model.output_shape)
model.add(TimeDistributed(Conv2D(64, (1, 3), strides=(1, 1),
                                 data_format='channels_last', padding='same')))
print (model.output_shape)
model.add(TimeDistributed(MaxPooling2D(pool_size=(2,2),padding='same')))
print (model.output_shape)
model.add(TimeDistributed(Flatten()))
print (model.output_shape)
model.add(TimeDistributed(Dense(4096, activation='relu')))
print (model.output_shape)
model.add(LSTM(100, stateful=True, return_sequences=True))
print (model.output_shape)
model.add(Dense(1, activation='sigmoid'))
print (model.output_shape)
The following image shows the error from the command line: https://imgur.com/a/yAPQO
It says "list index out of range".
I think the error is related to the input shape that TimeDistributed() receives from the layer above it (InputLayer()), but I have no idea how to fix it.
I have tried removing the InputLayer() and using
TimeDistributed(Conv2D(...), input_shape=(TIME_STEPS, INPUT_SIZE))
as the first layer (see the sketch below), but I get the same error...
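For clarity, and assuming the same Conv2D arguments as in the model above, that attempt looked roughly like this:

model = Sequential()
model.add(TimeDistributed(Conv2D(64, (1, 3), strides=(1, 1), padding='same',
                                 data_format='channels_last'),
                          input_shape=(TIME_STEPS, INPUT_SIZE)))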
If anyone knows about this error, please share your ideas; I would really appreciate it. Also, I am still not clear on the difference between batch_input_shape and input_shape. Has anyone used both of these before?
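For reference, my current (possibly wrong) understanding of the two is roughly this:

# input_shape leaves the batch dimension unspecified (None)
model_a = Sequential()
model_a.add(LSTM(100, input_shape=(TIME_STEPS, INPUT_SIZE)))

# batch_input_shape also fixes the batch size, which stateful=True seems to require
model_b = Sequential()
model_b.add(LSTM(100, batch_input_shape=(BATCH_SIZE, TIME_STEPS, INPUT_SIZE), stateful=True))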
Thanks.