
python - Use Keras (TensorFlow) to build a Conv2D+LSTM model

The data are 10 videos; each video is split into 86 frames, and each frame has 28*28 pixels:

video_num = 10
frame_num = 86
pixel_num = 28*28

I want to use Conv2D+LSTM to build the model, and at each time step (= frame_num = 86) feed the pixel data (= INPUT_SIZE = 28*28) into the model. The following is my code for the model:

BATCH_SIZE = 2            # just trying this value
TIME_STEPS = frame_num    # = 86
INPUT_SIZE = pixel_num    # = 28*28

from keras.models import Sequential
from keras.layers import (InputLayer, Conv2D, MaxPooling2D, Flatten,
                          Dense, LSTM, TimeDistributed)

model = Sequential()
model.add(InputLayer(batch_input_shape=(BATCH_SIZE, TIME_STEPS, INPUT_SIZE)))
print(model.output_shape)

model.add(TimeDistributed(Conv2D(64, (1, 3), strides=(1, 1), padding='same',
                                 data_format='channels_last')))  # the error always occurs here
print(model.output_shape)

model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), padding='same')))
print(model.output_shape)

model.add(TimeDistributed(Conv2D(64, (1, 3), strides=(1, 1),
                                 data_format='channels_last', padding='same')))
print(model.output_shape)

model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), padding='same')))
print(model.output_shape)

model.add(TimeDistributed(Flatten()))
print(model.output_shape)

model.add(TimeDistributed(Dense(4096, activation='relu')))
print(model.output_shape)

model.add(LSTM(100, stateful=True, return_sequences=True))
print(model.output_shape)

model.add(Dense(1, activation='sigmoid'))
print(model.output_shape)

The screenshot at https://imgur.com/a/yAPQO shows the error from the command line: "list index out of range".

I think the error is about the input shape that TimeDistributed() receives from the layer above it (InputLayer()), but I have no idea how to fix it. I have tried removing the InputLayer() and using

TimeDistributed(Conv2D(...), input_shape=(TIME_STEPS, INPUT_SIZE))

as the first layer, but I get the same error...

If anyone knows about this error, please share your ideas; I would really appreciate it. Also, I am still not clear about the difference between batch_input_shape and input_shape; has anyone used these two before? Thanks.


1 Reply


A Conv2D layer requires four dimensions, not three:

  • (batch_size, height, width, channels).

And the TimeDistributed will require an additional dimension:

  • (batch_size, frames, height, width, channels)

So, if you're really going to work with TimeDistributed+Conv2D, you need 5 dimensions. Your input_shape=(86,28,28,3), or your batch_input_shape=(batch_size,86,28,28,3), where I assumed you've got an RGB video (3 color channels).
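To make the expected layout concrete, here is a tiny sketch of a dummy batch with that shape (RGB is an assumption, as noted above; use 1 channel for grayscale):

import numpy as np

# (batch_size, frames, height, width, channels)
videos = np.zeros((2, 86, 28, 28, 3), dtype='float32')
print(videos.shape)  # (2, 86, 28, 28, 3)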

Usually, you just pass an input shape to the TimeDistributed.

model.add(TimeDistributed(Dense(....), input_shape=(86,28,28,3)))

You will need batch_input_shape only in the case of using stateful=True LSTMs. Then you just replace input_shape with batch_input_shape.
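For example, a minimal sketch of the two alternatives (the 3-channel assumption carries over from above):

from keras.models import Sequential
from keras.layers import Conv2D, TimeDistributed

# Alternative 1: input_shape leaves the batch size flexible.
model_a = Sequential()
model_a.add(TimeDistributed(Conv2D(64, (3, 3), padding='same'),
                            input_shape=(86, 28, 28, 3)))
print(model_a.output_shape)  # (None, 86, 28, 28, 64)

# Alternative 2: batch_input_shape fixes the batch size, which a
# stateful=True LSTM later in the model requires.
model_b = Sequential()
model_b.add(TimeDistributed(Conv2D(64, (3, 3), padding='same'),
                            batch_input_shape=(2, 86, 28, 28, 3)))
print(model_b.output_shape)  # (2, 86, 28, 28, 64)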


Notice that only the 2D convolutional layers will see images in terms of height and width. When you add the LSTMs, you will need to reshape the data to bring height, width, and channels into a single dimension.

For a shape (frames, h, w, ch):

model.add(Reshape((frames,h*w*ch)))

And you should not use TimeDistributed with these LSTMs, only with the convolutional layers.

Your approach of using model.add(TimeDistributed(Flatten())) is fine instead of the Reshape.
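Putting this together, a minimal end-to-end sketch of a working model (assuming grayscale frames, i.e. 1 channel, and a flexible batch size; for stateful=True you would switch to batch_input_shape as described above):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, LSTM, TimeDistributed

TIME_STEPS = 86       # frames per video
H, W, CH = 28, 28, 1  # assuming grayscale 28x28 frames

model = Sequential()
# 5D input: (batch, frames, height, width, channels)
model.add(TimeDistributed(Conv2D(64, (3, 3), padding='same', activation='relu'),
                          input_shape=(TIME_STEPS, H, W, CH)))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), padding='same')))
# Collapse height, width and channels into one feature vector per frame
model.add(TimeDistributed(Flatten()))  # -> (None, 86, 14*14*64)
# The LSTM consumes (batch, frames, features) directly; no TimeDistributed needed
model.add(LSTM(100, return_sequences=True))
model.add(Dense(1, activation='sigmoid'))
model.summary()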


Notice also that Keras has recently implemented a ConvLSTM2D layer, which might be useful in your case: https://keras.io/layers/recurrent/#convlstm2d
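For reference, a minimal sketch of that alternative (same assumed shapes as above):

from keras.models import Sequential
from keras.layers import ConvLSTM2D

model = Sequential()
# ConvLSTM2D performs the convolution inside the recurrence,
# so no TimeDistributed wrapper is needed.
model.add(ConvLSTM2D(64, (3, 3), padding='same', return_sequences=True,
                     input_shape=(86, 28, 28, 1)))
print(model.output_shape)  # (None, 86, 28, 28, 64)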

