I'm really finding it hard to understand the input dimensions of the 1D convolutional layer in Keras:
Input shape
3D tensor with shape: (samples, steps, input_dim).
Output shape
3D tensor with shape: (samples, new_steps, nb_filter). steps value might have changed due to padding.
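One reading I've tried (I'm not sure it's right) is that each of my 101 prices is one step carrying a single feature, so input_dim = 1. A minimal sketch of that reading, with the shapes I'd expect as comments:

from keras.models import Sequential
from keras.layers import Convolution1D

m = Sequential()
m.add(Convolution1D(nb_filter=64, filter_length=10, input_shape=(101, 1)))
print m.input_shape   # (None, 101, 1): (samples, steps, input_dim)
print m.output_shape  # (None, 92, 64): (samples, new_steps, nb_filter),
                      # where new_steps = 101 - 10 + 1 = 92 for 'valid' padding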
I want my network to take in a time series of prices (101 of them, in order) and output 4 probabilities. My current non-convolutional network, which does this fairly well (with a training set of 28000 examples), looks like this:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

standardModel = Sequential()
standardModel.add(Dense(input_dim=101, output_dim=100, W_regularizer=l2(0.5), activation='sigmoid'))  # 101 prices -> 100 hidden units
standardModel.add(Dense(4, W_regularizer=l2(0.7), activation='softmax'))  # 4 output probabilities
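For reference, I compile and fit it roughly like this (a sketch; X and y are placeholder names for my (28000, 101) price matrix and (28000, 4) one-hot labels):

standardModel.compile(loss='categorical_crossentropy', optimizer='sgd')
standardModel.fit(X, y, nb_epoch=20, batch_size=32)  # X: (28000, 101), y: (28000, 4)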
To improve this, I want to make a feature map from the input layer with a local receptive field of length 10 (and therefore 10 shared weights and 1 shared bias). I then want to use max pooling, feed the result into a hidden layer of 40 or so neurons, and output via 4 softmax neurons in the output layer.
[picture of the intended architecture omitted (it's quite awful, sorry!)]
So ideally, the convolutional layer would take a 2d tensor of dimensions:
(minibatch_size, 101)
and output a 3d tensor of dimensions
(minibatch_size, 91, no_of_featuremaps)
However, the Keras layer seems to require a dimension in the input called steps, which I've tried to understand but still don't quite get. In my case, should steps be 1, since each step in the vector is an increase in time by 1? Also, what is new_steps?
In addition, how do you turn the output of the pooling layers (a 3d tensor) into input suitable for the standard hidden layer (i.e. a Dense Keras layer), which expects a 2d tensor?
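From digging around the docs, my best guess is that Flatten() is meant for exactly this, something like the sketch below (the shapes in the comments are what I'd expect, assuming the steps/input_dim reading above):

from keras.models import Sequential
from keras.layers import Convolution1D, MaxPooling1D, Flatten, Dense

model = Sequential()
model.add(Convolution1D(64, 10, input_shape=(101, 1)))  # (None, 92, 64)
model.add(MaxPooling1D(2))                              # (None, 46, 64), still 3d
model.add(Flatten())                                    # (None, 46 * 64) = (None, 2944), 2d
model.add(Dense(40))                                    # a standard hidden layer can now consume it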
Update: After the very helpful suggestions given, I tried making a convolutional network like so:
from keras.layers import Convolution1D, MaxPooling1D, Flatten, Activation

conv = Sequential()
conv.add(Convolution1D(64, 10, input_shape=(1,101)))  # 64 filters of length 10
conv.add(Activation('relu'))
conv.add(MaxPooling1D(2))
conv.add(Flatten())
conv.add(Dense(10))
conv.add(Activation('tanh'))
conv.add(Dense(4))
conv.add(Activation('softmax'))
The line conv.add(Flatten()) throws a "range exceeds valid bounds" error. Interestingly, this error is not thrown for just this code:
conv = Sequential()
conv.add(Convolution1D(64, 10, input_shape=(1,101)))
conv.add(Activation('relu'))
conv.add(MaxPooling1D(2))
conv.add(Flatten())
doing
print conv.input_shape
print conv.output_shape
results in
(None, 1, 101)
(None, -256)

being returned. (My guess: with input_shape=(1,101) there is only 1 step, so the length-10 convolution gives 1 - 10 + 1 = -8 "steps", pooling halves that to -4, and flattening multiplies by the 64 filters: -4 × 64 = -256.)
Update 2:
Changed
conv.add(Convolution1D(64, 10, input_shape=(1,101)))
to
conv.add(Convolution1D(10, 10, input_shape=(101,1)))
and it started working. However, is there any important difference between feeding (None, 101, 1) or (None, 1, 101) to a 1d conv layer that I should be aware of? Why does (None, 1, 101) not work?
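For completeness, here's the full version that works for me now, with the training data given an explicit trailing feature axis (a sketch; X is a placeholder name for my (28000, 101) array of price series):

import numpy as np

X_train = np.expand_dims(X, axis=2)  # (28000, 101) -> (28000, 101, 1): 101 steps, 1 feature each

conv = Sequential()
conv.add(Convolution1D(10, 10, input_shape=(101,1)))  # output: (None, 92, 10)
conv.add(Activation('relu'))
conv.add(MaxPooling1D(2))                             # output: (None, 46, 10)
conv.add(Flatten())                                   # output: (None, 460)
conv.add(Dense(10))
conv.add(Activation('tanh'))
conv.add(Dense(4))
conv.add(Activation('softmax'))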