I am manually creating my dataset from a number of 384x286 b/w images.
I load an image like this:
import numpy as np
from PIL import Image

x = []
for f in files:  # 'files' is my list of image paths
    img = Image.open(f)
    img.load()  # force PIL to read the pixel data
    data = np.asarray(img, dtype="int32")
    x.append(data)
x = np.array(x)
This results in x being an array of shape (num_samples, 286, 384):
print(x.shape)  # => (100, 286, 384)
Reading the Keras documentation and checking my backend, I understand I should provide the convolution step with an input_shape of (rows, cols, channels).
Since I don't know the number of samples in advance, I would have expected to pass an input shape similar to
(None, 286, 384, 1)
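If I read the documentation correctly, input_shape excludes the batch axis, so my understanding of the shape bookkeeping is roughly this sketch (assuming a channels_last backend; I may be misreading it):

input_shape = (286, 384, 1)  # per-sample shape; the batch axis is implied
# the model would then expect batches shaped (batch_size, 286, 384, 1)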
The model is built as follows:
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=input_shape))
# other steps...
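For reference, a standard Keras check I know of to see what the model actually expects (it reports the full shape, including the implicit batch axis):

print(model.input_shape)  # prints (None, 286, 384, 1) when built with input_shape=(286, 384, 1)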
Passing input_shape=(286, 384, 1) causes:
Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (85, 286, 384)
Passing input_shape=(None, 286, 384, 1) causes:
Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
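I suspect the array just needs an explicit channel axis, something like the NumPy sketch below, but I am not sure this is the right approach:

# hypothetical fix: append a channel axis so each sample
# becomes (286, 384, 1) instead of (286, 384)
x = np.expand_dims(x, axis=-1)
print(x.shape)  # (100, 286, 384, 1)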
What am I doing wrong? How do I have to reshape the input array?