This is a workaround for using GridSearchCV with a Keras model that takes multiple inputs. The trick is to merge all the inputs into a single array: you build a dummy model that receives a SINGLE input and then splits it back into the desired parts with Lambda layers. The procedure can easily be adapted to your own data structure.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Concatenate, Dense
from tensorflow.keras.models import Model
from sklearn.model_selection import GridSearchCV

def createMod(optimizer='Adam'):
    # single combined input of shape (None, 3)
    combi_input = Input((3,))
    # split the combined input back into its three original columns
    a_input = Lambda(lambda x: tf.expand_dims(x[:, 0], -1))(combi_input)  # (None, 1)
    b_input = Lambda(lambda x: tf.expand_dims(x[:, 1], -1))(combi_input)  # (None, 1)
    c_input = Lambda(lambda x: tf.expand_dims(x[:, 2], -1))(combi_input)  # (None, 1)
    ## do something
    c = Concatenate()([a_input, b_input, c_input])
    x = Dense(32)(c)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(combi_input, out)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
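If your original inputs are not all single columns, the same idea still works: concatenate them along the feature axis and slice them back out inside the model. Below is a minimal sketch under assumed shapes; createModWide and the widths 2 and 5 are hypothetical, not part of the original setup.

def createModWide(optimizer='Adam'):
    # assumed example: two original inputs with 2 and 5 features, merged into (None, 7)
    combi_input = Input((7,))
    a_input = Lambda(lambda x: x[:, :2])(combi_input)   # (None, 2)
    b_input = Lambda(lambda x: x[:, 2:])(combi_input)   # (None, 5)
    c = Concatenate()([a_input, b_input])
    x = Dense(32)(c)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(combi_input, out)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model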
## recreate multiple inputs
n_sample = 1000
a_input, b_input, c_input = [np.random.uniform(0,1, n_sample) for _ in range(3)]
y = np.random.randint(0,2, n_sample)
## merge inputs into a single array of shape (n_sample, 3)
combi_input = np.stack([a_input, b_input, c_input], axis=-1)
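Before running the search, a quick shape check confirms the merge did what you expect (one row per sample, one column per original input); the epochs/batch_size values below are just an assumed sanity check, not part of the search:

print(combi_input.shape)  # (1000, 3)
# optional sanity check: the dummy model trains directly on the merged array
createMod().fit(combi_input, y, epochs=1, batch_size=32, verbose=0)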
model = tf.keras.wrappers.scikit_learn.KerasClassifier(build_fn=createMod, verbose=0)

batch_size = [10, 20]
epochs = [10, 5]
optimizer = ['adam', 'SGD']
# optimizer is forwarded to createMod, so it can be searched like any other parameter
param_grid = dict(batch_size=batch_size, epochs=epochs, optimizer=optimizer)

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(combi_input, y)
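After the search finishes, the results are available through the standard scikit-learn attributes, for example:

print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for mean_score, params in zip(grid_result.cv_results_['mean_test_score'],
                              grid_result.cv_results_['params']):
    print(mean_score, params)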