New answer
I think what you're looking for is exactly L2 regularization. Just create a regularizer and add it to the layers:

from keras.regularizers import l2

# in the target layers (Dense, Conv2D, etc.):
layer = Dense(units, ..., kernel_regularizer=l2(some_coefficient))
You can use bias_regularizer as well. The some_coefficient value is multiplied by the sum of the squared weights and added to the loss.
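For instance, a minimal sketch penalizing both the weights and the biases of one layer (the layer size, activation, and 0.01 coefficients are arbitrary examples, not values from your code):

from keras.layers import Dense
from keras.regularizers import l2

# L2 penalty on both the kernel (weights) and the bias.
# The 0.01 coefficients are placeholders; tune them for your problem.
layer = Dense(64, activation="relu",
              kernel_regularizer=l2(0.01),
              bias_regularizer=l2(0.01))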
PS: if val in your code is constant, it should not harm your loss. But you can still use the old answer below for val.
Old answer
Wrap the Keras expected function (with two parameters) into an outer function with your needs:
from keras import backend as K
from keras.losses import mse

def customLoss(layer_weights, val=0.01):
    def lossFunction(y_true, y_pred):
        loss = mse(y_true, y_pred)
        # penalty: val times the summed absolute values of the
        # per-row squared-weight sums
        loss += val * K.sum(K.abs(K.sum(K.square(layer_weights), axis=1)))
        return loss
    return lossFunction

model.compile(loss=customLoss(weights, 0.03), optimizer=..., metrics=...)
Notice that layer_weights must come directly from the layer as a "tensor", so you can't use get_weights(); you must go with someLayer.kernel and someLayer.bias (or the respective attribute name, for layers that use different names for their trainable parameters).
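For example, a minimal sketch of fetching the kernel tensor and compiling with it, assuming an already-built model containing a Dense layer named "dense_1" (the layer name and the 0.03 coefficient are placeholders):

# Grab the layer and its trainable weight tensor directly.
target_layer = model.get_layer("dense_1")

# Pass the kernel tensor (not get_weights()) into the wrapper.
model.compile(loss=customLoss(target_layer.kernel, 0.03),
              optimizer="adam")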
The answer here shows how to deal with that if your external vars vary with the batches: How to define custom cost function that depends on input when using ImageDataGenerator in Keras?