Note that when talking about the accuracy of a single class, you may mean either of two quantities, which are not equivalent (a small numpy sketch below illustrates the difference):
- The recall, which, for class C, is the fraction of examples labelled as class C that are predicted to be class C.
- The precision, which, for class C, is the fraction of examples predicted to be class C that are in fact labelled as class C.
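To make the distinction concrete, here is a tiny numpy sketch; the label and prediction arrays are made up purely for illustration:

import numpy as np

y_true = np.array([0, 0, 0, 1, 1, 2])  # ground-truth class ids
y_pred = np.array([0, 1, 0, 0, 0, 2])  # predicted class ids

C = 0
recall_c = np.mean(y_pred[y_true == C] == C)     # among true C's, how many were predicted C -> 2/3
precision_c = np.mean(y_true[y_pred == C] == C)  # among predicted C's, how many are truly C -> 2/4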
Instead of doing complex indexing, you can simply rely on masking for your computation. The metric below computes precision; changing it to recall is trivial (see the comment in the code).
from keras import backend as K

INTERESTING_CLASS_ID = 0  # Choose the class of interest

def single_class_accuracy(y_true, y_pred):
    class_id_true = K.argmax(y_true, axis=-1)
    class_id_preds = K.argmax(y_pred, axis=-1)
    # Replace class_id_preds with class_id_true for recall here
    accuracy_mask = K.cast(K.equal(class_id_preds, INTERESTING_CLASS_ID), 'int32')
    class_acc_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * accuracy_mask
    class_acc = K.sum(class_acc_tensor) / K.maximum(K.sum(accuracy_mask), 1)
    return class_acc
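You can pass this function directly to compile like any other custom metric; the remaining compile arguments are elided here:

model.compile(..., metrics=[single_class_accuracy])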
If you want to be more flexible, you can also have the class of interest parametrised:
from keras import backend as K

def single_class_accuracy(interesting_class_id):
    def fn(y_true, y_pred):
        class_id_true = K.argmax(y_true, axis=-1)
        class_id_preds = K.argmax(y_pred, axis=-1)
        # Replace class_id_preds with class_id_true for recall here
        accuracy_mask = K.cast(K.equal(class_id_preds, interesting_class_id), 'int32')
        class_acc_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * accuracy_mask
        class_acc = K.sum(class_acc_tensor) / K.maximum(K.sum(accuracy_mask), 1)
        return class_acc
    return fn
And then use it as:
model.compile(..., metrics=[single_class_accuracy(INTERESTING_CLASS_ID)])
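For completeness, here is the recall variant hinted at by the comment in the code above (an illustrative helper reusing the same K import; the only change is that the mask is built from the true class ids instead of the predicted ones):

def single_class_recall(interesting_class_id):
    def fn(y_true, y_pred):
        class_id_true = K.argmax(y_true, axis=-1)
        class_id_preds = K.argmax(y_pred, axis=-1)
        # Mask on the true class ids instead of the predicted ones
        recall_mask = K.cast(K.equal(class_id_true, interesting_class_id), 'int32')
        class_recall_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * recall_mask
        return K.sum(class_recall_tensor) / K.maximum(K.sum(recall_mask), 1)
    return fn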