I tried to pass other scoring metrics, such as balanced_accuracy for binary classification (instead of the default accuracy), to GridSearchCV:
scoring = ['balanced_accuracy', 'recall', 'roc_auc', 'f1', 'precision']
validator = GridSearchCV(estimator=clf, param_grid=param_grid, scoring=scoring, refit=refit_scorer, cv=cv)
and got this error:
ValueError: 'balanced_accuracy' is not a valid scoring value. Valid
options are
['accuracy','adjusted_mutual_info_score','adjusted_rand_score','average_precision','completeness_score','explained_variance','f1','f1_macro','f1_micro','f1_samples','f1_weighted','fowlkes_mallows_score','homogeneity_score','mutual_info_score','neg_log_loss','neg_mean_absolute_error','neg_mean_squared_error','neg_mean_squared_log_error','neg_median_absolute_error','normalized_mutual_info_score','precision','precision_macro','precision_micro','precision_samples','precision_weighted','r2','recall','recall_macro','recall_micro','recall_samples','recall_weighted','roc_auc','v_measure_score']
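As far as I can tell, the valid-options list in that error is built from the scorer registry, which can be inspected directly. A minimal sketch (I'm assuming sklearn.metrics.SCORERS is the same registry the message is generated from):

from sklearn.metrics import SCORERS  # registry of named scorers

# print the scorer names my installation actually knows about
print(sorted(SCORERS.keys()))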
This is strange, because 'balanced_accuracy' should be a valid scoring value according to the documentation.
If I remove 'balanced_accuracy' from the list, the code works fine:
scoring = ['recall', 'roc_auc', 'f1', 'precision']
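For reference, here is a minimal self-contained sketch of the working multi-metric setup. The estimator, param_grid, and data below are placeholders made up for illustration; only the scoring and refit arguments mirror my actual call:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# toy binary classification data, purely for illustration
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression()
param_grid = {'C': [0.1, 1.0, 10.0]}

# multiple scoring metrics; refit must name one of them when scoring is a list
scoring = ['recall', 'roc_auc', 'f1', 'precision']
validator = GridSearchCV(estimator=clf, param_grid=param_grid,
                         scoring=scoring, refit='roc_auc', cv=5)
validator.fit(X, y)
print(validator.best_params_)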
Also, the scoring metrics in the error message above seem to differ from the ones listed in the documentation.
Any ideas why? Thank you so much.
My scikit-learn version is 0.19.2.
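For reference, the installed version can be printed like this:

import sklearn
print(sklearn.__version__)  # prints 0.19.2 on my machine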