Update
Being more knowledgeable about ML today than I was 2.5 years ago, I will now say that this approach only works for highly linear decision problems. If you carelessly apply it to a non-linear problem, you will have trouble.
Example: Imagine a feature for which neither very large nor very small values predict a class, but values in some intermediate interval do. That could be water intake predicting dehydration. But water intake probably interacts with salt intake, since eating more salt allows for a greater water intake. Now you have an interaction between two non-linear features. The decision boundary meanders through your feature space to model this non-linearity, and asking only how much one of the features influences the risk of dehydration simply ignores that interaction. It is not the right question.
Alternative: Another, more meaningful, question you could ask is: if I didn't have this information (if I left out this feature), how much would my prediction of a given label suffer? To do this, you simply leave out a feature, train a model, and look at how much precision and recall drop for each of your classes. It still informs you about feature importance, but it makes no assumptions about linearity. A sketch of this idea follows below.
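To make that concrete, here is a minimal sketch of the drop-one-feature idea, assuming a generic scikit-learn classifier (RandomForestClassifier is used purely as a placeholder) and a simple train/test split; the function name drop_column_importance and its defaults are illustrative, not part of any library.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

def drop_column_importance(X, Y, model_factory=RandomForestClassifier):
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

    # Baseline per-class precision and recall with all features present.
    base = model_factory().fit(X_train, Y_train)
    base_p, base_r, _, _ = precision_recall_fscore_support(
        Y_test, base.predict(X_test), zero_division=0)

    out = {}
    for j in range(X.shape[1]):
        # Retrain without feature j and record how much precision and
        # recall drop for each class (arrays indexed by class).
        keep = [k for k in range(X.shape[1]) if k != j]
        m = model_factory().fit(X_train[:, keep], Y_train)
        p, r, _, _ = precision_recall_fscore_support(
            Y_test, m.predict(X_test[:, keep]), zero_division=0)
        out[j] = {"precision_drop": base_p - p, "recall_drop": base_r - r}
    return out

A larger drop means the model relied more on that feature for that class; a negative "drop" just means the retrained model happened to do slightly better without it.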
Below is the old answer.
I worked through a similar problem a while back and posted the same question on Cross Validated. The short answer is that there is no implementation in sklearn that does all of what you want.
However, what you are trying to achieve is really quite simple, and can be done by multiplying the standardised mean value of each feature, split on each class, with the corresponding element of the model.feature_importances_ array. You can write a simple function that standardises your dataset, computes the mean of each feature split across class predictions, and does an element-wise multiplication with the model.feature_importances_ array. The greater the absolute values of the results, the more important the features are to their predicted class, and better yet, the sign tells you whether it is small or large values that are important.
Here's a super simple implementation that takes a data matrix X, a vector of predictions Y, and an array of feature importances, and outputs a dictionary describing the importance of each feature for each class.
import numpy as np
from sklearn.preprocessing import scale

def class_feature_importance(X, Y, feature_importances):
    N, M = X.shape  # N samples, M features
    X = scale(X)    # standardise each feature to zero mean and unit variance
    out = {}
    for c in set(Y):
        # Mean of each standardised feature within class c, weighted by the
        # model's global feature importances; keys are feature indices 0..M-1.
        out[int(c)] = dict(
            zip(range(M), np.mean(X[Y == c, :], axis=0) * feature_importances)
        )
    return out
Example:
import numpy as np
import json
from sklearn.preprocessing import scale
X = np.array([[ 2, 2, 2, 0, 3, -1],
[ 2, 1, 2, -1, 2, 1],
[ 0, -3, 0, 1, -2, 0],
[-1, -1, 1, 1, -1, -1],
[-1, 0, 0, 2, -3, 1],
[ 2, 2, 2, 0, 3, 0]], dtype=float)
Y = np.array([0, 0, 1, 1, 1, 0])
feature_importances = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])
# feature_importances = model.feature_importances_
result = class_feature_importance(X, Y, feature_importances)
print(json.dumps(result, indent=4))
{
"0": {
"0": 0.097014250014533204,
"1": 0.16932975630904751,
"2": 0.27854300726557774,
"3": -0.17407765595569782,
"4": 0.0961523947640823,
"5": 0.0
},
"1": {
"0": -0.097014250014533177,
"1": -0.16932975630904754,
"2": -0.27854300726557779,
"3": 0.17407765595569782,
"4": -0.0961523947640823,
"5": 0.0
}
}
The first level of keys in result are class labels, and the second level of keys are column indices, i.e. feature indices. Recall that large absolute values correspond to importance, and the sign tells you whether it's small (possibly negative) or large values that matter.
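For completeness, this is roughly how you would plug in a real fitted model rather than the hard-coded importances above; the snippet assumes a tree-based classifier such as scikit-learn's RandomForestClassifier, whose feature_importances_ attribute supplies the importances array.

from sklearn.ensemble import RandomForestClassifier

# Fit any model that exposes feature_importances_, then feed its predictions
# and importances into class_feature_importance defined above.
model = RandomForestClassifier(random_state=0).fit(X, Y)
result = class_feature_importance(X, model.predict(X), model.feature_importances_)
print(json.dumps(result, indent=4))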