I need to compute Information Gain scores for >100k features across >10k documents for text classification. The code below works, but on the full dataset it is very slow: it takes more than an hour on a laptop. The dataset is 20newsgroups and I am using scikit-learn; the chi2 function that ships with scikit-learn runs extremely fast on the same data.
Any idea how to compute Information Gain faster for a dataset of this size?
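For comparison, the chi2 call that runs fast for me looks roughly like this (a minimal sketch; the vectorizer settings are placeholders, not my exact pipeline):

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

newsgroups = fetch_20newsgroups(subset='train')
x = CountVectorizer().fit_transform(newsgroups.data)  # sparse document-term matrix
y = newsgroups.target
chi2_scores, p_values = chi2(x, y)  # extremely fast even at this size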
import numpy as np

def information_gain(x, y):
    def _entropy(values):
        # entropy of the class distribution in `values`
        counts = np.bincount(values)
        probs = counts[np.nonzero(counts)] / float(len(values))
        return -np.sum(probs * np.log(probs))

    def _information_gain(feature, y):
        # split documents into those that contain the feature and those that do not
        feature_set_indices = np.nonzero(feature)[1]
        feature_not_set_indices = [i for i in feature_range if i not in feature_set_indices]
        entropy_x_set = _entropy(y[feature_set_indices])
        entropy_x_not_set = _entropy(y[feature_not_set_indices])
        return entropy_before - (((len(feature_set_indices) / float(feature_size)) * entropy_x_set)
                                 + ((len(feature_not_set_indices) / float(feature_size)) * entropy_x_not_set))

    feature_size = x.shape[0]           # number of documents
    feature_range = range(0, feature_size)
    entropy_before = _entropy(y)        # entropy of the class labels before any split
    information_gain_scores = []
    for feature in x.T:                 # one column (feature) of the sparse matrix at a time
        information_gain_scores.append(_information_gain(feature, y))
    return information_gain_scores, []
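It gets called on the same x and y as in the chi2 snippet above, roughly like this (and this is the call that takes over an hour on the full matrix):

ig_scores, _ = information_gain(x, y)
top20 = np.argsort(ig_scores)[::-1][:20]  # indices of the 20 highest-scoring features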
EDIT:
I merged the internal functions and ran cProfile as below (on a dataset limited to ~15k features and ~1k documents):
cProfile.runctx(
    """for feature in x.T:
    feature_set_indices = np.nonzero(feature)[1]
    feature_not_set_indices = [i for i in feature_range if i not in feature_set_indices]
    values = y[feature_set_indices]
    counts = np.bincount(values)
    probs = counts[np.nonzero(counts)] / float(len(values))
    entropy_x_set = - np.sum(probs * np.log(probs))
    values = y[feature_not_set_indices]
    counts = np.bincount(values)
    probs = counts[np.nonzero(counts)] / float(len(values))
    entropy_x_not_set = - np.sum(probs * np.log(probs))
    result = entropy_before - (((len(feature_set_indices) / float(feature_size)) * entropy_x_set)
        + ((len(feature_not_set_indices) / float(feature_size)) * entropy_x_not_set))
    information_gain_scores.append(result)""",
    globals(), locals())
Result (top 20 rows, sorted by tottime):
ncalls tottime percall cumtime percall filename:lineno(function)
1 60.27 60.27 65.48 65.48 <string>:1(<module>)
16171 1.362 0 2.801 0 csr.py:313(_get_row_slice)
16171 0.523 0 0.892 0 coo.py:201(_check)
16173 0.394 0 0.89 0 compressed.py:101(check_format)
210235 0.297 0 0.297 0 {numpy.core.multiarray.array}
16173 0.287 0 0.331 0 compressed.py:631(prune)
16171 0.197 0 1.529 0 compressed.py:534(tocoo)
16173 0.165 0 1.263 0 compressed.py:20(__init__)
16171 0.139 0 1.669 0 base.py:415(nonzero)
16171 0.124 0 1.201 0 coo.py:111(__init__)
32342 0.123 0 0.123 0 {method 'max' of 'numpy.ndarray' objects}
48513 0.117 0 0.218 0 sputils.py:93(isintlike)
32342 0.114 0 0.114 0 {method 'sum' of 'numpy.ndarray' objects}
16171 0.106 0 3.081 0 csr.py:186(__getitem__)
32342 0.105 0 0.105 0 {numpy.lib._compiled_base.bincount}
32344 0.09 0 0.094 0 base.py:59(set_shape)
210227 0.088 0 0.088 0 {isinstance}
48513 0.081 0 1.777 0 fromnumeric.py:1129(nonzero)
32342 0.078 0 0.078 0 {method 'min' of 'numpy.ndarray' objects}
97032 0.066 0 0.153 0 numeric.py:167(asarray)
It looks like most of the time is spent in _get_row_slice. I am not entirely sure about the first row; it seems to cover the whole block I passed to cProfile.runctx, but I don't understand why there is such a big gap between its tottime=60.27 and the second row's tottime=1.362. Where was the difference spent? Is it possible to check that in cProfile?
Basically, the problem seems to be with sparse matrix operations (slicing, getting individual elements). The solution would probably be to compute Information Gain using matrix algebra, the way chi2 is implemented in scikit-learn, but I have no idea how to express this calculation in terms of matrix operations. Does anyone have an idea?
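For reference, as far as I can tell chi2 avoids per-feature Python loops by building the whole class-by-feature count table with a single sparse matrix product. A simplified sketch of that idea (not scikit-learn's actual code, and not yet an Information Gain computation) looks like this:

import numpy as np
from scipy import sparse
from sklearn.preprocessing import LabelBinarizer

# x: sparse document-term matrix, y: class labels (as in the snippets above)
Y = LabelBinarizer().fit_transform(y)       # (n_docs, n_classes) indicator matrix
if Y.shape[1] == 1:                         # binary problems come back as a single column
    Y = np.hstack([1 - Y, Y])
observed = sparse.csr_matrix(Y).T.dot(x)    # (n_classes, n_features) co-occurrence counts

From a table like observed it should be possible to derive the per-feature conditional class distributions without slicing the matrix row by row, but I don't see how to turn that into the entropy terms above.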