python - How to reverse sklearn.OneHotEncoder transform to recover original data?

I encoded my categorical data using sklearn.OneHotEncoder and fed it to a random forest classifier. Everything seems to work, and I got my predicted output back.

Is there a way to reverse the encoding and convert my output back to its original state?


1 Reply


A good systematic way to figure this out is to start with some test data and work through the sklearn.OneHotEncoder source with it. If you don't much care about how it works and simply want a quick answer, skip to the bottom.

import numpy as np

X = np.array([
    [3, 10, 15, 33, 54, 55, 78, 79, 80, 99],
    [5, 1, 3, 7, 8, 12, 15, 19, 20, 8]
]).T

n_values_

Lines 1763-1786 of the sklearn source determine the n_values_ attribute. It is determined automatically if you set n_values='auto' (the default); alternatively, you can specify a maximum value for all features (an int) or a maximum value per feature (an array), as illustrated just after the next snippet. Let's assume we're using the default, so the following lines execute:

n_samples, n_features = X.shape    # 10, 2
n_values = np.max(X, axis=0) + 1   # [100, 21]
self.n_values_ = n_values
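For reference, a minimal sketch of those non-default choices with the same old n_values API (shown only for illustration; the maxima here are just the per-feature maxima + 1 from our test data):

from sklearn import preprocessing

# One shared maximum for every feature (int), or one maximum per feature (array).
enc_shared   = preprocessing.OneHotEncoder(n_values=100)
enc_per_feat = preprocessing.OneHotEncoder(n_values=[100, 21])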

feature_indices_

Next, the feature_indices_ attribute is calculated.

n_values = np.hstack([[0], n_values])  # [0, 100, 21]
indices = np.cumsum(n_values)          # [0, 100, 121]
self.feature_indices_ = indices

So feature_indices_ is merely the cumulative sum of n_values_ with a 0 prepended.
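In other words, columns feature_indices_[i] through feature_indices_[i+1] - 1 of the (uncompressed) encoded matrix belong to input feature i. A small sketch of that layout, rebuilt from the values above:

# Rebuild the same arrays to show the per-feature column blocks.
n_values = np.array([100, 21])
indices = np.cumsum(np.hstack([[0], n_values]))  # [  0 100 121]

for i in range(len(n_values)):
    print(f"feature {i}: columns {indices[i]}..{indices[i + 1] - 1}")
# feature 0: columns 0..99
# feature 1: columns 100..120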

Sparse Matrix Construction

Next, a scipy.sparse.coo_matrix is constructed from the data. It is initialized from three arrays: the sparse data (all ones), the row indices, and the column indices.

column_indices = (X + indices[:-1]).ravel()
# array([  3, 105,  10, 101,  15, 103,  33, 107,  54, 108,  55, 112,  78, 115,  79, 119,  80, 120,  99, 108])

row_indices = np.repeat(np.arange(n_samples, dtype=np.int32), n_features)
# array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9], dtype=int32)

data = np.ones(n_samples * n_features)
# array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1., 1.,  1.,  1.,  1.,  1.,  1.,  1.])

out = sparse.coo_matrix((data, (row_indices, column_indices)),
                        shape=(n_samples, indices[-1]),
                        dtype=self.dtype).tocsr()
# <10x121 sparse matrix of type '<type 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format>

Note that the coo_matrix is immediately converted to a scipy.sparse.csr_matrix. The coo_matrix is used as an intermediate format because it "facilitates fast conversion among sparse formats."
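To see the structure concretely, you can densify a single row of the matrix built above (before the compression step it is 10x121, so each row holds exactly two ones, one per feature block):

row0 = out.getrow(0).toarray().ravel()
print(np.nonzero(row0)[0])  # [  3 105] -> value 3 in feature 0, value 5 (+ offset 100) in feature 1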

active_features_

Now, if n_values='auto', the sparse csr matrix is compressed down to only the columns with active features. The sparse csr_matrix is returned if sparse=True, otherwise it is densified before returning.

if self.n_values == 'auto':
    mask = np.array(out.sum(axis=0)).ravel() != 0
    active_features = np.where(mask)[0]  # array([  3,  10,  15,  33,  54,  55,  78,  79,  80,  99, 101, 103, 105, 107, 108, 112, 115, 119, 120])
    out = out[:, active_features]  # <10x19 sparse matrix of type '<type 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format>
    self.active_features_ = active_features

return out if self.sparse else out.toarray()
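For completeness, with sparse=False the same compressed result comes back as a dense ndarray rather than a csr_matrix (a quick sketch using the test data above):

from sklearn import preprocessing

dense = preprocessing.OneHotEncoder(sparse=False).fit_transform(X)
print(dense.shape)  # (10, 19) - the 19 active columns, densified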

Decoding

Now let's work in reverse. We'd like to know how to recover X given the sparse output matrix and the fitted OneHotEncoder attributes detailed above. Let's assume we actually ran the code above by instantiating a new OneHotEncoder and running fit_transform on our data X.

from sklearn import preprocessing
ohc = preprocessing.OneHotEncoder()  # all default params
out = ohc.fit_transform(X)
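If you are following along, the fitted attributes should match the values derived by hand above (this relies on the old OneHotEncoder API, which exposes n_values_, feature_indices_ and active_features_):

print(ohc.n_values_)         # [100  21]
print(ohc.feature_indices_)  # [  0 100 121]
print(out.shape)             # (10, 19) - only the 19 active columns are kept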

The key insight to solving this problem is understanding the relationship between active_features_ and out.indices. For a csr_matrix, the indices array contains the column numbers for each data point. However, these column numbers are not guaranteed to be sorted. To sort them, we can use the sorted_indices method.

out.indices  # array([12,  0, 10,  1, 11,  2, 13,  3, 14,  4, 15,  5, 16,  6, 17,  7, 18, 8, 14,  9], dtype=int32)
out = out.sorted_indices()
out.indices  # array([ 0, 12,  1, 10,  2, 11,  3, 13,  4, 14,  5, 15,  6, 16,  7, 17,  8, 18,  9, 14], dtype=int32)

We can see that before sorting, the indices within each row are actually in reverse order: the column for the last feature comes first and the column for the first feature comes last. This is evident from the first two entries, [12, 0]. 0 corresponds to the 3 in the first column of X: since 3 is the minimum element, it was assigned to the first active column. 12 corresponds to the 5 in the second column of X. Since the first column of X occupies 10 distinct active columns, the minimum element of the second column (1) gets index 10, the next smallest (3) gets index 11, and the third smallest (5) gets index 12. After sorting, the indices are ordered as we would expect.

Next we look at active_features_:

ohc.active_features_  # array([  3,  10,  15,  33,  54,  55,  78,  79,  80,  99, 101, 103, 105, 107, 108, 112, 115, 119, 120])

Notice that there are 19 elements, which corresponds to the number of distinct values in our data (one value, the 8 in the second column, is repeated and so contributes only one column). Notice also that they are arranged in order: the values from the first column of X are unchanged, and the values from the second column have simply been offset by 100, which is ohc.feature_indices_[1].
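This offset structure also lets you read off the categories seen for each feature directly from active_features_ (a small sketch reusing the fitted ohc from above):

fi = ohc.feature_indices_
for i in range(len(fi) - 1):
    # Active columns in feature i's block, shifted back by the block offset.
    in_block = (ohc.active_features_ >= fi[i]) & (ohc.active_features_ < fi[i + 1])
    print(f"feature {i} categories:", ohc.active_features_[in_block] - fi[i])
# feature 0 categories: [ 3 10 15 33 54 55 78 79 80 99]
# feature 1 categories: [ 1  3  5  7  8 12 15 19 20]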

Looking back at out.indices, we can see that the maximum column number is 18, which is one less than the 19 active features in our encoding. A little thought about the relationship here shows that the indices of ohc.active_features_ correspond to the column numbers in out.indices. With this, we can decode:

import numpy as np
decode_columns = np.vectorize(lambda col: ohc.active_features_[col])
decoded = decode_columns(out.indices).reshape(X.shape)

This gives us:

array([[  3, 105],
       [ 10, 101],
       [ 15, 103],
       [ 33, 107],
       [ 54, 108],
       [ 55, 112],
       [ 78, 115],
       [ 79, 119],
       [ 80, 120],
       [ 99, 108]])

And we can get back to the original feature values by subtracting off the offsets from ohc.feature_indices_:

recovered_X = decoded - ohc.feature_indices_[:-1]
recovered_X
array([[ 3,  5],
       [10,  1],
       [15,  3],
       [33,  7],
       [54,  8],
       [55, 12],
       [78, 15],
       [79, 19],
       [80, 20],
       [99,  8]])

Note that you will need to have the original shape of X, which is simply (n_samples, n_features).
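A quick check confirms the round trip (reusing recovered_X and the original X from above):

assert np.array_equal(recovered_X, X)  # the original data is recovered exactly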

TL;DR

Given a fitted sklearn.OneHotEncoder instance called ohc, the encoded data out (the scipy.sparse.csr_matrix returned by ohc.fit_transform or ohc.transform), and the shape of the original data (n_samples, n_features), recover the original data X with:

recovered_X = np.array(
    [ohc.active_features_[col] for col in out.sorted_indices().indices]
).reshape(n_samples, n_features) - ohc.feature_indices_[:-1]
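If you need this in more than one place, it can be wrapped in a small helper (decode_one_hot is a hypothetical name; it assumes the old sklearn API that exposes active_features_ and feature_indices_):

import numpy as np

def decode_one_hot(ohc, out, n_samples, n_features):
    """Reverse an old-style OneHotEncoder transform.

    ohc : fitted sklearn.preprocessing.OneHotEncoder (pre-0.22 API)
    out : the scipy.sparse.csr_matrix returned by fit_transform/transform
    """
    cols = out.sorted_indices().indices   # active-column index for each stored entry
    decoded = ohc.active_features_[cols]  # back to global (offset) column numbers
    return decoded.reshape(n_samples, n_features) - ohc.feature_indices_[:-1]

recovered_X = decode_one_hot(ohc, out, *X.shape)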
