You can use sklearn's StratifiedKFold. From the online docs:
Stratified K-Folds cross-validation iterator
Provides train/test indices to split data into train/test sets. This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class.
>>> import numpy as np
>>> from sklearn import cross_validation
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> skf = cross_validation.StratifiedKFold(y, n_folds=2)
>>> len(skf)
2
>>> print(skf)
sklearn.cross_validation.StratifiedKFold(labels=[0 0 1 1], n_folds=2,
shuffle=False, random_state=None)
>>> for train_index, test_index in skf:
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [1 3] TEST: [0 2]
TRAIN: [0 2] TEST: [1 3]
This will preserve your class ratios within each split, and it works fine with pandas DataFrames.
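In newer scikit-learn versions the cross_validation module was replaced by model_selection: StratifiedKFold takes n_splits in the constructor and the labels are passed to its split() method instead. A minimal sketch with a pandas DataFrame (the column names are illustrative):

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy DataFrame: three samples of each class (names are made up for the example)
df = pd.DataFrame({
    "feat1": [1, 3, 1, 3, 1, 3],
    "feat2": [2, 4, 2, 4, 2, 4],
    "label": [0, 0, 0, 1, 1, 1],
})
X = df[["feat1", "feat2"]]
y = df["label"]

skf = StratifiedKFold(n_splits=3)
for train_index, test_index in skf.split(X, y):
    # split() yields positional indices, so index the DataFrame with .iloc
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    # stratification keeps the 50/50 class ratio: one sample of each class per test fold
    print(sorted(y_test.tolist()))
```

Using .iloc matters here: the indices returned by split() are positions, not DataFrame index labels, so label-based indexing would break on a DataFrame with a non-default index.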
As suggested by @Ali_m, you could use StratifiedShuffleSplit, which accepts a split-ratio parameter:
sss = StratifiedShuffleSplit(y, 3, test_size=0.7, random_state=0)
would put 70% of the samples in the test set of each split, stratified by class.
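With the model_selection API the same idea looks like this; again the labels move to split(), and test_size=0.7 controls the fraction of samples landing in the test side (the toy arrays below are made up for the example):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy data: 10 samples, 5 per class
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# Three independent shuffled splits, each with 70% of the data held out as test
sss = StratifiedShuffleSplit(n_splits=3, test_size=0.7, random_state=0)
for train_index, test_index in sss.split(X, y):
    print(len(train_index), len(test_index))
```

Unlike StratifiedKFold, the test sets of different splits can overlap here, since each split is an independent stratified shuffle rather than a partition into folds.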