We can use nunique
to count the number of unique values in each column of the DataFrame, and drop the columns that have only a single unique value:
In [285]:
nunique = df.nunique()
cols_to_drop = nunique[nunique == 1].index
df.drop(cols_to_drop, axis=1)
Out[285]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
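For reference, here is a self-contained sketch of the nunique approach on made-up data (the original df isn't shown in full here, and the extra const column is my own addition so there is something to drop):
import pandas as pd

# Hypothetical frame: 'const' holds a single repeated value and should go.
df = pd.DataFrame({
    "id": [345, 12, 2],
    "name": ["name1", "name2", "name6"],
    "data1": [3, 2, 7],
    "const": [0, 0, 0],
})

nunique = df.nunique()                       # distinct values per column
cols_to_drop = nunique[nunique == 1].index   # columns with exactly one distinct value
df = df.drop(cols_to_drop, axis=1)           # removes only 'const'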
Another way is to just diff
the numeric columns, take the abs
values, and sum
them:
In [298]:
import numpy as np
cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().abs().sum()
df.drop(diff[diff == 0].index, axis=1)
Out[298]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
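This works because a constant column differences to all zeros (the first entry is NaN, which sum skips by default), so its absolute sum is 0. One caveat: an all-NaN numeric column also sums to 0 and would be dropped too. A minimal sketch wrapping the idea in a helper (the function name is mine, not from the original):
import numpy as np
import pandas as pd

def drop_constant_numeric(df: pd.DataFrame) -> pd.DataFrame:
    """Drop numeric columns whose consecutive differences are all zero."""
    cols = df.select_dtypes([np.number]).columns
    diff = df[cols].diff().abs().sum()            # 0 for constant (or all-NaN) columns
    return df.drop(diff[diff == 0].index, axis=1)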
Another approach is to use the fact that the standard deviation of a column is zero when all of its values are the same:
In [300]:
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
cols_to_drop = std[std == 0].index
df.drop(cols_to_drop, axis=1)
Out[300]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
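Note that both the diff and the std checks only look at numeric columns, so a constant string column slips through, while the nunique check catches any dtype. A small sketch on made-up data to illustrate (numeric_only=True is needed on newer pandas so std skips the string column):
import pandas as pd

# Hypothetical frame with a constant string column.
df = pd.DataFrame({"data1": [3, 2, 7], "tag": ["x", "x", "x"]})

std = df.std(numeric_only=True)
df.drop(std[std == 0].index, axis=1)          # keeps 'tag' -- std never sees it

nunique = df.nunique()
df.drop(nunique[nunique == 1].index, axis=1)  # drops 'tag'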
Actually, the above can be done as a one-liner:
In [306]:
df.drop(df.std()[(df.std() == 0)].index, axis=1)
Out[306]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
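One caveat with the one-liner: on recent pandas versions df.std() raises for non-numeric columns unless numeric_only=True is passed, so a variant that should still work there (a sketch, not the code from the session above) is:
std = df.std(numeric_only=True)
df.drop(std[std == 0].index, axis=1)
This also avoids computing df.std() twice.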