
python - Find value counts within a pandas dataframe of strings

I want to get the frequency count of strings within a column. On one hand, this is similar to collapsing the dataframe down to the set of unique strings that appear in each column. I was able to solve this with a loop, but I know there is a better solution.

Example df:

       2017-08-09  2017-08-10
id                                                             
0             pre         pre   
2      active_1-3    active_1   
3        active_1    active_1   
4      active_3-7  active_3-7   
5        active_1    active_1

And I want to get out:

       2017-08-09  2017-08-10
pre             1           1
active_1        2           3
active_1-3      1           0
active_3-7      1           1
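
For reference, the example frame can be rebuilt like this (a minimal sketch; the id index values follow the sample above):

import pandas as pd

df = pd.DataFrame(
    {'2017-08-09': ['pre', 'active_1-3', 'active_1', 'active_3-7', 'active_1'],
     '2017-08-10': ['pre', 'active_1', 'active_1', 'active_3-7', 'active_1']},
    index=pd.Index([0, 2, 3, 4, 5], name='id'),
)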

I searched a lot of forums but couldn't find a good answer.

I'm assuming a pivot_table approach is the right one, but I couldn't get the right arguments to collapse a table that doesn't have an obvious index for the output df.
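
(For what it's worth, one pivot-style route is to melt the frame and count with pd.crosstab; a minimal sketch, where 'date' and 'status' are just illustrative column names:)

long = pd.melt(df, var_name='date', value_name='status')   # one row per (date, status) observation
out = pd.crosstab(long['status'], long['date'])             # counts of each status per date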

I was able to get this to work by iterating over each column, using value_counts(), and appending each value count series into a new dataframe, but I know there is a better solution.

output_df = pd.DataFrame()
for col in date_cols:                        # date_cols = the date columns of df
    new_values = df[col].value_counts()      # frequency of each string in this column
    output_df = pd.concat([output_df, new_values], axis=1)

Thanks!


1 Reply


You can apply pd.Series.value_counts to each column (thanks to Jon for the improvement), i.e.

ndf = df.apply(pd.Series.value_counts).fillna(0)

           2017-08-09  2017-08-10
active_1             2         3.0
active_1-3           1         0.0
active_3-7           1         1.0
pre                  1         1.0
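
If you would rather keep the counts as integers (the column that had a NaN comes back as float after fillna), you could cast afterwards; a small variation on the same idea:

ndf = df.apply(pd.Series.value_counts).fillna(0).astype(int)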

Timings:

k = pd.concat([df]*1000)

%%timeit
# @cᴏʟᴅsᴘᴇᴇᴅ's method (get_dummies on the transpose)
pd.get_dummies(k.T).groupby(by=lambda x: x.split('_', 1)[1], axis=1).sum().T
1 loop, best of 3: 5.68 s per loop

%%timeit
# @cᴏʟᴅsᴘᴇᴇᴅ's method (stack + get_dummies)
k.stack().str.get_dummies().sum(level=1).T
10 loops, best of 3: 84.1 ms per loop

%%timeit
# My method
k.apply(pd.Series.value_counts).fillna(0)
100 loops, best of 3: 7.57 ms per loop

%%timeit
# FabienP's method
k.unstack().groupby(level=0).value_counts().unstack().T.fillna(0)
100 loops, best of 3: 7.35 ms per loop

%%timeit
# @Wen's method (fastest for now)
pd.concat([pd.Series(collections.Counter(k[x])) for x in df.columns], axis=1)
100 loops, best of 3: 4 ms per loop
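
For completeness, the Counter-based timing above assumes the standard-library import; a self-contained sketch of that variant on the original df (the 0, 1, ... labels that concat gives unnamed Series are renamed back to the dates):

import collections

import pandas as pd

counts = pd.concat(
    [pd.Series(collections.Counter(df[c])) for c in df.columns],
    axis=1,
)
counts.columns = df.columns   # restore the date column names
counts = counts.fillna(0)     # categories missing from a column become 0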
