python - Make Pandas groupby act similarly to itertools groupby

Suppose I have a Python dict of lists, di, like so:

di = {'Grp':  ['2'   , '6'   , '6'   , '5'   , '5'   , '6'   , '6'   , '7'   , '7'   , '6'],
      'Nums': ['6.20', '6.30', '6.80', '6.45', '6.55', '6.35', '6.37', '6.36', '6.78', '6.33']}

I can easily group the numbers and group key using itertools.groupby:

from itertools import groupby
for k, l in groupby(zip(di['Grp'], di['Nums']), key=lambda t: t[0]):
    print(k, [t[1] for t in l])

Prints:

2 ['6.20']
6 ['6.30', '6.80']      # one field, key=6
5 ['6.45', '6.55']
6 ['6.35', '6.37']      # second
7 ['6.36', '6.78']
6 ['6.33']              # third

Note that the 6 key is split into three separate groups (fields).

Now suppose I have the Pandas DataFrame equivalent of my dict (same data, same row order, same keys):

  Grp  Nums
0   2  6.20
1   6  6.30
2   6  6.80
3   5  6.45
4   5  6.55
5   6  6.35
6   6  6.37
7   7  6.36
8   7  6.78
9   6  6.33
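
For completeness, a minimal sketch of how this frame could be built from the dict above (assuming di is the dict defined earlier):

import pandas as pd

# Build the DataFrame from the dict of lists; an explicit column list keeps
# Grp before Nums regardless of the dict's key ordering.
df = pd.DataFrame(di, columns=['Grp', 'Nums'])
print(df)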

If I use Pandas' groupby, I don't see how to get the same run-by-run iteration. Instead, Pandas groups by key value:

for e in df.groupby('Grp'):
    print e

Prints:

('2',   Grp  Nums
0   2  6.20)
('5',   Grp  Nums
3   5  6.45
4   5  6.55)
('6',   Grp  Nums
1   6  6.30            
2   6  6.80                # rows 1-2: first run
5   6  6.35                # rows 5-6: second run
6   6  6.37                 
9   6  6.33)               # row 9: third run
('7',   Grp  Nums
7   7  6.36
8   7  6.78)

Note that all the rows with key 6 are bunched together, not kept as separate runs.

My question: Is there an equivalent way to use Pandas' groupby so that 6, for example, would be in three groups in the same fashion as Python's groupby?

I tried this:

>>> import numpy as np
>>> df.reset_index().groupby('Grp')['index'].apply(lambda x: np.array(x))
Grp
2                [0]
5             [3, 4]
6    [1, 2, 5, 6, 9]         # I *could* do a second groupby on this...
7             [7, 8]
Name: index, dtype: object

But it is still grouped by the overall Grp key, and I would need to do a second grouping on each ndarray to split out the sub-groups of each key, as sketched below.
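
For illustration only, here is one hypothetical way that second split could look, using the standard itertools consecutive-run trick on each index array (note the runs would come out in sorted-key order, not the original row order):

from itertools import groupby
import numpy as np

# Hypothetical follow-up to the attempt above: for each Grp key, split its
# index array into runs of consecutive labels, then pull the Nums for each run.
idx_by_key = df.reset_index().groupby('Grp')['index'].apply(lambda x: np.array(x))
for key, idx in idx_by_key.items():
    # label minus position is constant within a run of consecutive labels
    for _, run in groupby(enumerate(idx), key=lambda t: t[1] - t[0]):
        labels = [i for _, i in run]
        print(key, df.loc[labels, 'Nums'].tolist())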



1 Reply


First, identify which elements in the Grp column differ from the previous one, then take the cumulative sum of that boolean series to form the group labels you need:

In [9]:
    diff_to_previous = df.Grp != df.Grp.shift(1)
    diff_to_previous.cumsum()
Out[9]:

0    1
1    2
2    2
3    3
4    3
5    4
6    4
7    5
8    5
9    6

So you can then do

df.groupby(diff_to_previous.cumsum()) 

to get the desired groupby object.
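
Putting it together, a minimal sketch of the run-by-run iteration, assuming df is the frame built from the dict above (so the Nums values are strings):

# Each sub-frame is one consecutive run of equal Grp values.
diff_to_previous = df.Grp != df.Grp.shift(1)
for _, sub in df.groupby(diff_to_previous.cumsum()):
    print(sub['Grp'].iloc[0], sub['Nums'].tolist())

# Prints the same runs as the itertools.groupby version:
# 2 ['6.20']
# 6 ['6.30', '6.80']
# 5 ['6.45', '6.55']
# 6 ['6.35', '6.37']
# 7 ['6.36', '6.78']
# 6 ['6.33']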

