Suppose I have a Python dict of lists like so:
di = {'Grp': ['2', '6', '6', '5', '5', '6', '6', '7', '7', '6'],
      'Nums': ['6.20', '6.30', '6.80', '6.45', '6.55', '6.35', '6.37', '6.36', '6.78', '6.33']}
I can easily group the numbers by their group key using itertools.groupby:
from itertools import groupby
for k, l in groupby(zip(di['Grp'], di['Nums']), key=lambda t: t[0]):
    print(k, [t[1] for t in l])
Prints:
2 ['6.20']
6 ['6.30', '6.80'] # one field, key=6
5 ['6.45', '6.55']
6 ['6.35', '6.37'] # second
7 ['6.36', '6.78']
6 ['6.33'] # third
Note that the 6 key is separated into three distinct groups, or "fields".
Now suppose I have the Pandas DataFrame equivalent of my dict (same data, same row order, with the keys as column names):
Grp Nums
0 2 6.20
1 6 6.30
2 6 6.80
3 5 6.45
4 5 6.55
5 6 6.35
6 6 6.37
7 7 6.36
8 7 6.78
9 6 6.33
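For reference, the frame can be built straight from the dict above (a minimal sketch; I pass the column order explicitly):
import pandas as pd

# Build the DataFrame from the dict of lists; row order is preserved.
df = pd.DataFrame(di, columns=['Grp', 'Nums'])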
If I use Pandas' groupby, I don't see how to get run-by-run iteration. Instead, Pandas collects all rows with the same key value:
for e in df.groupby('Grp'):
    print(e)
Prints:
('2', Grp Nums
0 2 6.20)
('5', Grp Nums
3 5 6.45
4 5 6.55)
('6', Grp Nums
1 6 6.30
2 6 6.80 # rows 1-2: first field
5 6 6.35 # rows 5-6: second field
6 6 6.37
9 6 6.33) # row 9: third field
('7', Grp Nums
7 7 6.36
8 7 6.78)
Note that the 6 rows are all bunched together under one key, not kept as three separate groups.
My question: Is there an equivalent way to use Pandas' groupby so that 6, for example, would end up in three groups, in the same fashion as Python's itertools.groupby?
I tried this:
>>> import numpy as np
>>> df.reset_index().groupby('Grp')['index'].apply(lambda x: np.array(x))
Grp
2 [0]
5 [3, 4]
6 [1, 2, 5, 6, 9] # I *could* do a second groupby on this...
7 [7, 8]
Name: index, dtype: object
But it is still grouped by the overall Grp key, and I would need a second groupby over each ndarray to split out the sub-groups of each key.
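One workaround I can sketch is to label the consecutive runs myself with a shift/cumsum counter and group on that instead of on Grp (run_id below is just a name I made up for the helper):

# Increment the id each time 'Grp' changes, so every consecutive run gets its own id.
run_id = (df['Grp'] != df['Grp'].shift()).cumsum()
for _, sub in df.groupby(run_id):
    print(sub['Grp'].iloc[0], list(sub['Nums']))

That iterates run by run like itertools.groupby does, but it feels roundabout, so I am still hoping groupby itself has a more direct way to do this.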