I'm trying to write a huge amount of pickled data to disk in small pieces. Here is the example code:
from cPickle import *
from gc import collect

PATH = r'd:\test.dat'

@profile
def func(item):
    for e in item:
        f = open(PATH, 'a', 0)   # unbuffered append
        f.write(dumps(e))
        f.flush()
        f.close()
        del f
        collect()

if __name__ == '__main__':
    k = [x for x in xrange(9999)]
    func(k)
open() and close() are placed inside the loop to exclude possible causes of data accumulating in memory.
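For reference, here is the same loop written with a with block, which guarantees the handle is closed even if write() raises; this is only a sketch of an equivalent formulation of the code above, not a fix:

# Equivalent loop using a context manager: the file is closed
# automatically at the end of each with block.
def func(item):
    for e in item:
        with open(PATH, 'a', 0) as f:   # unbuffered append, as above
            f.write(dumps(e))
            f.flush()
        collect()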
To illustrate the problem, I attach the results of memory profiling obtained with the third-party Python module memory_profiler:
Line #    Mem usage    Increment   Line Contents
================================================
    14                             @profile
    15     9.02 MB      0.00 MB    def func(item):
    16     9.02 MB      0.00 MB        path = r'd:\test.dat'
    17
    18    10.88 MB      1.86 MB        for e in item:
    19    10.88 MB      0.00 MB            f = open(path, 'a', 0)
    20    10.88 MB      0.00 MB            f.write(dumps(e))
    21    10.88 MB      0.00 MB            f.flush()
    22    10.88 MB      0.00 MB            f.close()
    23    10.88 MB      0.00 MB            del f
    24                                     collect()
During execution of the loop, strange memory usage growth occurs. How can it be eliminated? Any thoughts?
As the amount of input data increases, the volume of this extra data can grow much larger than the input itself (upd: in my real task I get 300+ MB).
And a broader question: what are the proper ways to work with large amounts of I/O data in Python?
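To make the question concrete, here is a minimal sketch of the kind of streaming approach I have in mind: a single open handle fed through one cPickle.Pickler, with clear_memo() called after each object because the Pickler otherwise keeps references to everything it has already pickled. The function name stream_to_disk is just for illustration, and whether this actually avoids the growth above is exactly what I am unsure about:

# Sketch: stream many objects through one Pickler instance.
from cPickle import Pickler, HIGHEST_PROTOCOL

def stream_to_disk(items, path):
    f = open(path, 'wb')
    try:
        p = Pickler(f, HIGHEST_PROTOCOL)
        for e in items:
            p.dump(e)
            p.clear_memo()   # drop back-reference bookkeeping between objects
        f.flush()
    finally:
        f.close()

Reading it back would then be a matter of calling load() on the file repeatedly until EOFError.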
upd:
I rewrote the code, leaving only the loop body, to see exactly where the growth happens, and here are the results:
Line #    Mem usage    Increment   Line Contents
================================================
    14                             @profile
    15     9.00 MB      0.00 MB    def func(item):
    16     9.00 MB      0.00 MB        path = r'd:\test.dat'
    17
    18                                 #for e in item:
    19     9.02 MB      0.02 MB        f = open(path, 'a', 0)
    20     9.23 MB      0.21 MB        d = dumps(item)
    21     9.23 MB      0.00 MB        f.write(d)
    22     9.23 MB      0.00 MB        f.flush()
    23     9.23 MB      0.00 MB        f.close()
    24     9.23 MB      0.00 MB        del f
    25     9.23 MB      0.00 MB        collect()
It seems like dumps() eats the memory (while I actually thought it would be write()).
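If dumps() really is the culprit, one variant I plan to try is dump(), which serializes straight into the file object and skips building the intermediate string; again only a sketch, not a confirmed fix:

# Variant: write each element straight to the file, no intermediate string.
from cPickle import dump

def func(item):
    for e in item:
        f = open(PATH, 'a', 0)
        dump(e, f)        # serialize directly into the file object
        f.flush()
        f.close()
        collect()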