
python - Why does pickle eat memory?

I am trying to write a huge amount of pickled data to disk in small pieces. Here is the example code:

from cPickle import *
from gc import collect

PATH = r'd:\test.dat'

@profile
def func(item):
    for e in item:
        # open unbuffered, write one pickled element, then close,
        # so nothing can accumulate in a file buffer between iterations
        f = open(PATH, 'a', 0)
        f.write(dumps(e))
        f.flush()
        f.close()
        del f
        collect()

if __name__ == '__main__':
    k = [x for x in xrange(9999)]
    func(k)

open() and close() are placed inside the loop to rule out accumulation of data in memory as a possible cause.

To illustrate the problem, I attach the results of memory profiling obtained with the third-party Python module memory_profiler:

   Line #    Mem usage  Increment   Line Contents
==============================================
    14                           @profile
    15      9.02 MB    0.00 MB   def func(item):
    16      9.02 MB    0.00 MB       path = r'd:\test.dat'
    17
    18     10.88 MB    1.86 MB       for e in item:
    19     10.88 MB    0.00 MB           f = open(path, 'a', 0)
    20     10.88 MB    0.00 MB           f.write(dumps(e))
    21     10.88 MB    0.00 MB           f.flush()
    22     10.88 MB    0.00 MB           f.close()
    23     10.88 MB    0.00 MB           del f
    24                                   collect()

During execution of the loop, strange memory-usage growth occurs. How can it be eliminated? Any thoughts?

As the amount of input data increases, the volume of this extra memory can grow much larger than the input itself (update: in my real task I get 300+ MB).

And a broader question: what are the proper ways to work with large amounts of I/O data in Python?

Update: I rewrote the code, leaving only the loop body, to see exactly when the growth happens; here are the results:

Line #    Mem usage  Increment   Line Contents
==============================================
    14                           @profile
    15      9.00 MB    0.00 MB   def func(item):
    16      9.00 MB    0.00 MB       path = r'd:\test.dat'
    17
    18                               #for e in item:
    19      9.02 MB    0.02 MB       f = open(path, 'a', 0)
    20      9.23 MB    0.21 MB       d = dumps(item)
    21      9.23 MB    0.00 MB       f.write(d)
    22      9.23 MB    0.00 MB       f.flush()
    23      9.23 MB    0.00 MB       f.close()
    24      9.23 MB    0.00 MB       del f
    25      9.23 MB    0.00 MB       collect()

It seems that dumps() eats the memory (while I actually expected it to be write()).
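
To check whether the intermediate string is the culprit, a variant I am considering (just a sketch, not measured yet; func_stream is only an illustrative name) is to stream each object straight to the file object with dump() instead of building a string with dumps():

from cPickle import dump

def func_stream(item, path):
    # dump() writes the pickle opcodes directly to the file object,
    # so no full pickled string has to be held in memory per element
    f = open(path, 'ab')
    for e in item:
        dump(e, f, 2)   # protocol 2: compact binary pickles
    f.close()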



1 Reply


Pickle consumes a lot of RAM; see the explanation here: http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5adsf/

Why does Pickle consume so much more memory? The reason is that HDF is a binary data pipe, while Pickle is an object serialization protocol. Pickle actually consists of a simple virtual machine (VM) that translates an object into a series of opcodes and writes them to disk. To unpickle something, the VM reads and interprets the opcodes and reconstructs an object. The downside of this approach is that the VM has to construct a complete copy of the object in memory before it writes it to disk.
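
A tiny illustration of that point (my own sketch, assuming CPython 2 with cPickle, as in the question): dumps() must return the entire pickle as one string, so for a moment both the original object and its serialized copy sit in memory side by side:

import cPickle

data = [x for x in xrange(10 ** 6)]   # a list of a million ints
s = cPickle.dumps(data, 2)            # whole pickle string built in RAM before any I/O
print len(s), 'bytes of pickle held in memory alongside the list'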

Pickle is great for small use cases or testing, because in most cases the memory consumption doesn't matter much.

For intensive work where you have to dump and load a lot of files and/or big files, you should consider using another way to store your data (e.g. HDF, or writing your own serialize/deserialize methods for your objects, ...). See the sketch below.
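
For instance, here is a minimal sketch of the "write your own serializer" route (my own example, not from the linked article), assuming the items are plain integers as in the question:

from struct import pack, unpack, calcsize

FMT = '<i'                # one 32-bit little-endian integer per record
REC = calcsize(FMT)

def save_ints(path, items):
    # each item becomes a fixed-size binary record; no pickle opcodes,
    # no intermediate serialized copy of the whole list
    with open(path, 'wb') as f:
        for e in items:
            f.write(pack(FMT, e))

def load_ints(path):
    with open(path, 'rb') as f:
        data = f.read()
    return [unpack(FMT, data[i:i + REC])[0] for i in xrange(0, len(data), REC)]

Usage would be something like save_ints(r'd:\test.dat', range(9999)) followed by load_ints(r'd:\test.dat'); the trade-off is that this only handles one fixed record type, whereas pickle handles arbitrary objects.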

