caching - How can I memoize a class instantiation in Python?

OK, here is the real-world scenario: I'm writing an application, and I have a class that represents a certain type of file (in my case photographs, but that detail is irrelevant to the problem). Each instance of the Photograph class should be unique to the photo's filename.

The problem is that when a user tells my application to load a file, I need to be able to identify when a file is already loaded and use the existing instance for that filename, rather than creating duplicate instances for the same filename.

To me this seems like a good situation for memoization, and there are plenty of examples of that out there, but in this case I'm not just memoizing an ordinary function; I need to memoize __init__(). This poses a problem, because by the time __init__() gets called it's already too late: a new instance has already been created.
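
To illustrate (a simplified sketch, not my actual class): even if __init__() notices that the filename has been seen before, it cannot hand the caller the existing instance, because a fresh object has already been allocated by the time __init__() runs, and __init__() cannot return a replacement:

class Photograph(object):
    _seen = {}

    def __init__(self, filename):
        if filename in self._seen:
            # Too late: we can detect the duplicate here, but we cannot
            # swap `self` for the cached instance from inside __init__().
            return
        self._seen[filename] = self
        self.filename = filename

a = Photograph('foo.jpg')
b = Photograph('foo.jpg')
print(a is b)   # False -- a second instance was created anyway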

In my research I found Python's __new__() method, and I was actually able to write a working trivial example, but it fell apart when I tried to use it on my real-world objects, and I'm not sure why (the only thing I can think of is that my real-world objects were subclasses of other objects that I can't really control, so there were some incompatibilities with this approach). This is what I had:

class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            print('making a new one!')
            self.flubid = flubid
        print(id(self))
        return self

    @staticmethod
    def destroy_all():
        for flub in Flub.instances.values():
            print('killing', flub)


a = Flub('foo')
b = Flub('foo')
c = Flub('bar')

print(a)
print(b)
print(c)
print(a is b, b is c)

Flub.destroy_all()

This produced the following output:

making a new one!
139958663753808
139958663753808
making a new one!
139958663753872
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb090>
True False
killing <__main__.Flub object at 0x7f4aaa6fb050>
killing <__main__.Flub object at 0x7f4aaa6fb090>

It's perfect! Only two instances were made for the two unique IDs given, and Flub.instances clearly only has two entries.

But when I tried to take this approach with the objects I was using, I got all kinds of nonsensical errors about how __init__() took only 0 arguments, not 2. So I'd change some things around and then it would tell me that __init__() needed an argument. Totally bizarre.

After a while of fighting with it, I basically just gave up and moved all the __new__() black magic into a staticmethod called get, such that I could call Photograph.get(filename) and it would only call Photograph(filename) if filename wasn't already in Photograph.instances.
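
Roughly, that workaround looks like this (a simplified sketch, not my real Photograph class):

class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

    @staticmethod
    def get(filename):
        # Only construct a new Photograph if we haven't seen this filename yet.
        try:
            return Photograph.instances[filename]
        except KeyError:
            photo = Photograph.instances[filename] = Photograph(filename)
            return photo

So Photograph.get('foo.jpg') always returns the same instance, but calling Photograph('foo.jpg') directly still creates duplicates.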

Does anybody know where I went wrong here? Is there some better way to do this?

Another way of thinking about it is that it's similar to a singleton, except it's not a global singleton, just a singleton per filename.

Here's my real-world code using the staticmethod get if you want to see it all together.


1 Reply

Let us look at two points about your question.

Using memoization

You can use memoization, but you should decorate the class, not the __init__ method. Suppose we have this memoizer:

def get_id_tuple(f, args, kwargs, mark=object()):
    """
    Some quick'n'dirty way to generate a unique key for a specific call.
    """
    key = [id(f)]
    for arg in args:
        key.append(id(arg))
    # `mark` is a sentinel that separates positional from keyword arguments.
    key.append(id(mark))
    for k, v in sorted(kwargs.items()):
        key.append(k)
        key.append(id(v))
    return tuple(key)

_memoized = {}
def memoize(f):
    """ 
    Some basic memoizer
    """
    def memoized(*args, **kwargs):
        key = get_id_tuple(f, args, kwargs)
        if key not in _memoized:
            _memoized[key] = f(*args, **kwargs)
        return _memoized[key]
    return memoized

Now you just need to decorate the class:

@memoize
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

Let us run a test:

tests = [Test(1), Test(2), Test(3), Test(2), Test(4)]
for test in tests:
    print(test.somevalue, id(test))

The output is below. Note that the same parameters yield the same id for the returned object:

1 3072319660
2 3072319692
3 3072319724
2 3072319692
4 3072319756

Anyway, I would prefer to create a function to generate the objects and memoize it. It seems cleaner to me, but that may be an irrelevant pet peeve of mine:

class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

@memoize
def get_test_from_value(somevalue):
    return Test(somevalue)
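
Usage is the same as before; a quick check (note that the memoizer keys on id(), so this relies on CPython reusing the same small-int object for 2):

t1 = get_test_from_value(2)
t2 = get_test_from_value(2)
print(t1 is t2)   # True -- the memoized factory returns the cached Test instance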

Using __new__

Or, of course, you can override __new__. Some days ago I posted an answer about the ins, outs, and best practices of overriding __new__ that may be helpful. Basically, it says to always pass *args and **kwargs to your __new__ method.
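
For completeness, here is a minimal sketch of that approach for a class keyed by filename, the way the question describes (the names here are just illustrative):

class Photograph(object):
    _instances = {}

    def __new__(cls, filename, *args, **kwargs):
        # Accept (and ignore) extra arguments so that subclasses with richer
        # __init__ signatures still work.
        try:
            return cls._instances[filename]
        except KeyError:
            self = cls._instances[filename] = super(Photograph, cls).__new__(cls)
            return self

    def __init__(self, filename):
        # Beware: __init__ still runs on every call, even when __new__
        # returned a cached instance.
        self.filename = filename

p1 = Photograph('foo.jpg')
p2 = Photograph('foo.jpg')
print(p1 is p2)   # True -- both names refer to the cached instance

One caveat: since __init__ is re-run for cached instances, it should be cheap and idempotent, or the caching should be moved into a factory function as above.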

I, for one, would prefer to memoize a function that creates the objects, or even to write a specific function that takes care of never recreating an object for the same parameters. Of course, this is mostly an opinion of mine, not a rule.

