You are observing the typical issue with finalizers in garbage-collected languages. Java has them, C# has them, and both provide a scope-based cleanup construct (try-with-resources, using) analogous to the Python with keyword to deal with it.
The main issue is that the garbage collector is responsible for cleaning up and destroying objects. In C++ an object gets destroyed when it goes out of scope, so you can use RAII and have well-defined semantics. In Python the object goes out of scope but lives on for as long as the GC likes. How long that is depends on your Python implementation: CPython, with its refcounting-based GC, is rather benign (so you rarely see issues), while PyPy, IronPython and Jython might keep an object alive for a very long time.
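To make that concrete, here is a minimal sketch (the Noisy class is purely illustrative) that prints when its finalizer runs. On CPython the message appears as soon as the last reference is dropped; on PyPy, IronPython or Jython it may only appear much later, whenever the GC eventually collects the object:

class Noisy(object):
    def __del__(self):
        # Called whenever the GC decides to finalize the object --
        # immediately on CPython, at some unspecified later point
        # (possibly interpreter shutdown) on other implementations.
        print('finalized')

obj = Noisy()
obj = None  # drop the only reference
print('reference dropped')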
For example:
def bad_code(filename):
    return open(filename, 'r').read()

for i in xrange(10000):
    bad_code('some_file.txt')
bad_code leaks a file handle. In CPython it doesn't matter: the refcount drops to zero and the file object is deleted right away. In PyPy or IronPython you might get IOErrors or similar issues, as you exhaust all available file descriptors (up to the ulimit on Unix or 509 handles on Windows).
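For comparison, here is a sketch of the same function rewritten with a context manager (good_code is just an illustrative name). The file is closed deterministically when the with block exits, regardless of which Python implementation runs it:

def good_code(filename):
    # The context manager closes the file as soon as the block exits,
    # even if read() raises, so no file descriptor is leaked.
    with open(filename, 'r') as f:
        return f.read()

for i in xrange(10000):
    good_code('some_file.txt')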
Scope-based cleanup with a context manager and with is preferable if you need to guarantee cleanup: you know exactly when your objects will be finalized. But sometimes you cannot enforce this kind of scoped cleanup easily. That's when you might use __del__, atexit or similar constructs to make a best effort at cleaning up. It is not reliable, but it is better than nothing.
You can either burden your users with explicit cleanup or enforce explicit scopes, or you can take the gamble with __del__ and accept some oddities now and then (especially during interpreter shutdown).
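One common compromise, sketched below with a hypothetical Resource class, is to offer an explicit close() and the context-manager protocol for users who want deterministic cleanup, and keep __del__ only as a last-ditch fallback:

class Resource(object):
    def __init__(self, filename):
        self._f = open(filename, 'r')

    def close(self):
        # Explicit, deterministic cleanup for users who call it.
        f = getattr(self, '_f', None)
        if f is not None:
            f.close()
            self._f = None

    # Context-manager protocol, so callers can write
    # "with Resource('some_file.txt') as r:" and get scoped cleanup.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

    def __del__(self):
        # The gamble: may run late, or not at all during interpreter shutdown.
        self.close()

Users who use with or call close() get deterministic behaviour; everyone else falls back to whenever the GC gets around to calling __del__.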