Currently, I am importing a bunch of .py files scattered across the file system via:
import imp
import os

def do_import(path):
    # Split "workingDir/import_one.py" into a search directory and a bare
    # module name, which is what imp.find_module expects.
    dir_name, file_name = os.path.split(path)
    mod_name = os.path.splitext(file_name)[0]
    fp, pathname, description = imp.find_module(mod_name, [dir_name])
    with fp:
        return imp.load_module(mod_name, fp, pathname, description)

known_py_files = ['workingDir/import_one.py', 'anotherDir/import_two.py']  # and so forth
for py_file in known_py_files:
    do_import(py_file)
When I time the body of each .py file itself, as shown below, the durations are on the order of 1e-5 to 1e-6 seconds.
import_one.py
import time

import_stime = time.time()
# ... the module's actual code runs here ...
import_dur = time.time() - import_stime
print(import_dur)
However, the call to do_import() itself is on the order of 1e-3 seconds. I am guessing this is the overhead of the import machinery. This is problematic for me because I'm importing many files serially, and the import time adds up.
Is there a faster way to import than the approach mentioned above?
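For reference, this is roughly how I measure the per-call cost. The sketch below uses `time.perf_counter` (better suited to short intervals than `time.time`) and times a stdlib module, `json`, as a stand-in for one of my files; the measured duration will of course vary by machine:

```python
# Sketch: timing a single import call with time.perf_counter.
# "json" is a stand-in for one of the scattered .py files.
import importlib
import time

start = time.perf_counter()
mod = importlib.import_module("json")
duration = time.perf_counter() - start
print("import took %.6f s" % duration)
```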