Secondly, the docs say: "The logging service must be explicitly started through the scrapy.log.start() function." My question is: where do I run this scrapy.log.start()? Is it inside my spider?
If you run a spider using scrapy crawl my_spider, the log is started automatically, provided LOG_ENABLED = True. If you start the crawler process manually, you can call scrapy.log.start() before starting the crawler process:
from scrapy import log
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess

settings.overrides.update({})  # your settings

crawlerProcess = CrawlerProcess(settings)
crawlerProcess.install()
crawlerProcess.configure()
crawlerProcess.crawl(spider)  # your spider here
log.start()  # only logs if LOG_ENABLED is True
print("Starting crawler.")
crawlerProcess.start()  # blocks until the crawl is finished
print("Crawler stopped.")
The little knowledge I have about your first question: because you have to start the Scrapy log manually, you are free to use your own logger instead. I think you can copy the module scrapy/scrapy/log.py from the Scrapy sources, modify it, import it instead of scrapy.log, and run its start(); Scrapy will then use your log. Inside its start() function there is a line that says log.startLoggingWithObserver(sflo.emit, setStdout=logstdout). Make your own observer (http://docs.python.org/howto/logging-cookbook.html#logging-to-multiple-destinations) and use it there; a sketch follows.
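A minimal sketch of such an observer, assuming you want to forward Scrapy's messages to the standard logging module; the name my_observer and the basicConfig setup are made up for illustration:

import logging
from twisted.python import log as twisted_log

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')

def my_observer(event):
    # Twisted log events are dicts: 'message' holds a tuple of parts,
    # 'isError' flags failures.
    text = ' '.join(str(part) for part in event.get('message', ()))
    if event.get('isError'):
        logging.error(text or str(event.get('failure', '')))
    else:
        logging.info(text)

# In your modified copy of log.py, start() would then call this
# observer instead of sflo.emit:
# twisted_log.startLoggingWithObserver(my_observer, setStdout=False)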