Here's the Python program that worked for me:
import scrapy

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        for url in response.xpath('//a/@href').getall():
            # Treat anything without a scheme as a site-relative link.
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print(url)
            yield scrapy.Request(url, callback=self.parse)
Save this in a file called spider.py.
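One caveat: the `URL + url` concatenation in parse() only works for root-relative links (those starting with `/`). If the site also uses directory-relative links, the standard library's `urllib.parse.urljoin` resolves both kinds correctly against the page being parsed; a minimal sketch (the `base` URL is just an example):

```python
from urllib.parse import urljoin

# Hypothetical page the spider might currently be parsing.
base = 'http://example.com/docs/index.html'

# Root-relative link: resolved against the site root.
print(urljoin(base, '/about'))       # http://example.com/about

# Directory-relative link: resolved against the current directory.
print(urljoin(base, 'page2.html'))   # http://example.com/docs/page2.html
```

In Scrapy, `response.urljoin(url)` does the same thing using the response's own URL as the base.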
You can then use a shell pipeline to post-process this text:
bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls
This gives me a list of all the unique URLs on my site.
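If you'd rather stay in Python, the same filtering can be done without the shell. This is a sketch of an equivalent of the grep/sort/uniq pipeline above (`filter_urls` is a hypothetical helper name, not part of Scrapy):

```python
def filter_urls(lines, domain='example.com'):
    """Pure-Python equivalent of the grep | sort | uniq pipeline:
    keep lines mentioning the domain, drop fragment (#) and mailto
    links, and return a sorted, de-duplicated list."""
    urls = {
        line.strip()
        for line in lines
        if domain in line and '#' not in line and 'mailto' not in line
    }
    return sorted(urls)

# Usage: filter the spider's output file.
# with open('urls.out') as f:
#     unique_urls = filter_urls(f)
```

A set comprehension handles the `sort | uniq` part in one pass.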