
scroll - how to scrape with scrapy on an infinite-scroll page without next page information

I need to scrape a URL with Scrapy, but I can't scroll down the page to load all the elements.

I searched for the next-page information (a link or pagination parameter) but couldn't find it.

My spider code is:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from appinformatica.items import appinformaticaItem

import w3lib.html


class appinformaticaSpider(CrawlSpider):
    name = 'appinformatica'
    item_count = 0
    start_urls = ['https://www.appinformatica.com/telefonos/moviles/']
    rules = (
        # Only picks up the product links present in the initial HTML;
        # follow=False stops the crawl from going any deeper than those pages.
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//*[@class="info-ficha"]/div[1]/a')),
             callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        item = appinformaticaItem()
        self.item_count += 1
        item['Modelo'] = w3lib.html.remove_tags(response.xpath('//h1').get(default=''))
        item['Position'] = self.item_count
        item['Precio'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[1]/div/div[1]').get(default=''))
        item['PrecioTienda'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[1]/div/div[2]').get(default=''))
        item['Stock'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[3]/p[3]').get(default=''))
        item['Submodelo'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[3]/p[2]/strong[2]').get(default=''))
        item['Url'] = response.url
        yield item

Can anyone help me?


1 Reply

Change allow to allow=(r'/moviles/.*\.html',) with follow=True, set your allowed_domains, and try this:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
# from appinformatica.items import appinformaticaItem

import w3lib.html


class appinformaticaSpider(CrawlSpider):
    name = 'appinformatica'
    allowed_domains = ['appinformatica.com']
    item_count = 0
    start_urls = ['https://www.appinformatica.com/telefonos/moviles/']
    rules = (
        # Match every product URL under /moviles/ (dot escaped so it only
        # matches ".html"), and keep following links from each crawled page
        # instead of stopping at the first listing.
        Rule(LinkExtractor(allow=(r'/moviles/.*\.html',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        self.item_count += 1
        item['Modelo'] = w3lib.html.remove_tags(response.xpath('//h1').get(default=''))
        item['Position'] = self.item_count
        item['Precio'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[1]/div/div[1]').get(default=''))
        item['PrecioTienda'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[1]/div/div[2]').get(default=''))
        item['Stock'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[3]/p[3]').get(default=''))
        item['Submodelo'] = w3lib.html.remove_tags(response.xpath('//*[@id="ficha-producto"]/div[2]/div[3]/p[2]/strong[2]').get(default=''))
        item['Url'] = response.url
        yield item
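
With follow=True the CrawlSpider keeps applying the rule to every response it visits, so product links that only appear on already-crawled pages get queued automatically, without any next-page link. You can then run the spider and export the items with Scrapy's built-in feed export, for example:

scrapy crawl appinformatica -o moviles.json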

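If the listing really does load more products only as you scroll, the usual trick is to open the browser's DevTools Network tab, scroll, and copy the background request the page makes, then have Scrapy call that endpoint directly. Below is a minimal sketch of that pattern; the ?page=N parameter is a hypothetical placeholder, not the site's real endpoint, and must be replaced with the request you see in DevTools:

import scrapy


class MovilesPagesSpider(scrapy.Spider):
    # Sketch only: the ?page={n} URL is a placeholder -- copy the actual
    # paginated request from the browser's Network tab.
    name = 'moviles_pages'
    allowed_domains = ['appinformatica.com']
    base = 'https://www.appinformatica.com/telefonos/moviles/?page={n}'

    def start_requests(self):
        yield scrapy.Request(self.base.format(n=1), cb_kwargs={'n': 1})

    def parse(self, response, n):
        links = response.xpath('//*[@class="info-ficha"]/div[1]/a/@href').getall()
        if not links:
            return  # an empty page means there is nothing left to "scroll" to
        yield from response.follow_all(links, callback=self.parse_item)
        # Request the next chunk, mimicking one more scroll step.
        yield scrapy.Request(self.base.format(n=n + 1), cb_kwargs={'n': n + 1})

    def parse_item(self, response):
        yield {
            'Url': response.url,
            'Modelo': response.xpath('normalize-space(//h1)').get(default=''),
        }

The stop condition here is simply "the page came back empty"; real endpoints differ (some return JSON, some repeat the last page forever), so adapt it to whatever the actual request returns.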