
python - Scrapy, scraping data loaded by JavaScript

I am using Scrapy to screen-scrape data from a website. However, the data I want isn't inside the HTML itself; instead, it is generated by JavaScript. So, my question is:

How do I get the values (text values) in such cases?

This is the site I'm trying to screen-scrape: https://www.mcdonalds.com.sg/locate-us/

Attributes I'm trying to get: Address, Contact, Operating hours.

If you right-click and choose "View Source" in a Chrome browser, you will see that those values aren't available in the HTML itself.


Edit

Sorry Paul, I did what you told me to: I found admin-ajax.php and saw the body, but I'm really stuck now.

How do I retrieve the values from the JSON object and store them in item fields of my own? It would be good if you could show how to do it for just one attribute, for the benefit of people who have just started with Scrapy as well.

Here's my code so far

Items.py

from scrapy.item import Item, Field

class McDonaldsItem(Item):
    name = Field()
    address = Field()
    postal = Field()
    hours = Field()

McDonalds.py

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
import json
import pprint
import re

from fastfood.items import McDonaldsItem

class McDonaldSpider(BaseSpider):
    name = "mcdonalds"
    allowed_domains = ["mcdonalds.com.sg"]
    start_urls = ["https://www.mcdonalds.com.sg/locate-us/"]

    def parse_json(self, response):
        js = json.loads(response.body)
        pprint.pprint(js)

Sorry for the long edit. In short, how do I store the JSON value in my item attribute? For example:

item['address'] = ...  # how do I retrieve the value here?

P.S. Not sure if this helps, but I run these scripts on the command line using

scrapy crawl mcdonalds -o McDonalds.json -t json    (to save all my data into a JSON file)

I cannot stress enough how thankful I am. I know it's kind of unreasonable to ask this of you; it will be totally okay even if you don't have time for this.


1 Reply


(I posted this to the scrapy-users mailing list, but at Paul's suggestion I'm posting it here, as it complements the answer with the shell-command interaction.)

Generally, websites that use a third-party service to render a data visualization (map, table, etc.) have to send the data somehow, and in most cases this data is accessible from the browser.

In this case, inspection (i.e. exploring the requests made by the browser) shows that the data is loaded via a POST request to https://www.mcdonalds.com.sg/wp-admin/admin-ajax.php

So, basically, you have all the data you want there, in a nice JSON format, ready for consumption.
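If you want to confirm this outside of Scrapy first, here's a minimal sketch using the requests library (my own choice here, not part of the original setup; any HTTP client that can send a form-encoded POST works). It simply replays the same POST and prints part of the JSON:

import requests  # assumption: any HTTP client able to send a form-encoded POST would do

url = 'https://www.mcdonalds.com.sg/wp-admin/admin-ajax.php'
payload = {'action': 'ws_search_store_location',
           'store_name': '0', 'store_area': '0', 'store_type': '0'}

# The endpoint expects a form-encoded POST and answers with JSON.
resp = requests.post(url, data=payload)
data = resp.json()

print(len(data['stores']['listing']))   # number of store records returned
print(data['stores']['listing'][0])     # one record: name, address, op_hours, phone, ...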

Scrapy provides the shell command, which is very convenient for tinkering with the website before writing the spider:

$ scrapy shell https://www.mcdonalds.com.sg/locate-us/
2013-09-27 00:44:14-0400 [scrapy] INFO: Scrapy 0.16.5 started (bot: scrapybot)
...

In [1]: from scrapy.http import FormRequest

In [2]: url = 'https://www.mcdonalds.com.sg/wp-admin/admin-ajax.php'

In [3]: payload = {'action': 'ws_search_store_location', 'store_name':'0', 'store_area':'0', 'store_type':'0'}

In [4]: req = FormRequest(url, formdata=payload)

In [5]: fetch(req)
2013-09-27 00:45:13-0400 [default] DEBUG: Crawled (200) <POST https://www.mcdonalds.com.sg/wp-admin/admin-ajax.php> (referer: None)
...

In [6]: import json

In [7]: data = json.loads(response.body)

In [8]: len(data['stores']['listing'])
Out[8]: 127

In [9]: data['stores']['listing'][0]
Out[9]: 
{u'address': u'678A Woodlands Avenue 6<br/>#01-05<br/>Singapore 731678',
 u'city': u'Singapore',
 u'id': 78,
 u'lat': u'1.440409',
 u'lon': u'103.801489',
 u'name': u"McDonald's Admiralty",
 u'op_hours': u'24 hours<br>Dessert Kiosk: 0900-0100',
 u'phone': u'68940513',
 u'region': u'north',
 u'type': [u'24hrs', u'dessert_kiosk'],
 u'zip': u'731678'}

In short: in your spider you have to return the FormRequest(...) above; then, in the callback, load the JSON object from response.body; and finally, for each store record in the list data['stores']['listing'], create an item with the desired values.

Something like this:

import json

from scrapy.http import FormRequest
from scrapy.spider import BaseSpider

from fastfood.items import McDonaldsItem


class McDonaldSpider(BaseSpider):
    name = "mcdonalds"
    allowed_domains = ["mcdonalds.com.sg"]
    start_urls = ["https://www.mcdonalds.com.sg/locate-us/"]

    def parse(self, response):
        # This receives the response from the start url, but we don't do anything with it.
        url = 'https://www.mcdonalds.com.sg/wp-admin/admin-ajax.php'
        payload = {'action': 'ws_search_store_location', 'store_name': '0', 'store_area': '0', 'store_type': '0'}
        return FormRequest(url, formdata=payload, callback=self.parse_stores)

    def parse_stores(self, response):
        data = json.loads(response.body)
        for store in data['stores']['listing']:
            yield McDonaldsItem(name=store['name'], address=store['address'])
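Since your Items.py also defines postal and hours, and the sample record printed in the shell session has 'zip' and 'op_hours' keys (plus 'phone', if you later add a field for the contact number), the same callback can fill those fields too. A sketch, assuming the key names match what the live response returns:

    def parse_stores(self, response):
        data = json.loads(response.body)
        for store in data['stores']['listing']:
            # Key names ('zip', 'op_hours') are taken from the sample record
            # shown in the shell session above; verify them against a live response.
            yield McDonaldsItem(
                name=store['name'],
                address=store['address'],
                postal=store['zip'],      # postal code
                hours=store['op_hours'],  # operating hours, may contain <br> tags
            )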
