Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
javascript - How to properly use mechanize to scrape AJAX sites

So I am fairly new to web scraping. There is a site with a table whose values are controlled by JavaScript. Those values determine the URLs that the JavaScript tells my browser to request next; the responses to those requests are JSON, which the script uses to update the table in my browser.

So I wanted to build a class with a mechanize method that takes a URL and returns the response body: HTML the first time, and JSON for every iteration after that.
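Since every response after the first is JSON, the body can be decoded directly with the standard `json` module rather than treated as opaque text. A minimal sketch — the `"rows"` key and the sample payload are made up for illustration; the real site defines its own schema:

```python
import json

def parse_table_update(body):
    # Decode a JSON body returned by the site's AJAX endpoint.
    # "rows" is a hypothetical key; adjust to the site's actual schema.
    data = json.loads(body)
    return data.get('rows', [])

# Usage with a fabricated body:
sample = '{"rows": [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]}'
rows = parse_table_update(sample)
```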

I have something that works but I want to know if I am doing it right or if there is a better way.

import mechanize
import cookielib
import gzip

class urlMaintain2:
    def __init__(self):
        self.first_append = 0
        self.response = ''

    def pageResponse(self, url):
        br = mechanize.Browser()

        # Cookie jar; cookies set here are attached to requests automatically.
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)

        # Browser options
        br.set_handle_equiv(True)
        br.set_handle_gzip(False)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)

        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16'),
                         ('Accept-Encoding', 'gzip')]
        if self.first_append == 1:
            # After the first request, mimic the site's own XHR calls.
            br.addheaders.append(('Accept', 'application/json, text/javascript, */*'))
            br.addheaders.append(('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8'))
            br.addheaders.append(('X-Requested-With', 'XMLHttpRequest'))
            br.addheaders.append(('If-Modified-Since', 'Thu, 1 Jan 1970 00:00:00 GMT'))

        response = br.open(url)
        headers = response.info()

        # Decompress manually, since set_handle_gzip is off.
        if headers.get('Content-Encoding') == 'gzip':
            gz = gzip.GzipFile(fileobj=response, mode='rb')
            html = gz.read()
            gz.close()
            headers["Content-type"] = "text/html; charset=utf-8"
            response.set_data(html)
        br.close()
        return response
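The manual gzip branch above can be exercised without a network: `gzip.GzipFile` reads from any file-like object, so a `BytesIO` holding gzipped bytes stands in for the HTTP response. A small self-contained sketch of just that step:

```python
import gzip
import io

def gunzip_body(fileobj):
    # Mirror of the branch above: wrap the response-like object and read it out.
    gz = gzip.GzipFile(fileobj=fileobj, mode='rb')
    try:
        return gz.read()
    finally:
        gz.close()

# Build a fake gzipped "response" and decompress it.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
    gz.write(b'<html>hello</html>')
payload = gunzip_body(io.BytesIO(buf.getvalue()))
```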

self.first_append becomes positive after the data has been extracted from the main page's HTML, so the br.addheaders.append calls don't run the first time through, since the first body response contains no JSON; all the later body responses are JSON. Is this the correct way to do this? Is there a more efficient way? Are there other languages/libraries that do this better?
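One way to avoid mutating br.addheaders incrementally is to compute the full header list from a flag on each call; the first_append counter then becomes a plain boolean the caller passes in. A sketch of that idea, with the header values copied from the code above:

```python
def build_headers(expect_json):
    # Base headers sent on every request.
    headers = [
        ('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; '
                       'rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16'),
        ('Accept-Encoding', 'gzip'),
    ]
    if expect_json:
        # Extra headers that mimic the site's own XHR calls.
        headers += [
            ('Accept', 'application/json, text/javascript, */*'),
            ('X-Requested-With', 'XMLHttpRequest'),
            ('If-Modified-Since', 'Thu, 1 Jan 1970 00:00:00 GMT'),
        ]
    return headers

# br.addheaders = build_headers(expect_json=True)
```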

After a long period of running I get this error message:

File "C:\Users\Donkey\My Documents\Aptana Studio Workspace\UrlMaintain2\src\UrlMaintain2.py", line 55, in pageResponse
    response = br.open(url)
File "C:\Python27\lib\mechanize\_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
File "C:\Python27\lib\mechanize\_mechanize.py", line 230, in _mech_open
    response = UserAgentBase.open(self, request, data)
File "C:\Python27\lib\mechanize\_opener.py", line 193, in open
    response = urlopen(self, req, data)
File "C:\Python27\lib\mechanize\_urllib2_fork.py", line 344, in _open
    '_open', req)
File "C:\Python27\lib\mechanize\_urllib2_fork.py", line 332, in _call_chain
    result = func(*args)
File "C:\Python27\lib\mechanize\_urllib2_fork.py", line 1142, in http_open
    return self.do_open(httplib.HTTPConnection, req)
File "C:\Python27\lib\mechanize\_urllib2_fork.py", line 1118, in do_open
    raise URLError(err)
urllib2.URLError:

This has me a bit lost; I'm not sure why it is raised, and it only appears after many iterations.
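An intermittent URLError that only shows up after many iterations often comes from a transient network failure (DNS hiccup, dropped connection, server throttling), so one pragmatic mitigation is to retry with backoff around br.open. A generic sketch — open_fn stands in for br.open, and the names and defaults here are illustrative, not part of mechanize:

```python
import time

def open_with_retry(open_fn, url, retries=3, delay=1.0):
    # Retry a flaky opener, doubling the wait after each failure.
    last_err = None
    for attempt in range(retries):
        try:
            return open_fn(url)
        except IOError as err:  # urllib2.URLError is a subclass of IOError
            last_err = err
            time.sleep(delay * (2 ** attempt))
    raise last_err
```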


1 Reply


From the mechanize FAQ: "mechanize does not provide any support for JavaScript". It then elaborates on your options (the choices aren't great).

If you've got something working, then great, but using Selenium WebDriver is a much better solution for scraping AJAX sites than mechanize.

