
php - Scraping attempts getting 403 error

I am trying to scrape a website and I am getting a 403 Forbidden error no matter what I try:

  1. wget
  2. CURL (command line and PHP)
  3. Perl WWW::Mechanize
  4. PhantomJS

I tried all of the above with and without proxies, changing the user-agent, and adding a referer header.

I even copied the request headers from my Chrome browser and sent them with my request using PHP cURL, and I still get a 403 Forbidden error.

Any input or suggestions on what is triggering the website to block the request, and how to get around it?

PHP CURL Example:

$url = 'https://www.vitacost.com/productResults.aspx?allCategories=true&N=1318723&isrc=vitacostbrands%3aquadblock%3asupplements&scrolling=true&No=40&_=1510475982858';

// Request headers copied from the working Chrome request.
$headers = array(
    'accept:application/json, text/javascript, */*; q=0.01',
    'accept-encoding:gzip, deflate, br',
    'accept-language:en-US,en;q=0.9',
    'referer:https://www.vitacost.com/productResults.aspx?allCategories=true&N=1318723&isrc=vitacostbrands:quadblock:supplements',
    'user-agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36',
    'x-requested-with:XMLHttpRequest',
);

$res = curl_get($url, $headers);
print $res;
exit;

function curl_get($url, $headers = array(), $useragent = '') {
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
    curl_setopt($curl, CURLOPT_HEADER, true);           // include response headers (split off below)
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);  // disable certificate checks
    curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);  // (insecure; fine for testing)
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);   // follow redirects
    curl_setopt($curl, CURLOPT_ENCODING, '');           // advertise all supported encodings, auto-decompress
    if ($useragent) curl_setopt($curl, CURLOPT_USERAGENT, $useragent);
    if ($headers)   curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);

    $response = curl_exec($curl);

    // Separate the response headers from the body.
    $header_size = curl_getinfo($curl, CURLINFO_HEADER_SIZE);
    $header   = substr($response, 0, $header_size);
    $response = substr($response, $header_size);

    curl_close($curl);
    return $response;
}
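
To confirm what status code the request actually receives (rather than inferring it from the body), here is a minimal diagnostic sketch using curl_getinfo(); curl_status() is my own hypothetical helper, reusing the $url and $headers above:

// Hypothetical diagnostic helper: perform the request and report only the
// HTTP status code, so a 403 can be told apart from other failures.
function curl_status($url, $headers = array()) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    if ($headers) curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
    curl_exec($curl);
    $code = curl_getinfo($curl, CURLINFO_HTTP_CODE); // 0 if the request never completed
    curl_close($curl);
    return $code;
}

print curl_status($url, $headers); // prints 403 for the blocked request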

And here is the response I always get:

<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>

You don't have permission to access     

  "http&#58;&#47;&#47;www&#46;vitacost&#46;com&#47;productResults&#46;aspx&#63;" 
on this server.<P>
Reference&#32;&#35;18&#46;55f50717&#46;1510477424&#46;2a24bbad
</BODY>
</HTML>


1 Reply


First, note that the site does not like web scraping. As @KeepCalmAndCarryOn pointed out in a comment, the site has a /robots.txt in which it explicitly asks bots not to crawl specific parts of the site, including the parts you want to scrape. While not legally binding, a good citizen will honor such a request.
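
You can see this for yourself by fetching the robots.txt and looking at its Disallow rules. A minimal sketch (a naive prefix check, not a spec-compliant robots.txt parser; /productResults.aspx is just the path from the question):

// Naive robots.txt check: list Disallow rules that match the request path.
// This is only a sanity check, not a full robots.txt parser.
$path   = '/productResults.aspx';
$robots = file_get_contents('https://www.vitacost.com/robots.txt');
foreach (explode("\n", $robots) as $line) {
    if (stripos($line, 'Disallow:') === 0) {
        $rule = trim(substr($line, strlen('Disallow:')));
        if ($rule !== '' && strpos($path, $rule) === 0) {
            print "Blocked by rule: $rule\n";
        }
    }
}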

Additionally, the site seems to employ explicit protection against scraping and tries to make sure the client is really a browser. It looks like the site is behind the Akamai CDN, so the anti-scraping protection may come from that CDN.
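
One way to check the CDN guess (a sketch under assumptions: that the CNAME chain is visible to your resolver, and that edgekey/edgesuite/akamaiedge are the usual Akamai edge domains):

// Rough CDN check: Akamai-fronted hosts usually CNAME to an Akamai edge
// domain such as *.edgekey.net, *.edgesuite.net or *.akamaiedge.net.
$records = dns_get_record('www.vitacost.com', DNS_CNAME);
foreach ($records as $r) {
    if (preg_match('/(edgekey|edgesuite|akamaiedge)\.net$/i', $r['target'])) {
        print "Akamai edge host: {$r['target']}\n";
    }
}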

I took the request sent by Firefox (which worked) and tried to simplify it as much as possible. The following currently works for me, but may of course fail if the site updates its browser detection:

use strict;
use warnings;
use IO::Socket::SSL;

# HTTP requires CRLF line endings, so normalize the heredoc first.
(my $rq = <<'RQ') =~ s{\r?\n}{\r\n}g;
GET /productResults.aspx?allCategories=true&N=1318723&isrc=vitacostbrands%3aquadblock%3asupplements&scrolling=true&No=40&_=151047598285 HTTP/1.1
Host: www.vitacost.com
Accept: */*
Accept-Language: en-US
Connection: keep-alive

RQ

# Open a TLS connection and send the raw request.
my $cl = IO::Socket::SSL->new('www.vitacost.com:443') or die;
print $cl $rq;

# Read the response header up to the empty line that ends it.
my $hdr = '';
while (<$cl>) {
    $hdr .= $_;
    last if $_ eq "\r\n";
}
warn "[header done]\n";

# Read exactly Content-Length bytes of body and print them.
my $len = $hdr =~ m{^Content-length:\s*(\d+)}mi && $1 or die "no length";
read($cl, my $buf, $len);
print $buf;

Interestingly, if I remove the Accept header I get a 403 Forbidden. If I instead remove the Accept-Language header, the request simply hangs. Also interestingly, no User-Agent header seems to be needed at all.
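
Translated to the asker's PHP cURL setup, the simplified request would look roughly like this (an untested sketch under the same assumptions: libcurl adds the Host header and HTTP/1.1 keep-alive by itself, and sends no Accept or User-Agent header unless told to; it may still fail if the detection keys on something else, such as the TLS handshake):

// Minimal PHP cURL equivalent of the simplified request above:
// only Accept and Accept-Language are set explicitly.
$url = 'https://www.vitacost.com/productResults.aspx?allCategories=true'
     . '&N=1318723&isrc=vitacostbrands%3aquadblock%3asupplements'
     . '&scrolling=true&No=40';
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_HTTPHEADER, array(
    'Accept: */*',            // removing this header produced a 403
    'Accept-Language: en-US', // removing this header made the request hang
));
$res = curl_exec($curl);
print $res === false ? curl_error($curl) : $res;
curl_close($curl);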

EDIT: it looks like the bot detection also uses the sender's source IP as a feature. While the code above works for me from two different systems, it fails from a third system (hosted at DigitalOcean): it just hangs.

