I am writing a script that scrapes the games from the Team Liquid database of international StarCraft 2 games (http://www.teamliquid.net/tlpd/sc2-international/games).
However, I have come across a problem. My script loops through all the pages, but the Team Liquid site uses some kind of AJAX, I think, to update the table, and when I use BeautifulSoup I can't get the right data.
So I loop through these pages:
http://www.teamliquid.net/tlpd/sc2-international/games#tblt-948-1-1-DESC
http://www.teamliquid.net/tlpd/sc2-international/games#tblt-948-2-1-DESC
http://www.teamliquid.net/tlpd/sc2-international/games#tblt-948-3-1-DESC
http://www.teamliquid.net/tlpd/sc2-international/games#tblt-948-4-1-DESC etc...
When you open these yourself you see different pages, but my script keeps getting the same first page every time. I think this is because when you open the other pages there is a brief loading indicator while the table of games is updated to the correct page. So I guess BeautifulSoup is too fast and needs to wait for the table to finish loading and updating.
So my question is: how can I make sure it picks up the updated table?
I currently use this code to get the contents of the table, which I then write to a .csv:

    from urllib2 import urlopen              # on Python 3: from urllib.request import urlopen
    from BeautifulSoup import BeautifulSoup  # with bs4: from bs4 import BeautifulSoup

    html = urlopen(url).read().lower()
    bs = BeautifulSoup(html)
    # Grab the games table by its id, then collect its rows
    table = bs.find('table', id='tblt_table')
    rows = table.findAll('tr')
When you try to scrape a site that uses AJAX, it's best to look at what the JavaScript code actually does. In many cases it simply retrieves XML or HTML, which is even easier to scrape than the non-AJAXy content; it just requires looking at some source code.
In your case, the site fetches the HTML for the table control on its own (instead of refreshing the whole page) from a special URL and dynamically replaces it in the browser DOM. Looking at http://www.teamliquid.net/tlpd/tabulator/ajax.js, you can see that this URL is formatted like this:
http://www.teamliquid.net/tlpd/tabulator/update.php?tabulator_id=1811&tabulator_page=1&tabulator_order_col=1&tabulator_order_desc=1&tabulator_Search&tabulator_search=
So all you need to do is to scrape this URL directly with BeautifulSoup and advance the tabulator_page counter each time you want the next page.
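Something along these lines should work (a minimal sketch, not tested against the site: the query string is copied verbatim from the URL above, the page range is just a placeholder, and printing each row's cells stands in for your CSV writing):

    from urllib2 import urlopen              # on Python 3: from urllib.request import urlopen
    from BeautifulSoup import BeautifulSoup  # with bs4: from bs4 import BeautifulSoup

    # Query string taken from the update.php URL above; only the page
    # number changes between requests.
    base = ('http://www.teamliquid.net/tlpd/tabulator/update.php'
            '?tabulator_id=1811&tabulator_page=%d'
            '&tabulator_order_col=1&tabulator_order_desc=1'
            '&tabulator_Search&tabulator_search=')

    for page in range(1, 5):                 # adjust to however many pages you need
        html = urlopen(base % page).read()
        bs = BeautifulSoup(html)
        # The response may be just the table fragment rather than a full page,
        # so fall back to the whole document if the id isn't found.
        table = bs.find('table', id='tblt_table') or bs
        for row in table.findAll('tr'):
            print([td.text for td in row.findAll('td')])

Since each request returns the table HTML directly, there should be nothing to wait for.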