I'm working through a Python course on Udemy, in a section on website parsing. This exercise asks us to parse multiple pages of a website without knowing in advance how many pages there are in total.
My thought process was to use a try/except block, except I cannot get the exception to print anything. I'm a newbie, so this could very well be a silly mistake. Here is my code. It does the primary job, which is to grab all unique authors from each page and append them to a list, but I want it to stop trying to parse once we reach a page that doesn't exist. I know the website has 10 pages, but I set the range to 1-19 just to see whether my code would throw an exception once it got to a page that isn't there.
import requests
import bs4

authors = []

try:
    valid_page = True
    for n in range(1, 20):
        base_url = 'http://quotes.toscrape.com/page/{}/'
        scrape_url = base_url.format(n)
        res = requests.get(scrape_url)
        soup = bs4.BeautifulSoup(res.text, 'lxml')
        for item in soup.select('.author'):
            unique_author = item.text
            if unique_author not in authors:
                authors.append(unique_author)
except Exception as ex:  # Exception (capital E) is the built-in base class
    print(ex)
    valid_page = False
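For what it's worth, I don't think requests.get raises anything for a missing page on its own (res.raise_for_status() would, if the server actually returned an error status), so maybe I need to break out of the loop myself when a page yields no authors instead of relying on an exception. Here is a minimal sketch of the stopping logic I'm after, with a stub fetch_page function standing in for the real requests + BeautifulSoup work (the names and the fake pages dict are just for illustration):

```python
def collect_authors(fetch_page):
    """Collect unique authors, stopping at the first page with none.

    fetch_page(n) is a hypothetical stand-in for the real network and
    parsing code: it should return the list of author names found on
    page n, and an empty list when the page does not exist.
    """
    authors = []
    n = 1
    while True:
        page_authors = fetch_page(n)
        if not page_authors:  # page missing or empty: stop looping
            break
        for name in page_authors:
            if name not in authors:
                authors.append(name)
        n += 1
    return authors

# Fake three-page site, to show the loop stops on its own.
pages = {
    1: ['Albert Einstein', 'Jane Austen'],
    2: ['Jane Austen', 'Mark Twain'],
    3: ['Albert Einstein'],
}

print(collect_authors(lambda n: pages.get(n, [])))
# → ['Albert Einstein', 'Jane Austen', 'Mark Twain']
```

Is something along those lines the usual approach, or is there a way to make the try/except version work?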
question from:
https://stackoverflow.com/questions/65909264/trying-to-scrape-from-an-unspecified-amount-of-web-pages-exception-isnt-being