I'm trying to use Python and Selenium to scrape multiple links on a web page. I'm using find_elements_by_xpath and I'm able to locate a list of elements, but I'm having trouble converting the list that is returned into the actual href links. I know find_element_by_xpath works, but that only works for a single element.
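For context, the single-element version does find a link (a rough sketch of what I mean, using the browser object and the XPath from my code below):

first_link = browser.find_element_by_xpath('//div[@class="text-truncate trail-name"]/a[1]')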
Here is my code:
from selenium import webdriver

path_to_chromedriver = 'path to chromedriver location'
browser = webdriver.Chrome(executable_path = path_to_chromedriver)
browser.get("file:///path to html file")

all_trails = []

# find all elements with the class 'text-truncate trail-name', then
# retrieve the a element inside each one
# this seems to be just giving us the element reference, not the
# actual link
find_href = browser.find_elements_by_xpath('//div[@class="text-truncate trail-name"]/a[1]')
all_trails.append(find_href)

print all_trails
This code is returning:
<selenium.webdriver.remote.webelement.WebElement (session="dd178d79c66b747696c5d3750ea8cb17", element="0.5700549730549636-1663")>,
<selenium.webdriver.remote.webelement.WebElement (session="dd178d79c66b747696c5d3750ea8cb17", element="0.5700549730549636-1664")>,
I expect the all_trails array to be a list of links like: www.google.com, www.yahoo.com, www.bing.com.
I've tried looping through the all_trails list and running the get_attribute('href') method on the list, but I get an error.
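For reference, this is roughly the loop I've been trying (a sketch; the trail variable name is just a placeholder):

for trail in all_trails:
    print trail.get_attribute('href')  # this is the call that raises the error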
Does anyone have any idea how to convert the Selenium WebElements to href links?
Any help would be greatly appreciated :)