If you want to use Selenium asynchronously, I would suggest running multiple instances of the WebDriver in a thread-pool executor, like this:
import asyncio
from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver

executor = ThreadPoolExecutor(10)

def scrape(url, *, loop):
    # Return the Future so the caller can await/gather it.
    return loop.run_in_executor(executor, scraper, url)

def scraper(url):
    # Each worker gets its own driver; WebDriver instances are not thread-safe.
    driver = webdriver.Chrome("./chromedriver")
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()

loop = asyncio.get_event_loop()
futures = [scrape(url, loop=loop) for url in ["https://google.de"] * 2]
loop.run_until_complete(asyncio.gather(*futures))
Please note that you can run Selenium in headless mode, so you don't need to spawn a whole GUI just to fetch a simple URL.