Traceback (most recent call last):
File "C:/Users/Administrator/Desktop/WenShuSpider_v1/DetailCrawl/ChromeProxyDetail_4.py", line 86, in get_detail
self.driver.get(url)
File "C:PythonPython37libsite-packagesseleniumwebdriverremotewebdriver.py", line 333, in get
self.execute(Command.GET, {'url': url})
File "C:PythonPython37libsite-packagesseleniumwebdriverremotewebdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:PythonPython37libsite-packagesseleniumwebdriverremoteerrorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: timeout
(Session info: chrome=73.0.3683.103)
(Driver info: chromedriver=70.0.3538.97 (d035916fe243477005bc95fe2a5778b8f20b6ae1),platform=Windows NT 10.0.17134 x86_64)
When crawling a certain website, the site often responds slowly or returns a blank page before it has fully loaded, so the program frequently fails to detect the expected element and hangs, raising

selenium.common.exceptions.TimeoutException: Message: timeout

The workaround below adds a finally: clause after the try ... except, so that when neither the try nor the except branch completes normally, the page is re-requested by calling the function again (refreshing the page may help).
def get_detail(self, url):
    msgtimeout = 1
    try:
        print('Visiting URL: {}'.format(url))
        self.driver.get(url)
        # Wait until some text inside the PDF container has rendered
        myxpath = '//div[@class="PDF_pox"]//*[text()]'
        locator = (By.XPATH, myxpath)
        self.wait.until(EC.presence_of_element_located(locator))
        time.sleep(0.25)
        html = self.driver.page_source
        msgtimeout = 0  # try block completed normally
        return html
    except Exception as e:
        print('Failed to fetch detail page: {}'.format(e))
        self.driver.delete_all_cookies()
        print('Cookies cleared, revisiting {}'.format(url))
        msgtimeout = 0  # except block completed normally
        return self.get_detail(url)
    finally:
        # Reached with msgtimeout still set only if the except handler
        # itself raised (e.g. another timeout); retry from here.
        if msgtimeout:
            print('Handling the timeout exception here')
            return self.get_detail(url)
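For completeness, here is a minimal sketch of the driver and wait setup that get_detail relies on. The class name, timeout values, and the assumption that chromedriver is on PATH are illustrative, not taken from the original crawler:

def get_detail depends on self.driver and self.wait, for example:

import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


class DetailCrawler:
    """Hypothetical wrapper; only the attributes used by get_detail are shown."""

    def __init__(self):
        self.driver = webdriver.Chrome()  # assumes chromedriver is on PATH
        # driver.get() raises TimeoutException once this limit is exceeded
        self.driver.set_page_load_timeout(30)
        # explicit wait used by presence_of_element_located above
        self.wait = WebDriverWait(self.driver, 20)

One design caveat: the retry in get_detail is unbounded recursion, so if the site never recovers the calls will keep nesting until Python raises a RecursionError. Adding a retry counter, or rewriting the retry as a loop, would cap the number of attempts.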