Fatal Python error: Cannot recover from stack overflow when parsing with Selenium

Posted on 2025-01-23 04:14:27


For this task, I need to parse all of the books on the site (go through every category and open each product page). There are about 100 thousand books. But after the script runs for some time, an error occurs:

Fatal Python error: Cannot recover from stack overflow.

I understand that most likely there is not enough RAM (judging by similar questions found online), but it is not yet clear to me how to work around it.
This is what my code looks like:

import requests
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as bs
from mongodb import connect_mongo_bd
import time


db = connect_mongo_bd()
collections = db.comparison_new

print('Start!')

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36'}
service = Service('/home/Test/Desktop/work/python_parser/chromedriver')
options = webdriver.ChromeOptions()
options.headless = True
options.binary_location = '/data/opt/apps/cn.google.chrome/files/google-chrome'
browser = webdriver.Chrome(service=service, options=options)


def pagination_cycle(url):

    print(url)

    try:
        browser.get(url)

        wait = WebDriverWait(browser, 10)
        wait.until(
            EC.presence_of_element_located((By.CLASS_NAME, 'product'))
        )

        first_soup = bs(browser.page_source, 'html.parser')

        products = first_soup.select('#resProd > div.product')

        products_list = []
        for product in products:

            time.sleep(1)

            product_link_tag = product.select_one('div.rtd > div.title-mine > a')
            if not product_link_tag:
                continue
            else:
                product_link = 'https://test.de' + product_link_tag['href']

            print(product_link)

            second_request = requests.get(product_link, headers=headers)

            if second_request.status_code == 200:

                second_soup = bs(second_request.content, 'html.parser')

                product_name_tag = product.select_one('div.rtd > div.title-mine > a')
                if product_name_tag:
                    product_name = product_name_tag.text
                else:
                    continue

                product_price_tag = second_soup.select_one('#product_shop > div.product_list_style > div.item-info > div > span.price2')
                if product_price_tag:
                    product_price = float(product_price_tag.text.replace(' €', ''))
                else:
                    continue

                product_isbn_tag = second_soup.select_one('#product_shop > div.product_list_style > div.item-info').find(
                    text='ISBN'
                )
                if product_isbn_tag:
                    product_isbn = product_isbn_tag.find_parent().find_next_sibling().text.replace('-', '')
                else:
                    continue

                collections.update_one(
                    {
                        'isbn': product_isbn
                    },
                    {
                        '$set': {
                            'name': product_name,
                            'test_price': product_price
                        },
                        '$inc': {
                            'cnt_updated': 1
                        }
                    },
                    upsert=True
                )

        next_link = first_soup.select_one(
            '#resPage > div.pager > div > div.pages > ol > li.current'
        ).find_next_sibling().find('a')

        if next_link:
            pagination_cycle('https://test.de/knigi/' + next_link['href'])

    except Exception as ex:
        print("Ошибка: " + ex.__class__.__name__)
        time.sleep(10)
        pagination_cycle(url)

    return True


result = pagination_cycle('https://test.de/knigi/')
print(result)

browser.quit()
db.close()

Everything runs fine at first, but after some time I keep getting this error:

(Screenshot: error in the console)

Please tell me what to do and how to solve this problem?


Comments (1)

北斗星光 2025-01-30 04:14:27


This error message...

Fatal Python error: Cannot recover from stack overflow

...implies that the recursion limit in your program logic exceeds the maximum depth of the Python interpreter stack.


Deep Dive

You are getting this error as your program starts executing the line:

result = pagination_cycle('https://test.de/knigi/')

Then, at the end of def pagination_cycle(url):, you recursively call the same function:

pagination_cycle('https://test.de/knigi/' + next_link['href'])

Each page therefore adds another frame to the call stack, and the except block calls pagination_cycle(url) again after every failure, so retries add frames as well. Once the recursion depth exceeds the maximum depth of the Python interpreter stack, this fatal error is thrown.
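
A common way around this is to walk the pager iteratively instead of recursively, so the call stack stays flat no matter how many pages (or retries) there are. Below is a minimal sketch of that rewrite, assuming the same global browser and the same selectors as in the question; the per-product scraping body is elided with a comment:

import time
from bs4 import BeautifulSoup as bs

def pagination_cycle(start_url):
    # Iterative pagination: loop over pages instead of calling itself.
    url = start_url
    while url:
        try:
            browser.get(url)  # 'browser' is the global webdriver created in the question's code
            soup = bs(browser.page_source, 'html.parser')

            # ... scrape the products on this page exactly as in the original for-loop ...

            current = soup.select_one(
                '#resPage > div.pager > div > div.pages > ol > li.current'
            )
            sibling = current.find_next_sibling() if current else None
            next_link = sibling.find('a') if sibling else None
        except Exception as ex:
            print("Error: " + ex.__class__.__name__)
            time.sleep(10)
            continue  # retry the same URL; no new stack frame is pushed
        # Advance to the next page, or stop once the pager has no further link.
        url = ('https://test.de/knigi/' + next_link['href']) if next_link else None
    return True

With this structure a retry counter can also be added to the except branch, whereas the recursive retry in the question grows the stack on every failed attempt.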


tl;dr

sys.getrecursionlimit():

Return the current value of the recursion limit, the maximum depth of the Python interpreter stack. This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. It can be set by setrecursionlimit().
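
For reference, reading and raising the limit looks like this; for an open-ended crawl over an unknown number of pages, raising the limit only postpones the crash, while the iterative rewrite above removes the recursion entirely:

import sys

print(sys.getrecursionlimit())  # default is 1000 in CPython
sys.setrecursionlimit(5000)     # raise the ceiling; very deep recursion can still exhaust the C stack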
