BeautifulSoup and Scrapy give me this error: local variable referenced before assignment

Posted 2025-02-11 15:43:52

I am trying to scrape data, but I get the error UnboundLocalError: local variable 'd3' referenced before assignment. How can I solve this error? I have searched for many solutions but could not find one that helps. This is the page link: https://rejestradwokatow.pl/adwokat/abaewicz-agnieszka-51004

import scrapy
from scrapy.http import Request
from scrapy.crawler import CrawlerProcess
from bs4 import BeautifulSoup

class TestSpider(scrapy.Spider):
    name = 'test'
    start_urls = ['https://rejestradwokatow.pl/adwokat/list/strona/1/sta/2,3,9']
    custom_settings = {
        'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
        'DOWNLOAD_DELAY': 1,
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
        }

    def parse(self, response):
        soup = BeautifulSoup(response.text, 'html.parser')
        tra = soup.find_all('td', class_='icon_link')
        for links in tra:
            for link in links.find_all('a', href=True):
                comp = link['href']
                yield Request(comp, callback=self.parse_book)

    def parse_book(self, response):
        soup = BeautifulSoup(response.text, 'html.parser')
        details = soup.find_all('div', class_='line_list_K')
        for detail in details:
            try:
                status = detail.find(
                    'span', string='Status:').findNext('div').getText()
            except:
                pass

            try:
                d1 = detail.find('span', string='Data wpisu w aktualnej izbie na listę adwokatów:').findNext(
                    'div').getText()
            except:
                pass

            try:
                d3 = detail.find('span', string='Ostatnie miejsce wpisu:').findNext(
                    'div').getText()
            except:
                pass

            try:
                d4 = detail.find('span', string='Stary nr wpisu:').findNext(
                    'div').getText()
            except:
                pass

            try:
                d5 = detail.find('span', string='Zastępca:').findNext(
                    'div').getText()
            except:
                pass

            yield {
                'status': status,
                'd1': d1,
                'd3': d3,
                'd4': d4,
                'd5': d5
            }

Comments (1)

執念 2025-02-18 15:43:53

The assignment to d3 is inside a try/except. If an exception is raised, the assignment never happens. If that happens on the first iteration, the variable is unset; if it happens on a later iteration, you won't get an error, but you'll put the previous iteration's value of d3 in the dictionary.
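
Here is a minimal reproduction of that failure mode, independent of scraping (my own illustration, not from the question):

def demo():
    try:
        d3 = {}['missing']   # raises KeyError, so d3 is never assigned
    except KeyError:
        pass
    return d3                # UnboundLocalError: local variable 'd3' referenced before assignment

demo()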

You should assign a default value in the except: block.

            try:
                d3 = detail.find('span', string='Ostatnie miejsce wpisu:').findNext(
                    'div').getText()
            except:
                d3 = ''

You should also do this for all the other variable assignments.
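
For example, a small helper can apply the same default-value pattern to every field. This is only a sketch; the get_field name is mine, not from the original code:

def get_field(detail, label, default=''):
    # Find the <span> whose text matches the label, then read the text
    # of the <div> that follows it; fall back to the default when the
    # label is missing from this page.
    try:
        return detail.find('span', string=label).findNext('div').getText()
    except AttributeError:
        return default

status = get_field(detail, 'Status:')
d3 = get_field(detail, 'Ostatnie miejsce wpisu:')

Catching AttributeError (which is what you get when detail.find() returns None) instead of a bare except also avoids hiding unrelated bugs.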

If you always get this error, you're probably looking for the wrong thing in detail.find(). You should figure out the root cause and fix it.
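
One way to make the root cause visible is to test the lookup result for None instead of silently swallowing exceptions. A sketch, assuming this runs inside the spider so self.logger and response are available:

span = detail.find('span', string='Ostatnie miejsce wpisu:')
if span is None:
    # The label was not found on this page; log the URL so you can
    # inspect the markup and see why the lookup fails.
    self.logger.warning('missing "Ostatnie miejsce wpisu:" on %s', response.url)
    d3 = ''
else:
    d3 = span.findNext('div').getText()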
