Unhashable list error when scraping information

Posted 2025-02-12 04:46:13 · 1365 characters · 2 views · 0 comments


I am trying to extract information, but it gives me an unhashable list error. This is the page link: https://rejestradwokatow.pl/adwokat/abaewicz-agnieszka-51004

import scrapy
from scrapy.http import Request
from scrapy.crawler import CrawlerProcess

class TestSpider(scrapy.Spider):
    name = 'test'
    start_urls = ['https://rejestradwokatow.pl/adwokat/list/strona/1/sta/2,3,9']
    custom_settings = {
        'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
        'DOWNLOAD_DELAY': 1,
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
        }

    def parse(self, response):
        wev={}
        tic = response.xpath("//div[@class='line_list_K']//div//span//text()").getall()
        det = response.xpath("//div[@class='line_list_K']//div//div//text()").getall()
        wev[tuple(tic)]=[i.strip() for i in det]
        
        yield wev

It gives me output like this (screenshot omitted), but I want output like this (screenshot omitted).


Comments (3)

旧夏天 2025-02-19 04:46:13

Dictionary keys must be hashable, and a mutable object such as a list is not. Try this:

def parse(self, response):
    wev={}
    tic = response.xpath("//div[@class='line_list_K']//div//span//text()").getall()
    det = response.xpath("//div[@class='line_list_K']//div//div//text()").getall()
    wev[tuple(tic)]=[i.strip() for i in det]
    print(wev)
    yield wev

or even simpler:

def parse(self, response):
    tic = response.xpath("//div[@class='line_list_K']//div//span//text()").getall()
    det = response.xpath("//div[@class='line_list_K']//div//div//text()").getall()
    yield {tuple(tic): [i.strip() for i in det]}
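A minimal illustration of the hashability rule, with made-up sample label/value lists (not taken from the site):

```python
tic = ['Status:', 'Stary nr wpisu:']   # labels scraped into a list
det = ['Czynny', '1077']               # values scraped into a list

try:
    wev = {tic: det}                   # a list is mutable, hence unhashable
except TypeError as e:
    print(e)                           # unhashable type: 'list'

wev = {tuple(tic): det}                # a tuple is hashable, so this works
print(wev)
```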
筑梦 2025-02-19 04:46:13

You have to use zip() to group values from tic and det:

        for name, value in zip(tic, det):
            wev[name.strip()] = value.strip()

and this will fill wev with:

{
    'Status:': 'Były adwokat', 
    'Data wpisu w aktualnej izbie na listę adwokatów:': '2013-09-01', 
    'Data skreślenia z listy:': '2019-07-23', 
    'Ostatnie miejsce wpisu:': 'Katowice', 
    'Stary nr wpisu:': '1077', 
    'Zastępca:': 'Pieprzyk Mirosław'
}
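Outside Scrapy, the pairing can be sketched with plain lists; the sample values below are copied from the dictionary above, with extra whitespace added to show why .strip() is applied:

```python
tic = ['Status:', ' Stary nr wpisu:', 'Zastępca: ']    # labels, some with stray spaces
det = [' Były adwokat', '1077', 'Pieprzyk Mirosław ']  # values, some with stray spaces

# zip() pairs the n-th label with the n-th value
wev = {name.strip(): value.strip() for name, value in zip(tic, det)}
print(wev)
# {'Status:': 'Były adwokat', 'Stary nr wpisu:': '1077', 'Zastępca:': 'Pieprzyk Mirosław'}
```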

and this will give a CSV with the correct values:

Status:,Data wpisu w aktualnej izbie na listę adwokatów:,Data skreślenia z listy:,Ostatnie miejsce wpisu:,Stary nr wpisu:,Zastępca:
Były adwokat,2013-09-01,2019-07-23,Katowice,1077,Pieprzyk Mirosław

EDIT:

Eventually you should first get the rows and then search for the name and value in every row:

        all_rows = response.xpath("//div[@class='line_list_K']/div")
        
        for row in all_rows:
            name  = row.xpath(".//span/text()").get()
            value = row.xpath(".//div/text()").get()
            wev[name.strip()] = value.strip()

And this method can sometimes be safer when a row doesn't have a value, or when a row has an unusual value like the email, which is normally filled in by JavaScript (Scrapy can't run JavaScript) but is kept as attributes in the tag <div class="address_e" data-ea="adwokat.adach" data-eb="gmail.com">.

Because only some pages have Email:, this value may be missing from the file - so you need to add a default, wev = {'Email:': '', ...}, at the start. The same problem can occur with other values.

        wev = {'Email:': ''}

        for row in all_rows:
            name  = row.xpath(".//span/text()").get()
            value = row.xpath(".//div/text()").get()
            if name and value:
                wev[name.strip()] = value.strip()
            elif name and name.strip() == 'Email:':
                # <div class="address_e" data-ea="adwokat.adach" data-eb="gmail.com"></div>
                div = row.xpath('./div')
                email_a = div.attrib['data-ea']
                email_b = div.attrib['data-eb']
                wev[name.strip()] = f'{email_a}@{email_b}'
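The email reconstruction itself is plain string formatting over the two data-* attributes; a standalone sketch using the attribute values from the comment above:

```python
# attributes as they appear in
# <div class="address_e" data-ea="adwokat.adach" data-eb="gmail.com"></div>
attrib = {'data-ea': 'adwokat.adach', 'data-eb': 'gmail.com'}

# join the two halves with '@' to rebuild the obfuscated address
email = f"{attrib['data-ea']}@{attrib['data-eb']}"
print(email)  # adwokat.adach@gmail.com
```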

Full working code:

# rejestradwokatow

import scrapy
from scrapy.crawler import CrawlerProcess

class TestSpider(scrapy.Spider):

    name = 'test'

    start_urls = [
        #'https://rejestradwokatow.pl/adwokat/list/strona/1/sta/2,3,9',
        'https://rejestradwokatow.pl/adwokat/abaewicz-agnieszka-51004',
        'https://rejestradwokatow.pl/adwokat/adach-micha-55082',
    ]

    custom_settings = {
        'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
        'DOWNLOAD_DELAY': 1,
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
    }

    def parse(self, response):
        # it may need default value when item doesn't exist on page 
        wev = {
            'Status:': '',
            'Data wpisu w aktualnej izbie na listę adwokatów:': '',
            'Stary nr wpisu:': '',
            'Adres do korespondencji:': '',
            'Fax:': '',
            'Email:': '',
        }

        tic = response.xpath("//div[@class='line_list_K']//div//span//text()").getall()
        det = response.xpath("//div[@class='line_list_K']//div//div//text()").getall()

        #print(tic)
        #print(det)
        #print('---')

        all_rows = response.xpath("//div[@class='line_list_K']/div")

        for row in all_rows:
            name  = row.xpath(".//span/text()").get()
            value = row.xpath(".//div/text()").get()
            if name and value:
                wev[name.strip()] = value.strip()
            elif name and name.strip() == 'Email:':
                # <div class="address_e" data-ea="adwokat.adach" data-eb="gmail.com"></div>
                div = row.xpath('./div')
                email_a = div.attrib['data-ea']
                email_b = div.attrib['data-eb']
                wev[name.strip()] = f'{email_a}@{email_b}'

        print(wev)

        yield wev

# --- run without creating a project and save results in `output.csv` ---

c = CrawlerProcess({
    #'USER_AGENT': 'Mozilla/5.0',
    'FEEDS': {'output.csv': {'format': 'csv'}},  # new in 2.1
})
c.crawl(TestSpider)
c.start()
注定孤独终老 2025-02-19 04:46:13

Check the datatype of tic. It is most probably a list, which cannot be used as a dictionary key. You could cast it to a tuple, depending on your requirements.
