Writing text to a CSV file after web scraping

Posted on 2025-01-31 07:33:10

I am extracting real estate data by web scraping in Python, and I want the data in a CSV file.
When I write the data to CSV, if the first scraped item is missing a value I need, that field is skipped for every row (even though other items do have it): the column is never created at all, not even with null values.
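
A minimal illustration of the symptom, with two hypothetical rows standing in for my scraped items: csv.DictWriter writes only the columns named in fieldnames, so taking fieldnames from the first row hides any field that row happens to lack.

import csv

rows = [{'a': 1}, {'a': 2, 'b': 3}]  # hypothetical data: the second row has an extra field 'b'

with open('demo.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys(), extrasaction='ignore')
    writer.writeheader()
    writer.writerows(rows)

# demo.csv ends up with only the column 'a'; 'b' is silently dropped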

The code block I use for web scraping:

from selenium import webdriver
from bs4 import BeautifulSoup
import re
import csv
import time


PATH = r'C:\Program Files (x86)\chromedriver.exe'  # raw string so the backslashes are not treated as escape sequences
driver = webdriver.Chrome(PATH)
data = []


def get_dl(soup):
    # collect the detail (dt/dd) pairs we care about from a listing page
    d_list = {}
    key = None

    for dl in soup.findAll("dl", {"class": "obj-details"}):
        for el in dl.find_all(["dt", "dd"]):
            if el.name == 'dt':
                key = el.get_text(strip=True)
            elif key in ['Plotas:', 'Buto numeris:', 'Metai:', 'Namo numeris:', 'Kambarių sk.:', 'Aukštas:', 'Aukštų sk.:', 'Pastato tipas:', 'Šildymas:', 'Įrengimas:', 'Pastato energijos suvartojimo klasė:', 'Ypatybės:', 'Papildomos patalpos:', 'Papildoma įranga:', 'Apsauga:']:
                # keep the text before the 'NAUDINGA' block and the 'm²' unit, collapse whitespace
                d_list[key] = ' '.join(el.text.strip().replace("\n", ", ").split('NAUDINGA')[0].split('m²')[0].split())
    return d_list

for puslapis in range(1, 2):  # "puslapis" is Lithuanian for "page"; range(1, 2) scrapes page 1 only
    driver.get(f'https://www.aruodas.lt/butai/kaune/puslapis/{puslapis}')
    response = driver.page_source
    soup = BeautifulSoup(response, 'html.parser')
    blocks = soup.find_all('tr', class_='list-row')
    stored_urls = []

    for url in blocks:
        try:
            stored_urls.append(url.a['href'])
        except:
            pass

    for link in stored_urls:
        driver.get(link)
        response = driver.page_source
        soup = BeautifulSoup(response, 'html.parser')
        h1 = soup.find('h1', 'obj-header-text')
        price = soup.find('div', class_ = 'price-left')

        try:
            address1 = h1.get_text(strip=True)
            address2 = re.findall(r'(.*),[^,]*$', address1)
            address = ''.join(address2)
            city, district, street = address.split(',')
        except:
            city = district = street = 'NaN'  # single value for all three; tuple-unpacking the string 'NaN' would split it into characters

        try:
            full_price = price.find('span', class_ = 'price-eur').text.strip()
            full_price1 = full_price.replace('€', '').replace(' ','').strip()
        except:
            full_price1 = 'NaN'

        try:
            price_sq_m = price.find('span', class_ = 'price-per').text.strip()
            price_sq_m1 = price_sq_m.replace('€/m²)', '').replace('(domina keitimas)', '').replace('(', '').replace(' ','').strip()
        except:
            price_sq_m1 = 'NaN'

        try:
            price_change = price.find('div', class_ = 'price-change').text.strip()
            price_change1 = price_change.replace('%', '').strip()
        except:
            price_change1 = 'NaN'

        data.append({'city': city, 'district': district, 'street': street, 'full_price': full_price1, 'price_sq_m': price_sq_m1, 'price_change': price_change1, **get_dl(soup)})
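
Each element of data is a flat dict, and a listing that lacks some detail simply has no key for it; nothing writes a placeholder. A hypothetical element, with all values invented for illustration:

{'city': 'Kaunas', 'district': ' Šilainiai', 'street': ' Baltų pr.',
 'full_price': '79000', 'price_sq_m': '1580', 'price_change': '-2',
 'Plotas:': '50', 'Metai:': '1990', 'Aukštas:': '3'}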

For example, the key list contains the value:

'Ypatybės:'

but the first flat I scrape has no such value on its page, so that column is never created at all, which is not what I need.

The code block for writing to the CSV:

with open('output_kaunas.csv', 'w', encoding='utf-8', newline='') as f_output:
    csv_output = csv.DictWriter(f_output, fieldnames=data[0].keys(), extrasaction='ignore')
    csv_output.writeheader()
    csv_output.writerows(data)

So my question is: how do I create rows that contain every field I need, even when a field is missing from the first scraped item? (DictWriter takes its columns from data[0].keys(), and extrasaction='ignore' then silently drops any key the first item lacks.)



Comments (1)

萌化 2025-02-07 07:33:10

To store the data in a CSV file, you can use a pandas DataFrame:

pd.DataFrame(data).to_csv('output_kaunas.csv', index=False)

A DataFrame builds its columns from the union of keys across all the row dicts and fills any missing entry with NaN, so a field is kept even when the first scraped item lacks it. Applied to your full code:

from selenium import webdriver
from bs4 import BeautifulSoup
import re
import pandas as pd
import time


PATH = r'C:\Program Files (x86)\chromedriver.exe'  # raw string so the backslashes are not treated as escape sequences
driver = webdriver.Chrome(PATH)
data = []


def get_dl(soup):
    # collect the detail (dt/dd) pairs we care about from a listing page
    d_list = {}
    key = None

    for dl in soup.findAll("dl", {"class": "obj-details"}):
        for el in dl.find_all(["dt", "dd"]):
            if el.name == 'dt':
                key = el.get_text(strip=True)
            elif key in ['Plotas:', 'Buto numeris:', 'Metai:', 'Namo numeris:', 'Kambarių sk.:', 'Aukštas:', 'Aukštų sk.:', 'Pastato tipas:', 'Šildymas:', 'Įrengimas:', 'Pastato energijos suvartojimo klasė:', 'Ypatybės:', 'Papildomos patalpos:', 'Papildoma įranga:', 'Apsauga:']:
                # keep the text before the 'NAUDINGA' block and the 'm²' unit, collapse whitespace
                d_list[key] = ' '.join(el.text.strip().replace("\n", ", ").split('NAUDINGA')[0].split('m²')[0].split())
    return d_list

for puslapis in range(1, 2):  # "puslapis" is Lithuanian for "page"; range(1, 2) scrapes page 1 only
    driver.get(f'https://www.aruodas.lt/butai/kaune/puslapis/{puslapis}')
    response = driver.page_source
    soup = BeautifulSoup(response, 'html.parser')
    blocks = soup.find_all('tr', class_='list-row')
    stored_urls = []

    for url in blocks:
        try:
            stored_urls.append(url.a['href'])
        except:
            pass

    for link in stored_urls:
        driver.get(link)
        response = driver.page_source
        soup = BeautifulSoup(response, 'html.parser')
        h1 = soup.find('h1', 'obj-header-text')
        price = soup.find('div', class_ = 'price-left')

        try:
            address1 = h1.get_text(strip=True)
            address2 = re.findall(r'(.*),[^,]*$', address1)
            address = ''.join(address2)
            city, district, street = address.split(',')
        except:
            city = district = street = 'NaN'  # single value for all three; tuple-unpacking the string 'NaN' would split it into characters

        try:
            full_price = price.find('span', class_ = 'price-eur').text.strip()
            full_price1 = full_price.replace('€', '').replace(' ','').strip()
        except:
            full_price1 = 'NaN'

        try:
            price_sq_m = price.find('span', class_ = 'price-per').text.strip()
            price_sq_m1 = price_sq_m.replace('€/m²)', '').replace('(domina keitimas)', '').replace('(', '').replace(' ','').strip()
        except:
            price_sq_m1 = 'NaN'

        try:
            price_change = price.find('div', class_ = 'price-change').text.strip()
            price_change1 = price_change.replace('%', '').strip()
        except:
            price_change1 = 'NaN'

        data.append({'city': city, 'district': district, 'street': street, 'full_price': full_price1, 'price_sq_m': price_sq_m1, 'price_change': price_change1, **get_dl(soup)})


pd.DataFrame(data).to_csv('output_kaunas.csv', index=False)
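
If you prefer to stay with the standard csv module, the same problem can also be fixed by building fieldnames from the union of keys across all scraped items instead of data[0].keys(), letting restval fill the cells a row lacks. A minimal sketch against the data list built above:

import csv

# union of keys over every row, preserving first-seen order
fieldnames = []
for row in data:
    for key in row:
        if key not in fieldnames:
            fieldnames.append(key)

with open('output_kaunas.csv', 'w', encoding='utf-8', newline='') as f_output:
    csv_output = csv.DictWriter(f_output, fieldnames=fieldnames, restval='NaN')
    csv_output.writeheader()
    csv_output.writerows(data)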
