Python BeautifulSoup HTML parser not working

Posted 2025-02-09 20:13:03 · 342 characters · 2 views · 0 comments

Here I'm trying to read the page and create a CSV with the respective columns, but the find function returns nothing on the parsed data: the soup doesn't contain the data that appears on the webpage.

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "https://www.fancraze.com/marketplace/sales/mornemorkel1?tab=latest-sales"
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")
# soup.find(...) matches none of the sale rows: they are rendered
# client-side by JavaScript and are absent from the downloaded HTML
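The root cause is that the page renders its content with JavaScript after load, so the HTML that requests downloads is an empty shell and find has nothing to match. A minimal stand-in illustrates what the parser actually sees (the markup below is illustrative, not the real page source):

```python
from bs4 import BeautifulSoup

# Stand-in for a JavaScript-rendered page: the server sends an empty
# shell plus a script tag; the sale rows only exist after the script runs
# in a browser, which requests does not do.
html = """
<html><body>
  <div id="root"></div>
  <script src="/static/js/main.js"></script>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# find() returns None because the data was never in the downloaded HTML
print(soup.find("table"))                       # None
print(soup.find(id="root").get_text(strip=True))  # empty string
```

This is why scraping such pages needs either a browser-driving tool (e.g. Selenium or Playwright) or, as in the answer below, the underlying API the page itself calls.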


Comments (1)

明天过后 2025-02-16 20:13:03


The site loads its data from an API, so you can query that endpoint directly:

import pandas as pd
import requests

# The page's backend endpoint (visible in the browser's network tab)
url = 'https://api.faze.app/v1/latestSalesInAGroup/mornemorkel1'
result = []
response = requests.get(url)
for sale in response.json()['data']:
    # flatten the nested JSON into one row per sale
    result.append({
        'id': sale['momentId']['id'],
        'seller': sale['sellerAddress']['userName'],
        'buyer': sale['buyerAddress']['userName'],
        'price': sale['price'],
        'created': sale['createdAt'],
    })
df = pd.DataFrame(result)
print(df)

OUTPUT:

      id                     seller  ... price                   created
0   1882                   singal22  ...     8  2022-06-22T14:34:39.403Z
1   1737           olive_creepy2343  ...     7  2022-06-22T14:09:32.070Z
2   1256          tomato_wicked3294  ...    10  2022-06-22T13:49:20.895Z
3   1931  aquamarine_productive9244  ...     6  2022-06-22T13:41:49.153Z
4   1603  aquamarine_productive9244  ...     9  2022-06-22T13:28:01.624Z
..   ...                        ...  ...   ...                       ...
95  1026           olive_creepy2343  ...     7  2022-04-16T18:00:00.662Z
96  1719                 Hhassan136  ...     5  2022-04-14T23:14:12.037Z
97  2054                 Cricket101  ...     5  2022-04-14T21:30:13.185Z
98  1961                  emzeden_9  ...     6  2022-04-14T18:02:05.194Z
99  1194       amaranth_curious1871  ...     5  2022-04-14T17:45:25.266Z