soup.select() returns an empty list

Posted 2025-02-03 04:25:47 · 1,062 words · 1 view · 0 comments


I have an issue with .select which always returns an empty list while practicing webscraping.
I am working on the following page: https://presse.ania.net/news/?page=1 using BeautifulSoup.

I am getting and parsing the HTML as follows:

import requests
from bs4 import BeautifulSoup as bs

url = "https://presse.ania.net/news/?page=1"
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36'}

mr = requests.get(url, headers=headers)
soupmp = bs(mr.content, "lxml")

I try to retrieve the URL of each article displayed on the page, under the class "title row-space-1" (I used Chrome's developer tools to find the class and disabled JavaScript as suggested in other posts), and put them in a list called "news":

news = []
for link in soupmp.select("a.title.row-space-1[href]"):
    news.append(link.get('href'))

However, I keep getting an empty list when I print 'news':

[]

Searching on Stack Overflow, I tried:

  • Disabling JavaScript on the website
  • Adding a time.sleep to let the page download
  • Using .find_all, .find and .select, first with CSS selectors, then with kwargs (all return an empty list or a NoneType object)

None of these worked and I am stuck. I think there is something specific about this HTML, or about how I am selecting the class with CSS, that I am misunderstanding, but I can't find what (partly because I have successfully used this same code for other websites before).
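One common cause worth checking, shown as a minimal offline sketch (the HTML below is made up to mimic a typical card layout, since the live page's markup may differ): a compound selector like `a.title.row-space-1` only matches when the `<a>` tag *itself* carries both classes. If the classes sit on a parent element, a descendant selector is needed instead.

```python
from bs4 import BeautifulSoup

# Toy HTML: the classes sit on a parent <div>, not on the <a> tag itself
html = """
<div class="title row-space-1">
  <a href="/actualites/example.html">Example article</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# a.title.row-space-1 requires the <a> itself to carry both classes -> no match
print(soup.select("a.title.row-space-1[href]"))                    # -> []

# A descendant selector matches an <a> *inside* an element with those classes
print([a["href"] for a in soup.select(".title.row-space-1 a[href]")])
# -> ['/actualites/example.html']
```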

Could you please educate me on what I am missing?

I appreciate your help!


Comments (2)

狼性发作 2025-02-10 04:25:47

Try the following:

import requests
from bs4 import BeautifulSoup

css = ".card.item.thumbnail.card-topic .title a"

url = "https://presse.ania.net/news/?page=1"
headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36'}

soup = [
    f'https://presse.ania.net{a["href"]}' for a in
    BeautifulSoup(requests.get(url, headers=headers).content, "lxml").select(css)
]
print("\n".join(soup))

Output:

https://presse.ania.net/actualites/cp-ania-reaction-suite-a-lannonce-de-la-composition-du-nouveau-gouvernement-delisabeth-borne-lania-salue-la-double-reconnaissance-de-la-souverainete-industrielle-et-alimentaire-au-coeur-du-gouvernement-5c05-53c7f.html
https://presse.ania.net/actualites/cp-ania-nomination-de-marie-buisson-au-poste-de-directrice-juridique-7a6b-53c7f.html
https://presse.ania.net/actualites/cp-ania-lca-renegociations-commerciales-la-filiere-agroalimentaire-inquiete-bfe7-53c7f.html
https://presse.ania.net/actualites/cp-ania-lca-guerre-en-ukraine-et-derogations-detiquetage-eviter-les-ruptures-tout-en-garantissant-la-securite-sanitaire-et-la-transparence-de-linformation-aux-consommateurs-9824-53c7f.html
https://presse.ania.net/actualites/cp-ania-il-est-urgent-de-re-ouvrir-les-negociations-commerciales-c4d4-53c7f.html
https://presse.ania.net/actualites/cp-ania-lca-reunion-interministerielle-sur-la-guerre-en-ukraine-appel-du-secteur-alimentaire-pour-des-mesures-durgence-abb3-53c7f.html
https://presse.ania.net/actualites/cp-ania-lca-signature-de-lavenant-au-contrat-strategique-de-la-filiere-agroalimentaire-2022-2023-138f-53c7f.html
https://presse.ania.net/actualites/ania-note-de-conjoncture-economique-une-rentree-2022-sous-tensions-216b-53c7f.html
https://presse.ania.net/actualites/cp-ania-presidentielles-2022-le-grand-oral-de-lagroalimentaire-lalimentation-est-laffaire-de-tous-les-francais-lindustrie-alimentaire-a-la-rencontre-des-candidats-a-la-presidentielle-fe01-53c7f.html
https://presse.ania.net/actualites/cp-ania-negociations-commerciales-avec-la-grande-distribution-j-5-le-compte-ny-est-toujours-pas-au-risque-de-lasphyxie-collective-de-toute-la-filiere-830b-53c7f.html
https://presse.ania.net/actualites/cp-ania-lania-engagee-pour-leducation-et-la-promotion-des-comportements-favorables-a-la-sante-et-a-la-planete-fa16-53c7f.html
https://presse.ania.net/actualites/cp-ania-varenne-de-leau-lania-salue-les-engagements-pris-par-letat-et-rappelle-la-mobilisation-des-entreprises-5ae0-53c7f.html
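The answer above prepends the scheme and host by hand; `urllib.parse.urljoin` from the standard library is a slightly more robust way to absolutize hrefs, since it resolves relative paths against the page URL and leaves already-absolute URLs untouched. A short sketch (the article paths here are made up):

```python
from urllib.parse import urljoin

base = "https://presse.ania.net/news/?page=1"

# A root-relative href is resolved against the site's host
print(urljoin(base, "/actualites/some-article.html"))
# -> https://presse.ania.net/actualites/some-article.html

# An already-absolute href passes through unchanged
print(urljoin(base, "https://other.example/x.html"))
# -> https://other.example/x.html
```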

歌枕肩 2025-02-10 04:25:47

You could try this and insert the class of the wanted element.
I hope that helps you a bit.

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_source, 'html.parser')

# Find an element by class that has an href attribute
# ('1' is a placeholder: insert the class of the wanted element)
el = soup.find(class_='1', href=True)

# Print the href value
print(el['href'])
