Finding the links in all li tags under a ul tag

Posted 2025-02-08 13:21:23

I'm trying to get the links in all the li tags under the ul tag.

HTML code:

<div id="chapter-list" class="sbox" style="">
<ul>
<li>
<a href="https://example.com/manga/name/2">
<div class="chpbox">
<span class="chapternum">
Chapter 2 </span>
</div>
</a>
</li>
<li>
<a href="https://example.com/manga/name/1">
<div class="chpbox">
<span class="chapternum">
Chapter 1 </span>
</div>
</a>
</li>
</ul>
</div>

The code I wrote:

from bs4 import BeautifulSoup
import requests

html_page = requests.get('https://example.com/manga/name/')

soup = BeautifulSoup(html_page.content, 'html.parser')
chapters = soup.find('div', {"id": "chapter-list"})

children = chapters.findChildren("ul", recursive=False) # when printed, it gives the whole ul content

for litag in children.find('li'):
    print(litag.find("a")["href"])

When I try to print the li tag links, it gives the following error:

Traceback (most recent call last):
  File "C:\0.py", line 12, in <module>
    for litag in children.find('li'):
  File "C:\Users\hs\AppData\Local\Programs\Python\Python310\lib\site-packages\bs4\element.py", line 2289, in __getattr__
    raise AttributeError(
AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?
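
The error itself comes from findChildren() returning a ResultSet, a list-like collection of Tag objects, which has no find() method of its own. A minimal sketch of the corrected loop, assuming the same page structure as above, takes the single ul with find() and then iterates over its li tags:

from bs4 import BeautifulSoup
import requests

# Hypothetical URL, as in the question; replace with the real chapter-list page.
html_page = requests.get('https://example.com/manga/name/')
soup = BeautifulSoup(html_page.content, 'html.parser')

chapters = soup.find('div', {"id": "chapter-list"})

# find() returns a single Tag (or None), so find_all('li') can be called on it.
ul = chapters.find('ul')
for litag in ul.find_all('li'):
    print(litag.find('a')['href'])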


1 Answer

陌若浮生 2025-02-15 13:21:23

You can use find to locate the ul inside the chapter list, then find_all to get the list items in that ul. Finally, use find_all again to find the link in each list item and print its URL. Both methods are described in the bs4 documentation for find() and find_all(). To get a link's text, such as "Chapter 1", search each link for the span with class chapternum and call get_text() on it; searching by class is also covered in the bs4 documentation.

(Updated) code:

from bs4 import BeautifulSoup

html_doc = """
<div id="chapter-list" class="sbox" style="">
    <ul>
        <li>
            <a href="https://example.com/manga/name/2">
                <div class="chpbox">
<span class="chapternum">
Chapter 2 </span>
                </div>
            </a>
        </li>
        <li>
            <a href="https://example.com/manga/name/1">
                <div class="chpbox">
<span class="chapternum">
Chapter 1 </span>
                </div>
            </a>
        </li>
    </ul>
</div>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
chapters = soup.find('div', {"id": "chapter-list"})

list_items = chapters.find('ul').find_all('li')

for list_item in list_items:
    for link in list_item.find_all('a'):
        title = link.find('span', class_='chapternum').get_text().strip()
        href = link.get("href")
        print(f"{title}: {href}")

Output:

Chapter 2: https://example.com/manga/name/2
Chapter 1: https://example.com/manga/name/1
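
A more compact alternative, sketched here against the same html_doc and soup built above, uses a CSS selector to reach the anchors directly:

# Alternative sketch: select #chapter-list ul li a in one call.
for link in soup.select('#chapter-list ul li a'):
    title = link.select_one('span.chapternum').get_text(strip=True)
    print(f"{title}: {link['href']}")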


