Web scraping the 'window' object

Posted on 2025-02-12 09:01:55

I am trying to get the body text of news articles like this one:

https://elpais.com/espana/2022-07-01/yolanda-diaz-lanza-su-proyecto-politico-sumar-y-convoca-el-primer-acto.html

In the source code, it can be found after "articleBody".

I've tried using bs4 Beautifulsoup but it looks like it cannot access the 'window' object where the article body information is. I'm able to get the text by using string functions:

text = re.search('"articleBody":"(.*)","keywords"', source_code)

Where source_code is a string that contains the source code of the URL. However, this method looks pretty inefficient compared to using the bs4 methods when the page allows it. Any advice, please?
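
For context, here is roughly what the current approach looks like end to end. The requests call and the JSON-unescaping of the captured text are sketched additions not shown above, and the capture is made non-greedy so it stops at the first "keywords" key:

import json
import re
import requests

url = "https://elpais.com/espana/2022-07-01/yolanda-diaz-lanza-su-proyecto-politico-sumar-y-convoca-el-primer-acto.html"
source_code = requests.get(url).text

# Grab everything between "articleBody":" and ","keywords" (non-greedy,
# so the match stops at the first "keywords" key)
match = re.search('"articleBody":"(.*?)","keywords"', source_code)
if match:
    # The captured text is still JSON-escaped (\", \u00e1, ...), so decode it
    text = json.loads('"' + match.group(1) + '"')
    print(text)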

Comments (2)

-小熊_ 2025-02-19 09:01:55

You're right about BeautifulSoup not being able to handle window objects. In fact, you need to use Selenium for that kind of thing. Here's an example of how to do it with Python 3 (you'll have to adapt it slightly if you want to work in Python 2):

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

# Create a new instance of Chrome and go to the website we want to scrape
browser = webdriver.Chrome()
browser.get("http://www.elpais.com/")
time.sleep(5)  # Let the browser load

# Find the div element containing the article content
# (find_element_by_class_name was removed in Selenium 4, so use By.CLASS_NAME)
div = browser.find_element(By.CLASS_NAME, 'articleContent')

# Print out all the text inside the div
print(div.text)

browser.quit()
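
Instead of a fixed time.sleep, an explicit wait is usually more reliable. A minimal sketch of the same idea, still assuming the article text sits in an element with class articleContent (that class name comes from the snippet above and is not verified against the live page):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get("https://elpais.com/espana/2022-07-01/yolanda-diaz-lanza-su-proyecto-politico-sumar-y-convoca-el-primer-acto.html")

# Wait up to 10 seconds for the article container to appear instead of sleeping
div = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "articleContent"))
)
print(div.text)

browser.quit()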

Hope this helps!

如果没有你 2025-02-19 09:01:55

Try:

import json
import requests
from bs4 import BeautifulSoup

url = "https://elpais.com/espana/2022-07-01/yolanda-diaz-lanza-su-proyecto-politico-sumar-y-convoca-el-primer-acto.html"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

# Find the JSON-LD <script> block that describes the news article
for ld_json in soup.select('[type="application/ld+json"]'):
    data = json.loads(ld_json.text)
    if "@type" in data and "NewsArticle" in data["@type"]:
        break

# The full article text is stored under the "articleBody" key
print(data["articleBody"])

Prints:

A una semana de que arranque Sumar ...

Or:

text = soup.select_one('[data-dtm-region="articulo_cuerpo"]').get_text(
    strip=True
)

print(text)
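
One detail worth noting: get_text(strip=True) on the whole container joins the paragraphs with no separator. If paragraph boundaries matter, a small variant of the selector above keeps them (the data-dtm-region attribute is taken from the snippet above; whether every El País article carries it is an assumption):

import requests
from bs4 import BeautifulSoup

url = "https://elpais.com/espana/2022-07-01/yolanda-diaz-lanza-su-proyecto-politico-sumar-y-convoca-el-primer-acto.html"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

# Collect each <p> inside the article body so paragraph breaks survive
body = soup.select_one('[data-dtm-region="articulo_cuerpo"]')
paragraphs = [p.get_text(strip=True) for p in body.find_all("p")]
print("\n\n".join(paragraphs))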