How do I extract data from URLs?

Posted on 2025-01-31 17:34:19


I have an xlsx file where a lot of URLs are stored along with their serial ids. Each of these URLs redirects to a webpage where an article is written. My question is: how do I scan all the URLs using Python and store the title and text of each article in a new text file, with the URL's serial id as its file name?


Comments (1)

寂寞花火° 2025-02-07 17:34:19


You can do this using web scraping.

As you said, you have an xlsx containing (id, url) tuples.

You could start by loading it into Python with:

import pandas as pd

# load the spreadsheet into a DataFrame
urls = pd.read_excel(filename)
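
If the sheet has no header row, you can name the columns yourself when loading. The names "id" and "url" below are an assumption; use whatever matches your file:

urls = pd.read_excel(filename, header=None, names=["id", "url"])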

Then, to read the content of each URL, you can use one of the most popular web scraping libraries in Python: BeautifulSoup.

from bs4 import BeautifulSoup
import requests

# get the raw HTML from the request
content = requests.get(url).content

# build the soup (passing an explicit parser avoids a bs4 warning)
soup = BeautifulSoup(content, "html.parser")

# get the title
title_tag = soup.find("title")  # the whole tag: <title>ActualTitle</title>

title = title_tag.string  # just the text: ActualTitle


# You can get the whole text contained in the page
text_content = soup.get_text()
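
Putting the pieces together, here is a minimal end-to-end sketch. The file name urls.xlsx and the column names id and url are assumptions; adjust them to match your spreadsheet:

import pandas as pd
import requests
from bs4 import BeautifulSoup

urls = pd.read_excel("urls.xlsx")  # assumed file name and columns "id", "url"

for row in urls.itertuples(index=False):
    # requests follows redirects by default, so the final article page is fetched
    content = requests.get(row.url, timeout=10).content
    soup = BeautifulSoup(content, "html.parser")

    # fall back to an empty title if the page has no <title> tag
    title = soup.title.string if soup.title and soup.title.string else ""
    text = soup.get_text(separator="\n", strip=True)

    # one text file per URL, named after its serial id
    with open(f"{row.id}.txt", "w", encoding="utf-8") as f:
        f.write(f"{title}\n\n{text}")

For a long list of URLs you may also want to wrap the requests.get call in a try/except, so a single dead link doesn't stop the whole loop.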