Python request can't get all the links from a website
I'm learning how to use the Python urllib.request module and I've been trying to get all the links from a website. It works for most sites, but I'm having trouble with this one (https://randomtube.xyz/, the URL used in the code below).
The output I get for this link is just an empty list:
# python teosa.py
[]
The whole code looks like this:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
import re
req = Request("https://randomtube.xyz/")
html_page = urlopen(req)
soup = BeautifulSoup(html_page, "lxml")
links = []
for link in soup.findAll('a'):
    links.append(link.get('href'))
print(links)
Does anyone know what could be the issue?
Comments (1)
The issue is that there are no links at the URL you gave.
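One way to check this, as a rough diagnostic sketch using the same URL from the question, is to print part of the raw response and count the anchor tags the parser finds:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

req = Request("https://randomtube.xyz/")
raw = urlopen(req).read()

# Show the start of what the server actually returned.
print(raw[:300])

soup = BeautifulSoup(raw, "lxml")
# If this prints 0, the parsed page simply contains no <a> elements.
print(len(soup.find_all("a")))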
Your script should work as-is. Try a site with links.
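For comparison, here is a minimal sketch of the same approach against a page assumed to contain at least one anchor tag (https://example.com is used here purely as an illustration):
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

# Assumption: https://example.com is reachable and has at least one <a> element.
req = Request("https://example.com/")
html_page = urlopen(req)

soup = BeautifulSoup(html_page, "lxml")

# Same link-collection logic as in the question, written as a list comprehension.
links = [a.get("href") for a in soup.find_all("a")]
print(links)  # should print a non-empty list if the page has links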
Note: I don't think you need the 'import re' line; nothing in the script uses it.