Scraping images with request headers using BeautifulSoup
I have this code for scraping images:
import requests
from bs4 import BeautifulSoup

baseurl = "https://www.google.com/search?q=cat&sxsrf=APq-WBuyx07rsOeGlVQpTsxLt262WbhlfA:1650636332756&source=lnms&tbm=shop&sa=X&ved=2ahUKEwjQr5HC66f3AhXxxzgGHejKC9sQ_AUoAXoECAIQAw&biw=1920&bih=937&dpr=1"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:99.0) Gecko/20100101 Firefox/99.0"}

r_images = requests.get(url=baseurl, headers=headers)
soup_for_image = BeautifulSoup(r_images.text, 'html.parser')

# find product images, skipping inline SVG placeholders
productimages = []
product_images = soup_for_image.find_all('img')
for item in product_images:
    src = item.get('src', '')  # .get avoids a KeyError on <img> tags without src
    if "data:image/svg+xml" not in src:
        productimages.append(src)
print(productimages)
It works fine without headers, but when I use a request header, the results are base64 images. Is there any way to scrape the images while still sending the request headers?
Comments (1)
You can add the CONSENT cookie and it works.
Note that some of the selectors may change in the future.
I hope this helps.
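A minimal sketch of the suggested fix: send Google's consent cookie together with the User-Agent header, so the response contains real image URLs instead of base64 placeholders. The cookie value "YES+" and the helper names `fetch_product_images` / `extract_product_images` are assumptions for illustration, not part of the original code, and Google may change the consent format at any time.

```python
import requests
from bs4 import BeautifulSoup


def extract_product_images(html):
    """Return all <img> src URLs, skipping inline SVG placeholders."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src")
        for img in soup.find_all("img")
        # img.get() avoids a KeyError on <img> tags without a src attribute
        if img.get("src") and not img["src"].startswith("data:image/svg+xml")
    ]


def fetch_product_images(url, headers=None):
    """Fetch a page with an assumed CONSENT cookie and extract image URLs."""
    cookies = {"CONSENT": "YES+"}  # assumed value; the consent format may change
    r = requests.get(url, headers=headers, cookies=cookies)
    return extract_product_images(r.text)


# Example usage (requires network access):
# headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:99.0) Gecko/20100101 Firefox/99.0"}
# print(fetch_product_images("https://www.google.com/search?q=cat&tbm=shop", headers=headers))
```

The scraping logic is split out into `extract_product_images` so the URL filtering can be tested without hitting the network.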