Scraping images with request headers using BeautifulSoup

Posted on 2025-02-04 04:56:00


I have this code to scrape images:

import requests
from bs4 import BeautifulSoup


baseurl = "https://www.google.com/search?q=cat&sxsrf=APq-WBuyx07rsOeGlVQpTsxLt262WbhlfA:1650636332756&source=lnms&tbm=shop&sa=X&ved=2ahUKEwjQr5HC66f3AhXxxzgGHejKC9sQ_AUoAXoECAIQAw&biw=1920&bih=937&dpr=1"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:99.0) Gecko/20100101 Firefox/99.0"}

r_images = requests.get(url=baseurl, headers=headers)
soup_for_image = BeautifulSoup(r_images.text, 'html.parser')

# find product images, skipping inline SVG placeholders
product_images = []
for item in soup_for_image.find_all('img'):
    src = item.get('src')  # .get() so <img> tags without a src don't raise KeyError
    if src and "data:image/svg+xml" not in src:
        product_images.append(src)
print(product_images)

It works fine without the header, but if I send the request header, the results come back as base64-encoded images. Is there any way to scrape the image URLs while using request headers?
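As an aside, the base64 results described above are `data:` URIs embedded in the page, so they don't need to be fetched over HTTP at all: the payload after the comma can be decoded directly with the `base64` module. A minimal sketch, using a made-up tiny GIF data URI (real scraped values will be much longer):

```python
import base64

# Illustrative data URI of the kind Google inlines into img src attributes
data_uri = "data:image/gif;base64,R0lGODlhAQABAAAAACw="
mime_header, encoded = data_uri.split(",", 1)
raw = base64.b64decode(encoded)
# The decoded bytes are the image file itself, ready to write to disk
print(raw[:4])
```

The decoded bytes start with the file's magic number (`GIF8` here), so they can be saved with an appropriate extension.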


Comments (1)

ゝ杯具 2025-02-11 04:56:00


You can add a CONSENT cookie and it works.
Note that the selectors may change in the future.

import requests
from bs4 import BeautifulSoup

baseurl = "https://www.google.com/search?q=cat&sxsrf=APq-WBuyx07rsOeGlVQpTsxLt262WbhlfA:1650636332756&source=lnms&tbm=shop&sa=X&ved=2ahUKEwjQr5HC66f3AhXxxzgGHejKC9sQ_AUoAXoECAIQAw&biw=1920&bih=937&dpr=1"
# Sending a consent cookie gets Google to serve the regular results page
headers = {"cookie": "CONSENT=YES+cb.20230531-04-p0.en+FX+908"}
result = requests.get(url=baseurl, headers=headers)
soup = BeautifulSoup(result.text, 'html.parser')
allProducts = soup.find_all(class_="u30d4")  # one element per product card
number = 0
for product in allProducts:
    name = product.find(class_="rgHvZc")
    if name is not None:
        number += 1
        print("Product number %d:" % number)
        print("Name: " + name.text)
        productLink = product.find('a')
        print("Link: " + productLink["href"][7:])  # strip the "/url?q=" prefix
        img = product.find('img')
        print("Image: " + img["src"])
        price = product.find(class_="HRLxBb")
        if price is not None:
            print("Price: " + price.text)

I hope I have been able to help you.
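The `[7:]` slice in the answer strips Google's `/url?q=` redirect prefix by its character count. A more robust way to recover the target URL is to parse the query string with `urllib.parse`; a sketch using a hypothetical href of the shape Google returns:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect link, standing in for a scraped product href
google_link = "/url?q=https://example.com/product&sa=U&ved=abc"

# parse_qs returns a dict of query parameters; "q" holds the real target URL
query = parse_qs(urlparse(google_link).query)
real_url = query.get("q", [google_link])[0]
print(real_url)
```

Unlike the fixed slice, this keeps working if Google changes the prefix or adds parameters before `q`.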
