Why does the XPath return blank output when crawling the Amazon site? Please help with the problem below.

Published 01-15 17:48 · 781 words · 3 views · 0 comments

Below is the code I used to crawl the Amazon site, but the output comes back blank. Please help.

from bs4 import BeautifulSoup
import pandas as pd
from lxml import etree
import requests
import time

HEADERS = ({'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Accept-Language': 'en-US, en;q=0.5'})
data = pd.DataFrame([])
    
URL= "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))
    
Price = (dom.xpath("//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()"))
#if price!=None:
    #price = (dom.xpath("//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()"))
#else:
    #price = "No Data"

print(Price)

The output comes back blank.



1 comment

无力看清 2025-01-22 17:48:38

Remove tbody from the XPath and then it works perfectly. Browsers insert a tbody element when they render a table, but the raw HTML that Amazon serves does not contain one, so any path that goes through tbody matches nothing.

from bs4 import BeautifulSoup
from lxml import etree
import requests

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0',
    'Accept-Language': 'en-US, en;q=0.5',
}

URL = "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))

# No tbody in the path -- it exists only in the browser's rendered DOM
Price = dom.xpath("//div[@id='corePrice_desktop']/div/table/tr[2]/td[2]/span/span/text()")
print(Price if Price else "No Data")
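The tbody mismatch can be demonstrated offline. The snippet below is a minimal sketch with invented markup (not taken from Amazon): lxml parses the served HTML as-is, so a path through tbody finds nothing, while the descendant axis (//tr) matches the row whether or not a tbody is present.

```python
from lxml import etree

# Hand-written table -- served HTML typically has no <tbody>,
# even though the browser's DevTools inspector shows one in the DOM.
raw_html = "<div id='box'><table><tr><td>Price</td><td>1,299</td></tr></table></div>"
dom = etree.HTML(raw_html)

# Path as copied from a browser inspector: goes through tbody, matches nothing
strict = dom.xpath("//div[@id='box']/table/tbody/tr/td[2]/text()")

# tbody-agnostic: the descendant axis matches tr with or without a tbody
loose = dom.xpath("//div[@id='box']/table//tr/td[2]/text()")

print(strict)  # []
print(loose)   # ['1,299']
```

In general, any selector copied from DevTools may contain such browser-inserted elements; pasting it unchanged against the raw HTTP response is the usual source of this bug.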