Why does my XPath query return a blank output when crawling the Amazon site? Please help with the following problem.
Below is the code I used to crawl the Amazon site, but the output comes back blank. Please help.
from bs4 import BeautifulSoup
import pandas as pd
from lxml import etree
import requests
import time
HEADERS = ({'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Accept-Language': 'en-US, en;q=0.5'})
data = pd.DataFrame([])
URL= "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))
Price = dom.xpath("//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()")
# if Price:                 # xpath() returns a list; check for emptiness, not None
#     price = Price[0]
# else:
#     price = "No Data"
print(Price)
The output is blank.

Remove tbody from the xpath & then it works perfectly:

from bs4 import BeautifulSoup
import pandas as pd
from lxml import etree
import requests
import time

HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0', 'Accept-Language': 'en-US, en;q=0.5'}
URL = "https://www.amazon.in/dp/B09NM3WWGY"
webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "lxml")
dom = etree.HTML(str(soup))
# Same path as the question, minus the tbody step
Price = dom.xpath("//div[@id='corePrice_desktop']/div/table/tr[2]/td[2]/span/span/text()")
print(Price)
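Why removing tbody helps: browser dev tools show the browser's DOM, into which browsers insert an implied tbody element inside every table, but the raw HTML that requests downloads (and that lxml parses) usually does not contain that tag, so a path that includes tbody matches nothing. The snippet below is a self-contained sketch with hypothetical markup mimicking the price table, no network needed:

```python
from lxml import etree

# Hypothetical raw markup: the <table> has no <tbody> tag, even though a
# browser's DOM inspector would display one.
html = """
<div id='corePrice_desktop'>
  <div>
    <table>
      <tr><td>M.R.P.:</td><td><span><span>1,999</span></span></td></tr>
      <tr><td>Price:</td><td><span><span>1,499</span></span></td></tr>
    </table>
  </div>
</div>
"""
dom = etree.HTML(html)

# Path copied from browser dev tools, tbody included -> matches nothing,
# because lxml's parser does not insert an implied tbody.
with_tbody = dom.xpath(
    "//div[@id='corePrice_desktop']/div/table/tbody/tr[2]/td[2]/span/span/text()")

# Same path without tbody -> matches the raw HTML as actually served.
without_tbody = dom.xpath(
    "//div[@id='corePrice_desktop']/div/table/tr[2]/td[2]/span/span/text()")

print(with_tbody)     # []
print(without_tbody)  # ['1,499']
```

A path like //div[@id='corePrice_desktop']//table//tr[2]/td[2]//text() (using the descendant axis) works whether or not tbody is present, at the cost of being less strict.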