Web scraping a database's HTML with Python

Posted 2025-01-22 13:06:18


I am new to Python and am learning things slowly. I have previously made API calls to databases to extract information. However, I am now dealing with a particular Indian database, and its HTML is confusing when I try to extract the specific information I am looking for. Basically, I have a list of herb links as input that looks like this (only the ID changes):

http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8
http://envis.frlht.org/plantdetails/2133/fd01bd598f0869d65fe5a2861845f9f9
http://envis.frlht.org/plantdetails/845/fd01bd598f0869d65fe5a2861845f9f10
http://envis.frlht.org/plantdetails/363/fd01bd598f0869d65fe5a2861845f9f11

When I open each of these, I want to extract the "Distribution" detail for each herb from the webpage. That's all. But in the HTML, I can't figure out which element holds that detail. I tried a lot before coming here. Can someone please help me? Thanks in advance.

Code:

import requests
import urllib.request
import time
from bs4 import BeautifulSoup
import json
import pandas as pd
import os
from pathlib import Path
from pprint import pprint

user_home = os.path.expanduser('~')
OUTPUT_DIR = os.path.join(user_home, 'vk_frlht')
Path(OUTPUT_DIR).mkdir(parents=True, exist_ok=True)

herb_url = 'http://envis.frlht.org/bot_search'
response = requests.get(herb_url)
soup = BeautifulSoup(response.text, "html.parser")
# find() takes a tag name, not placeholder text - look for the hidden input itself
token = soup.find('input', {'type': 'hidden', 'name': 'token'})
herb_query_url = 'http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8'

response = requests.get(herb_query_url)

#optional code for many links at once

with open('IDs.txt', 'r') as f:  # placeholder filename for the list of links
    frlhtinput = f.readlines()
    frlht = [x.strip() for x in frlhtinput]  # drop trailing newlines

    for line in frlht:
        # assumes each line stores only the '<id>/<hash>' part of the link
        out = requests.get(f'http://envis.frlht.org/plantdetails/{line}')
#end of the optional code

herb_query_soup = BeautifulSoup(response.text, "html.parser")
text = herb_query_soup.find('div', {'id': 'result-details'})
pprint(text)

4 Answers

你对谁都笑 2025-01-29 13:06:18


This is how the page looks after scraping:

[screenshot: the page shows only a loading spinner]

The loading sign in the middle means the content only appears after JavaScript executes; in effect, someone has protected this content with JS code. You have to use a Selenium-driven browser instead of BS4 alone.

See the tutorial here on how to use it.
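
A minimal sketch (my addition, not from the answer) of driving the page with Selenium and handing the rendered HTML to BeautifulSoup; it assumes Chrome and Selenium 4 are installed, and reuses the result-details id from the question's code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
try:
    driver.get('http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8')
    # Wait for the details container (id taken from the question's code);
    # a longer or text-based wait may be needed if the spinner lingers.
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.ID, 'result-details'))
    )
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    print(soup.find('div', {'id': 'result-details'}).get_text(strip=True))
finally:
    driver.quit()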

辞旧 2025-01-29 13:06:18


Try this. The details are loaded from a separate bot_search URL, so request that endpoint directly:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

plant_ids = ["3315", "2133", "845", "363"]
results = []
for plant_id in plant_ids:
    herb_query_url = f"http://envis.frlht.org/plantdetails/{plant_id}/fd01bd598f0869d65fe5a2861845f9f8"
    # The endpoint expects a Referer that looks like the plant details page.
    headers = {
        "Referer": herb_query_url,
    }
    response = requests.get(
        f"http://envis.frlht.org/bot_search/plantdetails/plantid/{plant_id}/nocache/0.7763327765552295/referredfrom/extplantdetails",
        headers=headers,
    )
    herb_query_soup = BeautifulSoup(response.text, "html.parser")
    # Each "Name: value" detail lives in its own initbriefdescription div.
    result = herb_query_soup.find_all("div", {"class": "initbriefdescription"})
    for r in result:
        key, value = r.text.split(":", 1)
        results.append({key.strip(): value.strip()})
pprint(results)
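
One thing worth flagging: this answer and the one below pass different values in the nocache path segment (a hardcoded float here, the URL hash below), so it appears to be nothing more than a cache-buster. A small variation under that assumption:

# Assumption: the nocache segment is only a cache-buster, so any fresh
# random number should work as well as the hardcoded 0.7763... value.
import random

plant_id = "3315"  # example ID from the question
details_url = (
    f"http://envis.frlht.org/bot_search/plantdetails/plantid/{plant_id}"
    f"/nocache/{random.random()}/referredfrom/extplantdetails"
)
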
谈场末日恋爱 2025-01-29 13:06:18

import requests
from bs4 import BeautifulSoup
import csv


fieldnames = ["ID", "Accepted Name", "Family", "Used in", "Distribution"]

with open('IDs.txt') as f_input, open('output.csv', 'w', newline='') as f_output:
    csv_output = csv.DictWriter(f_output, fieldnames=fieldnames, extrasaction='ignore')
    csv_output.writeheader()

    for line in f_input:
        url = line.strip()  # Remove newline
        print(url)
        url_split = url.split('/')
        url_details = f"http://envis.frlht.org/bot_search/plantdetails/plantid/{url_split[4]}/nocache/{url_split[5]}/referredfrom/extplantdetails"

        headers = {
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Referer': url,
        }

        req = requests.get(url_details, headers=headers)
        soup = BeautifulSoup(req.content, "html.parser")
        row = {field: '' for field in fieldnames}      # default values
        row['ID'] = url_split[4]

        for r in soup.find_all("div", {"class": "initbriefdescription"}):
            # Each div holds "Name: value"; split once on the first colon.
            entry, value = r.get_text(strip=True).split(":", 1)
            row[entry] = value
        print(row)

        csv_output.writerow(row)
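
With an IDs.txt file of plantdetails links in the working directory (the file format is shown in the next answer), this writes one row per link to output.csv.
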
青衫儰鉨ミ守葔 2025-01-29 13:06:18


The information is obtained from another URL based on the URLs you have. First you need to construct the required URL (found by watching the requests the browser makes) and request that.

This information can be written to a CSV file as follows. It assumes you have a text file IDs.txt containing:

http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8
http://envis.frlht.org/plantdetails/2133/fd01bd598f0869d65fe5a2861845f9f9
http://envis.frlht.org/plantdetails/845/fd01bd598f0869d65fe5a2861845f9f10
http://envis.frlht.org/plantdetails/363/fd01bd598f0869d65fe5a2861845f9f11

import requests
from bs4 import BeautifulSoup
import csv


fieldnames = ["ID", "Accepted Name", "Family", "Used in", "Distribution"]

with open('IDs.txt') as f_input, open('output.csv', 'w', newline='') as f_output:
    csv_output = csv.DictWriter(f_output, fieldnames=fieldnames, extrasaction='ignore')
    csv_output.writeheader()

    for line in f_input:
        url = line.strip()  # Remove newline
        print(url)
        url_split = url.split('/')
        url_details = f"http://envis.frlht.org/bot_search/plantdetails/plantid/{url_split[4]}/nocache/{url_split[5]}/referredfrom/extplantdetails"
        
        headers = {
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Referer' : url,
        }
        
        req = requests.get(url_details, headers=headers)
        soup = BeautifulSoup(req.content, "html.parser")
        row = {field : '' for field in fieldnames}      # default values
        row['ID'] = url_split[4]
            
        for div in soup.find_all('div', class_="initbriefdescription"):
            entry, value = div.get_text(strip=True).split(":", 1)
            row[entry] = value

        csv_output.writerow(row)

Giving an output starting:

ID,Accepted Name,Family,Used in,Distribution
3315,Amaranthus hybridusL. subsp.cruentusvar.paniculatusTHELL.,AMARANTHACEAE,"Ayurveda, Siddha, Folk","This species is globally distributed in Africa, Asia and India. It is said to be cultivated as a leafy vegetable in Maharashtra, Karnataka (Coorg) and on the Nilgiri hills of Tamil Nadu. It is also found as an escape."
2133,Triticum aestivumL.,POACEAE,"Ayurveda, Siddha, Unani, Folk, Chinese, Modern",
845,Dolichos biflorusL.,FABACEAE,"Ayurveda, Siddha, Unani, Folk, Sowa Rigpa","This species is native to India, globally distributed in the Paleotropics. Within India, it occurs all over up to an altitude of 1500 m. It is an important pulse crop particularly in Madras, Mysore, Bombay and Hyderabad."
363,Brassica oleraceaL.,BRASSICACEAE,"Ayurveda, Siddha",
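
Since the question only needs the "Distribution" field, here is a small helper distilled from the code above (the function name, timeout, and status check are my additions, not part of the original answer); it reuses the same endpoint pattern:

import requests
from bs4 import BeautifulSoup

def get_distribution(plant_url, timeout=15):
    """Return the Distribution text for one plantdetails URL, or '' if absent."""
    parts = plant_url.split('/')  # parts[4] is the plant ID, parts[5] the hash
    details_url = (f"http://envis.frlht.org/bot_search/plantdetails/plantid/"
                   f"{parts[4]}/nocache/{parts[5]}/referredfrom/extplantdetails")
    resp = requests.get(details_url, headers={'Referer': plant_url}, timeout=timeout)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, 'html.parser')
    for div in soup.find_all('div', class_='initbriefdescription'):
        entry, sep, value = div.get_text(strip=True).partition(':')
        if sep and entry.strip() == 'Distribution':
            return value.strip()
    return ''

print(get_distribution('http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8'))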