Saving zip files from a list of URLs

Published 2025-01-25 15:06:15


I'm trying to use urllib.request to read a list of URLs from a shapefile and then download the zips from all of those URLs. So far I have my list of a certain number of URLs, but I can't pass them all through. The error is "expected string or bytes-like object", which suggests there's a problem with how the URL is being passed. As a side note, I also need to download them and name them by their file name/#. Need help! Code below.

import arcpy
import urllib.request
import os
    
os.chdir('C:\\ProgInGIS\\FinalExam\\Final')
lidar_shp = 'C:\\ProgInGIS\\FinalExam\\Final\\lidar-2013.shp'
zip_file_download = 'C:\\ProgInGIS\\FinalExam\\Final\\file1.zip'
    
    
data = []
with arcpy.da.SearchCursor(lidar_shp,"*") as cursor:
    for row in cursor:
        data.append(row)
data.sort(key=lambda tup: tup[2])
    
i = 0
with arcpy.da.UpdateCursor(lidar_shp,"*") as cursor:
    for row in cursor:
        row = data[i]
        i += 1
        cursor.updateRow(row)
    
counter = 0
url_list = []
with arcpy.da.UpdateCursor(lidar_shp, ['geotiff_ur']) as cursor:
    for row in cursor:
        url_list.append(row)
        counter += 1
        if counter == 18:
            break
for item in url_list:
    print(item)
    urllib.request.urlretrieve(item)
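For context on the error: arcpy.da cursors yield each row as a tuple, even when only one field is requested, so `url_list` ends up holding tuples rather than strings, and `urlretrieve()` also needs a second argument to control the saved file name. A minimal sketch of the fix, using hypothetical URLs in plain lists in place of the cursor so it runs without arcpy:

```python
import os

# Simulated cursor output: each "row" is a one-element tuple, which is
# what arcpy.da.SearchCursor/UpdateCursor actually yield. Passing such a
# tuple to urllib.request.urlretrieve() raises
# "expected string or bytes-like object".
rows = [("http://example.com/tiles/tile_001.zip",),
        ("http://example.com/tiles/tile_002.zip",)]

# Unpack the single field so the list holds strings, not tuples.
url_list = [row[0] for row in rows]

for url in url_list:
    # Name each download after the last segment of its URL.
    filename = os.path.basename(url)
    print(url, "->", filename)
    # urllib.request.urlretrieve(url, filename)  # filename argument sets
    #                                            # where the zip is saved
```

The real download call is commented out here since the URLs are placeholders; with actual URLs from the shapefile, `urlretrieve(url, filename)` would save each zip under its own name.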


Comments (1)

⒈起吃苦の倖褔 2025-02-01 15:06:15


I understand your question this way: you want to download a zip file for each record in a shapefile, from a URL defined in a certain field.

It's easier to use the requests package, which is also recommended in the urllib.request documentation:

The Requests package is recommended for a higher-level HTTP client interface.

Here is an example:

import arcpy
import arcpy.da
import shutil
import requests

SHAPEFILE = "your_shapefile.shp"

# Read the output name and download URL for each record, then stream
# each zip straight to disk instead of holding it in memory.
with arcpy.da.SearchCursor(SHAPEFILE, ["name", "url"]) as cursor:
    for name, url in cursor:
        response = requests.get(url, stream=True)
        if response.status_code == 200:
            with open(f"{name}.zip", "wb") as file:
                # Undo any gzip/deflate transfer encoding before copying.
                response.raw.decode_content = True
                shutil.copyfileobj(response.raw, file)
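If the shapefile has no separate name field, the local file name can be derived from the URL itself. A small standard-library sketch (the URL and helper name are hypothetical, not from the original code):

```python
import os
from urllib.parse import urlparse

def zip_name_from_url(url):
    """Derive a local file name from the last path segment of a URL."""
    name = os.path.basename(urlparse(url).path)
    # Ensure a .zip extension even if the URL path omits it.
    return name if name.endswith(".zip") else name + ".zip"

print(zip_name_from_url("http://example.com/lidar/tile_042.zip"))
```

This keeps the download loop independent of any attribute field: `open(zip_name_from_url(url), "wb")` would replace `open(f"{name}.zip", "wb")` in the example above.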

There is another example on GIS StackExchange:
