Downloading a file part by part in Python 3
I'm using Python 3 to download a file:
# f is the open HTTP response, e.g. from urllib.request.urlopen(url)
local_file = open(file_name, "w" + file_mode)  # file_mode is "" or "b" depending on the content
local_file.write(f.read())  # f.read() pulls the entire file into memory at once
local_file.close()
This code works, but it copies the whole file into memory first. This is a problem with very big files, because my program becomes memory-hungry (going from 17 MB to 240 MB of memory use for a 200 MB file).
I would like to know if there is a way in Python to download a small part of the file at a time (a chunk), write it to disk, free it from memory, and repeat the process until the file is completely downloaded.
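Roughly, I imagine something like the following sketch (the URL and file name are placeholders, and I don't know whether this is the right approach):

import urllib.request

url = "http://example.com/big_file.bin"  # placeholder URL
file_name = "big_file.bin"

with urllib.request.urlopen(url) as response, open(file_name, "wb") as local_file:
    while True:
        chunk = response.read(64 * 1024)  # read one 64 KiB piece of the file
        if not chunk:                     # an empty bytes object means the download is done
            break
        local_file.write(chunk)           # each iteration rebinds chunk, so only one piece is in memory at a time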
1 Answer
Try using the method described here:
Lazy Method for Reading Big File in Python?
I am specifically referring to the accepted answer. Let me also copy it here so this answer is self-contained.
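Here is a sketch along the lines of that accepted answer (the function name read_in_chunks, the 1 KiB default chunk size, and the file name below are illustrative, not a verbatim copy):

def read_in_chunks(file_object, chunk_size=1024):
    """Lazily read a file piece by piece, yielding one chunk at a time."""
    while True:
        data = file_object.read(chunk_size)
        if not data:  # an empty result means end of file
            break
        yield data

# Usage: only one chunk is held in memory at any moment.
with open("really_big_file.dat", "rb") as big_file:
    for piece in read_in_chunks(big_file):
        print(len(piece))  # stand-in for whatever processing you need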
This should be adaptable to your needs: it reads the file in small chunks, so each piece can be processed without loading the whole thing into memory. Come back if you have any further questions.
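For the download itself, the same generator should work unchanged, since the response object returned by urllib.request.urlopen also supports read(size); the URL and output file name below are placeholders:

import urllib.request

url = "http://example.com/big_file.bin"  # placeholder URL
with urllib.request.urlopen(url) as response, open("big_file.bin", "wb") as out_file:
    for piece in read_in_chunks(response, chunk_size=64 * 1024):  # 64 KiB chunks
        out_file.write(piece)  # write each chunk, then drop the reference to it

The standard library's shutil.copyfileobj(response, out_file) performs essentially the same fixed-size-chunk copy, if you prefer a one-liner.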