Uploading a file from a file object with PyCurl

Posted 2024-09-02 05:36:18


I'm attempting to upload a file like this:

import pycurl

c = pycurl.Curl()

# Multipart form fields: a plain text field plus a file field read from disk.
values = [
    ("name", "tom"),
    ("image", (pycurl.FORM_FILE, "tom.png"))
]

c.setopt(c.URL, "http://upload.com/submit")
c.setopt(c.HTTPPOST, values)  # multipart/form-data POST
c.perform()
c.close()

This works fine. However, it only works if the file is local. If I were to fetch the image like this:

import urllib2
resp = urllib2.urlopen("http://upload.com/people/tom.png")

How would I pass resp.fp as a file object instead of writing it to a file and passing the filename? Is this possible?


Answer by 打小就很酷, 2024-09-09 05:36:18


It might be possible in perfect situations to basically connect the two streams, but it wouldn't be a very robust solution. There are a bunch of ugly boundary conditions:

  • The response socket might still be receiving data, and/or be stalled, thus causing you to starve out and break the POST (because PycURL is not expecting to have to wait for data beyond the current end of the "file").
  • The response might reset, and then you don't have the complete file, but you've already POSTed a bunch of data - what to do in this case?
  • The file you're fetching with urllib might be chunked-encoded, so you need to perform some operations on the MIME headers for reassembly - you can't just blindly forward the data.
  • You don't necessarily know how big the file you're getting is, so it's hard to provide the proper Content-Length on the POST, which means you have to send the body chunked.
  • Probably a bunch of other problems I can't think of off the top of my head...

You'll be much better off writing the file to disk temporarily and then POSTing it once you know you have the whole thing.
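
For reference, here is a minimal sketch of that temp-file approach, reusing the question's placeholder URLs (Python 2 / urllib2 to match the question; the 8 KB chunk size is arbitrary):

import urllib2
import tempfile
import pycurl

# Spool the remote image to a temporary file on disk first.
resp = urllib2.urlopen("http://upload.com/people/tom.png")
tmp = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
try:
    while True:
        chunk = resp.read(8192)
        if not chunk:
            break
        tmp.write(chunk)
finally:
    tmp.close()

# Now the whole file is local, so FORM_FILE works exactly as in the question.
c = pycurl.Curl()
c.setopt(c.URL, "http://upload.com/submit")
c.setopt(c.HTTPPOST, [
    ("name", "tom"),
    ("image", (pycurl.FORM_FILE, tmp.name)),
])
c.perform()
c.close()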

If you did want to do this, the best way would probably be to implement your own file-like object which would manage the bridge between the two connections (could properly buffer, handle decoding, etc.).
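
As a bare-bones illustration of what such a bridge could look like (the class name and structure here are invented; a real version would need the buffering and error handling described above):

class ResponseReader(object):
    """Wrap an HTTP response so it looks like a readable file."""

    def __init__(self, resp):
        self._resp = resp

    def read(self, size=None):
        # A robust implementation would buffer here and cope with
        # stalls, connection resets, and chunked encoding rather
        # than delegating blindly to the underlying response.
        if size is None:
            return self._resp.read()
        return self._resp.read(size)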

EDIT:

Based on the comment you left - absolutely - you just need to setopt READFUNCTION. Check out the file_upload example at:

http://pycurl.cvs.sourceforge.net/viewvc/pycurl/pycurl/examples/file_upload.py?revision=1.5&view=markup

It does exactly this: it wraps the file object in a tiny class with a callback that reads data from it. Alternatively, if you don't need to do any processing, you can literally set the READFUNCTION callback to fp.read.
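
Putting that together, a rough sketch (again using the question's placeholder URLs). Two caveats: setopt(UPLOAD, 1) issues an HTTP PUT with a raw body rather than a multipart POST, and the Content-Length header may be absent from the response, in which case you'd have to fall back to chunked transfer:

import urllib2
import pycurl

resp = urllib2.urlopen("http://upload.com/people/tom.png")
# Raises if the server didn't send a Content-Length header.
size = int(resp.info().getheader("Content-Length"))

c = pycurl.Curl()
c.setopt(c.URL, "http://upload.com/submit")
c.setopt(c.UPLOAD, 1)                # stream the request body (HTTP PUT)
c.setopt(c.READFUNCTION, resp.read)  # pycurl calls resp.read(max_bytes)
c.setopt(c.INFILESIZE, size)         # required unless you send chunked
c.perform()
c.close()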
