libcurl not handling chunked responses
I'm using libcurl to download a file from a URL. The original size of the file is 1700k, but I only get 1200k. After inspecting with a packet sniffer, I realized the data was coming in chunked encoding and gzip. Also, my progress callback always shows a dltotal of 0. I tried setting CURLOPT_ENCODING to "gzip", "deflate", "", "blah", 123123 (i.e. non-null values), but no luck. I still get 1200k of unprocessed data. What should I do to get this working, along with the progress problem?
Thanks,
Fatih
libcurl calls the progress callback with "dltotal" set to 0 when chunked encoding is used, since it can't know the total size then.
It does, however, support and correctly handle both chunked encoding and content-encoding gzip, so if you don't get the full file decompressed, the problem might be that your server is acting up, or that your connection somehow breaks before the full file has been transferred.
Also, make sure you use a recent curl version, so that you're not suffering from an old bug or similar.