Downloading a binary file with WinInet
I am currently writing a simple program that I want to distribute to my friends. What I am trying to accomplish is to download some external binary files from the internet into a buffer when the program starts. To do this, I am using Windows Internet (WinInet). Currently, I am using InternetReadFile to read the file into a buffer that I use later in the program. However, the file is not read completely: the resulting size is much smaller than the size of the file on the server, when it should be the same.
I would like to do this without using any external libraries.
Any idea what could solve my problem?
Thanks,
Andrew
The documentation makes the point that there is no guarantee the function reads exactly dwNumberOfBytesToRead bytes; check how many bytes were actually read using the lpdwNumberOfBytesRead parameter. Moreover, as soon as the total file size is larger than dwNumberOfBytesToRead, you will need to invoke the call multiple times, because it cannot read more than dwNumberOfBytesToRead at once.
If you have the total file size in advance, the loop takes the form sketched below.
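The code block from the original answer did not survive in this copy of the page; the following is only a sketch of what such a loop could look like. The names hRequest and totalSize are placeholders I introduced: hRequest stands for a handle returned by InternetOpenUrl or HttpOpenRequest, and totalSize for the expected byte count (for example taken from the Content-Length header via HttpQueryInfo).

#include <windows.h>
#include <wininet.h>
#include <vector>

#pragma comment(lib, "wininet.lib")

// Read exactly totalSize bytes from an already-open WinInet handle.
// InternetReadFile may return fewer bytes than requested, so keep
// asking for the remainder until everything has arrived.
std::vector<char> ReadAll(HINTERNET hRequest, DWORD totalSize)
{
    std::vector<char> buffer(totalSize);
    DWORD totalRead = 0;

    while (totalRead < totalSize)
    {
        DWORD bytesRead = 0;
        if (!InternetReadFile(hRequest,
                              buffer.data() + totalRead,
                              totalSize - totalRead,
                              &bytesRead))
            break;                  // hard error: inspect GetLastError()
        if (bytesRead == 0)
            break;                  // end of stream reached early
        totalRead += bytesRead;
    }

    buffer.resize(totalRead);       // trim if the server sent less than expected
    return buffer;
}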
If you do not know the total size in advance, then you need to write the data in the buffer out to another file after each read instead of accumulating it.
EDIT (SAMPLE TEST PROGRAM):
Here's a complete program that fetches StackOverflow's front page. It downloads about 200K of HTML code in 1K chunks and retrieves the full page. Can you run this and see if it works?
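The program that was originally attached here was also lost when the page was copied. As a stand-in, here is a minimal reconstruction that follows the same description (StackOverflow's front page, read in 1 KB chunks). The choice of InternetOpenUrlA, the INTERNET_FLAG_RELOAD flag, and the error handling are my assumptions rather than the original author's code.

#include <windows.h>
#include <wininet.h>
#include <stdio.h>
#include <string>

#pragma comment(lib, "wininet.lib")

int main()
{
    // Open a WinInet session using the machine's preconfigured proxy settings.
    HINTERNET hInternet = InternetOpenA("WinInetTest",
                                        INTERNET_OPEN_TYPE_PRECONFIG,
                                        NULL, NULL, 0);
    if (!hInternet)
    {
        printf("InternetOpen failed: %lu\n", GetLastError());
        return 1;
    }

    // Request the page, bypassing the cache so the server is actually hit.
    HINTERNET hUrl = InternetOpenUrlA(hInternet,
                                      "https://stackoverflow.com/",
                                      NULL, 0, INTERNET_FLAG_RELOAD, 0);
    if (!hUrl)
    {
        printf("InternetOpenUrl failed: %lu\n", GetLastError());
        InternetCloseHandle(hInternet);
        return 1;
    }

    std::string page;      // accumulates the whole response
    char chunk[1024];      // 1 KB per read, as described above
    DWORD bytesRead = 0;

    // Keep reading until InternetReadFile succeeds with zero bytes,
    // which is how WinInet signals the end of the data.
    while (InternetReadFile(hUrl, chunk, sizeof(chunk), &bytesRead) && bytesRead > 0)
        page.append(chunk, bytesRead);

    printf("Downloaded %u bytes\n", (unsigned)page.size());

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hInternet);
    return 0;
}

If the printed size roughly matches what a browser reports for the page, the read-until-zero loop itself is working.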