How do download managers download big files over HTTP without making multiple requests?
I was downloading a 200MB file yesterday with FlashGet, and the statistics showed that it was using the HTTP/1.1 protocol.
I was under the impression that HTTP is a request-response protocol, most commonly used for web pages weighing a few KiB. I don't quite understand how it can download MBs or GBs of data, and do so simultaneously through five (or more) different streams.
2 Answers
HTTP/1.1 has a "Range" request header that specifies which part of a file to transfer over the connection; the server replies with a 206 Partial Content response containing just those bytes. The download manager can open multiple connections, each requesting a different range. It then combines the chunks to reconstruct the full file.
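A minimal sketch of the idea in Python. The URL and the 5-way split are illustrative only; a real download manager would run the range fetches in parallel (threads or async), verify that the server actually answers 206 Partial Content (servers that ignore Range return 200 with the whole body), and resume failed chunks.

```python
import urllib.request

def split_ranges(total_size, parts):
    """Split [0, total_size) into `parts` contiguous byte ranges,
    expressed as inclusive (start, end) pairs as HTTP Range headers expect."""
    step = (total_size + parts - 1) // parts  # ceiling division
    return [(start, min(start + step, total_size) - 1)
            for start in range(0, total_size, step)]

def fetch_range(url, start, end):
    """Download one chunk with a Range request (expects a 206 response)."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Hypothetical usage: split a 200 MB file across 5 connections.
# for start, end in split_ranges(200 * 1024 * 1024, 5):
#     chunk = fetch_range("https://example.com/big.iso", start, end)
#     ... write chunk to the right offset in the output file ...
```

Writing each chunk at its own offset in a pre-allocated file is what lets the chunks arrive in any order and still assemble into the original file.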
There is no size limit in HTTP. It is used for web pages, but it also delivers the vast majority of content on the Internet. What limits file sizes is bandwidth, not the protocol itself. Of course, this was more of a constraint in the early days (and, I suppose, for those still on dial-up).