Is the Content-MD5 field in HTTP responses universal?

Posted on 2024-12-18 19:12:03


I have tried downloading files from different servers, and not all of them include the Content-MD5 field in their response headers.

I would like to know whether it is standard for an HTTP response to omit the hash of the resource file.

Thanks


Comments (3)

云裳 2024-12-25 19:12:03


The Content-MD5 header field MAY be generated by an origin server or client to function as an integrity check of the entity-body. Only origin servers or clients MAY generate the Content-MD5 header field; proxies and gateways MUST NOT generate it, as this would defeat its value as an end-to-end integrity check. Any recipient of the entity-body, including gateways and proxies, MAY check that the digest value in this header field matches that of the entity-body as received.

http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

As of June 2014:

The Content-MD5 header field has been removed because it was
inconsistently implemented with respect to partial responses.

RFC 7231 - Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content - https://www.rfc-editor.org/rfc/rfc7231 (page 92)
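For servers that still emit the header, checking it on the client side is straightforward. Below is a minimal sketch in Python; the `requests` library and the URL are assumptions, and any HTTP client would do. Per RFC 1864, the header value is the base64 encoding of the binary MD5 digest of the entity-body, so this simple comparison assumes an identity-encoded (uncompressed) response.

```python
import base64
import hashlib

import requests  # assumed available; any HTTP client works

url = "http://example.com/file.bin"  # placeholder URL
resp = requests.get(url)
resp.raise_for_status()

content_md5 = resp.headers.get("Content-MD5")
if content_md5 is None:
    print("Server did not send Content-MD5; nothing to verify.")
else:
    # RFC 1864: the header value is the base64 encoding of the
    # 128-bit binary MD5 digest of the entity-body. Note that the
    # digest covers the body as sent, so a Content-Encoding such as
    # gzip (which requests decodes automatically) would break this
    # naive comparison.
    digest = base64.b64encode(hashlib.md5(resp.content).digest()).decode("ascii")
    print("match" if digest == content_md5 else "MISMATCH")
```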

听,心雨的声音 2024-12-25 19:12:03


HTTPbis is deprecating that header field (see http://trac.tools.ietf.org/wg/httpbis/trac/ticket/178 for details).

香橙ぽ 2024-12-25 19:12:03


Pure MD5 does not support partial verification and has been obsoleted. If you try to use plain hash functions for anything advanced, you will eventually run into the following situation:

I don't get it... As soon as a file is ready to finish, it starts all over again. I also get the message "Verifying File Contents"... What am I to do???

What if one downloads a 20 GB file over and over with no chance of detecting a mismatch early? Without partial verification supported by the hash function, one cannot offload downloads to p2p.

So nowadays one needs to stick with Merkle trees. Gnutella (both G1 and G2) and DC++ (both NMDC and ADC) use TTH (Tiger Tree Hash), while eDonkey2000 uses AICH, but it is the only network using that hash, and it is less elegant. So TTH is the de facto standard, and it would be nice if all file hashes everywhere (even when not strictly required) were TTH by default, but we are not there yet.
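To see why a tree hash enables partial verification where a flat hash cannot, here is a minimal sketch of a THEX-style hash tree. SHA-256 stands in for Tiger (which Python's standard hashlib does not provide), and everything else is illustrative. Because each leaf digest covers one chunk, a downloader holding the leaf level can check every chunk as it arrives instead of re-hashing the entire file at the end.

```python
import hashlib

CHUNK = 1024  # THEX leaf size; illustrative here

def tree_hash(data: bytes) -> bytes:
    """THEX-style hash tree root, with SHA-256 standing in for Tiger.

    Leaves are prefixed with 0x00 and internal nodes with 0x01 so a
    leaf digest can never be confused with an internal one.
    """
    leaves = [hashlib.sha256(b"\x00" + data[i:i + CHUNK]).digest()
              for i in range(0, len(data), CHUNK)]
    if not leaves:  # empty input hashes as a single empty leaf
        leaves = [hashlib.sha256(b"\x00").digest()]
    # Pair nodes level by level; an unpaired node is promoted unchanged.
    while len(leaves) > 1:
        nxt = [hashlib.sha256(b"\x01" + leaves[i] + leaves[i + 1]).digest()
               for i in range(0, len(leaves) - 1, 2)]
        if len(leaves) % 2:
            nxt.append(leaves[-1])
        leaves = nxt
    return leaves[0]

def verify_chunk(chunk: bytes, expected_leaf: bytes) -> bool:
    """A chunk is verifiable in isolation against its published leaf digest."""
    return hashlib.sha256(b"\x00" + chunk).digest() == expected_leaf
```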

DC++ is not based on HTTP, but Gnutella (1 and 2) is, so you can study and/or support those HTTP headers. For instance, Shareaza can intercept downloads from browsers and offload them to p2p using the Alt-Location, Content-URN, and X-Thex-URI headers.
