Iterative hashing
I'm just wondering: is there a reason why some libraries (in any language) use iterative hashing such that the hashed data is encoded in hex and rehashed, instead of rehashing the actual binary output?
2 Answers
Read this page, especially the sections at the end about iterative hashing and double hashing: http://hungred.com/useful-information/enhance-security-hash-function-web-development/
I guess a tl;dr version of the sections would be the sentence at the end saying: "Therefore, try avoiding double hashing and go for iterative hashing instead. Furthermore, hashing two times with the same algorithm is considered suboptimal."
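To make the quoted advice concrete, here is a minimal sketch of one common form of iterative hashing, in which the original input is mixed back in on every round rather than hashing the bare digest chain. The function name, the salt-concatenation scheme, and the round count are illustrative assumptions, not the linked article's exact recipe; for real password storage, prefer a vetted construction such as `hashlib.pbkdf2_hmac`, bcrypt, or scrypt.

```python
import hashlib

def iterative_hash(password: str, salt: str, rounds: int = 5000) -> str:
    """Illustrative iterative hash (NOT the article's exact scheme).

    Each round re-hashes the previous hex digest concatenated with the
    salt and the original password, so the state never degenerates into
    a plain chain of digests of digests.
    """
    digest = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    for _ in range(rounds - 1):
        digest = hashlib.sha256(
            (digest + salt + password).encode("utf-8")
        ).hexdigest()
    return digest

# Usage: same input and salt always yield the same digest;
# changing the salt changes the result entirely.
print(iterative_hash("hunter2", "random-salt"))
```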
This is done to introduce an extra step that guards against the hash possibly starting to produce the same or similar output when it is applied directly and repeatedly to its own result. This extra step is independent of the hash implementation and itself acts as yet another re-hashing stage, which does no harm. Such precautions are not needed for a reliable hash, but you never know in advance whether some hash algorithm has a yet-unknown defect.
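The two variants the question contrasts can be sketched side by side; this is a minimal illustration (function names are my own), not any particular library's implementation. Hex-encoding the digest before each re-hash feeds the next round ASCII text instead of raw bytes, which is the "extra step" described above:

```python
import hashlib

def rehash_hex(data: bytes, rounds: int = 1000) -> str:
    """Re-hash the hex-encoded digest each round (the scheme asked about)."""
    digest = hashlib.sha256(data).hexdigest()
    for _ in range(rounds - 1):
        # Feed the 64-char ASCII hex string, not the 32 raw bytes.
        digest = hashlib.sha256(digest.encode("ascii")).hexdigest()
    return digest

def rehash_raw(data: bytes, rounds: int = 1000) -> str:
    """Re-hash the raw binary digest each round."""
    digest = hashlib.sha256(data).digest()
    for _ in range(rounds - 1):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()
```

Both chains are equally deterministic; after the first round they diverge completely, because `sha256(b"ab...")` over hex text and `sha256` over the corresponding raw bytes are unrelated values.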