MD5 hash computed differently on the server
I am running some code that I have written in C which calls the md5 hashing functionality from a hashing library that someone else wrote (md5.c & md5.h). The odd behavior I have been seeing is:
(By "hashing working perfectly" I mean: I hash a string, and it comes out to the exact hash that I have verified with multiple other sources.)
The hashing functionality works perfectly when compiling and running on my OSX machine, and the hash that is computed is exactly as it should be. The same code, with no changes, is uploaded and compiled on the Linux-based server, and it computes a different (wrong) hash.
Does anyone have any insight on how exactly this would be possible? It's been driving me crazy for the past week and I do not understand why this is even possible. I have also tested it on another machine, compiled and executed it, and it works perfectly. It's just when I upload it to the server that the hash is no longer correct.
The hashing functionality file can be found at:
http://people.csail.mit.edu/rivest/Md5.c
SOLVED: Thanks, everyone.
It was the 64-bit arch issue. It's mighty annoying that it slipped my mind to consider that while debugging.
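A quick way to confirm this on both machines is to print the widths of the types involved. This is just a minimal sketch, assuming a C99 compiler with stdint.h:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* On an ILP32 build both lines print 4; on an LP64 build (typical for a
       64-bit Linux server) unsigned long int is 8 bytes, which breaks code
       that assumes it is a 32-bit word. */
    printf("sizeof(unsigned long int) = %zu\n", sizeof(unsigned long int));
    printf("sizeof(uint32_t)          = %zu\n", sizeof(uint32_t));
    return 0;
}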
Comments (5)
Try replacing (Md5.c, line 41)

typedef unsigned long int UINT4;

with

typedef uint32_t UINT4;

(include stdint.h if needed).

On a 64-bit machine, long int is (usually) 64 bits wide instead of 32.

EDIT:
I tried it on a 64-bit Opteron and this solves the problem.
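If you want the build to fail loudly instead of silently producing wrong hashes, you can also pin the typedef down with a compile-time check. A small sketch, assuming a C11 compiler for _Static_assert:

#include <stdint.h>

/* UINT4 is the 32-bit word type the MD5 code expects; this mirrors the fix above. */
typedef uint32_t UINT4;

/* Compilation fails here if UINT4 is not exactly 4 bytes wide. */
_Static_assert(sizeof(UINT4) == 4, "UINT4 must be a 32-bit type for MD5");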
Is the machine that doesn't seem to be working a different architecture (32-bit vs. 64-bit) from the others? If the MD5 implementation depends on the machine word size (I haven't checked the code), this can cause the hash to be different.
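For what it's worth, the reason the word size shows up in the digest is that MD5 relies on additions and rotations that wrap modulo 2^32; if the word type is actually 64 bits, the bits that should be discarded survive, and every later round diverges. A small illustration of my own, using a rotate macro in the style of the RFC 1321 reference code:

#include <stdio.h>
#include <stdint.h>

/* 32-bit left-rotate as typically written in RFC 1321-style MD5 code. */
#define ROTATE_LEFT(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

int main(void)
{
    uint32_t w32 = 0xdeadbeef; /* a proper 32-bit word            */
    uint64_t w64 = 0xdeadbeef; /* what a 64-bit "UINT4" would see */

    /* With a 32-bit type the high bits wrap around; with a 64-bit type they
       pile up in the upper half instead, so the two results differ. */
    printf("32-bit rotate: %08lx\n",   (unsigned long)ROTATE_LEFT(w32, 4));
    printf("64-bit rotate: %016llx\n", (unsigned long long)ROTATE_LEFT(w64, 4));
    return 0;
}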
Different compilers can have different levels of standard compliance. If you run into a sub-standard compiler, you can have a hard time realizing that well-tested code has been compiled into something that works entirely differently.
It can also happen that the target system is 64-bit and the code has 64-bit portability issues.
The only way to solve the problem is to debug where exactly the two versions of your code behave differently.
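One concrete way to do that, sketched under the assumption that md5.h exposes the usual RFC 1321-style MD5_CTX (with a state[4] array) and the MD5Init/MD5Update/MD5Final calls, is to dump the chaining state after each update on both machines and compare the output; the first dump that differs shows where the computation goes wrong:

#include <stdio.h>
#include <string.h>
#include "md5.h"   /* assumed RFC 1321-style API: MD5Init/MD5Update/MD5Final */

int main(void)
{
    char msg[] = "The quick brown fox jumps over the lazy dog";
    MD5_CTX ctx;
    unsigned char digest[16];

    MD5Init(&ctx);
    for (int i = 0; i < 4; i++) {
        /* Feed enough data that the block transform actually runs. */
        MD5Update(&ctx, (unsigned char *)msg, (unsigned int)strlen(msg));
        printf("after update %d: %08lx %08lx %08lx %08lx\n", i,
               (unsigned long)ctx.state[0], (unsigned long)ctx.state[1],
               (unsigned long)ctx.state[2], (unsigned long)ctx.state[3]);
    }

    MD5Final(digest, &ctx);
    for (int i = 0; i < 16; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}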
Did you make sure you are reading in binary mode? Otherwise a newline will be converted differently in a different OS.
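If the data is coming from a file, opening it with "rb" rules that out. A quick sketch, again assuming the RFC 1321-style MD5 calls from the library; the file name is just a placeholder:

#include <stdio.h>
#include "md5.h"   /* assumed RFC 1321-style API */

int main(void)
{
    /* "rb" instead of "r": no newline translation can happen on any OS.
       "input.txt" is a placeholder for whatever you are hashing. */
    FILE *fp = fopen("input.txt", "rb");
    if (!fp) { perror("fopen"); return 1; }

    MD5_CTX ctx;
    unsigned char buf[4096], digest[16];
    size_t n;

    MD5Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        MD5Update(&ctx, buf, (unsigned int)n);
    MD5Final(digest, &ctx);
    fclose(fp);

    for (int i = 0; i < 16; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}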
Sorry, no. If I compile that and run it on my Linux x86 box, it produces the same result as the md5sum utility:
On my x64 box:
So it does seem to be a 64-bit issue, rather than a Linux issue.
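The comparison is easy to reproduce: hash one of the RFC 1321 test strings with the library and check it against md5sum. A sketch, once more assuming the RFC 1321-style API; the expected digest for "abc" is the published test vector:

#include <stdio.h>
#include "md5.h"   /* assumed RFC 1321-style API */

int main(void)
{
    /* RFC 1321 test vector: MD5("abc") = 900150983cd24fb0d6963f7d28e17f72.
       A correct build prints that on any architecture; compare with:
       printf 'abc' | md5sum */
    MD5_CTX ctx;
    unsigned char digest[16];

    MD5Init(&ctx);
    MD5Update(&ctx, (unsigned char *)"abc", 3);
    MD5Final(digest, &ctx);

    for (int i = 0; i < 16; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}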