Recreating the same HMAC MD5 hash in Java and C

Posted on 2024-12-25 17:55:01


I am currently stumped trying to recreate, in C, an HMAC MD5 hash generated by a Java program. Any help, suggestions, corrections, and recommendations would be greatly appreciated.
The Java program creates the HMAC MD5 string (encoded as a base-16 hex string, 32 characters long) using UTF-16LE and the Mac class; what I need is to recreate the same result in a C program.

I am using the RSA source for MD5, and the HMAC-MD5 code is from RFC 2104 (http://www.koders.com/c/fidBA892645B9DFAD21A2B5ED526824968A1204C781.aspx)

I have "simulated" UTF-16LE in the C implementation by padding every even byte with 0s. The hex/int representations seem to be consistent on both ends when I do this, but is this the correct way to do it? I figured this would be the best way because the HMAC-MD5 function call only accepts a byte array (there is no such thing as a double-byte array call in the RFC 2104 implementation, but that's irrelevant).
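For the ASCII subset, this padding is exactly what a real UTF-16LE encoder produces: each code unit is the character's low byte followed by a zero high byte. A minimal sketch of that expansion, assuming pure-ASCII input (the function name is mine):

```c
#include <stddef.h>
#include <string.h>

/* Expand an ASCII string into UTF-16LE bytes: each character becomes
 * its value as the low byte, followed by a zero high byte.  This
 * matches real UTF-16LE only for code points below 0x80 (plain ASCII).
 * Returns the number of bytes written, or 0 if the buffer is too small. */
size_t ascii_to_utf16le(const char *in, unsigned char *out, size_t out_cap)
{
    size_t n = strlen(in);
    if (out_cap < n * 2)
        return 0;
    for (size_t i = 0; i < n; i++) {
        out[2 * i]     = (unsigned char)in[i];  /* low byte  */
        out[2 * i + 1] = 0x00;                  /* high byte */
    }
    return n * 2;
}
```

For any character at or above 0x80 this shortcut breaks down and a real converter is needed instead.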

When I run the string to be HMAC'd through, you naturally get "garbage". Now my problem is that not even the "garbage" is consistent across the systems (excluding the fact that perhaps the base-16 encoding could be inconsistent). What I mean by this is that "�����ԙ���," might be the result from the Java HMAC-MD5, but C might give "v ����?��!��{� " (just an example, not actual data).

I have 2 things I would like to confirm:

  1. Did padding every even byte with 0 mess up the HMAC-MD5 algorithm? (either because it would hit a null immediately after the first byte, or for some other reason)
  2. Is the different "garbage" I see because C and Java are using different character encodings? (same machine, running Ubuntu)
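On question (1): HMAC-MD5 itself never scans for terminators. The RFC 2104 hmac_md5 entry point takes explicit text and key lengths, so interior 0x00 bytes are hashed like any other byte. Nulls only cause trouble if a strlen()-style measurement is used to compute the length somewhere along the way, because it stops at the first zero byte. A tiny demonstration of that failure mode (the function name is mine):

```c
#include <string.h>

/* strlen() treats 0x00 as end-of-string, so measuring a UTF-16LE
 * buffer this way truncates it at the first high byte.  HMAC code
 * that receives an explicit byte count does not have this problem. */
size_t bytes_seen_by_strlen(const unsigned char *buf)
{
    return strlen((const char *)buf);
}
```

For UTF-16LE "AB" (41 00 42 00), strlen reports 1 byte while the true length is 4, so the length must always be carried alongside the buffer.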

I am going to read through the HMAC-MD5 and MD5 code to see how they treat the incoming byte array (whether or not the null even bytes are causing a problem). I am also having a hard time writing a proper encoding function on the C side to convert the resultant string into a 32-character hex string. Any input/help would be greatly appreciated.
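For the hex-encoding step, one straightforward sketch (the function name is mine) renders the 16 raw digest bytes as two lowercase hex digits each. This is also why the raw output looks like inconsistent "garbage" when printed as text: the digest bytes are arbitrary binary, and they only become comparable across systems once hex-encoded.

```c
#include <stdio.h>

/* Render a 16-byte MD5/HMAC-MD5 digest as a 32-character lowercase
 * hex string.  `hex` must have room for 33 bytes (32 digits + NUL). */
void digest_to_hex(const unsigned char digest[16], char hex[33])
{
    for (int i = 0; i < 16; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);
    hex[32] = '\0';
}
```

Make sure both sides agree on case (Java code often emits lowercase, but compare case-insensitively to be safe).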

Update (Feb 3rd): Would passing a signed vs. an unsigned byte array alter the output of HMAC-MD5? The Java implementation takes a byte array (which is SIGNED), but the C implementation takes an UNSIGNED byte array. I think this might also be a factor in producing different results. If this does affect the final output, what can I really do? Would I pass a SIGNED byte array in C (the method takes an unsigned byte array), or would I cast the SIGNED byte array to unsigned?
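Signedness should not be a factor here: converting between signed char and unsigned char in C reinterprets the same 8-bit pattern (value mod 256), so the octets fed to HMAC-MD5 are identical either way and the digest cannot change. Java's signed byte -1 and C's unsigned 0xFF are the same octet. A cast is all that is needed; a one-line illustration (function name mine):

```c
/* Converting a signed byte to unsigned keeps the same bit pattern;
 * -1 becomes 0xFF, -128 becomes 0x80.  Hash functions operate on
 * these bit patterns, so the choice of signedness cannot alter the
 * HMAC-MD5 output. */
unsigned char byte_as_unsigned(signed char s)
{
    return (unsigned char)s;
}
```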

Thanks!
Clement


知足的幸福 2025-01-01 17:55:01


The problem is probably due to your naive creation of the UTF-16 string. Any character greater than 0x7F (see the Unicode explanation) needs to be expanded per the UTF encoding scheme.

I would work on first getting the same byte string between the C and Java implementations, as that is probably where your problem lies -- so I would agree with your assumption (1).

Have you tried calculating the MD5 without padding the C string, but rather just converting it to UTF-16? You can use iconv to experiment with the encoding.
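The iconv suggestion might look like this in C (the function name is mine; on glibc, iconv lives in libc, while on other platforms you may need to link with -liconv):

```c
#include <iconv.h>
#include <string.h>

/* Convert a UTF-8 string to UTF-16LE bytes using POSIX iconv.
 * Returns the number of output bytes written, or 0 on failure.
 * Unlike the manual zero-padding trick, this handles characters
 * above 0x7F correctly. */
size_t utf8_to_utf16le(const char *in, unsigned char *out, size_t out_cap)
{
    iconv_t cd = iconv_open("UTF-16LE", "UTF-8");
    if (cd == (iconv_t)-1)
        return 0;

    char *inp = (char *)in;          /* iconv wants non-const char** */
    char *outp = (char *)out;
    size_t inleft = strlen(in);
    size_t outleft = out_cap;

    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);
    if (rc == (size_t)-1)
        return 0;
    return out_cap - outleft;        /* bytes actually produced */
}
```

The resulting byte buffer (with its explicit length) is what should be fed to the HMAC-MD5 call.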

睫毛溺水了 2025-01-01 17:55:01


The problem was that I used the RSA implementation. After I switched to OpenSSL, all my problems were resolved. The RSA implementation did not take into consideration all the necessary details of cross-platform support (including 32-bit/64-bit processors).

Always use OpenSSL, as it has already resolved all the cross-platform issues.
