The same C code produces different results on Mac OS X than on Windows and Linux
I'm working with an older version of OpenSSL, and I'm running into some behavior that has stumped me for days when trying to work with cross-platform code.
I have code that calls OpenSSL to sign something. My code is modeled after the code in ASN1_sign, found in OpenSSL's a_sign.c, which exhibits the same issue when I use it directly. Here is the relevant line of code (found and used in exactly the same way in a_sign.c):
EVP_SignUpdate(&ctx,(unsigned char *)buf_in,inl);
ctx is a structure that OpenSSL uses, not relevant to this discussion
buf_in is a char* of the data that is to be signed
inl is the length of buf_in
EVP_SignUpdate can be called repeatedly in order to read in data to be signed before EVP_SignFinal is called to sign it.
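For context, the full legacy signing flow around that line looks roughly like this (a minimal sketch; the SHA-1 digest and the pkey/sig/sig_len names are illustrative stand-ins, not necessarily what my real code uses, and error handling is omitted):

#include <openssl/evp.h>

/* Sketch: sign buf_in/inl with pkey; sig must hold EVP_PKEY_size(pkey) bytes. */
static int sign_data(EVP_PKEY *pkey, const char *buf_in, unsigned int inl,
                     unsigned char *sig, unsigned int *sig_len)
{
    EVP_MD_CTX ctx;
    int ok;

    EVP_MD_CTX_init(&ctx);
    EVP_SignInit(&ctx, EVP_sha1());                      /* digest choice is illustrative */
    EVP_SignUpdate(&ctx, (unsigned char *)buf_in, inl);  /* may be called repeatedly */
    ok = EVP_SignFinal(&ctx, sig, sig_len, pkey);        /* 1 on success, 0 on failure */
    EVP_MD_CTX_cleanup(&ctx);
    return ok;
}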
Everything works fine when this code is used on Ubuntu and Windows 7; both produce the exact same signatures given the same inputs.
On OS X, if inl is 64 or less (that is, there are 64 bytes or fewer in buf_in), then it too produces the same signatures as Ubuntu and Windows. However, if inl becomes greater than 64, it produces its own internally consistent signatures that differ from those of the other platforms. By internally consistent, I mean that the Mac will read its signatures and verify them as proper, while it rejects the signatures from Ubuntu and Windows, and vice versa.
I managed to fix this issue, and get the same signatures created across platforms, by changing the line above to the following, which reads the buffer one byte at a time:
const unsigned char *input_it;
for (input_it = (const unsigned char *)buf_in; input_it < (const unsigned char *)buf_in + inl; input_it++) {
    EVP_SignUpdate(&ctx, input_it, 1);
}
This causes OS X to reject its own signatures of data > 64 bytes as invalid, and I tracked down a similar line elsewhere, used for verifying signatures, that needed to be broken up in an identical manner.
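The patched verification path ends up looking roughly like this (again a minimal sketch; the digest and the pkey/sig/sig_len names are illustrative stand-ins, and error handling is omitted):

#include <openssl/evp.h>

/* Sketch: verify sig/sig_len over buf_in/inl with pkey, feeding the
 * digest one byte at a time as described above. */
static int verify_data(EVP_PKEY *pkey, const char *buf_in, unsigned int inl,
                       unsigned char *sig, unsigned int sig_len)
{
    EVP_MD_CTX ctx;
    const unsigned char *p;
    int ok;

    EVP_MD_CTX_init(&ctx);
    EVP_VerifyInit(&ctx, EVP_sha1());  /* digest choice is illustrative */
    for (p = (const unsigned char *)buf_in; p < (const unsigned char *)buf_in + inl; p++)
        EVP_VerifyUpdate(&ctx, p, 1); /* one byte at a time, as on the signing side */
    ok = EVP_VerifyFinal(&ctx, sig, sig_len, pkey); /* 1 good, 0 bad, -1 error */
    EVP_MD_CTX_cleanup(&ctx);
    return ok;
}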
This fixes the signature creation and verification, but something is still going wrong, as I'm encountering other problems, and I really don't want to go traipsing (and modifying!) much deeper into OpenSSL.
Surely I'm doing something wrong, as I'm seeing the exact same issues when I use stock ASN1_sign. Is this an issue with the way that I compiled OpenSSL? For the life of me I can't figure it out. Can anyone educate me on what bone-headed mistake I must be making?
Comments (3)
This is likely a bug in the MacOS implementation. I recommend you file a bug by sending the above text to the developers as described at http://www.openssl.org/support/faq.html#BUILD17
There are known issues with OpenSSL on the Mac (you have to jump through a few hoops to ensure it links with the correct library instead of the system library). Did you compile it yourself? The PROBLEMS file in the distribution explains the details of the issue and suggests a few workarounds. (Or, if you are running with shared libraries, double-check that your DYLD_LIBRARY_PATH is set correctly.) No guarantee, but this looks like a likely place to start...
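One quick way to check for a header/library mismatch is to compare the version string baked into the headers with the one reported by the library you actually link at run time; a minimal sketch:

#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/opensslv.h>

int main(void)
{
    /* The compile-time version comes from the headers... */
    printf("headers: %s\n", OPENSSL_VERSION_TEXT);
    /* ...while this comes from the library linked at run time.
       If the two differ, you are picking up the wrong library. */
    printf("library: %s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}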
The most common issue when porting code between Windows and Linux is the default fill value of memory. I think Windows sets it to 0xDEADBEEF and Linux sets it to 0s.
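If uninitialized memory were the culprit here, explicitly zeroing the context before use would rule it out; a minimal sketch, reusing the ctx from the question:

memset(&ctx, 0, sizeof(ctx));  /* from <string.h>; don't rely on any platform's fill pattern */
EVP_MD_CTX_init(&ctx);         /* then initialize it normally */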