Why subtract a null pointer in offsetof()?
Linux's stddef.h defines offsetof() as:

#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

whereas the Wikipedia article on offsetof() (http://en.wikipedia.org/wiki/Offsetof) defines it as:

#define offsetof(st, m) \
    ((size_t) ( (char *)&((st *)(0))->m - (char *)0 ))

Why subtract (char *)0 in the Wikipedia version? Is there any case where that would actually make a difference?
Comments (2)
The first version converts a pointer into an integer with a cast, which is not portable.
The second version is more portable across a wider variety of compilers, because it relies on pointer arithmetic to produce the integer result rather than on a pointer-to-integer cast.
BTW, I was the editor who added the original code to the Wiki entry, which was the Linux form. Later editors changed it to the more portable version.
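As a sketch of the difference (the macro names OFFSETOF_CAST and OFFSETOF_SUB are mine, not from either source), the two expansions can be compared against the standard offsetof(). Note that both hand-rolled forms technically form a member address from a null pointer, which is undefined behavior in strict ISO C, so this only illustrates what common compilers do in practice:

#include <stdio.h>
#include <stddef.h>   /* size_t and the standard offsetof() */

/* Linux form: cast the member's address (computed from a null base)
   directly to an integer. */
#define OFFSETOF_CAST(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

/* Wikipedia form: subtract (char *)0 so the integer result comes from
   pointer arithmetic rather than from a pointer-to-integer cast. */
#define OFFSETOF_SUB(st, m) \
    ((size_t) ( (char *)&((st *)(0))->m - (char *)0 ))

struct example {
    char   a;
    int    b;
    double c;
};

int main(void)
{
    /* On a typical flat-memory platform all three print the same value. */
    printf("cast form:     %zu\n", OFFSETOF_CAST(struct example, b));
    printf("subtract form: %zu\n", OFFSETOF_SUB(struct example, b));
    printf("stddef.h:      %zu\n", offsetof(struct example, b));
    return 0;
}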
The standard does not require a NULL pointer to have the bit pattern 0; it can be a platform-specific value.
Doing the subtraction guarantees that, when the result is converted to an integer, the NULL base contributes 0, so what remains is a genuine byte offset rather than an artifact of the platform's null representation.
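A minimal sketch of the same idea that sidesteps the null-pointer question entirely (the name OFFSETOF_OBJ is mine, for illustration): taking the difference between a member's address and the start of a real object gives a byte offset by pure pointer arithmetic, with no dependence on how a null pointer happens to be represented:

#include <stddef.h>   /* size_t */

/* Hypothetical helper: compute the offset from a real object, so no
   null pointer is ever formed. The char * subtraction yields a byte
   count directly. */
#define OFFSETOF_OBJ(obj, m) \
    ((size_t)((char *)&(obj).m - (char *)&(obj)))

struct example { char a; int b; };

size_t offset_of_b(void)
{
    struct example tmp;            /* any real object will do */
    return OFFSETOF_OBJ(tmp, b);   /* pointer difference = byte offset */
}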