In C99, equality `==` does not seem ever to be undefined. It can produce `1` by accident if you apply it to invalid addresses (for instance `&x + 1 == &y` may be true by accident). It does not produce undefined behavior. Many, but not all, invalid addresses are undefined to compute/use according to the standard, so that in `p == &x` with `p` a dangling pointer, or in `&x + 2 == &y`, the invalid address causes the undefined behavior, not the `==`.
On the other hand, `>=` and other relational comparisons are undefined when applied to pointers that do not point within the same object. That includes testing `q >= NULL` where `q` is a valid pointer. This test is the subject of my question.
I work on a static analyzer for low-level embedded code. It is normal for this kind of code to do things outside what the standard allows. As an example, an array of pointers may, in this kind of code, be initialized with `memset(..., 0, ...)`, although the standard does not specify that `NULL` and `0` must have the same representation. In order to be useful, the analyzer must accept this kind of thing and interpret it the way the programmer expects. Warning the programmer here would be perceived as a false positive.
So the analyzer already assumes that `NULL` and `0` have the same representation (you are supposed to check your compiler against the analyzer to make sure they agree on this kind of assumption). I am noticing that some programs compare valid pointers against `NULL` with `>=` (this library is an example). This works as intended as long as `NULL` is represented as `0` and pointer comparison is compiled as an unsigned integer comparison.
I only wish the analyzer to warn about this if, perhaps because of some aggressive optimization, it may be compiled into something different from what the programmer meant on conventional platforms. Hence my question: is there any example of a program not evaluating `q >= NULL` as `1`, on a platform where `NULL` is represented as `0`?
NOTE: this question is not about using `0` in a pointer context to get a null pointer. The assumption about the representation of `NULL` is a real assumption, because there is no conversion in the `memset()` example.
There are definitely pointers that, when you reinterpret them as a signed integer of pointer size, will have a negative sign.

In particular, all kernel memory on Win32, and if you use "large address aware" then even 1 GB of user space, since you get 3 GB of user space.

I don't know the details of C pointer arithmetic, but I suspect that these might compare as `< 0` in some compilers.