Is there a limit on the number of #defines in gcc and VC++? Can the preprocessor handle them?

Posted on 2024-10-12 02:51:09


In discussing design possibilities for a project that has a very large number of constants and bit patterns to be defined, the question came up about how many #defines can a standard compiler handle? I assume it is a very large number, but we were curious to know if there is an actual upper bound.


Comments (3)

日久见人心 2024-10-19 02:51:09


For a "standard compiler":

5.2.4.1: "Translation limits"

The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits

...

4095 macro identifiers simultaneously defined in one preprocessing translation unit

Note the slightly odd way of phrasing the requirement. Implementations could satisfy it by having a single "golden program" which they recognise and compile as a special case, although that would be akin to rigging benchmarks. In practice you can read the standard as saying that if your implementation imposes a limit other than available memory, then that limit should be at least 4095. Beyond 4095 you are relying on implementation-specific behavior to an extent.

Some compilers (Microsoft) impose some implementation limits which are less than the standard says. These are listed somewhere on MSDN I think, but possibly only for C++. As far as C goes, since I'm quoting C99 it might not be relevant to MSVC anyway.

For GCC and MSVC in particular, it shouldn't be too hard to test whether a given implementation imposes an arbitrary limit, and perhaps easier than finding it documented :-) Auto-generate files containing nothing but long lists of #defines and see what the preprocessor makes of them.
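One way to run that experiment is a small generator program; the sketch below is only an illustration, and the file name, the CONST_ naming scheme, and the default count are made up for the purpose rather than anything GCC or MSVC prescribe.

/* gen_defines.c -- hypothetical test generator: writes a header consisting of
   nothing but N #define lines, so the preprocessor can be run on it to see
   whether any limit other than memory kicks in. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? strtol(argv[1], NULL, 10) : 100000L;
    FILE *out = fopen("many_defines.h", "w");
    if (!out) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    for (long i = 0; i < n; ++i)
        fprintf(out, "#define CONST_%ld %ld\n", i, i);
    /* Reference one macro at the end so the preprocessed output shows the whole table survived. */
    fprintf(out, "long last = CONST_%ld;\n", n - 1);
    fclose(out);
    return 0;
}

Running the preprocessor alone on the generated header (gcc -E many_defines.h, or cl /E many_defines.h with MSVC) with ever larger counts should show whether you hit anything before you run out of memory.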

回忆凄美了谁 2024-10-19 02:51:09


I have never heard of anyone running out. Ever.

笑梦风尘 2024-10-19 02:51:09


The C preprocessor doesn't expand #defines before they are actually used. So in a typical implementation the only limit you might encounter is the memory to store them all. But the memory for storing the internal representation of the macros is basically at most proportional to the size of the files that the compiler reads.

(Well you could do multiple inclusion of files also...)
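A tiny illustration of that laziness (just a sketch, with made-up macro names): the replacement list is recorded verbatim at the point of the #define and only expanded where the macro is used, which is why FOO below can refer to a name that is defined afterwards.

#include <stdio.h>

/* BAR isn't defined yet; that's fine, because the preprocessor only stores
   FOO's replacement list "(BAR + 1)" here -- nothing is expanded at this point. */
#define FOO (BAR + 1)
#define BAR 41

int main(void)
{
    printf("%d\n", FOO);  /* expanded only here: FOO -> (BAR + 1) -> (41 + 1), prints 42 */
    return 0;
}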

You could make a preprocessing run explode by expanding deeply nested macros, I guess. Something like

#define EXP1(X) X X
#define EXP2(X) EXP1(X) EXP1(X)
#define EXP3(X) EXP2(X) EXP2(X)
.
.
#define EXP64(X) EXP63(X) EXP63(X)
EXP64(A)

should do the trick, since it gives you 2^64 copies of A, or so. AFAIR, these macro definitions are even within the bounds that the standard imposes.
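A scaled-down version of the same idea (only four levels, so the output stays small) makes the doubling visible; the expanded result isn't valid C, so it's only meant to be fed to the preprocessor, e.g. gcc -E:

#define EXP1(X) X X
#define EXP2(X) EXP1(X) EXP1(X)
#define EXP3(X) EXP2(X) EXP2(X)
#define EXP4(X) EXP3(X) EXP3(X)
EXP4(A)  /* each level doubles the copies, so this expands to 2^4 = 16 A's */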
