Is there a limit on the number of #defines gcc and VC++ can handle? Can the preprocessor cope?
In discussing design possibilities for a project that has a very large number of constants and bit patterns to be defined, the question came up about how many #defines can a standard compiler handle? I assume it is a very large number, but we were curious to know if there is an actual upper bound.
For a "standard compiler":
5.2.4.1 "Translation limits" (C99): the implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the listed limits, including "4095 macro identifiers simultaneously defined in one preprocessing translation unit".
Note the slightly odd way of phrasing the requirement. Implementations could satisfy it by having a single "golden program" which they recognise and compile as a special case, although that would be akin to rigging benchmarks. In practice you can read the standard as saying that if your implementation imposes a limit other than available memory, then that limit should be at least 4095. Beyond 4095 you are relying on implementation-specific behavior to an extent.
Some compilers (Microsoft) impose some implementation limits which are less than the standard says. These are listed somewhere on MSDN I think, but possibly only for C++. As far as C goes, since I'm quoting C99 it might not be relevant to MSVC anyway.
For GCC and MSVC in particular, it shouldn't be too hard to test whether a given implementation imposes an arbitrary limit, perhaps easier than finding it documented :-) Auto-generate files containing nothing but great long lists of `#define`, and see what the preprocessor makes of them.
I have never heard of anyone running out. Ever.
The C preprocessor doesn't expand a `#define` before it is actually used. So in a typical implementation the only limit you might encounter is the memory needed to store them all, and the memory for the internal representation of the macros is basically at most proportional to the size of the files the compiler reads. (Well, you could also include files multiple times...)
You could make a preprocessing run explode by expanding deeply nested macros, I guess. Something like a chain of macros where each level expands to two copies of the previous one should do the trick, since it gives you 2^64 copies of A, or so. AFAIR, these macro definitions are even within the bounds that the standard imposes.