Why is "int" an alias for System.Int32 in C#?

Comments (2)

红墙和绿瓦 2024-12-29 16:11:59

I believe that their main reason was portability of programs targeting the CLR. If they had allowed a type as basic as int to be platform-dependent, writing portable programs for the CLR would have become a lot more difficult. The proliferation of typedef-ed integral types in platform-neutral C/C++ code, introduced to avoid relying on the built-in int, is an indirect hint as to why the designers of the CLR decided to make the built-in types platform-independent. Discrepancies like that are a big inhibitor to the "write once, run anywhere" goal of VM-based execution systems.
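
As a quick illustration of that point (a sketch only; the class name AliasDemo is made up): the alias is exact, and its size never varies with the process, unlike the deliberately pointer-sized IntPtr.

using System;

class AliasDemo {
    static void Main() {
        // "int" is literally the same type as System.Int32, not a platform-dependent alias.
        Console.WriteLine(typeof(int) == typeof(System.Int32)); // True
        Console.WriteLine(sizeof(int));                         // 4 on every platform
        // IntPtr, by contrast, is deliberately pointer-sized and does vary.
        Console.WriteLine(IntPtr.Size);                         // 4 or 8, depending on the process
    }
}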

Edit: More often than not, the size of an int plays into your code implicitly through bit operations rather than through arithmetic (after all, what could possibly go wrong with i++, right?). But the errors are usually more subtle. Consider the example below:

// Enumerate every non-empty subset of the 20 items by treating each bit of
// "mask" as an include/exclude flag. This relies on int being at least 21 bits wide.
const int MaxItem = 20;
var item = new MyItem[MaxItem];
for (int mask = 1; mask != (1 << MaxItem); mask++) {
    var combination = new HashSet<MyItem>();
    for (int i = 0; i != MaxItem; i++) {
        if ((mask & (1 << i)) != 0) {
            combination.Add(item[i]);
        }
    }
    ProcessCombination(combination);
}

This code computes and processes all (non-empty) combinations of 20 items. As you can tell, the code would fail miserably on a system with a 16-bit int, but works fine with an int of 32 or 64 bits.
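
To make the failure mode concrete, here is a small sketch (only a simulation: a real C# int is always 32-bit, so the hypothetical 16-bit behaviour is imitated by truncating to Int16; the class name is made up):

using System;

class SixteenBitIntDemo {
    static void Main() {
        const int MaxItem = 20;

        // With a 16-bit int, 1 << 20 would keep only the low 16 bits, i.e. 0,
        // so the loop guard "mask != (1 << MaxItem)" would compare against 0.
        short loopBound = unchecked((short)(1 << MaxItem));
        Console.WriteLine(loopBound); // 0

        // Worse, bits 16..19 can never be set in a 16-bit mask, so items 16..19
        // would silently be dropped from every combination.
        short bit17 = unchecked((short)(1 << 17));
        Console.WriteLine(bit17); // 0
    }
}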

Unsafe code would provide another source of headaches: when int is fixed at some size (say, 32 bits), code that allocates 4 times as many bytes as the number of ints it needs to marshal will work, even though it is technically incorrect to use 4 in place of sizeof(int). Moreover, this technically incorrect code remains portable!
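
A minimal sketch of that marshaling scenario (Marshal.AllocHGlobal is the real API; the counts and names are made up for illustration):

using System;
using System.Runtime.InteropServices;

class MarshalSizeDemo {
    static void Main() {
        int count = 10;

        // The technically correct allocation asks the language for the width of int.
        IntPtr correct = Marshal.AllocHGlobal(count * sizeof(int));

        // The sloppy allocation hard-codes 4, yet it stays portable precisely because
        // the CLR fixes int at 32 bits, so the two expressions can never disagree.
        IntPtr sloppy = Marshal.AllocHGlobal(count * 4);

        Marshal.FreeHGlobal(correct);
        Marshal.FreeHGlobal(sloppy);
    }
}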

Ultimately, small things like that play heavily into the perception of a platform as "good" or "bad". Users of .NET programs do not care whether a program crashes because its programmer made a non-portable mistake or because the CLR is buggy. This is similar to the way early versions of Windows were widely perceived as unstable due to the poor quality of drivers. To most users, a crash is just another .NET program crash, not the programmer's fault. Therefore it is good for the perception of the ".NET ecosystem" to make the standard as forgiving as possible.

幽蝶幻影 2024-12-29 16:11:59

Many programmers have a tendency to write code for the platform they use. This includes assumptions about the size of a type. There are many C programs around that would fail if the size of an int were changed to 16 or 64 bits, because they were written under the assumption that an int is 32 bits. The choice made for C# avoids that problem by simply defining the size. If you define int as variable depending on the platform, you buy back into that same problem. Although you could argue that it is the programmer's fault for making wrong assumptions, it makes the language a bit more robust (IMO). And for desktop platforms a 32-bit int is probably the most common case. Besides, it makes porting native C code to C# a bit easier.

Edit: I think you write code which makes (implicit) assumptions about the size of a type more often than you think. Basically anything which involves serialization (like .NET remoting, WCF, serializing data to disk, etc.) will get you in trouble if you allow variable sizes for int, unless the programmer takes care of it by using a type of a specific size, such as Int32. And then you end up with "we'll always use Int32 anyway, just in case", and you have gained nothing.
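
As a small illustration of the serialization point (a sketch only; BinaryWriter is the real API, the record layout is hypothetical):

using System;
using System.IO;

class WireFormatDemo {
    static void Main() {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream)) {
            // Because int is always System.Int32, Write(int) always emits exactly 4 bytes,
            // so a file or wire format built on it cannot silently change between platforms.
            int recordId = 42;
            writer.Write(recordId);

            Console.WriteLine(stream.Length); // 4 on every platform
        }
    }
}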
