Stack merge points and managed pointers in the CLR

Posted 2025-02-09 22:46:33 · 918 characters · 2 views · 0 comments


My understanding is that the .NET CLR is allowed to run "unverifiable" and "verifiable" bytecode. However, in both cases, the bytecode must be "correct CIL" in terms of the ECMA-CIL. Bytecode that is correct but unverifiable could be generated by using unsafe features of C#. Verifiable bytecode could come from day-to-day C#.

Either way, the .NET CLR must somehow guarantee that the bytecode is correct CIL. To do so, it must statically infer basic information about the stack state before each instruction: for instance, the number of elements on the stack and a coarse type for each slot. When a basic block has more than one predecessor, the inferred information must be merged at its beginning.
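That merge step can be sketched as follows. This is a toy Python model of the idea, not the CLR's actual algorithm; the type-name strings are illustrative:

```python
# Toy model (not the CLR's implementation) of merging inferred stack
# states at a basic-block boundary with multiple predecessors.

def merge_states(state_a, state_b):
    """Merge two inferred stack states; raise if depths or slot types differ."""
    if len(state_a) != len(state_b):
        raise ValueError("stack depth differs between control-flow paths")
    merged = []
    for slot_a, slot_b in zip(state_a, state_b):
        if slot_a != slot_b:
            raise ValueError(f"incompatible slot types: {slot_a} vs {slot_b}")
        merged.append(slot_a)
    return merged

# Two paths pushing the same coarse type merge fine:
print(merge_states(["int32"], ["int32"]))  # ['int32']
```

The whole question below comes down to what counts as "the same type" in the slot comparison: the exact CIL type, or a coarser category.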

My question is, is it allowed to merge managed pointers of different types? I mean this regarding correct CIL but not necessarily verifiable CIL.

.method public static void Bar (int32& a, uint32& b, bool d) cil managed
{
    .maxstack 8
    IL_0003: ldarg.2              // push d
    IL_0004: brfalse.s IL_000b    // if !d, take the uint32& path

    IL_0006: ldarg.0              // push a (int32&)
    IL_0009: br.s IL_000d         // jump to the merge point

    IL_000b: ldarg.1              // push b (uint32&)

    IL_000d: pop                  // merge point: int32& on one path, uint32& on the other
    IL_000e: ret
}

ILVerify reports:

[IL]: Error [PathStackUnexpected]: [Test.dll : .Test::Bar(int32&, uint32&, bool)][offset 0x00000006][found address of Int32][expected address of UInt32] Non-compatible types on stack depending on path.

My problem is that I don't know if this is regarding the verifiability or correctness of the bytecode. I mean "verifiability" and "correctness" in the same way they are defined in the ECMA-CIL. I also wonder if I may be misunderstanding the standard.


Answered by 倚栏听风 on 2025-02-16 22:46:33


ECMA-335, page 83, says:

The type state of the stack (the stack depth and types of each element on the stack) at any given point in a program shall be identical for all possible control flow paths. For example, a program that loops an unknown number of times and pushes a new element on the stack at each iteration would be prohibited.

This is reinforced by page 85:

Regardless of the control flow that allows execution to arrive there, each slot on the stack shall have the same data type at any given point within the method body.

Therefore it would seem that, in general, the types of elements on the stack must be identical. However, section I.12.3.2.1 goes on to specify that the evaluation stack is only recognized to consist of these types: int64, int32, native int, F, &, O, * (either native int or &), or any arbitrary user-defined value type.

This seems to imply that, while the CIL treats int32& and uint32& as different types, the evaluation stack treats them both as &, thus the "type state" is the same for both control flows. Thus the CIL is correct.
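That collapsing onto the coarse evaluation-stack categories can be sketched like this. Again a simplified, illustrative Python model of §I.12.3.2.1 (the type-name strings and the exact set of small-integer types handled are assumptions for the sketch):

```python
# Simplified model of how CIL types collapse onto the evaluation-stack
# type categories of ECMA-335 section I.12.3.2.1 (names are illustrative).

def stack_type(cil_type):
    if cil_type.endswith("&"):
        return "&"                      # all managed pointers collapse to &
    if cil_type.endswith("*"):
        return "*"                      # unmanaged pointers
    if cil_type in ("int8", "uint8", "int16", "uint16",
                    "int32", "uint32", "bool", "char"):
        return "int32"                  # small integers widen to int32
    if cil_type in ("int64", "uint64"):
        return "int64"
    if cil_type in ("float32", "float64"):
        return "F"
    if cil_type == "native int":
        return "native int"
    return "O"                          # object references and the rest

# int32& and uint32& land in the same category, so the coarse
# "type state" matches on both control-flow paths:
print(stack_type("int32&") == stack_type("uint32&"))  # True
```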

Verifiability is a stronger criterion, ensuring that the program is ultimately memory-safe and has predictable results in all situations with respect to the program's memory. Verification does not use the small set of types the CLI uses for the evaluation stack; instead, it requires that the type states be compatible, usually by having the same verification type.

In your case, what matters are the types of the referred-to variables: int32 and uint32. As is described on page 36:

The verification type of a type T is the following:
...
2. If T is a managed pointer type S& and the reduced type of S is:
...
int32, then its verification type is int32&.

And again, on the same page:

The reduced type of a type T is the following:
1. If the underlying type of T is:
...
int32, or unsigned int32, then its reduced type is int32.

As a consequence, the reduced types are both int32, and so the verification types of both parameters are int32&, regardless of their signedness; thus the CIL is verifiable.
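The two definitions chain together: reduced type first, then verification type. A small illustrative Python sketch of just the integer cases relevant here (type names are plain strings following the standard's spelling; this is not an exhaustive implementation of §I.8.7):

```python
# Illustrative model of ECMA-335's reduced and verification types
# for the integer cases discussed above; not a complete implementation.

REDUCED = {
    "int8": "int8",   "unsigned int8": "int8",
    "int16": "int16", "unsigned int16": "int16",
    "int32": "int32", "unsigned int32": "int32",
    "int64": "int64", "unsigned int64": "int64",
}

def reduced_type(t):
    return REDUCED.get(t, t)

def verification_type(t):
    if t.endswith("&"):
        # Managed pointer S&: its verification type is (reduced type of S)&.
        return reduced_type(t[:-1].strip()) + "&"
    return reduced_type(t)

# Both parameters of Bar end up with the same verification type:
print(verification_type("int32&"))           # int32&
print(verification_type("unsigned int32&"))  # int32&
```

Note that this compatibility at the verification-type level is what the answer argues for from the standard's text, even though the ILVerify tool quoted above flags the merge.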
