Execution speed of references vs. pointers

Posted 2024-07-11 07:26:35


I recently read a discussion regarding whether managed languages are slower (or faster) than native languages (specifically C# vs C++). One person who contributed to the discussion said that the JIT compilers of managed languages would be able to make optimizations regarding references that simply aren't possible in languages that use pointers.

What I'd like to know is: what kinds of optimizations are possible on references but not on pointers?

Note that the discussion was about execution speed, not memory usage.


5 Comments

墟烟 2024-07-18 07:26:35


In C++ there are two advantages of references related to optimization aspects:

  1. A reference is constant (refers to the same variable for its whole lifetime)

    Because of this it is easier for the compiler to infer which names refer to the same underlying variables - thus creating optimization opportunities. There is no guarantee that the compiler will do better with references, but it might...

  2. A reference is assumed to refer to something (there is no null reference)

    A reference that "refers to nothing" (equivalent to the NULL pointer) can be created, but this is not as easy as creating a NULL pointer. Because of this the check of the reference for NULL can be omitted.
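To make both points concrete, here is a minimal C++ sketch (the function names are invented for illustration); as noted above, whether a given compiler actually exploits this is not guaranteed:

    // Point 1: a reference is bound once and cannot be reseated, so every
    // use of 'r' below names the same int for the whole function.
    int sum_ref(const int& r, int n) {
        int total = 0;
        for (int i = 0; i < n; ++i)
            total += r;          // no null check needed anywhere
        return total;
    }

    // Point 2: a defensively written pointer version carries an explicit
    // NULL check that has no equivalent in the reference version.
    int sum_ptr(const int* p, int n) {
        if (p == nullptr)
            return 0;
        int total = 0;
        for (int i = 0; i < n; ++i)
            total += *p;
        return total;
    }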

However, none of these advantages carry over directly to managed languages, so I don't see the relevance of that in the context of your discussion topic.

香草可樂 2024-07-18 07:26:35


There are some benefits of JIT compilation mentioned in Wikipedia:

JIT code generally offers far better performance than interpreters. In addition, it can in some or many cases offer better performance than static compilation, as many optimizations are only feasible at run-time:

  1. The compilation can be optimized to the targeted CPU and the operating system model where the application runs. For example JIT can choose SSE2 CPU instructions when it detects that the CPU supports them. With a static compiler one must write two versions of the code, possibly using inline assembly.
  2. The system is able to collect statistics about how the program is actually running in the environment it is in, and it can rearrange and recompile for optimum performance. However, some static compilers can also take profile information as input.
  3. The system can do global code optimizations (e.g. inlining of library functions) without losing the advantages of dynamic linking and without the overheads inherent to static compilers and linkers. Specifically, when doing global inline substitutions, a static compiler must insert run-time checks and ensure that a virtual call would occur if the actual class of the object overrides the inlined method.
  4. Although this is possible with statically compiled garbage collected languages, a bytecode system can more easily rearrange memory for better cache utilization.
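As an illustration of point 1, this is roughly what a statically compiled C++ program has to do by hand: ship both code paths and dispatch on a run-time CPU feature test. The sketch below uses the GCC/Clang intrinsic __builtin_cpu_supports; a JIT can instead emit only the variant the current CPU supports.

    #include <emmintrin.h>   // SSE2 intrinsics (x86/x86-64)

    // Portable fallback, compiled for any target.
    static void copy_scalar(double* dst, const double* src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i];
    }

    // Hand-written SSE2 path: copies two doubles per iteration.
    static void copy_sse2(double* dst, const double* src, int n) {
        int i = 0;
        for (; i + 2 <= n; i += 2)
            _mm_storeu_pd(dst + i, _mm_loadu_pd(src + i));
        for (; i < n; ++i)
            dst[i] = src[i];
    }

    // The static binary carries both variants and picks one at run time;
    // a JIT only generates code for the CPU it is actually running on.
    void copy(double* dst, const double* src, int n) {
        if (__builtin_cpu_supports("sse2"))
            copy_sse2(dst, src, n);
        else
            copy_scalar(dst, src, n);
    }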

I can't think of something related directly to the use of references instead of pointers.

千笙结 2024-07-18 07:26:35


Generally speaking, references make it possible to refer to the same object from different places.

A 'Pointer' is the name of a mechanism to implement references. C++, Pascal, C... have pointers, and C++ offers another mechanism (with slightly different use cases) called 'Reference', but essentially these are all implementations of the general referencing concept.

So there is no reason why references are by definition faster/slower than pointers.

The real difference is in using a JIT or a classic 'up front' compiler: the JIT can take into account data that isn't available to the up-front compiler. It has nothing to do with the implementation of the concept 'reference'.

做个少女永远怀春 2024-07-18 07:26:35


Other answers are right.

I would only add that any optimization won't make a hoot of difference unless it is in code where the program counter actually spends much time, like in tight loops that don't contain function calls (such as comparing strings).

白衬杉格子梦 2024-07-18 07:26:35


An object reference in a managed framework is very different from a passed reference in C++. To understand what makes them special, imagine how the following scenario would be handled, at the machine level, without garbage-collected object references: Method "Foo" returns a string, which is stored into various collections and passed to different pieces of code. Once nothing needs the string any more, it should be possible to reclaim all memory used in storing it, but it's unclear what piece of code will be the last one to use the string.

In a non-GC system, every collection either needs to have its own copy of the string, or else needs to hold something containing a pointer to a shared object which holds the characters in the string. In the latter situation, the shared object needs to somehow know when the last pointer to it gets eliminated. There are a variety of ways this can be handled, but an essential common aspect of all of them is that shared objects need to be notified when pointers to them are copied or destroyed. Such notification requires work.
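To make the "notification requires work" point concrete, here is roughly what the non-GC approach looks like in C++ when the shared object is reference-counted; std::shared_ptr is one standard way to get this machinery (the names Foo, names and history are invented for the example):

    #include <memory>
    #include <string>
    #include <vector>

    // Each collection stores a counted pointer instead of its own copy.
    std::vector<std::shared_ptr<std::string>> names;
    std::vector<std::shared_ptr<std::string>> history;

    std::shared_ptr<std::string> Foo() {
        return std::make_shared<std::string>("some result");
    }

    int main() {
        auto s = Foo();
        names.push_back(s);    // copying the pointer notifies the shared control
        history.push_back(s);  // block: a count increment (atomic in threaded builds)
        s.reset();             // destroying a copy: count decrement
        history.clear();       // another decrement
        names.clear();         // count hits zero here and the string is freed
    }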

In a GC system by contrast, programs are decorated with metadata to say which registers or parts of a stack frame will be used at any given time to hold rooted object references. When a garbage collection cycle occurs, the garbage collector will have to parse this data, identify and preserve all live objects, and nuke everything else. At all other times, however, the processor can copy, replace, shuffle, or destroy references in any pattern or sequence it likes, without having to notify any of the objects involved. Note that when using pointer-use notifications in a multi-processor system, if different threads might copy or destroy references to the same object, synchronization code will be required to make the necessary notification thread-safe. By contrast, in a GC system, each processor may change reference variables at any time without having to synchronize its actions with any other processor.
