Execution speed of references vs. pointers
I recently read a discussion regarding whether managed languages are slower (or faster) than native languages (specifically C# vs C++). One person who contributed to the discussion said that the JIT compilers of managed languages would be able to make optimizations regarding references that simply aren't possible in languages that use pointers.
What I'd like to know is: what kinds of optimizations are possible on references but not on pointers?
Note that the discussion was about execution speed, not memory usage.
Comments (5)
In C++ there are two advantages of references related to optimization aspects:
A reference is constant (refers to the same variable for its whole lifetime)
Because of this it is easier for the compiler to infer which names refer to the same underlying variables - thus creating optimization opportunities. There is no guarantee that the compiler will do better with references, but it might...
A reference is assumed to refer to something (there is no null reference)
A reference that "refers to nothing" (equivalent to a NULL pointer) can be created, but it is not as easy as creating a NULL pointer. Because of this, checks against NULL can be omitted for references.
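A minimal C++ sketch of both points (the function names here are made up for illustration, not taken from the discussion):

    #include <string>

    // Pointer version: p might be null and could be reassigned inside the
    // function, so a defensive check (and extra alias reasoning) is needed.
    int count_ptr(const std::string* p) {
        if (p == nullptr) return 0;          // check the compiler cannot always remove
        return static_cast<int>(p->size());
    }

    // Reference version: s is assumed to be bound to a real string and stays
    // bound to the same object for its whole lifetime, so neither the null
    // check nor the "could it now refer elsewhere?" question arises.
    int count_ref(const std::string& s) {
        return static_cast<int>(s.size());
    }

    int main() {
        std::string msg = "hello";
        return count_ptr(&msg) - count_ref(msg);   // 0: same result, less checking
    }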
However, none of these advantages carry over directly to managed languages, so I don't see the relevance of that in the context of your discussion topic.
There are some general benefits of JIT compilation mentioned in Wikipedia.
I can't think of anything related directly to the use of references instead of pointers.
Generally speaking, references make it possible to refer to the same object from different places.
A 'pointer' is the name of one mechanism for implementing references. C++, Pascal, C... have pointers; C++ offers another mechanism (with slightly different use cases) called a 'reference', but essentially these are all implementations of the general referencing concept.
So there is no reason why references are by definition faster or slower than pointers.
The real difference is in using a JIT or a classic 'up front' compiler: the JIT can take data into account that isn't available to the up-front compiler. It has nothing to do with how the 'reference' concept is implemented.
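For what it's worth, a small C++ sketch of that equivalence (the function names are invented for illustration):

    // Both functions receive the address of x; on typical compilers the
    // reference version compiles to the same machine code as the pointer
    // version, only the caller-side syntax differs.
    void bump_ptr(int* p) { *p += 1; }
    void bump_ref(int& r) { r += 1; }

    int main() {
        int x = 0;
        bump_ptr(&x);  // explicit address-of, explicit dereference
        bump_ref(x);   // the compiler takes the address and dereferences for you
        return 0;      // x is now 2 either way
    }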
Other answers are right.
I would only add that any optimization won't make a hoot of difference unless it is in code where the program counter actually spends much time, like in tight loops that don't contain function calls (such as comparing strings).
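For example, a hypothetical string-comparison loop like the one below is the kind of hot spot where such a micro-optimization could even show up:

    #include <cstdio>

    // Tight loop with no function calls: nearly all of the program counter's
    // time is spent here, so this is the only kind of code where a
    // reference-vs-pointer difference could matter at all.
    bool str_equal(const char* a, const char* b) {
        while (*a != '\0' && *a == *b) { ++a; ++b; }
        return *a == *b;
    }

    int main() {
        std::printf("%d\n", str_equal("abc", "abc"));  // prints 1
    }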
An object reference in a managed framework is very different from a passed reference in C++. To understand what makes them special, imagine how the following scenario would be handled, at the machine level, without garbage-collected object references: Method "Foo" returns a string, which is stored into various collections and passed to different pieces of code. Once nothing needs the string any more, it should be possible to reclaim all memory used in storing it, but it's unclear what piece of code will be the last one to use the string.
In a non-GC system, every collection either needs to have its own copy of the string, or else needs to hold something containing a pointer to a shared object which holds the characters in the string. In the latter situation, the shared object needs to somehow know when the last pointer to it gets eliminated. There are a variety of ways this can be handled, but an essential common aspect of all of them is that shared objects need to be notified when pointers to them are copied or destroyed. Such notification requires work.
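A minimal C++ sketch of that reference-counting bookkeeping, using std::shared_ptr (Foo and the containers are hypothetical):

    #include <memory>
    #include <string>
    #include <vector>

    std::shared_ptr<std::string> Foo() {
        return std::make_shared<std::string>("result");
    }

    int main() {
        std::vector<std::shared_ptr<std::string>> a, b;
        auto s = Foo();   // reference count == 1
        a.push_back(s);   // atomic increment: the shared object is "notified"
        b.push_back(s);   // atomic increment again
        a.clear();        // atomic decrement
        // The string is freed only when the last owner drops it; every copy and
        // destruction above did real work, unlike copying a GC'd reference.
    }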
In a GC system by contrast, programs are decorated with metadata to say which registers or parts of a stack frame will be used at any given time to hold rooted object references. When a garbage collection cycle occurs, the garbage collector will have to parse this data, identify and preserve all live objects, and nuke everything else. At all other times, however, the processor can copy, replace, shuffle, or destroy references in any pattern or sequence it likes, without having to notify any of the objects involved. Note that when using pointer-use notifications in a multi-processor system, if different threads might copy or destroy references to the same object, synchronization code will be required to make the necessary notification thread-safe. By contrast, in a GC system, each processor may change reference variables at any time without having to synchronize its actions with any other processor.