C# object pool using Interlocked.Increment
I have seen many good object pool implementations. For example: C# Object Pooling Pattern implementation.
But it seems like the thread-safe ones always use a lock and never try to use Interlocked.* operations.
It seems easy to write one that doesn't allow returning objects to the pool (just a big array with a pointer that Interlocked.Increments). But I can't think of any way to write one that lets you return objects. Has anyone done this?
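For reference, the "rent-only" approach described above might look something like the following sketch (illustrative only, not part of the original question): a pre-filled array plus an index advanced with Interlocked.Increment, with no way to return objects.

```csharp
using System;
using System.Threading;

// Minimal sketch of a "rent-only" pool: objects can be taken but never
// returned. Names are illustrative.
public sealed class RentOnlyPool<T> where T : class, new()
{
    private readonly T[] _items;
    private int _next = -1; // Interlocked.Increment returns the new value, so start at -1

    public RentOnlyPool(int capacity)
    {
        _items = new T[capacity];
        for (int i = 0; i < capacity; i++)
            _items[i] = new T();
    }

    // Returns a pooled instance, or null once the pool is exhausted.
    public T Take()
    {
        int index = Interlocked.Increment(ref _next);
        return index < _items.Length ? _items[index] : null;
    }
}
```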
Comments (6)
I cannot see any real benefit in using Interlocked, especially since it has to be used in an unsafe manner. A lock only flips a bit flag in the object's memory space - very fast indeed. Interlocked is a tad better since it can be done in registers rather than in memory.
Are you experiencing a performance problem? What is the main purpose of such code? At the end of the day, C# is designed to abstract memory management away from you so that you can focus on your business problem.
Remember, if you need to manage memory yourself and use unsafe pointers, you have to pin the memory area, which is an extra performance cost.
Think hard about why you need object pooling anyway - there is no discussion here of the objects that are pooled. For most objects, using the managed heap will provide the necessary functionality without the headaches of a new pool manager in your own code. Only if your object encapsulates hard-to-establish or hard-to-release resources is object pooling in managed code worth considering.
If you do need to do it yourself, then there is a lightweight reader/writer lock that might be useful in optimizing the pool accesses.
http://theburningmonk.com/2010/02/threading-using-readerwriterlockslim/
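As a rough sketch of that suggestion (my own illustrative names, not code from the linked post): a pool guarded by ReaderWriterLockSlim. Note that Get and Return both mutate the pool, so only read-mostly operations such as checking the count actually benefit from the read lock.

```csharp
using System.Collections.Generic;
using System.Threading;

// Sketch of a pool guarded by ReaderWriterLockSlim. Names are illustrative.
public sealed class RwLockedPool<T> where T : class, new()
{
    private readonly Stack<T> _items = new Stack<T>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public int Count
    {
        get
        {
            _lock.EnterReadLock();           // read-only access: shared lock
            try { return _items.Count; }
            finally { _lock.ExitReadLock(); }
        }
    }

    public T Get()
    {
        _lock.EnterWriteLock();              // mutating access: exclusive lock
        try { return _items.Count > 0 ? _items.Pop() : new T(); }
        finally { _lock.ExitWriteLock(); }
    }

    public void Return(T item)
    {
        _lock.EnterWriteLock();
        try { _items.Push(item); }
        finally { _lock.ExitWriteLock(); }
    }
}
```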
I've done it with a lock-free queue built as a singly-linked list. The following has some irrelevant stuff cut out, and I haven't tested it with that stuff removed, but it should at least give the idea.
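The code this answer refers to is not reproduced on this page. As a rough sketch in the same spirit (not the author's original): a pool backed by a lock-free singly-linked list driven by Interlocked.CompareExchange - a LIFO free list rather than a queue, for brevity. It allocates a node on each Return and a new instance when empty, which is exactly the caveat raised below.

```csharp
using System.Threading;

// Sketch of a lock-free pool over a singly-linked free list.
// Names are illustrative.
public sealed class LockFreeLinkedPool<T> where T : class, new()
{
    private sealed class Node
    {
        public T Item;
        public Node Next;
    }

    private Node _head; // top of the free list

    public T Get()
    {
        while (true)
        {
            Node head = Volatile.Read(ref _head);
            if (head == null)
                return new T(); // pool empty: fall back to allocating

            // Try to pop the head node; retry if another thread won the race.
            if (Interlocked.CompareExchange(ref _head, head.Next, head) == head)
                return head.Item;
        }
    }

    public void Return(T item)
    {
        var node = new Node { Item = item }; // note: allocates a node per return
        while (true)
        {
            Node head = Volatile.Read(ref _head);
            node.Next = head;
            if (Interlocked.CompareExchange(ref _head, node, head) == head)
                return;
        }
    }
}
```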
If your reason for pooling was the raw performance consideration of allocation and collection then the fact that this allocates and collects makes it pretty useless. If it's because an underlying resource is expensive to obtain and/or release, or because the instances cache "learned" information in use, then it may suit.
The problem with returning reference objects is that it defeats the entire attempt to lock access to it in the first place. You can't use a basic lock() command to control access to a resource outside the scope of the object, and that means that the traditional getter/setter designs don't work.
Something that MAY work is an object that contains lockable resources, and allows lambdas or delegates to be passed in that will make use of the resource. The object will lock the resource, run the delegate, then unlock when the delegate completes. This basically puts control over running the code into the hands of the locking object, but would allow more complex operations than Interlocked has available.
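A minimal sketch of that delegate-based idea (class and member names are illustrative): the resource is never handed out; callers pass in a delegate and the wrapper runs it under a lock.

```csharp
using System;

// Sketch: the resource stays private and is only touched inside the lock.
public sealed class GuardedResource<T>
{
    private readonly T _resource;
    private readonly object _gate = new object();

    public GuardedResource(T resource)
    {
        _resource = resource;
    }

    // Run an action against the resource while holding the lock.
    public void Use(Action<T> action)
    {
        lock (_gate)
        {
            action(_resource);
        }
    }

    // Same, but for operations that compute a result.
    public TResult Use<TResult>(Func<T, TResult> func)
    {
        lock (_gate)
        {
            return func(_resource);
        }
    }
}

// Usage: guarded.Use(list => list.Add(42));
```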
Another possible method is to expose getters and setters, but implement your own access control by using a "checkout" model; when a thread is allowed to "get" a value, keep a reference to the current thread in a locked internal resource. Until that thread calls the setter, aborts, etc., all other threads attempting to access the getter are kept in a Yield loop. Once the resource is checked back in, the next thread can get it.
Now, be aware that this still requires some cooperation among users of the object. In particular, this logic is rather naive with regard to the setter; it is impossible to check in a book without having checked it out. This rule may not be apparent to consumers, and improper use could cause an unhandled exception. Also, all users must know to check the object back in if they stop using it before they terminate, even though basic C# knowledge would dictate that if you get a reference type, changes you make are reflected everywhere. However, this can be used as a basic "one at a time" access control for a non-thread-safe resource.
Have you looked at the concurrent collections in .NET 4?
e.g. http://msdn.microsoft.com/en-us/library/dd287191.aspx
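For example, a simple pool can be layered on ConcurrentBag<T> from System.Collections.Concurrent, which handles the thread safety internally (illustrative sketch, names are my own):

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of a pool built on ConcurrentBag<T>. Names are illustrative.
public sealed class ConcurrentBagPool<T> where T : class
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ConcurrentBagPool(Func<T> factory)
    {
        _factory = factory;
    }

    public T Get()
    {
        // TryTake removes and returns an item if one is available.
        T item;
        return _items.TryTake(out item) ? item : _factory();
    }

    public void Return(T item)
    {
        _items.Add(item);
    }
}
```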
Good question. When writing high-performance software, embracing zero-allocation patterns by using a fast object pool is critical.
Microsoft released an object pool under the Apache License 2.0 (linked below).
It avoids using locks and only uses Interlocked.CompareExchange for allocations (Get). It seems particularly fast when you get and release a few objects at a time, which covers most use cases. It seems less optimized if you get a large batch of objects and then release the batch, so if your application behaves like that you may want to modify it.
I think the Interlocked.Increment approach, as you suggested, could be more general and work better for the batch use cases.
http://sourceroslyn.io/#Microsoft.CodeAnalysis.Workspaces/ObjectPool%25601.cs,98aa6d9b3c4e313b
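The linked Roslyn source is not reproduced here; below is only a simplified sketch of the same general idea - a fixed array of slots that Get and Return claim and release with Interlocked.CompareExchange, so no locks are taken. Names are illustrative, not the Roslyn API.

```csharp
using System;
using System.Threading;

// Sketch of a fixed-size, lock-free slot pool. Names are illustrative.
public sealed class CompareExchangePool<T> where T : class
{
    private readonly T[] _slots;
    private readonly Func<T> _factory;

    public CompareExchangePool(Func<T> factory, int size)
    {
        _factory = factory;
        _slots = new T[size];
    }

    public T Get()
    {
        var slots = _slots;
        for (int i = 0; i < slots.Length; i++)
        {
            T item = slots[i];
            // Try to claim slot i by swapping its contents for null.
            if (item != null &&
                Interlocked.CompareExchange(ref slots[i], null, item) == item)
            {
                return item;
            }
        }
        // Nothing free: fall back to allocating a new instance.
        return _factory();
    }

    public void Return(T item)
    {
        var slots = _slots;
        for (int i = 0; i < slots.Length; i++)
        {
            // Put the item back into the first empty slot; drop it if the pool is full.
            if (Interlocked.CompareExchange(ref slots[i], item, null) == null)
                return;
        }
    }
}
```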