C# object pool using Interlocked.Increment


I have seen many good object pool implementations. For example: C# Object Pooling Pattern implementation.

But it seems like the thread-safe ones always use a lock and never try to use Interlocked.* operations.

It seems easy to write one that doesn't allow returning objects to the pool (just a big array with a pointer that Interlocked.Increments). But I can't think of any way to write one that lets you return objects. Has anyone done this?
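For reference, the take-only design described above can be sketched in a few lines. This is a minimal illustration, not a finished implementation; the type name and the fall-back to plain new when the pool runs dry are my own assumptions:

using System.Threading;

// Take-only pool: a pre-filled array plus a cursor advanced with
// Interlocked.Increment. Objects can be taken but never returned.
public sealed class TakeOnlyPool<T> where T : class, new()
{
    private readonly T[] _items;
    private int _next = -1;   // index of the last item handed out

    public TakeOnlyPool(int size)
    {
        _items = new T[size];
        for (int i = 0; i < size; i++)
            _items[i] = new T();
    }

    public T Take()
    {
        int index = Interlocked.Increment(ref _next);
        // Once the pool is exhausted, fall back to allocating fresh objects.
        return index < _items.Length ? _items[index] : new T();
    }
}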


Comments (6)

握住我的手 2024-09-25 07:45:37


I cannot see any real benefit in using Interlocked here, especially since it may have to be used in an unsafe manner. A lock only flips a bit flag in the object's header, which is very, very fast indeed. Interlocked is a tad better since it can be done in registers rather than in memory.

Are you experiencing an actual performance problem? What is the main purpose of such code? At the end of the day, C# is designed to abstract memory management away from you so that you can focus on your business problem.

Remember, if you need to manage memory yourself and use unsafe pointers, you have to pin the memory area, which carries an extra performance cost.
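To make the comparison concrete, here is a hypothetical micro-example contrasting the two approaches; both increment a shared counter safely, but the Interlocked version avoids acquiring a monitor:

using System.Threading;

public sealed class Counters
{
    private readonly object _gate = new object();
    private int _lockedCount;
    private int _interlockedCount;

    // Acquires and releases a monitor around the increment.
    public void IncrementWithLock()
    {
        lock (_gate) { _lockedCount++; }
    }

    // Performs the increment as a single atomic operation.
    public void IncrementInterlocked()
    {
        Interlocked.Increment(ref _interlockedCount);
    }
}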

南烟 2024-09-25 07:45:36


Think hard about why you need object pooling anyway - there is no discussion here of the objects that are pooled. For most objects, using the managed heap will provide the necessary functionality without the headaches of a new pool manager in your own code. Only if your object encapsulates hard-to-establish or hard-to-release resources is object pooling in managed code worth considering.

If you do need to do it yourself, then there is a lightweight reader/writer lock that might be useful in optimizing the pool accesses.

http://theburningmonk.com/2010/02/threading-using-readerwriterlockslim/
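As a rough illustration of that suggestion, here is a minimal sketch of a pool guarded by ReaderWriterLockSlim; the type and member names are assumptions. Note that both taking and returning are writes, so only read-mostly operations such as inspecting the count actually benefit from the read lock:

using System.Collections.Generic;
using System.Threading;

public sealed class RwLockedPool<T> where T : class
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Stack<T> _items = new Stack<T>();

    // Read-only inspection takes the cheap read lock.
    public int Count
    {
        get
        {
            _lock.EnterReadLock();
            try { return _items.Count; }
            finally { _lock.ExitReadLock(); }
        }
    }

    public bool TryTake(out T item)
    {
        _lock.EnterWriteLock();
        try
        {
            if (_items.Count > 0) { item = _items.Pop(); return true; }
            item = null;
            return false;
        }
        finally { _lock.ExitWriteLock(); }
    }

    public void Return(T item)
    {
        _lock.EnterWriteLock();
        try { _items.Push(item); }
        finally { _lock.ExitWriteLock(); }
    }
}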

单调的奢华 2024-09-25 07:45:36


I've done it with a lock-free queue built as a singly-linked list. The following has some irrelevant stuff cut out, and I haven't tested it with that stuff removed, but it should at least give the idea.

internal sealed class LockFreeQueue<T>
{
  private sealed class Node
  {
    public readonly T Item;
    public Node Next;
    public Node(T item)
    {
      Item = item;
    }
  }
  private volatile Node _head;
  private volatile Node _tail;
  public LockFreeQueue()
  {
    _head = _tail = new Node(default(T));
  }
#pragma warning disable 420 // volatile semantics not lost as only by-ref calls are interlocked
  public void Enqueue(T item)
  {
    Node newNode = new Node(item);
    for(;;)
    {
      Node curTail = _tail;
      if (Interlocked.CompareExchange(ref curTail.Next, newNode, null) == null)   //append to the tail if it is indeed the tail.
      {
        Interlocked.CompareExchange(ref _tail, newNode, curTail);   //CAS in case we were assisted by an obstructed thread.
        return;
      }
      else
      {
        Interlocked.CompareExchange(ref _tail, curTail.Next, curTail);  //assist obstructing thread.
      }
    }
  }    
  public bool TryDequeue(out T item)
  {
    for(;;)
    {
      Node curHead = _head;
      Node curTail = _tail;
      Node curHeadNext = curHead.Next;
      if (curHead == curTail)
      {
        if (curHeadNext == null)
        {
          item = default(T);
          return false;
        }
        else
          Interlocked.CompareExchange(ref _tail, curHeadNext, curTail);   // assist obstructing thread
      }
      else
      {
        item = curHeadNext.Item;
        if (Interlocked.CompareExchange(ref _head, curHeadNext, curHead) == curHead)
        {
          return true;
        }
      }
    }
  }
#pragma warning restore 420
}

If your reason for pooling is the raw performance cost of allocation and collection, then the fact that this queue itself allocates a node per enqueue (which must later be collected) makes it pretty useless. If it's because an underlying resource is expensive to obtain and/or release, or because instances cache "learned" information while in use, then it may suit.
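To connect the queue back to the original question, a pool might sit on top of it roughly like this; the wrapper type and its Func<T> factory are my assumptions, not part of the answer above:

using System;

internal sealed class QueueBackedPool<T> where T : class
{
    private readonly LockFreeQueue<T> _queue = new LockFreeQueue<T>();
    private readonly Func<T> _factory;

    public QueueBackedPool(Func<T> factory)
    {
        _factory = factory;
    }

    // Reuse a pooled instance if one is available, otherwise create a new one.
    public T Get()
    {
        T item;
        return _queue.TryDequeue(out item) ? item : _factory();
    }

    // Returning an object is just enqueueing it for the next caller.
    public void Return(T item)
    {
        _queue.Enqueue(item);
    }
}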

还如梦归 2024-09-25 07:45:36


The problem with returning reference objects is that it defeats the entire attempt to lock access to them in the first place. You can't use a basic lock() statement to control access to a resource outside the scope of that object, which means the traditional getter/setter designs don't work.

Something that MAY work is an object that contains lockable resources, and allows lambdas or delegates to be passed in that will make use of the resource. The object will lock the resource, run the delegate, then unlock when the delegate completes. This basically puts control over running the code into the hands of the locking object, but would allow more complex operations than Interlocked has available.
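A minimal sketch of that delegate-passing idea might look like the following; all names here are illustrative assumptions:

using System;

public sealed class Guarded<T>
{
    private readonly object _gate = new object();
    private readonly T _resource;

    public Guarded(T resource)
    {
        _resource = resource;
    }

    // Locks the resource, runs the caller's delegate against it, and
    // unlocks when the delegate completes. The resource itself never
    // escapes the lock.
    public TResult Use<TResult>(Func<T, TResult> action)
    {
        lock (_gate)
        {
            return action(_resource);
        }
    }

    public void Use(Action<T> action)
    {
        lock (_gate)
        {
            action(_resource);
        }
    }
}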

Another possible method is to expose getters and setters, but implement your own access control by using a "checkout" model; when a thread is allowed to "get" a value, keep a reference to the current thread in a locked internal resource. Until that thread calls the setter, aborts, etc., all other threads attempting to access the getter are kept in a Yield loop. Once the resource is checked back in, the next thread can get it.

public class Library
{
   private Book controlledBook;
   private Thread checkoutThread;

   public Book CheckOutTheBook()
   {
      // Spin until no other live thread has the book checked out.
      while (checkoutThread != null
             && checkoutThread != Thread.CurrentThread
             && checkoutThread.IsAlive)
          Thread.Yield();   // Yield is static; Thread.CurrentThread.Yield() does not compile

      lock (this)
      {
         checkoutThread = Thread.CurrentThread;

         return controlledBook;
      }
   }

   public void CheckInTheBook(Book theBook)
   {
      if (Thread.CurrentThread != checkoutThread)
          throw new InvalidOperationException("This thread does not have the resource checked out.");

      lock (this)
      {
         checkoutThread = null;

         controlledBook = theBook;
      }
   }
}

Now, be aware that this still requires some cooperation among users of the object. In particular, this logic is rather naive with regard to the setter: it is impossible to check in a book without having checked it out. This rule may not be apparent to consumers, and improper use could cause an unhandled exception. Also, all users must know to check the object back in if they stop using it before their thread terminates, even though basic C# knowledge would suggest that when you get a reference type, the changes you make are reflected everywhere. However, this can be used as a basic "one at a time" access control for a non-thread-safe resource.

花落人断肠 2024-09-25 07:45:36


Have you looked at the concurrent collections in .NET 4?

e.g. http://msdn.microsoft.com/en-us/library/dd287191.aspx
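For example, ConcurrentBag<T> makes a serviceable pool with almost no code. This sketch (names are mine) falls back to a factory when the bag is empty:

using System;
using System.Collections.Concurrent;

public sealed class BagPool<T> where T : class
{
    private readonly ConcurrentBag<T> _bag = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public BagPool(Func<T> factory)
    {
        _factory = factory;
    }

    public T Get()
    {
        T item;
        // TryTake removes and returns a pooled item if one is available.
        return _bag.TryTake(out item) ? item : _factory();
    }

    public void Return(T item)
    {
        _bag.Add(item);
    }
}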

携余温的黄昏 2024-09-25 07:45:36


Good question. When writing high-performance software, embracing zero-allocation patterns by using a fast object pool is critical.

Microsoft released an object pool under the Apache License 2.0.

It avoids locks and uses only Interlocked.CompareExchange for allocations (Get). It seems particularly fast when you get and release a few objects at a time, which is most use cases. It seems less optimized when you get a large batch of objects and then release the whole batch, so if your application behaves like that you may want to modify it.

I think the Interlocked.Increment approach, as you suggested, could be more general and could work better for the batch use cases.

http://sourceroslyn.io/#Microsoft.CodeAnalysis.Workspaces/ObjectPool%25601.cs,98aa6d9b3c4e313b

// Copyright (c) Microsoft.  All Rights Reserved.  Licensed under the Apache License, Version 2.0.  See License.txt in the project root for license information.

// define TRACE_LEAKS to get additional diagnostics that can lead to the leak sources. note: it will
// make everything about 2-3x slower
// 
// #define TRACE_LEAKS

// define DETECT_LEAKS to detect possible leaks
// #if DEBUG
// #define DETECT_LEAKS  //for now always enable DETECT_LEAKS in debug.
// #endif

using System;
using System.Diagnostics;
using System.Threading;

#if DETECT_LEAKS
using System.Runtime.CompilerServices;

#endif
namespace Microsoft.CodeAnalysis.PooledObjects
{
    /// <summary>
    /// Generic implementation of object pooling pattern with predefined pool size limit. The main
    /// purpose is that limited number of frequently used objects can be kept in the pool for
    /// further recycling.
    /// 
    /// Notes: 
    /// 1) it is not the goal to keep all returned objects. Pool is not meant for storage. If there
    ///    is no space in the pool, extra returned objects will be dropped.
    /// 
    /// 2) it is implied that if object was obtained from a pool, the caller will return it back in
    ///    a relatively short time. Keeping checked out objects for long durations is ok, but 
    ///    reduces usefulness of pooling. Just new up your own.
    /// 
    /// Not returning objects to the pool is not detrimental to the pool's work, but is a bad practice. 
    /// Rationale: 
    ///    If there is no intent for reusing the object, do not use pool - just use "new". 
    /// </summary>
    internal class ObjectPool<T> where T : class
    {
        [DebuggerDisplay("{Value,nq}")]
        private struct Element
        {
            internal T Value;
        }

        /// <remarks>
        /// Not using System.Func{T} because this file is linked into the (debugger) Formatter,
        /// which does not have that type (since it compiles against .NET 2.0).
        /// </remarks>
        internal delegate T Factory();

        // Storage for the pool objects. The first item is stored in a dedicated field because we
        // expect to be able to satisfy most requests from it.
        private T _firstItem;
        private readonly Element[] _items;

        // factory is stored for the lifetime of the pool. We will call this only when pool needs to
        // expand. compared to "new T()", Func gives more flexibility to implementers and faster
        // than "new T()".
        private readonly Factory _factory;

#if DETECT_LEAKS
        private static readonly ConditionalWeakTable<T, LeakTracker> leakTrackers = new ConditionalWeakTable<T, LeakTracker>();

        private class LeakTracker : IDisposable
        {
            private volatile bool disposed;

#if TRACE_LEAKS
            internal volatile object Trace = null;
#endif

            public void Dispose()
            {
                disposed = true;
                GC.SuppressFinalize(this);
            }

            private string GetTrace()
            {
#if TRACE_LEAKS
                return Trace == null ? "" : Trace.ToString();
#else
                return "Leak tracing information is disabled. Define TRACE_LEAKS on ObjectPool`1.cs to get more info \n";
#endif
            }

            ~LeakTracker()
            {
                if (!this.disposed && !Environment.HasShutdownStarted)
                {
                    var trace = GetTrace();

                    // If you are seeing this message it means that object has been allocated from the pool 
                    // and has not been returned back. This is not critical, but turns pool into rather 
                    // inefficient kind of "new".
                    Debug.WriteLine($"TRACEOBJECTPOOLLEAKS_BEGIN\nPool detected potential leaking of {typeof(T)}. \n Location of the leak: \n {GetTrace()} TRACEOBJECTPOOLLEAKS_END");
                }
            }
        }
#endif

        internal ObjectPool(Factory factory)
            : this(factory, Environment.ProcessorCount * 2)
        { }

        internal ObjectPool(Factory factory, int size)
        {
            Debug.Assert(size >= 1);
            _factory = factory;
            _items = new Element[size - 1];
        }

        private T CreateInstance()
        {
            var inst = _factory();
            return inst;
        }

        /// <summary>
        /// Produces an instance.
        /// </summary>
        /// <remarks>
        /// Search strategy is a simple linear probing which is chosen for its cache-friendliness.
        /// Note that Free will try to store recycled objects close to the start thus statistically 
        /// reducing how far we will typically search.
        /// </remarks>
        internal T Allocate()
        {
            // PERF: Examine the first element. If that fails, AllocateSlow will look at the remaining elements.
            // Note that the initial read is optimistically not synchronized. That is intentional. 
            // We will interlock only when we have a candidate. in a worst case we may miss some
            // recently returned objects. Not a big deal.
            T inst = _firstItem;
            if (inst == null || inst != Interlocked.CompareExchange(ref _firstItem, null, inst))
            {
                inst = AllocateSlow();
            }

#if DETECT_LEAKS
            var tracker = new LeakTracker();
            leakTrackers.Add(inst, tracker);

#if TRACE_LEAKS
            var frame = CaptureStackTrace();
            tracker.Trace = frame;
#endif
#endif
            return inst;
        }

        private T AllocateSlow()
        {
            var items = _items;

            for (int i = 0; i < items.Length; i++)
            {
                // Note that the initial read is optimistically not synchronized. That is intentional. 
                // We will interlock only when we have a candidate. in a worst case we may miss some
                // recently returned objects. Not a big deal.
                T inst = items[i].Value;
                if (inst != null)
                {
                    if (inst == Interlocked.CompareExchange(ref items[i].Value, null, inst))
                    {
                        return inst;
                    }
                }
            }

            return CreateInstance();
        }

        /// <summary>
        /// Returns objects to the pool.
        /// </summary>
        /// <remarks>
        /// Search strategy is a simple linear probing which is chosen for its cache-friendliness.
        /// Note that Free will try to store recycled objects close to the start thus statistically 
        /// reducing how far we will typically search in Allocate.
        /// </remarks>
        internal void Free(T obj)
        {
            Validate(obj);
            ForgetTrackedObject(obj);

            if (_firstItem == null)
            {
                // Intentionally not using interlocked here. 
                // In a worst case scenario two objects may be stored into same slot.
                // It is very unlikely to happen and will only mean that one of the objects will get collected.
                _firstItem = obj;
            }
            else
            {
                FreeSlow(obj);
            }
        }

        private void FreeSlow(T obj)
        {
            var items = _items;
            for (int i = 0; i < items.Length; i++)
            {
                if (items[i].Value == null)
                {
                    // Intentionally not using interlocked here. 
                    // In a worst case scenario two objects may be stored into same slot.
                    // It is very unlikely to happen and will only mean that one of the objects will get collected.
                    items[i].Value = obj;
                    break;
                }
            }
        }

        /// <summary>
        /// Removes an object from leak tracking.  
        /// 
        /// This is called when an object is returned to the pool.  It may also be explicitly 
        /// called if an object allocated from the pool is intentionally not being returned
        /// to the pool.  This can be of use with pooled arrays if the consumer wants to 
        /// return a larger array to the pool than was originally allocated.
        /// </summary>
        [Conditional("DEBUG")]
        internal void ForgetTrackedObject(T old, T replacement = null)
        {
#if DETECT_LEAKS
            LeakTracker tracker;
            if (leakTrackers.TryGetValue(old, out tracker))
            {
                tracker.Dispose();
                leakTrackers.Remove(old);
            }
            else
            {
                var trace = CaptureStackTrace();
                Debug.WriteLine($"TRACEOBJECTPOOLLEAKS_BEGIN\nObject of type {typeof(T)} was freed, but was not from pool. \n Callstack: \n {trace} TRACEOBJECTPOOLLEAKS_END");
            }

            if (replacement != null)
            {
                tracker = new LeakTracker();
                leakTrackers.Add(replacement, tracker);
            }
#endif
        }

#if DETECT_LEAKS
        private static Lazy<Type> _stackTraceType = new Lazy<Type>(() => Type.GetType("System.Diagnostics.StackTrace"));

        private static object CaptureStackTrace()
        {
            return Activator.CreateInstance(_stackTraceType.Value);
        }
#endif

        [Conditional("DEBUG")]
        private void Validate(object obj)
        {
            Debug.Assert(obj != null, "freeing null?");

            Debug.Assert(_firstItem != obj, "freeing twice?");

            var items = _items;
            for (int i = 0; i < items.Length; i++)
            {
                var value = items[i].Value;
                if (value == null)
                {
                    return;
                }

                Debug.Assert(value != obj, "freeing twice?");
            }
        }
    }
}
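A hypothetical usage sketch follows. ObjectPool<T> is internal, so this assumes the file is linked into your own assembly, and pooling StringBuilder is purely illustrative:

using System.Text;
using Microsoft.CodeAnalysis.PooledObjects;

internal static class PoolDemo
{
    internal static string Demo()
    {
        // The factory runs only when the pool is empty; size caps retained objects.
        var pool = new ObjectPool<StringBuilder>(() => new StringBuilder(), 16);

        StringBuilder sb = pool.Allocate();
        sb.Append("hello");
        string result = sb.ToString();

        sb.Clear();      // reset instance state before returning it to the pool
        pool.Free(sb);
        return result;
    }
}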