Is there any LRU implementation of IDictionary?
I would like to implement a simple in-memory LRU cache system, and I was thinking about a solution based on an IDictionary implementation that could handle a hashed LRU mechanism. Coming from Java, I have experience with LinkedHashMap, which works fine for what I need, but I can't find a similar solution anywhere for .NET.

Has anyone developed one, or has anyone had experience like this?
This is a very simple and fast implementation we developed for a web site we own. We tried to improve the code as much as possible while keeping it thread safe. I think the code is very simple and clear, but if you need some explanation or a guide on how to use it, don't hesitate to ask.
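A minimal sketch of a cache in this spirit, assuming a Dictionary for O(1) lookup plus a LinkedList for recency order under a single lock; the names are illustrative, not the original code:

```csharp
using System.Collections.Generic;

// A minimal thread-safe LRU cache: a Dictionary for O(1) lookup and a
// LinkedList to track recency. All names here are illustrative.
public class LruCache<TKey, TValue>
{
    private readonly int capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map;
    private readonly LinkedList<KeyValuePair<TKey, TValue>> order; // head = most recent
    private readonly object gate = new object();

    public LruCache(int capacity)
    {
        this.capacity = capacity;
        this.map = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>(capacity);
        this.order = new LinkedList<KeyValuePair<TKey, TValue>>();
    }

    public bool TryGet(TKey key, out TValue value)
    {
        lock (gate)
        {
            if (map.TryGetValue(key, out var node))
            {
                // Move the accessed entry to the front (most recently used).
                order.Remove(node);
                order.AddFirst(node);
                value = node.Value.Value;
                return true;
            }
            value = default;
            return false;
        }
    }

    public void Add(TKey key, TValue value)
    {
        lock (gate)
        {
            if (map.TryGetValue(key, out var existing))
            {
                order.Remove(existing);
                map.Remove(key);
            }
            else if (map.Count >= capacity)
            {
                // Evict the least recently used entry (the tail of the list).
                var lru = order.Last;
                order.RemoveLast();
                map.Remove(lru.Value.Key);
            }
            var node = order.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
            map[key] = node;
        }
    }
}
```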
There is nothing in the base class libraries that does this.
On the free side, maybe something like C5's HashedLinkedList would work.
If you're willing to pay, maybe check out this C# toolkit. It contains an implementation.
The LRUCache answer with sample code above (by Martin) uses MethodImplOptions.Synchronized, which is equivalent to putting lock(this) around each method call. Whilst correct, this global lock will significantly reduce throughput under concurrent load.

To solve this I implemented a thread-safe pseudo LRU designed for concurrent workloads. Performance is very close to ConcurrentDictionary, ~10x faster than MemoryCache, and the hit rate is better than a conventional LRU. A full analysis is provided in the GitHub link below.

GitHub: https://github.com/bitfaster/BitFaster.Caching

Usage looks like this:
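A sketch of typical usage with the library's ConcurrentLru class; the capacity value and factory here are illustrative (see the repository README for the current API):

```csharp
using BitFaster.Caching.Lru;

// A bounded, thread-safe pseudo-LRU; entries are evicted once the
// capacity is exceeded.
int capacity = 666;
var lru = new ConcurrentLru<int, string>(capacity);

// Atomically returns the cached value for key 1, or invokes the
// factory, caches the result, and returns it.
string value = lru.GetOrAdd(1, k => $"value for key {k}");
```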
I've recently released a class called LurchTable to address the need for a C# variant of the LinkedHashMap. A brief discussion of the LurchTable can be found here.
Basic features:
Source Code: http://csharptest.net/browse/src/Library/Collections/LurchTable.cs
GitHub: https://github.com/csharptest/CSharpTest.Net.Collections
HTML Help: http://help.csharptest.net/
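A hedged usage sketch, assuming a LurchTable constructor that takes a LurchTableOrder and an entry limit; check the linked source for the exact API, as the signature here is an assumption:

```csharp
using CSharpTest.Net.Collections;

// Access-ordered table capped at 100 entries: once the limit is
// exceeded, the least recently accessed entry should be evicted,
// giving LinkedHashMap-style LRU behavior (assumed API).
var cache = new LurchTable<string, int>(LurchTableOrder.Access, 100);

cache["a"] = 1;          // insert
int value = cache["a"];  // under Access ordering, reads refresh the entry
```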
Found your answer while googling; also found this:
http://code.google.com/p/csharp-lru-cache/
The Caching Application Block of EntLib has an LRU scavenging option out of the box and can run in memory. It might be a bit heavyweight for what you want, though.
This takes Martin's code with Mr T's suggestions and makes it Stylecop friendly. Oh, it also allows for disposal of values as they cycle out of the cache.
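A minimal sketch of the disposal idea described here, reusing the Dictionary/LinkedList layout from the first sketch above; whether the evicted value implements IDisposable decides if it gets disposed (illustrative, not the linked code):

```csharp
using System;
using System.Collections.Generic;

public static class LruEviction
{
    // Removes the least recently used entry (the list's tail) and disposes
    // its value if it owns resources. 'map' and 'order' are the same
    // Dictionary/LinkedList pair used in the sketch further up.
    public static void EvictLeastRecentlyUsed<TKey, TValue>(
        Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map,
        LinkedList<KeyValuePair<TKey, TValue>> order)
    {
        var lru = order.Last;
        if (lru == null) return;

        order.RemoveLast();
        map.Remove(lru.Value.Key);

        // Dispose values holding unmanaged resources as they cycle out.
        if (lru.Value.Value is IDisposable disposable)
        {
            disposable.Dispose();
        }
    }
}
```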
I don't believe so. I've certainly seen hand-rolled ones implemented several times in various unrelated projects (which more or less confirms this; if there was one, surely at least one of the projects would have used it).

It's pretty simple to implement, and usually gets done by creating a class which contains both a Dictionary and a List. The keys go in the list (in order) and the items go in the dictionary. When you Add a new item to the collection, the function checks the length of the list, pulls out the last key (if it's too long) and then evicts the key and value from the dictionary to match. Not much more to it, really.
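A bare-bones sketch of that Dictionary-plus-List approach (single-threaded; all names are illustrative):

```csharp
using System.Collections.Generic;

// LRU via a List<TKey> for recency order plus a Dictionary for storage.
// Simple, but note List.Remove is O(n); fine for small caches.
public class SimpleLru<TKey, TValue>
{
    private readonly int maxSize;
    private readonly List<TKey> keys = new List<TKey>();   // index 0 = most recent
    private readonly Dictionary<TKey, TValue> items = new Dictionary<TKey, TValue>();

    public SimpleLru(int maxSize) => this.maxSize = maxSize;

    public void Add(TKey key, TValue value)
    {
        keys.Remove(key);              // refresh the position if the key already exists
        keys.Insert(0, key);
        items[key] = value;

        if (keys.Count > maxSize)      // too long: evict the last key and its value
        {
            TKey last = keys[keys.Count - 1];
            keys.RemoveAt(keys.Count - 1);
            items.Remove(last);
        }
    }

    public bool TryGet(TKey key, out TValue value) => items.TryGetValue(key, out value);
}
```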
I like Lawrence's implementation. Hashtable + LinkedList is a good solution.

Regarding threading, I would not lock this with [MethodImpl(MethodImplOptions.Synchronized)], but rather use ReaderWriterLockSlim or a spin lock (since contention is usually brief) instead.

In the Get function I would check if it's already the first item, rather than always removing and adding. This gives you the possibility to keep that within a reader lock that is not blocking other readers.
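A sketch of that Get fast path, assuming the Dictionary/LinkedList layout from the first sketch above and a ReaderWriterLockSlim (the referenced implementation may differ):

```csharp
using System.Collections.Generic;
using System.Threading;

public class ReadOptimizedLru<TKey, TValue>
{
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public bool TryGet(TKey key, out TValue value)
    {
        // Fast path: if the entry is already first, nothing mutates, so the
        // whole lookup stays under the read lock and readers never block.
        rwLock.EnterReadLock();
        try
        {
            if (map.TryGetValue(key, out var node) && order.First == node)
            {
                value = node.Value.Value;
                return true;
            }
        }
        finally { rwLock.ExitReadLock(); }

        // Slow path: the entry must move to the front, which mutates the list.
        rwLock.EnterWriteLock();
        try
        {
            if (map.TryGetValue(key, out var node))
            {
                order.Remove(node);
                order.AddFirst(node);
                value = node.Value.Value;
                return true;
            }
            value = default;
            return false;
        }
        finally { rwLock.ExitWriteLock(); }
    }
}
```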
Here is a modern implementation of an LRUCache<TKey, TValue> collection, for .NET 6 and later. The main feature is the GetOrAdd method, which either returns an existing value, or invokes the valueFactory and returns a new value. Each time a new value is added, the boundedCapacity policy is enforced by evicting the least recently used value from the collection. The valueFactory is invoked lazily, so that multiple concurrent GetOrAdd calls for the same key receive the same value.

The advanced API CollectionsMarshal.GetValueRefOrAddDefault is used so that the key is hashed only once per GetOrAdd call.

In case the valueFactory fails, the behavior of the Lazy<T> class is to cache the exception permanently. This behavior might not be suitable for a caching system, so you may want to substitute the Lazy<T> with the simple LazyWithRetry<T> implementation that I have posted here.

In case you would like to use an asynchronous valueFactory, there are AsyncLazy<T> implementations in this question.

The LRUCache<TKey, TValue> class is thread-safe.

Usage example:
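A hedged sketch of such usage, assuming a constructor that takes the boundedCapacity and the GetOrAdd signature described above; the class shape and the ExpensiveLookup helper are assumptions, not the answer's actual code:

```csharp
// Assumed shape: a constructor taking the boundedCapacity, plus the
// GetOrAdd(key, valueFactory) method described in the answer.
LRUCache<string, string> cache = new(100);

// The valueFactory runs lazily and at most once per key, even when
// several threads request the same missing key concurrently.
string value = cache.GetOrAdd("some-key", key => ExpensiveLookup(key));

// Hypothetical expensive operation standing in for real work.
static string ExpensiveLookup(string key) => $"computed:{key}";
```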
I just accidentally found LruCache.cs in aws-sdk-net: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/Amazon.Runtime/Internal/Util/LruCache.cs
If it's an ASP.NET app you can use the Cache class [1], but you'll be competing for space with other cached stuff, which may or may not be what you want.

[1] http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx
There is OrderedDictionary. You can remove an element by key and (re)insert it at the end of the order. When you need memory, remove the first element in the order.

This shows how, but it's a trifle slower:

https://leetcode.com/problems/lru-cache/solutions/1065496/c-two-implementationsordered-dictionary-linkedlist-and-their-comparison-with-explanation/
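A minimal sketch of that idea using the non-generic System.Collections.Specialized.OrderedDictionary (keys and values come back as object; the Touch helper is illustrative):

```csharp
using System.Collections.Specialized;

// LRU on top of the non-generic OrderedDictionary: re-adding a key moves
// it to the end, and eviction removes index 0 (the least recently used).
var cache = new OrderedDictionary();
const int Capacity = 100;

void Touch(object key, object value)
{
    cache.Remove(key);      // no-op if the key is absent
    cache.Add(key, value);  // Add appends, so the end = most recently used

    if (cache.Count > Capacity)
    {
        cache.RemoveAt(0);  // evict the least recently used entry
    }
}

Touch("a", 1);
Touch("b", 2);
Touch("a", 3); // "a" moves to the end; "b" is now the eviction candidate
```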