How would you implement an LRU cache in Java?
Please don't say EHCache or OSCache, etc. Assume for purposes of this question that I want to implement my own using just the SDK (learning by doing). Given that the cache will be used in a multithreaded environment, which data structures would you use? I've already implemented one using LinkedHashMap and Collections#synchronizedMap, but I'm curious if any of the new concurrent collections would be better candidates.
UPDATE: I was just reading through Yegge's latest when I found this nugget:
If you need constant-time access and want to maintain the insertion order, you can't do better than a LinkedHashMap, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas.
I was thinking almost exactly the same thing before I went with the LinkedHashMap + Collections#synchronizedMap implementation I mentioned above. Nice to know I hadn't just overlooked something.
Based on the answers so far, it sounds like my best bet for a highly concurrent LRU would be to extend ConcurrentHashMap using some of the same logic that LinkedHashMap uses.
Comments (21)
I like lots of these suggestions, but for now I think I'll stick with LinkedHashMap + Collections.synchronizedMap. If I do revisit this in the future, I'll probably work on extending ConcurrentHashMap in the same way LinkedHashMap extends HashMap.
UPDATE:
By request, here's the gist of my current implementation.
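The gist itself is not included in this copy. Purely as an illustration of the approach described above — an access-ordered LinkedHashMap wrapped in Collections.synchronizedMap, with an assumed capacity parameter rather than the original code — a minimal sketch might look like:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LinkedHashMap: the third constructor argument (true) makes
// get() move an entry to the "most recently used" end, so removeEldestEntry
// evicts the least recently used entry once the capacity is exceeded.
public final class LruCacheSketch {
    private LruCacheSketch() {}

    public static <K, V> Map<K, V> create(final int maxEntries) {
        Map<K, V> lru = new LinkedHashMap<K, V>(maxEntries + 1, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
        // Coarse-grained thread safety, as described in the question.
        return Collections.synchronizedMap(lru);
    }
}
```

The access-order flag is what turns the eviction policy from FIFO into LRU; without it, removeEldestEntry evicts in insertion order.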
If I were doing this again from scratch today, I'd use Guava's CacheBuilder.
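For reference, basic size-bounded usage of CacheBuilder looks roughly like the following; the key/value types and the capacity are illustrative:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaCacheExample {
    public static void main(String[] args) {
        // Evicts (approximately) least-recently-used entries once maximumSize
        // is reached; the returned Cache is safe for concurrent use.
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)
                .build();

        cache.put("key", "value");
        System.out.println(cache.getIfPresent("key")); // prints "value"; null if absent
    }
}
```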
This is round two.
The first round was what I came up with; then I reread the comments, with the domain a bit more ingrained in my head.
So here is the simplest version, with a unit test that shows it works, based on some other versions.
First the non-concurrent version:
The true flag will track the access of gets and puts. See the JavaDocs. Without the true flag to the constructor, removeEldestEntry would just implement a FIFO cache (see notes below on FIFO and removeEldestEntry).
Here is the test that proves it works as an LRU cache:
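The unit test itself is not included in this copy; a minimal stand-in that checks the same LRU behaviour against an access-ordered LinkedHashMap (capacity and values chosen only for illustration) could be:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruBehaviourTest {
    public static void main(String[] args) {
        final int max = 3;
        Map<Integer, Integer> cache = new LinkedHashMap<Integer, Integer>(max + 1, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > max;
            }
        };

        cache.put(0, 0);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(0);      // touch 0 so it becomes most recently used
        cache.put(3, 3);   // evicts 1, the least recently used entry

        assert cache.containsKey(0);
        assert !cache.containsKey(1);
        assert cache.containsKey(2) && cache.containsKey(3);
        System.out.println(cache.keySet()); // [2, 0, 3] in access order
    }
}
```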
Now for the concurrent version...
package org.boon.cache;
You can see why I cover the non-concurrent version first. The above attempts to create some stripes to reduce lock contention. So it hashes the key and then looks up that hash to find the actual cache. This makes the size limit more of a suggestion/rough guess, within a fair amount of error, depending on how well spread your key hash algorithm is.
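The striped implementation is not reproduced here. A rough sketch of the idea just described — hash the key to pick one of several small synchronized LRU maps — might look like this (class and parameter names are assumptions, not the boon code):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Striping: spread keys over several small synchronized LRU maps so that
// threads working on different keys rarely contend on the same lock.
// The overall size limit becomes approximate, as the text above explains.
public class StripedLruSketch<K, V> {
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    public StripedLruSketch(int numStripes, int totalCapacity) {
        stripes = new Map[numStripes];
        final int perStripe = Math.max(1, totalCapacity / numStripes);
        for (int i = 0; i < numStripes; i++) {
            stripes[i] = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(perStripe + 1, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > perStripe;
                    }
                });
        }
    }

    private Map<K, V> stripeFor(K key) {
        // Spread the hash bits, then map into the stripe array.
        int h = key.hashCode();
        h ^= (h >>> 16);
        return stripes[(h & 0x7fffffff) % stripes.length];
    }

    public V get(K key)             { return stripeFor(key).get(key); }
    public void put(K key, V value) { stripeFor(key).put(key, value); }
    public V remove(K key)          { return stripeFor(key).remove(key); }
}
```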
Here is the test to show that the concurrent version probably works. :) (Test under fire would be the real way).
This is the last post. The first post I deleted, as it was an LFU, not an LRU, cache.
I thought I would give this another go. I was trying to come up with the simplest version of an LRU cache using the standard JDK without too much implementation.
Here is what I came up with. My first attempt was a bit of a disaster, as I implemented an LFU instead of an LRU, and then I added FIFO and LRU support to it... and then I realized it was becoming a monster. Then I started talking to my buddy John, who was barely interested, and then I described at great length how I implemented an LFU, LRU and FIFO and how you could switch between them with a simple ENUM arg, and then I realized that all I really wanted was a simple LRU. So ignore the earlier post from me, and let me know if you want to see an LRU/LFU/FIFO cache that is switchable via an enum... no? Ok.. here we go.
The simplest possible LRU using just the JDK. I implemented both a concurrent version and a non-concurrent version.
I created a common interface (it is minimalist, so it is likely missing a few features that you would like, but it works for my use cases; if you would like to see feature XYZ, let me know... I live to write code.).
You may wonder what getSilent is. I use this for testing. getSilent does not change LRU score of an item.
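The interface itself is not shown in this copy. Based on the description, a minimal version could be the following (getSilent is from the text above; the remaining method names are assumptions, not necessarily the original boon interface):

```java
// A minimal cache interface as described above; getSilent reads a value
// without updating its LRU position, which is useful for testing.
public interface LruCache<K, V> {
    void put(K key, V value);

    V get(K key);        // updates the LRU ordering

    V getSilent(K key);  // does NOT change the LRU score of the item

    void remove(K key);

    int size();
}
```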
First the non-concurrent one....
The queue.removeFirstOccurrence is a potentially expensive operation if you have a large cache. One could take LinkedList as an example and add a reverse-lookup hash map from element to node to make remove operations A LOT FASTER and more consistent. I started to, but then realized I don't need it. But... maybe...
When put is called, the key gets added to the queue. When get is called, the key gets removed and re-added to the top of the queue.
If your cache is small and building an item is expensive, then this should be a good cache. If your cache is really large, then the linear search could be a bottleneck, especially if you don't have hot areas of the cache. The more intense the hot spots, the faster the linear search, as hot items are always at the top of the linear search. Anyway... what is needed for this to go faster is to write another LinkedList that has a remove operation with a reverse element-to-node lookup; then removing would be about as fast as removing a key from a hash map.
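The implementation is not included in this copy. A sketch of the design described above — a HashMap for the values plus a deque of keys, with removeFirstOccurrence doing the linear search — might look like this (names and details are assumptions):

```java
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Single-threaded sketch: the map holds the values, the deque holds keys
// ordered from most recently used (front) to least recently used (back).
// removeFirstOccurrence is the linear scan the text above warns about.
public class SimpleLruSketch<K, V> {
    private final int limit;
    private final Map<K, V> map = new HashMap<>();
    private final Deque<K> queue = new LinkedList<>();

    public SimpleLruSketch(int limit) { this.limit = limit; }

    public void put(K key, V value) {
        if (map.containsKey(key)) {
            queue.removeFirstOccurrence(key);   // O(n) linear search
        } else if (map.size() >= limit) {
            K eldest = queue.removeLast();      // least recently used key
            map.remove(eldest);
        }
        queue.addFirst(key);                    // most recently used goes to the front
        map.put(key, value);
    }

    public V get(K key) {
        V value = map.get(key);
        if (value != null) {
            queue.removeFirstOccurrence(key);   // move the key back to the front
            queue.addFirst(key);
        }
        return value;
    }

    public V getSilent(K key) { return map.get(key); }   // no LRU update

    public void remove(K key) {
        queue.removeFirstOccurrence(key);
        map.remove(key);
    }

    public int size() { return map.size(); }
}
```

A reverse key-to-node map, as suggested above, would remove the linear scan and make every operation O(1).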
If you have a cache under 1,000 items, this should work out fine.
Here is a simple test to show its operations in action.
The last LRU cache was single threaded, and please don't wrap it in a synchronized anything....
Here is a stab at a concurrent version.
The main differences are the use of the ConcurrentHashMap instead of HashMap, and the use of the Lock (I could have gotten away with synchronized, but...).
I have not tested it under fire, but it seems like a simple LRU cache that might work out in 80% of use cases where you need a simple LRU map.
I welcome feedback, except the "why don't you use library a, b, or c" kind.
The reason I don't always use a library is that I don't always want every war file to be 80MB, and I write libraries, so I tend to make the libs pluggable with a good-enough solution in place so that someone can plug in another cache provider if they like. :)
I never know when someone might need Guava or ehcache or something else I don't want to include, but if I make caching pluggable, I will not exclude them either.
Reduction of dependencies has its own reward. I would love to get some feedback on how to make this even simpler, or faster, or both.
Also, if anyone knows of one ready to go....
Ok.. I know what you are thinking... Why doesn't he just use removeEldestEntry from LinkedHashMap, and well, I should, but.... but.. but.. that would be a FIFO, not an LRU, and we are trying to implement an LRU.
This test fails for the above code...
So here is a quick and dirty FIFO cache using removeEldestEntry.
FIFOs are fast. No searching around. You could put a FIFO in front of an LRU, and that would handle most hot entries quite nicely. A better LRU is going to need that reverse element-to-node feature.
Anyway... now that I wrote some code, let me go through the other answers and see what I missed... the first time I scanned them.
LinkedHashMap is O(1), but requires synchronization. No need to reinvent the wheel there.
2 options for increasing concurrency:
1. Create multiple LinkedHashMaps, and hash into them. Example: LinkedHashMap[4], index 0, 1, 2, 3. On the key, do key % 4 (or a bitwise AND, key & 3) to pick which map to do a put/get/remove on.
2. You could do an 'almost' LRU by extending ConcurrentHashMap, and having a linked-hash-map-like structure in each of the regions inside of it. Locking would occur more granularly than in a synchronized LinkedHashMap. On a put or putIfAbsent, only a lock on the head and tail of the list is needed (per region). On a remove or get, the whole region needs to be locked. I'm curious whether atomic linked lists of some sort might help here -- probably so for the head of the list, maybe for more. The structure would not keep the total order, only the order per region. As long as the number of entries is much larger than the number of regions, this is good enough for most caches. Each region would have to have its own entry count, which would be used for the eviction trigger rather than the global count. The default number of regions in a ConcurrentHashMap is 16, which is plenty for most servers today.
Option 1 would be easier to write and faster under moderate concurrency.
Option 2 would be more difficult to write, but would scale much better at very high concurrency. It would be slower for normal access (just as ConcurrentHashMap is slower than HashMap when there is no concurrency).
There are two open source implementations.
Apache Solr has ConcurrentLRUCache: https://lucene.apache.org/solr/3_6_1/org/apache/solr/util/ConcurrentLRUCache.html
There's an open source project for a ConcurrentLinkedHashMap:
http://code.google.com/p/concurrentlinkedhashmap/
I would consider using java.util.concurrent.PriorityBlockingQueue, with priority determined by a "numberOfUses" counter in each element. I would be very, very careful to get all my synchronisation correct, as the "numberOfUses" counter implies that the element can't be immutable.
The element object would be a wrapper for the objects in the cache:
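The wrapper class itself is not shown here; a minimal sketch of such an element (field and method names are assumptions) could be:

```java
import java.util.concurrent.atomic.AtomicLong;

// Wrapper element as described above: the mutable usage counter is what makes
// the element non-immutable, which is why the synchronisation needs care.
// Note: a PriorityBlockingQueue orders elements at insertion time and does not
// re-order existing entries when this counter later changes.
class CacheElement<V> implements Comparable<CacheElement<V>> {
    final V value;
    final AtomicLong numberOfUses = new AtomicLong();

    CacheElement(V value) { this.value = value; }

    V use() {
        numberOfUses.incrementAndGet();
        return value;
    }

    @Override
    public int compareTo(CacheElement<V> other) {
        // Lowest usage count first, so the queue head is the eviction candidate.
        return Long.compare(numberOfUses.get(), other.numberOfUses.get());
    }
}
```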
Hope this helps.
An LRU cache can be implemented using a ConcurrentLinkedQueue and a ConcurrentHashMap, which can be used in a multithreaded scenario as well. The head of the queue is the element that has been on the queue the longest time. The tail of the queue is the element that has been on the queue the shortest time. When an element exists in the map, we can remove it from the queue and insert it at the tail.
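As an illustration of this description (not a hardened implementation), a sketch using ConcurrentHashMap plus ConcurrentLinkedQueue, with the caveats noted in the comments, could look like:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// The map holds the values; the queue holds keys ordered from least recently
// used (head) to most recently used (tail). Caveats: queue.remove(key) is O(n),
// and the map and queue are not updated atomically, so the ordering is
// approximate under heavy contention.
public class ConcurrentLruSketch<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final Queue<K> queue = new ConcurrentLinkedQueue<>();

    public ConcurrentLruSketch(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        V value = map.get(key);
        if (value != null) {
            queue.remove(key);   // move the key to the most recently used end
            queue.add(key);
        }
        return value;
    }

    public void put(K key, V value) {
        if (map.put(key, value) == null) {
            while (map.size() > capacity) {    // brand-new key: make room if needed
                K lru = queue.poll();          // evict from the head (oldest)
                if (lru == null) break;
                map.remove(lru);
            }
        } else {
            queue.remove(key);                 // existing key: refresh its position
        }
        queue.add(key);
    }
}
```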
Here is my implementation of an LRU. I have used a PriorityQueue, which basically works as FIFO and is not threadsafe.
The Comparator used is based on the page creation time, and performs the ordering of the pages by least recently used time.
Pages for consideration : 2, 1, 0, 2, 8, 2, 4
Page added into cache is : 2
Page added into cache is : 1
Page added into cache is : 0
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 1, Replaced with PAGE: 8
Page added into cache is : 8
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 0, Replaced with PAGE: 4
Page added into cache is : 4
OUTPUT
LRUCache Pages
-------------
PageName: 8, PageCreationTime: 1365957019974
PageName: 2, PageCreationTime: 1365957020074
PageName: 4, PageCreationTime: 1365957020174
Here is my tested, best-performing concurrent LRU cache implementation, without any synchronized block:
This is the LRU cache I use, which encapsulates a LinkedHashMap and handles concurrency with a simple synchronized lock guarding the juicy spots. It "touches" elements as they are used so that they become the "freshest" element again, so it is actually LRU. I also had the requirement that my elements have a minimum lifespan, which you can also think of as the "maximum idle time" permitted before you're up for eviction.
However, I agree with Hank's conclusion and accepted answer -- if I were starting this again today, I'd check out Guava's CacheBuilder.
Well, for a cache you will generally be looking up some piece of data via a proxy object (a URL, String...), so interface-wise you are going to want a map. But to kick things out you want a queue-like structure. Internally I would maintain two data structures, a priority queue and a HashMap. Here's an implementation that should be able to do everything in O(1) time.
Here's a class I whipped up pretty quickly:
Here's how it works. Keys are stored in a linked list with the oldest keys at the front of the list (new keys go to the back), so when you need to 'eject' something you just pop it off the front of the queue and then use the key to remove the value from the map. When an item gets referenced, you grab the ValueHolder from the map and then use the queuelocation variable to remove the key from its current location in the queue and put it at the back of the queue (it's now the most recently used). Adding things is pretty much the same.
I'm sure there are a ton of errors here, and I haven't implemented any synchronization. But this class will provide O(1) adding to the cache, O(1) removal of old items, and O(1) retrieval of cache items. Even a trivial synchronization (just synchronize every public method) would still have little lock contention due to the run time. If anyone has any clever synchronization tricks, I would be very interested. Also, I'm sure there are some additional optimizations that you could implement using the maxsize variable with respect to the map.
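The class itself is not reproduced in this copy. A self-contained sketch of the same idea — a map from key to a ValueHolder that remembers its node in a doubly linked key list, so get, put and eviction are all O(1) — might look like this (all names are illustrative, and it is unsynchronized like the original):

```java
import java.util.HashMap;
import java.util.Map;

// O(1) LRU: map lookups find the value holder, which carries a reference to
// the key's node in a doubly linked list ordered oldest (head) to newest (tail).
public class LinkedLruSketch<K, V> {
    private static final class Node<K> {
        K key;
        Node<K> prev, next;
        Node(K key) { this.key = key; }
    }

    private static final class ValueHolder<K, V> {
        final V value;
        Node<K> queueLocation;   // where this key currently sits in the list
        ValueHolder(V value, Node<K> node) { this.value = value; this.queueLocation = node; }
    }

    private final int maxSize;
    private final Map<K, ValueHolder<K, V>> map = new HashMap<>();
    private Node<K> head, tail;   // head = oldest, tail = most recently used

    public LinkedLruSketch(int maxSize) { this.maxSize = maxSize; }

    public V get(K key) {
        ValueHolder<K, V> holder = map.get(key);
        if (holder == null) return null;
        unlink(holder.queueLocation);        // move the node to the tail
        linkAtTail(holder.queueLocation);
        return holder.value;
    }

    public void put(K key, V value) {
        ValueHolder<K, V> existing = map.get(key);
        if (existing != null) {
            unlink(existing.queueLocation);
        } else if (map.size() >= maxSize && head != null) {
            map.remove(head.key);            // eject the oldest entry
            unlink(head);
        }
        Node<K> node = new Node<>(key);
        linkAtTail(node);
        map.put(key, new ValueHolder<>(value, node));
    }

    private void unlink(Node<K> node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        node.prev = node.next = null;
    }

    private void linkAtTail(Node<K> node) {
        node.prev = tail;
        if (tail != null) tail.next = node; else head = node;
        tail = node;
    }
}
```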
The best way to achieve this is to use a LinkedHashMap that maintains the insertion order of elements. Following is a sample code:
Have a look at ConcurrentSkipListMap. It should give you log(n) time for testing and removing an element if it is already contained in the cache, and constant time for re-adding it.
You'd just need some counter etc. and a wrapper element to force the LRU ordering and ensure recent stuff is discarded when the cache is full.
Here is my short implementation, please criticize or improve it!
Here's my own implementation for this problem.
simplelrucache provides threadsafe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:
You can find it here: http://code.google.com/p/simplelrucache/
I'm looking for a better LRU cache using Java code. Is it possible for you to share your Java LRU cache code using LinkedHashMap and Collections#synchronizedMap? Currently I'm using LRUMap implements Map and the code works fine, but I'm getting an ArrayIndexOutOfBoundsException on load testing with 500 users on the method below. The method moves the recent object to the front of the queue. The get(Object key) and put(Object key, Object value) methods call the above moveToFront method.
method.想要对 Hank 给出的答案添加评论,但有些我无法 - 请将其视为评论
LinkedHashMap 也根据其构造函数中传递的参数来维护访问顺序
它保持双行列表以维持顺序(请参阅 LinkedHashMap.Entry)
@Pacerier 如果再次添加元素,LinkedHashMap 在迭代时保持相同的顺序是正确的,但这仅适用于插入顺序模式。
这是我在 LinkedHashMap.Entry 对象的 java 文档中发现的,
该方法负责将最近访问的元素移动到列表的末尾。 所以总而言之LinkedHashMap是实现LRUCache的最佳数据结构。
Wanted to add a comment to the answer given by Hank, but somehow I am not able to -- please treat this as a comment.
LinkedHashMap maintains access order as well, based on a parameter passed in its constructor.
It keeps a doubly linked list to maintain the order (see LinkedHashMap.Entry).
@Pacerier it is correct that LinkedHashMap keeps the same order during iteration if an element is added again, but that is only in the case of insertion-order mode.
This is what I found in the Java docs of the LinkedHashMap.Entry object.
This method takes care of moving a recently accessed element to the end of the list. So all in all, LinkedHashMap is the best data structure for implementing an LRU cache.
Another thought, and even a simple implementation, using Java's LinkedHashMap collection.
LinkedHashMap provides the method removeEldestEntry, which can be overridden in the way mentioned in the example. By default the implementation returns false. If it returns true and the size of the structure goes beyond the initial capacity, then the eldest or older elements will be removed.
We can have a pageno and page content; in my case pageno is an integer, and for pagecontent I have kept the page number values as strings.
The result of the above code execution is as follows:
Following the @sanjanab concept (but after fixes), I made my version of the LRUCache, also providing a Consumer that allows doing something with the removed items if needed.
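The code is not included in this copy. A minimal sketch of the idea — an access-ordered LinkedHashMap whose removeEldestEntry hook passes the evicted value to a Consumer — could look like this (class and field names are assumptions):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Access-ordered LinkedHashMap that notifies a Consumer whenever it evicts
// the least recently used entry.
public class LruCacheWithListener<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    private final Consumer<V> onEvict;

    public LruCacheWithListener(int maxEntries, Consumer<V> onEvict) {
        super(maxEntries + 1, 0.75f, true);
        this.maxEntries = maxEntries;
        this.onEvict = onEvict;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > maxEntries) {
            onEvict.accept(eldest.getValue()); // let the caller clean up the evicted item
            return true;
        }
        return false;
    }
}
```

Usage would be along the lines of new LruCacheWithListener<String, String>(100, evicted -> System.out.println("evicted: " + evicted)).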
Android offers an implementation of an LRU Cache. The code is clean and straightforward.