Easy, simple-to-use LRU cache in Java

Posted on 2024-07-06 22:51:31


I know it's simple to implement, but I want to reuse something that already exists.

The problem I want to solve is that I load configuration (from XML, so I want to cache it) for different pages, roles, etc., so the combination of inputs can grow quite large (but in 99% of cases it will not). To handle that 1%, I want the cache to hold at most some maximum number of items...

So far I have found org.apache.commons.collections.map.LRUMap in Apache Commons, and it looks fine, but I want to check some alternatives as well. Any recommendations?


Comments (5)

不打扰别人 2024-07-13 22:51:31


You can use a LinkedHashMap (Java 1.4+):

// Create cache; the third constructor argument (true) enables
// access order, which is what makes the map behave as an LRU.
final int MAX_ENTRIES = 100;
Map<String, Object> cache = new LinkedHashMap<String, Object>(MAX_ENTRIES + 1, 0.75f, true) {
    // Called by put/putAll just after a new entry has been added;
    // returning true removes the eldest (least recently used) entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        return size() > MAX_ENTRIES;
    }
};

// Add to cache
String key = "key";
cache.put(key, object);

// Get object
Object o = cache.get(key);
if (o == null && !cache.containsKey(key)) {
    // Object not in cache. If null is never stored as a value,
    // the cache.containsKey(key) check is not needed.
}

// If the cache is to be used by multiple threads,
// it must be wrapped to synchronize the methods
cache = Collections.synchronizedMap(cache);
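If this snippet gets reused in several places, the generics and the synchronization wrapper can be folded into one small class. The following is only a possible sketch of such a wrapper (the `LruCache` name is made up for illustration), not something from the original answer:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal generic LRU cache wrapping an access-ordered LinkedHashMap.
// Individual calls are thread-safe via the synchronizedMap wrapper.
public class LruCache<K, V> {
    private final Map<K, V> map;

    public LruCache(final int maxEntries) {
        this.map = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(maxEntries + 1, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxEntries;
                }
            });
    }

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key) { return map.get(key); }
    public int size() { return map.size(); }
}
```

With a capacity of 2, adding three keys evicts the least recently used one; touching a key with `get` first protects it from eviction.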
冰葑 2024-07-13 22:51:31


This is an old question, but for posterity I wanted to mention ConcurrentLinkedHashMap, which is thread-safe, unlike LRUMap. Usage is quite easy:

ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>()
    .maximumWeightedCapacity(1000)
    .build();

And the documentation has some good examples, such as how to make the LRU cache size-based instead of based on the number of items.
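For comparison, a size-based (weighted) eviction policy can also be sketched with nothing but LinkedHashMap: keep a running total of entry weights and evict in access order until the total fits. The `WeightedLruCache` below is a hypothetical illustration of that idea, not the library's API:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a weight-based LRU: eviction is driven by the total byte
// weight of the values rather than by the number of entries.
// Not thread-safe; wrap or synchronize for concurrent use.
public class WeightedLruCache<K> {
    private final LinkedHashMap<K, byte[]> map =
        new LinkedHashMap<>(16, 0.75f, true);  // access order
    private final long maxWeight;
    private long weight;

    public WeightedLruCache(long maxWeight) {
        this.maxWeight = maxWeight;
    }

    public void put(K key, byte[] value) {
        byte[] old = map.put(key, value);
        if (old != null) weight -= old.length;
        weight += value.length;
        // Evict least-recently-used entries until under the limit.
        // (A value heavier than maxWeight empties the whole cache.)
        Iterator<Map.Entry<K, byte[]>> it = map.entrySet().iterator();
        while (weight > maxWeight && it.hasNext()) {
            Map.Entry<K, byte[]> eldest = it.next();
            weight -= eldest.getValue().length;
            it.remove();
        }
    }

    public byte[] get(K key) {
        return map.get(key);
    }
}
```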

葬シ愛 2024-07-13 22:51:31


Here is my implementation, which lets me keep an optimal number of elements in memory.

The point is that I do not need to keep track of which objects are currently in use, since I use a combination of a LinkedHashMap for the MRU objects and a WeakHashMap for the LRU objects.
So the cache capacity is no less than the MRU size plus whatever the GC lets me keep. Whenever objects fall out of the MRU they go to the LRU, for as long as the GC will keep them.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class Cache<K, V> {
    final Map<K, V> MRUdata;
    final Map<K, V> LRUdata;

    public Cache(final int capacity) {
        LRUdata = new WeakHashMap<K, V>();

        // Access-ordered map; overflowing entries are demoted to
        // LRUdata instead of being discarded outright.
        MRUdata = new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> entry) {
                if (size() > capacity) {
                    LRUdata.put(entry.getKey(), entry.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized V tryGet(K key) {
        V value = MRUdata.get(key);
        if (value != null)
            return value;
        value = LRUdata.get(key);
        if (value != null) {
            LRUdata.remove(key);
            MRUdata.put(key, value);
        }
        return value;
    }

    public synchronized void set(K key, V value) {
        LRUdata.remove(key);
        MRUdata.put(key, value);
    }
}
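One caveat worth knowing before reusing this pattern: WeakHashMap holds weak references to its keys, not its values, so an entry in the weak LRU tier survives only while its key is still strongly referenced somewhere else. A condensed sketch of the same two-tier idea (renamed `TwoTierCache` here for illustration) showing that a demoted entry can still be retrieved and promoted back:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.WeakHashMap;

// Two-tier cache: a bounded, access-ordered MRU map backed by a weak
// LRU map. Entries that overflow the MRU tier are demoted to the weak
// tier and survive there until the GC clears their keys.
public class TwoTierCache<K, V> {
    private final Map<K, V> lru = new WeakHashMap<K, V>();
    private final Map<K, V> mru;

    public TwoTierCache(final int capacity) {
        mru = new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > capacity) {
                    // Demote instead of discarding outright.
                    lru.put(eldest.getKey(), eldest.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized V tryGet(K key) {
        V value = mru.get(key);
        if (value != null) return value;
        value = lru.remove(key);
        if (value != null) mru.put(key, value);  // promote back to MRU
        return value;
    }

    public synchronized void set(K key, V value) {
        lru.remove(key);
        mru.put(key, value);
    }
}
```

(String-literal keys, as in the test below, are never collected, which makes the demote/promote behavior deterministic; with ordinary keys the weak tier can shrink at any GC cycle.)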
刘备忘录 2024-07-13 22:51:31


I also had the same problem and didn't find any good libraries... so I created my own.

simplelrucache provides thread-safe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:

  • Concurrent, based on ConcurrentLinkedHashMap
  • Synchronized, based on LinkedHashMap

You can find it here.

负佳期 2024-07-13 22:51:31


Here is a very simple and easy-to-use LRU cache in Java.
Although it is short and simple, it is production quality.
The code is explained (see the README.md) and has some unit tests.
