Object cache data structure with "object expiration"

Posted on 2024-12-26 02:55:17

Which data structure in Java would be best to implement an in-memory object cache, where objects have an individual expiration time?

Basically, for the cache I could use a Map (where the key could be a String) offering put and get methods, plus an ordered list of "timestamp" + "object" pairs to manage the expiration times. A cleanup thread could then check the first list entry and delete the object once its expiration time has passed (deletion of the first element should be O(1)), roughly as in the sketch below.
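For concreteness, here is a minimal sketch of that design (class and method names are just illustrative), using a ConcurrentHashMap for the lookups and a DelayQueue, a delay-ordered priority queue, in place of the hand-rolled ordered list:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: a map for lookups plus a time-ordered queue driving a cleanup thread.
public class SimpleExpiringCache<K, V> {

    // One queue element per put(); it becomes "expired" once its deadline passes.
    private static final class Expiry<K> implements Delayed {
        final K key;
        final long deadlineNanos;

        Expiry(K key, long ttlMillis) {
            this.key = key;
            this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(ttlMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final DelayQueue<Expiry<K>> expiries = new DelayQueue<>();

    public SimpleExpiringCache() {
        // Cleanup thread: take() blocks until the earliest deadline has passed,
        // so it only wakes up when there is actually something to evict.
        Thread cleaner = new Thread(() -> {
            try {
                while (true) {
                    store.remove(expiries.take().key);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "cache-cleanup");
        cleaner.setDaemon(true);
        cleaner.start();
    }

    // Caveat: re-putting a key does not cancel its older queue entry, so the value
    // would still be evicted at the original deadline; a real cache must handle that.
    public void put(K key, V value, long ttlMillis) {
        store.put(key, value);
        expiries.add(new Expiry<>(key, ttlMillis));
    }

    public V get(K key) {
        return store.get(key);
    }
}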

Comments (6)

画中仙 2025-01-02 02:55:17

What you're describing is basically my ExpiringMap. There are other similar implementations, such as Guava's (see CacheBuilder), though I don't believe it supports per-entry expiration the way ExpiringMap does.
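As a rough usage sketch (I'm quoting the net.jodah ExpiringMap API from memory, so treat the builder and put signatures as an assumption to verify against the library's documentation):

import java.util.concurrent.TimeUnit;

import net.jodah.expiringmap.ExpirationPolicy;
import net.jodah.expiringmap.ExpiringMap;

public class ExpiringMapExample {
    public static void main(String[] args) throws InterruptedException {
        // variableExpiration() allows each entry to carry its own TTL.
        ExpiringMap<String, String> cache = ExpiringMap.builder()
                .variableExpiration()
                .build();

        // Per-entry expiration: this entry lives for 2 seconds after creation.
        cache.put("session-42", "some value", ExpirationPolicy.CREATED, 2, TimeUnit.SECONDS);

        System.out.println(cache.get("session-42")); // "some value"
        Thread.sleep(2500);
        System.out.println(cache.get("session-42")); // null, the entry has expired
    }
}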

纵性 2025-01-02 02:55:17

Caching frameworks are pretty mature now.

However, if you insist on reinventing the wheel, remember to account for memory utilisation. All too often I see a badly implemented cache (HashMap) effectively turn into a memory leak.

See Cowan's answer here: Java's WeakHashMap and caching: Why is it referencing the keys, not the values?

故事与诗 2025-01-02 02:55:17

I would consider using an existing library like ehcache.

However, if you want to write your own, I would not use a background thread unless you really need one, as it adds complexity. Instead, I would have the foreground thread remove expired entries.

I would use a LinkedHashMap if you just need an LRU cache. However, if you want timed expiry, I would use a HashMap together with a PriorityQueue (so you can check whether the next entry to expire has already expired); a rough sketch is below.
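Something like this, with expired entries evicted by the calling (foreground) thread rather than a background one (the names are my own and the class is not thread-safe as written):

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Sketch only: not thread-safe; wrap calls in synchronization if shared between threads.
public class TimedCache<K, V> {

    private static final class Entry<K> {
        final K key;
        final long expiresAtMillis;

        Entry(K key, long expiresAtMillis) {
            this.key = key;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, V> store = new HashMap<>();
    private final Map<K, Long> expiresAt = new HashMap<>();
    // Ordered by expiry time, so peek() is always the next entry to expire.
    private final PriorityQueue<Entry<K>> expiryOrder =
            new PriorityQueue<>((a, b) -> Long.compare(a.expiresAtMillis, b.expiresAtMillis));

    public void put(K key, V value, long ttlMillis) {
        evictExpired();
        long deadline = System.currentTimeMillis() + ttlMillis;
        store.put(key, value);
        expiresAt.put(key, deadline);
        expiryOrder.add(new Entry<>(key, deadline));
    }

    public V get(K key) {
        evictExpired();
        return store.get(key);
    }

    // Foreground eviction: pop queue heads whose deadline has passed.
    private void evictExpired() {
        long now = System.currentTimeMillis();
        while (!expiryOrder.isEmpty() && expiryOrder.peek().expiresAtMillis <= now) {
            Entry<K> head = expiryOrder.poll();
            // Only remove if this queue entry is still the current one for the key
            // (a later put() may have refreshed the deadline).
            Long current = expiresAt.get(head.key);
            if (current != null && current.longValue() == head.expiresAtMillis) {
                store.remove(head.key);
                expiresAt.remove(head.key);
            }
        }
    }
}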

祁梦 2025-01-02 02:55:17

Guava CacheBuilder:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
       .maximumSize(10000)
       .expireAfterWrite(10, TimeUnit.MINUTES)
       .removalListener(MY_LISTENER)
       .build(
           new CacheLoader<Key, Graph>() {
             public Graph load(Key key) throws AnyException {
               return createExpensiveGraph(key);
             }
           });

Since WeakHashMap doesn't fit caching (it weakly references the keys, not the values), you can instead use a Map<K, WeakReference<V>> whose values become eligible for GC once only weak references to them remain.

Above all, we always have EhCache, Memcached, and Coherence as popular choices.
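A minimal sketch of that Map<K, WeakReference<V>> idea (illustrative only; values can vanish at any GC once nothing else holds a strong reference to them):

import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeakValueCache<K, V> {

    private final Map<K, WeakReference<V>> store = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        store.put(key, new WeakReference<>(value));
    }

    public V get(K key) {
        WeakReference<V> ref = store.get(key);
        if (ref == null) {
            return null;
        }
        V value = ref.get();
        if (value == null) {
            // The value was garbage-collected; drop the stale map entry.
            store.remove(key);
        }
        return value;
    }
}

In practice a SoftReference is often a better fit for caches, since soft references are only cleared when the JVM runs low on memory.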

何以畏孤独 2025-01-02 02:55:17

I think your decision is right. To be exact, I would use a HashMap.

成熟稳重的好男人 2025-01-02 02:55:17

As mentioned in previous answers, it is better to use one of the popular in-memory caches like EhCache, Memcached, etc.

But since you wanted to implement it with your own cache, one that has the object-expiry feature and low time complexity, I tried to implement it like this (any test reviews/suggestions are much appreciated):

import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ObjectCache<K, V> {

    private volatile boolean shutdown;
    private final long maxObjects;
    private final long timeToLive;
    private final long removalThreadRunDelay;
    private final long objectsToRemovePerRemovalThreadRun;
    private final AtomicLong objectsCount;
    private final Map<K, CacheEntryWrapper> cachedDataStore;
    private final BlockingQueue<CacheEntryReference> queue;
    private final Object lock = new Object();
    private ScheduledExecutorService executorService;

    public ObjectCache(long maxObjects, long timeToLive, long removalThreadRunDelay, long objectsToRemovePerRemovalThreadRun) {
        this.maxObjects = maxObjects;
        this.timeToLive = timeToLive;
        this.removalThreadRunDelay = removalThreadRunDelay;
        this.objectsToRemovePerRemovalThreadRun = objectsToRemovePerRemovalThreadRun;
        this.objectsCount = new AtomicLong(0);
        this.cachedDataStore = new HashMap<K, CacheEntryWrapper>();
        this.queue = new LinkedBlockingQueue<CacheEntryReference>();
    }

    public void put(K key, V value) {
        if (key == null || value == null) {
            throw new IllegalArgumentException("Key and Value both should be not null");
        }
        if (objectsCount.get() + 1 > maxObjects) {
            throw new RuntimeException("Max objects limit reached. Can not store more objects in cache.");
        }
        // create a value wrapper and add it to data store map
        CacheEntryWrapper entryWrapper = new CacheEntryWrapper(key, value);
        synchronized (lock) {
            cachedDataStore.put(key, entryWrapper);
        }
        // add the cache entry reference to queue which will be used by removal thread
        queue.add(entryWrapper.getCacheEntryReference());
        objectsCount.incrementAndGet();
        // start the removal thread if not started already
        if (executorService == null) {
            synchronized (lock) {
                if (executorService == null) {
                    executorService = Executors.newSingleThreadScheduledExecutor();
                    executorService.scheduleWithFixedDelay(new CacheEntryRemover(), 0, removalThreadRunDelay, TimeUnit.MILLISECONDS);
                }
            }
        }
    }

    public V get(K key) {
        if (key == null) {
            throw new IllegalArgumentException("Key can not be null");
        }
        CacheEntryWrapper entryWrapper;
        synchronized (lock) {
            entryWrapper = cachedDataStore.get(key);
            if (entryWrapper != null) {
                // reset the last access time
                entryWrapper.resetLastAccessedTime();
                // reset the reference (so the weak reference is cleared)
                entryWrapper.resetCacheEntryReference();
                // add the new reference to queue
                queue.add(entryWrapper.getCacheEntryReference());
            }
        }
        return entryWrapper == null ? null : entryWrapper.getValue();
    }

    public void remove(K key) {
        if (key == null) {
            throw new IllegalArgumentException("Key can not be null");
        }
        CacheEntryWrapper entryWrapper;
        synchronized (lock) {
            entryWrapper = cachedDataStore.remove(key);
            if (entryWrapper != null) {
                // reset the reference (so the weak reference is cleared)
                entryWrapper.resetCacheEntryReference();
                // only decrement the count when an entry was actually removed
                objectsCount.decrementAndGet();
            }
        }
    }

    public void shutdown() {
        shutdown = true;
        if (executorService != null) {
            executorService.shutdown();
        }
        queue.clear();
        cachedDataStore.clear();
    }

    public static void main(String[] args) throws Exception {
        ObjectCache<Long, Long> cache = new ObjectCache<>(1000000, 60000, 1000, 1000);
        long i = 0;
        while (i++ < 10000) {
            cache.put(i, i);
        }
        i = 0;
        while(i++ < 100) {
            Thread.sleep(1000);
            System.out.println("Data store size: " + cache.cachedDataStore.size() + ", queue size: " + cache.queue.size());
        }
        cache.shutdown();
    }

    private class CacheEntryRemover implements Runnable {
        public void run() {
            if (!shutdown) {
                try {
                    int count = 0;
                    CacheEntryReference entryReference;
                    while ((entryReference = queue.peek()) != null && count++ < objectsToRemovePerRemovalThreadRun) {
                        long currentTime = System.currentTimeMillis();
                        CacheEntryWrapper cacheEntryWrapper = entryReference.getWeakReference().get();
                        if (cacheEntryWrapper == null || !cachedDataStore.containsKey(cacheEntryWrapper.getKey())) {
                            queue.poll(100, TimeUnit.MILLISECONDS); // remove the reference object from queue as value is removed from cache
                        } else if (currentTime - cacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
                            synchronized (lock) {
                                // get the cacheEntryWrapper again just to find if put() has overridden the same key or remove() has removed it already
                                CacheEntryWrapper newCacheEntryWrapper = cachedDataStore.get(cacheEntryWrapper.getKey());
                                // poll the queue if -
                                // case 1 - value is removed from cache
                                // case 2 - value is overridden by new value
                                // case 3 - value is still in cache but it is old now
                                if (newCacheEntryWrapper == null || newCacheEntryWrapper != cacheEntryWrapper || currentTime - cacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
                                    queue.poll(100, TimeUnit.MILLISECONDS);
                                    newCacheEntryWrapper = newCacheEntryWrapper == null ? cacheEntryWrapper : newCacheEntryWrapper;
                                    if (currentTime - newCacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
                                        remove(newCacheEntryWrapper.getKey());
                                    }
                                } else {
                                    break; // try next time
                                }
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private class CacheEntryWrapper {
        private K key;
        private V value;
        private AtomicLong lastAccessedTime;
        private CacheEntryReference cacheEntryReference;

        public CacheEntryWrapper(K key, V value) {
            this.key = key;
            this.value = value;
            this.lastAccessedTime = new AtomicLong(System.currentTimeMillis());
            this.cacheEntryReference = new CacheEntryReference(this);
        }

        public K getKey() {
            return key;
        }

        public V getValue() {
            return value;
        }

        public AtomicLong getLastAccessedTime() {
            return lastAccessedTime;
        }

        public CacheEntryReference getCacheEntryReference() {
            return cacheEntryReference;
        }

        public void resetLastAccessedTime() {
            lastAccessedTime.set(System.currentTimeMillis());
        }

        public void resetCacheEntryReference() {
            cacheEntryReference.clear();
            cacheEntryReference = new CacheEntryReference(this);
        }
    }

    private class CacheEntryReference {
        private WeakReference<CacheEntryWrapper> weakReference;

        public CacheEntryReference(CacheEntryWrapper entryWrapper) {
            this.weakReference = new WeakReference<CacheEntryWrapper>(entryWrapper);
        }

        public WeakReference<CacheEntryWrapper> getWeakReference() {
            return weakReference;
        }

        public void clear() {
            weakReference.clear();
        }
    }
}