Atomically incrementing counters stored in a ConcurrentHashMap

Published 2024-09-11 08:25:06


I would like to collect some metrics from various places in a web app. To keep it simple, all these will be counters and therefore the only modifier operation is to increment them by 1.

The increments will be concurrent and frequent. Reads (dumping the stats) are a rare operation.

I was thinking of using a ConcurrentHashMap. The issue is how to increment the counters correctly. Since the map doesn't have an "increment" operation, I need to read the current value first, increment it, then put the new value back in the map. Without more code, this is not an atomic operation.
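
In code, the naive sequence I mean looks roughly like this (a sketch, assuming a Map<String, Long> named map):

Long value = map.get(key);
// Another thread can update the entry between get() and put(),
// so one of two concurrent increments is silently lost.
map.put(key, value == null ? 1L : value + 1);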

Is it possible to achieve this without synchronization (which would defeat the purpose of the ConcurrentHashMap)? Do I need to look at Guava?

Thanks for any pointers.


P.S.
There is a related question on SO (Most efficient way to increment a Map value in Java), but it is focused on performance rather than multi-threading.

UPDATE
For those arriving here through searches on the same topic: besides the answers below, there's a useful presentation which incidentally covers the same topic. See slides 24-33.


Comments (6)

山田美奈子 2024-09-18 08:25:06


In Java 8:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

ConcurrentHashMap<String, LongAdder> map = new ConcurrentHashMap<>();
// Atomically installs the adder on first use, then increments it without locking.
map.computeIfAbsent("key", k -> new LongAdder()).increment();
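
For the rare read path, a minimal sketch of dumping the stats (my addition; snapshot is a hypothetical helper for the String-keyed map above):

import java.util.HashMap;
import java.util.Map;

// Sketch: copy current totals into an ordinary map for reporting. sum() is
// weakly consistent under concurrent increments, which is fine for metrics.
static Map<String, Long> snapshot(ConcurrentHashMap<String, LongAdder> map) {
    Map<String, Long> result = new HashMap<>();
    map.forEach((key, adder) -> result.put(key, adder.sum()));
    return result;
}
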
演出会有结束 2024-09-18 08:25:06

Guava's new AtomicLongMap (in release 11) might address this need.
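
A minimal usage sketch (my addition; it assumes the AtomicLongMap API from Guava 11+):

import com.google.common.util.concurrent.AtomicLongMap;

// Guava handles the create-if-absent race internally; no manual putIfAbsent needed.
AtomicLongMap<String> counts = AtomicLongMap.create();
counts.incrementAndGet("requests");  // atomic +1
long total = counts.get("requests"); // read one counter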

长途伴 2024-09-18 08:25:06


You're pretty close. Why don't you try something like a ConcurrentHashMap<Key, AtomicLong>?
If your Keys (metrics) are unchanging, you could even just use a standard HashMap (maps are thread-safe if read-only, but you'd be well advised to make this explicit with an ImmutableMap from Google Collections or Collections.unmodifiableMap, etc.).

This way, you can use map.get(myKey).incrementAndGet() to bump statistics.
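
A minimal sketch of that pattern (my example; it assumes every metric key is registered up front so get() never returns null):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Keys are registered once at startup; only the AtomicLong values mutate afterwards.
Map<String, AtomicLong> stats = new ConcurrentHashMap<>();
stats.put("requests", new AtomicLong());
stats.put("errors", new AtomicLong());

// Hot path: lock-free increment of an existing counter.
stats.get("requests").incrementAndGet();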

残月升风 2024-09-18 08:25:06


Other than going with AtomicLong, you can do the usual cas-loop thing:

private final ConcurrentMap<Key,Long> counts =
    new ConcurrentHashMap<Key,Long>();

public void increment(Key key) {
    // First increment for this key: install 1L atomically (an int won't box to Long).
    if (counts.putIfAbsent(key, 1L) == null) {
        return;
    }

    // Otherwise CAS-loop: re-read and attempt an atomic replace until it sticks.
    Long old;
    do {
        old = counts.get(key);
    } while (!counts.replace(key, old, old + 1)); // Assumes no removal.
}

(I've not written a do-while loop for ages.)

For small values the Long will probably be "cached" (the boxing cache covers -128 to 127 by default). For larger values, each increment may require an allocation. But allocations are actually extremely fast (and you could cache further) - it depends on what you expect in the worst case.
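
As a side note (my addition, not from the original answer): on Java 8+ this whole CAS loop collapses into a single call, because ConcurrentHashMap performs the remapping in merge atomically:

// Insert 1 if the key is absent, otherwise add 1 - all in one atomic step.
counts.merge(key, 1L, Long::sum);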

一束光,穿透我孤独的魂 2024-09-18 08:25:06


I needed to do the same thing.
I'm using ConcurrentHashMap + AtomicInteger.
I also introduced a ReentrantReadWriteLock for an atomic flush (very similar behavior).

Tested with 10 keys and 10 threads per key. Nothing was lost.
I just haven't tried several flushing threads yet, but I hope it will work.

The massive single-user-mode flush is torturing me...
I want to remove the RWLock and break the flushing down into small pieces. Tomorrow.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ConcurrentHashMap<String, AtomicInteger> counters = new ConcurrentHashMap<String, AtomicInteger>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public void count(String invoker) {
    // The "read" lock guards against a concurrent flush, not against other counters.
    rwLock.readLock().lock();
    try {
        AtomicInteger currentValue = counters.get(invoker);
        // If the entry is absent, initialize it. If another thread added a value
        // first, yield and do not replace the existing value.
        if (currentValue == null) {
            // Value we want to init with.
            AtomicInteger newValue = new AtomicInteger(0);
            // Try to put it, getting back whatever was there before.
            AtomicInteger oldValue = counters.putIfAbsent(invoker, newValue);
            // Non-null old value: our insertion lost the race, use the mapped value.
            // Null old value: our value was inserted, use it.
            currentValue = oldValue != null ? oldValue : newValue;
        }

        // Counter +1.
        currentValue.incrementAndGet();
    } finally {
        rwLock.readLock().unlock();
    }
}

/**
 * @return Map with counting results
 */
public Map<String, Integer> getCount() {
    // Stop all updates (they hold read locks).
    rwLock.writeLock().lock();
    try {
        HashMap<String, Integer> resultMap = new HashMap<String, Integer>();
        // Read all the Integers into a new map.
        for (Map.Entry<String, AtomicInteger> entry : counters.entrySet()) {
            resultMap.put(entry.getKey(), entry.getValue().intValue());
        }
        // Reset the ConcurrentMap.
        counters.clear();
        return resultMap;
    } finally {
        rwLock.writeLock().unlock();
    }
}
吾性傲以野 2024-09-18 08:25:06


I ran a benchmark to compare the performance of LongAdder and AtomicLong.

LongAdder performed better in my benchmark: for 500 iterations using a map of size 100 (10 concurrent threads), the average time for LongAdder was 1270 ms, while that for AtomicLong was 1315 ms.
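
For reference, a minimal sketch of that kind of timing harness (hypothetical, not the author's benchmark; a serious measurement would use JMH to control for warm-up and JIT effects):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class AdderBench {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 10;
        final int incrementsPerThread = 1_000_000;
        final LongAdder counter = new LongAdder();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    counter.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Every increment is counted exactly once; sum() merges the internal cells.
        System.out.printf("sum=%d, took %d ms%n",
                counter.sum(), (System.nanoTime() - start) / 1_000_000);
    }
}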
