Efficient cache synchronization



Consider this

public Object doGet() {
    return getResource();
}

private Object getResource() {
    synchronized (lock) {
        if (cachedResourceIsStale()) {
            downloadNewVersionOfResource();
        }
    }

    return resource;
}

Assuming that doGet will be executed concurrently, and a lot at that, and that downloading the new version of the resource takes a while, are there more efficient ways to do the synchronization in getResource? I know about read/write locks but I don't think they can be applied here.

Why synchronize at all? If the cache goes stale, all threads accessing the resource while it's still being refreshed by the first one will execute their own refresh. Among other problems this causes, it's hardly efficient.

As BalusC mentions in the comments, I'm currently facing this issue in a servlet, but I'm happy with generic answers because who knows in what situation I'll run into it again.

2 Answers

沙沙粒小 2024-10-05 16:00:54


Assumptions

  1. efficient means that doGet() should complete as quickly as possible
  2. cachedResourceIsStale() takes no time at all
  3. downloadNewVersionOfResource() takes a little time

Answer

Synchronizing reduces network load, because only one thread fetches the resource when it expires. Also, it will not unduly delay the processing of other threads: since the VM holds no current snapshot that those threads could return, they have to block anyway, and there is no reason an additional concurrent downloadNewVersionOfResource() would complete any faster (I'd expect the opposite, due to network bandwidth contention).

So synchronizing is good, and optimal in bandwidth consumption and response times. (The CPU overhead of synchronization is vanishingly small compared to I/O waits.) This assumes that a current version of the resource might not be available when doGet() is invoked; if your server always had a current version of the resource, it could send it back without delay. (You might have a background thread download the new version just before the old one expires.)
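
For illustration, here is a minimal sketch of that background-refresh idea, assuming a fixed refresh interval shorter than the resource's lifetime; everything except the doGet()/downloadNewVersionOfResource() names is hypothetical:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackgroundRefreshingCache {
    // volatile so request threads always see the latest published version
    private volatile Object resource;
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor();

    public BackgroundRefreshingCache(long refreshIntervalSeconds) {
        resource = downloadNewVersionOfResource();   // initial load before serving requests
        refresher.scheduleAtFixedRate(
                () -> resource = downloadNewVersionOfResource(),
                refreshIntervalSeconds, refreshIntervalSeconds, TimeUnit.SECONDS);
    }

    public Object doGet() {
        return resource;                             // request path never blocks on a refresh
    }

    private Object downloadNewVersionOfResource() {
        return new Object();                         // placeholder for the slow download
    }

    public void shutdown() {
        refresher.shutdown();
    }
}

Note that with scheduleAtFixedRate an uncaught exception in the refresh task cancels further runs, so in practice the download would need its own error handling.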

PS

You haven't shown any error handling. You'll have to decide whether to propagate exceptions thrown by downloadNewVersionOfResource() to your callers or continue to serve the old version of the resource.
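
As a hypothetical sketch of the second option (not from the original post), the download could be wrapped so that a failed refresh falls back to the previously cached copy:

private Object getResource() {
    synchronized (lock) {
        if (cachedResourceIsStale()) {
            try {
                downloadNewVersionOfResource();
            } catch (RuntimeException e) {
                // Option 1: rethrow to propagate the failure to doGet()'s caller.
                // Option 2 (shown): log it and keep serving the old, stale version
                // that is still referenced by `resource`.
            }
        }
    }
    return resource;
}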

Edit

So? Let's assume you have 100 connection workers, that the check whether the resource is stale takes one microsecond, that the resource is not stale, and that serving it takes one second. Then, on average, 100 * 10^-6 / 1 = 0.0001 threads are trying to get the lock at any given moment. Barely any contention at all. And the overhead of acquiring an uncontended lock is on the order of 10^-8 seconds. There is no point optimizing things that already take microseconds when the network will cause delays of milliseconds. And if you don't believe me, do a microbenchmark for synchronization. It is true that frequent, needless synchronization adds significant overhead, and that the old synchronized collection classes fell out of favor for that reason. But that's because those methods do very little work per invocation, so the relative overhead of synchronization was a lot bigger. I just did a small microbenchmark for the following code:

synchronized (lock) {
    c++;
}

On my notebook, this takes 50 nanoseconds (5*10^-8 seconds), averaged over 10 million executions on Sun's HotSpot VM. This is about 20 times as long as the naked increment operation, so if one does a lot of increments, synchronizing each of them will slow the program down by an order of magnitude. If, however, that method does blocking I/O, waiting, say, 1 ms, adding those same 50 nanoseconds would reduce throughput by only 0.005%. Surely you have better opportunities for performance tuning :-)
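
For reference, a crude standalone harness along those lines might look like the sketch below; a serious measurement should use a dedicated harness such as JMH to control for JIT warm-up and dead-code elimination, and the numbers will vary by machine and VM:

public class SyncIncrementBenchmark {
    private static final Object lock = new Object();
    private static long c;

    public static void main(String[] args) {
        final int iterations = 10_000_000;
        // warm-up pass so the JIT has compiled the loop before timing
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) { c++; }
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) { c++; }
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("%.1f ns per synchronized increment (c = %d)%n",
                (double) elapsed / iterations, c);
    }
}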

This is why you should always measure before starting to optimize: it prevents you from investing hours of your time to save a couple of nanoseconds of processor time.

暮光沉寂 2024-10-05 16:00:54


There might be a possibility that you can reduce lock contention (and thus improve throughput) by using "lock striping": essentially, splitting one lock into several locks, each of which protects a particular group of users.
The tricky part is figuring out how to assign users to groups. The simplest case is when you can assign a request from any user to any group. If your data model requires that requests from one user must be processed sequentially, you must introduce some mapping between user requests and groups. Here's a sample implementation of StripedLock:

import java.util.concurrent.locks.ReentrantLock;

/**
 * Striped locks holder, contains array of {@link java.util.concurrent.locks.ReentrantLock}, on which lock/unlock
 * operations are performed. Purpose of this is to decrease lock contention.
 * <p>When client requests lock, it gives an integer argument, from which target lock is derived as follows:
 * index of lock in array equals to <code>id & (locks.length - 1)</code>.
 * Since <code>locks.length</code> is the power of 2, <code>locks.length - 1</code> is string of '1' bits,
 * and this means that all lower bits of argument are taken into account.
 * <p>Number of locks it can hold is bounded: it can be from set {2, 4, 8, 16, 32, 64}.
  */
public class StripedLock {
    private final ReentrantLock[] locks;

    /**
     * Default ctor, creates 16 locks
     */
    public StripedLock() {
        this(4);
    }

    /**
     * Creates array of locks, size of array may be any from set {2, 4, 8, 16, 32, 64} 
     * @param storagePower size of array will be equal to <code>Math.pow(2, storagePower)</code>
     */
    public StripedLock(int storagePower) {
        if (storagePower < 1 || storagePower > 6)
             throw new IllegalArgumentException("storage power must be in [1..6]");
        int lockSize = (int) Math.pow(2, storagePower);
        locks = new ReentrantLock[lockSize];
        for (int i = 0; i < locks.length; i++)
            locks[i] = new ReentrantLock();
    }

    /**
     * Locks lock associated with given id.
     * @param id value, from which lock is derived
     */
    public void lock(int id) {
        getLock(id).lock();
    }

    /**
     * Unlocks lock associated with given id.
     * @param id value, from which lock is derived 
     */
    public void unlock(int id) {
        getLock(id).unlock();
    }

    /**
     * Map function between integer and lock from locks array
     * @param id argument
     * @return lock which is result of function 
     */
    private ReentrantLock getLock(int id) {
        return locks[id & (locks.length - 1)];
    }
}
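
For context, here is a hypothetical sketch of how this might be wired into the original getResource(), assuming each cached resource (or user group) can be mapped to an int id; the keyed cache helper methods are placeholders introduced for illustration, not part of the answer above:

public class StripedResourceCache {
    private final StripedLock locks = new StripedLock(4);   // 2^4 = 16 stripes

    public Object doGet(int resourceId) {
        return getResource(resourceId);
    }

    private Object getResource(int resourceId) {
        locks.lock(resourceId);
        try {
            if (cachedResourceIsStale(resourceId)) {
                downloadNewVersionOfResource(resourceId);
            }
        } finally {
            locks.unlock(resourceId);
        }
        return lookupCachedResource(resourceId);
    }

    // Placeholders standing in for the cache operations from the question,
    // now keyed by resource id so unrelated refreshes use different stripes.
    private boolean cachedResourceIsStale(int resourceId) { return false; }
    private void downloadNewVersionOfResource(int resourceId) { }
    private Object lookupCachedResource(int resourceId) { return new Object(); }
}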