Implementing a concurrent-read, exclusive-write model with GCD

Posted 2024-10-31 06:36:07


I am trying to understand the proper way of using Grand Central Dispatch (GCD) to implement a concurrent-read, exclusive-write model for controlling access to a resource.

Suppose there is an NSMutableDictionary that is read a lot and once in a while updated. What is the proper way of ensuring that reads always work with a consistent state of the dictionary? Sure, I could use a queue and serialize all read and write access to the dictionary, but that would unnecessarily serialize reads, which should be allowed to access the dictionary concurrently. At first, the use of groups here sounds promising. I could create a 'read' group and add every read operation to it. That would allow reads to happen at the same time. And then, when the time comes to do an update, I could call dispatch_group_notify() or dispatch_group_wait() as part of a write operation to make sure that all reads complete before the update is allowed to go on. But then how do I make sure that a subsequent read operation does not start until the write operation completes?

Here's an example with the dictionary I mentioned above:
R1: at 0 seconds, a read comes in which needs 5 seconds to complete
R2: at 2 seconds, another read comes in which needs 5 seconds to complete
W1: at 4 seconds, a write operation comes in, needing access to the dictionary for 3 seconds
R3: at 6 seconds, another read comes in which needs 5 seconds to complete
W2: at 8 seconds, another write operation comes in, also needing 3 seconds to complete

Ideally the above should play out like this:
R1 starts at 0 seconds, ends at 5
R2 starts at 2 seconds, ends at 7
W1 starts at 7 seconds, ends at 10
R3 starts at 10 seconds, ends at 15
W2 starts at 15 seconds, ends at 18

Note: even though R3 came in at 6 seconds, it was not allowed to start before W1 because W1 arrived earlier.

What is the best way to implement the above with GCD?


Answered by 紫南, 2024-11-07 06:36:08


You've got the right idea, I think. Conceptually, what you want is a private concurrent queue that you can submit "barrier" blocks to, such that the barrier block waits until all previously submitted blocks have finished executing, and then executes all by itself.

GCD doesn't (yet?) provide this functionality out-of-the-box, but you could simulate it by wrapping your read/write requests in some additional logic and funnelling these requests through an intermediary serial queue.

When a read request reaches the front of the serial queue, dispatch_group_async the actual work onto a global concurrent queue. In the case of a write request, you should dispatch_suspend the serial queue, and call dispatch_group_notify to submit the work onto the concurrent queue only after the previous requests have finished executing. After this write request has executed, resume the queue again.

Something like the following could get you started (I haven't tested this):

dispatch_block_t CreateBlock(dispatch_block_t block, dispatch_group_t group, dispatch_queue_t concurrentQueue) {
    return Block_copy(^{
        // Note: dispatch_group_async takes the group first, then the queue.
        dispatch_group_async(group, concurrentQueue, block);
    });
}

dispatch_block_t CreateBarrierBlock(dispatch_block_t barrierBlock, dispatch_group_t group, dispatch_queue_t concurrentQueue, dispatch_queue_t serialQueue) {
    // The serial queue is passed in explicitly; dispatch_get_current_queue()
    // is deprecated and should not be relied on.
    return Block_copy(^{
        dispatch_suspend(serialQueue);
        dispatch_group_notify(group, concurrentQueue, ^{
            barrierBlock();
            dispatch_resume(serialQueue);
        });
    });
}

Use dispatch_async to push these wrapped blocks onto a serial queue.
