What happens under the hood when dispatch_apply is called from within dispatch_sync on the same concurrent queue

Posted 2024-12-10 14:05:00


Example:

dispatch_sync(someConcurrentQueue, ^(){
    dispatch_apply(5, someConcurrentQueue, ^(size_t i){
        // do some non-thread safe operation
    });
});

I decided to test this out and noticed that the non-thread safe operation performed as expected. However, when I called dispatch_sync using a global queue, things quickly deteriorated.
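For concreteness, here is a minimal, self-contained sketch of the kind of test described above, with an unsynchronized counter standing in for the non-thread-safe operation. The queue label, the counter, and the logging are illustrative assumptions, not part of the original test:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // A private concurrent queue standing in for someConcurrentQueue.
        dispatch_queue_t someConcurrentQueue =
            dispatch_queue_create("com.example.apply-test", DISPATCH_QUEUE_CONCURRENT);

        // Deliberately unsynchronized state: the "non-thread safe operation".
        __block NSUInteger counter = 0;

        dispatch_sync(someConcurrentQueue, ^{
            dispatch_apply(5, someConcurrentQueue, ^(size_t i){
                counter += 1;   // no lock on purpose
                NSLog(@"iteration %zu, counter = %lu", i, (unsigned long)counter);
            });
        });

        NSLog(@"final counter = %lu", (unsigned long)counter);

        // The variant described as "deteriorating" in the question targets a
        // global queue for the outer call instead:
        // dispatch_sync(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{ ... });
    }
    return 0;
}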

So my questions are:
1. What is happening under the hood with a call like that?
2. Is every iteration of dispatch_apply being preemptively scheduled on its own thread, then executed serially?
3. If the answer to 2 is yes, would doing this inside an infinite loop be a performance increase? The reasoning being that the operation could start executing as soon as the last one finished, instead of looping again.


我要还你自由 2024-12-17 14:05:00


This is roughly the same as:

dispatch_sync(someConcurrentQueue, ^(){
    for (size_t i = 0; i < 5; ++i){
        dispatch_async(someConcurrentQueue, ^(){
            // do some non-thread safe operation
        });
    }
});

The operations would be enqueued on the same queue; which thread the code ends up running on is more of an implementation detail. As such, if you did this in an infinite loop, it would look something like this:

  • sync
  • async(0)
  • async(1)
  • async(2)
  • async(3)
  • async(4)
  • sync
  • ...

Since your next dispatch_sync will get executed as soon as dispatch_apply has been scheduled (not executed), your queue will grow in size very quickly.
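One way to see that the thread is an implementation detail is to log which thread each iteration lands on. A minimal sketch, assuming a private concurrent queue created just for the demonstration (the label and the logging are illustrative, not from the original code):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        dispatch_queue_t someConcurrentQueue =
            dispatch_queue_create("com.example.apply-demo", DISPATCH_QUEUE_CONCURRENT);

        dispatch_sync(someConcurrentQueue, ^{
            dispatch_apply(5, someConcurrentQueue, ^(size_t i){
                // GCD decides where each iteration runs; some iterations
                // typically reuse the calling thread, others use pool threads.
                NSLog(@"iteration %zu on thread %@", i, [NSThread currentThread]);
            });
        });
    }
    return 0;
}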
