D: wait for all tasks in a task pool to finish
This is in relation to my previous question: D concurrent writing to buffer
Say you have a piece of code that consists of 2 consecutive code blocks A and B, where B depends on A. This is very common in programming. Both A and B consist of a loop, where each iteration can be run in parallel:
double[] array = [ ... ]; // has N elements

// A
for (int i = 0; i < N; i++)
{
    job1(array[i]); // new task
}

// wait for all job1's to be done

// B
for (int i = 0; i < N; i++)
{
    job2(array[i]); // new task
}
B can only be executed when A is finished. How do I wait till all tasks of A are finished before executing B?
Comments (1)
I assume you're using std.parallelism? I wrote std.parallelism, so I'll let you in on a design decision. There was actually a join function in some of the betas of std.parallelism. It waited until all tasks were finished and then shut down the task pool. I removed it because I realized it was useless.

The reason is that if you're manually creating a set of O(N) task objects to iterate over some range, you're misusing the library. You should be using a parallel foreach loop instead, which automatically joins before it releases control back to the calling thread. Your example would become the following (sketched here with std.parallelism's parallel helper, which forwards to taskPool.parallel):
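foreach (ref elem; parallel(array))
{
    job1(elem); // each iteration runs on the task pool
}
// all job1's are done here: a parallel foreach loop blocks until
// every iteration has finished before returning to this thread

foreach (ref elem; parallel(array))
{
    job2(elem);
}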
In this case, job1 and job2 should not start new tasks of their own, because the parallel foreach loop is already using enough tasks to fully utilize all CPU cores.