OpenMP: dynamically setting the chunk size for loop decomposition
I am using OpenMP to go through a large loop in parallel. Let's say the array I'm working on has N entries in total. I would like one thread to do the first N/2 entries and the other thread the last N/2.
I have to prevent the threads from working on entries that are next to each other. The size N is always much bigger than the number of threads, so I don't need to worry about locks if I can get OpenMP to distribute the work the way I outlined above.
If the size N is known at compile time, I can use #pragma omp parallel for schedule(static, N/2). Unfortunately it isn't. So, how do I define the chunk size dynamically?
Comments (3)
There's no problem as long as N is known at runtime; I'm not sure why you think it has to be known at compile time. OMP loop constructs would be of very limited use indeed if everything had to be known at compile time.
And it runs simply enough, like so:
If you don't want to use the built-in OpenMP scheduling options that @Jonathan Dursi's answer shows, you could implement the required partitioning yourself:
I had a similar problem on .NET, and ended up writing a smart queue object that would return a dozen items at a time, as soon as they become available. Once I have a batch in hand, I'd pick a thread that can process all of them in one go.
When working on this problem, I kept in mind that a single shared queue with many workers beats one queue per worker: it's better to have one long line served by multiple workers than a separate line for each worker.