Is MPI_Reduce blocking (or inherently a barrier)?

Posted 2025-01-05 03:54:25


I have the code snippet below in C++, which calculates pi using the classic Monte Carlo technique.

    srand48((unsigned)time(0) + my_rank);

    for (int i = 0; i < part_points; i++)
    {
            double x = drand48();
            double y = drand48();
            if ((pow(x,2) + pow(y,2)) < 1) { ++count; }
    }

    MPI_Reduce(&count, &total_hits, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);

    if(my_rank == root)
    {
            pi = 4*(total_hits/(double)total_points);

            cout << "Calculated pi: "  <<  pi << " in " << end_time-start_time <<  endl;
    }

I am just wondering whether the MPI_Barrier call is necessary. Does MPI_Reduce ensure that the body of the if statement won't be executed before the reduce operation has completely finished? I hope I was clear. Thanks.

Comments (3)

猫瑾少女 2025-01-12 03:54:25


Yes, all of the classic collective communication calls (Reduce, Scatter, Gather, etc.) are blocking, so there's no need for the barrier.

偷得浮生 2025-01-12 03:54:25


Ask yourself whether that barrier is needed. Suppose you are not the root: you call Reduce, which sends off your data. Is there any reason to sit and wait until the root has the result? Answer: no, so you don't need the barrier.

Suppose you are the root. You issue the reduce call. Semantically, you are now forced to sit and wait until the result is fully assembled. So why the barrier? Again, no barrier call is needed.

In general, you almost never need a barrier, because you don't care about temporal synchronization. The semantics guarantee that your local state is correct after the reduce call.
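The pattern this answer describes can be sketched as a complete program (a minimal, illustrative sketch, not taken from the question: it assumes an MPI implementation and a launcher such as mpirun, picks an arbitrary sample count, and reduces `count` as `MPI_LONG` so the datatype matches the C++ variable, unlike the `MPI_DOUBLE` used in the question's snippet):

```cpp
#include <mpi.h>
#include <cstdio>
#include <cstdlib>   // srand48, drand48 (POSIX)
#include <ctime>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int my_rank, num_procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    const long part_points = 1000000;   // samples per process (assumed value)
    long count = 0, total_hits = 0;

    srand48((unsigned)time(0) + my_rank);
    for (long i = 0; i < part_points; i++) {
        double x = drand48();
        double y = drand48();
        if (x * x + y * y < 1.0) { ++count; }
    }

    // Blocking reduce: when this returns on the root, total_hits is complete;
    // non-root ranks may continue as soon as their contribution is handed off.
    // The datatype (MPI_LONG) matches the C++ type of count/total_hits.
    MPI_Reduce(&count, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    // No MPI_Barrier needed here: the root already has the full result.
    if (my_rank == 0) {
        double pi = 4.0 * total_hits / (double)(part_points * num_procs);
        printf("Calculated pi: %f\n", pi);
    }

    MPI_Finalize();
    return 0;
}
```

Run under a launcher, e.g. `mpirun -np 4 ./pi`; the root prints its estimate without any barrier after the reduce.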

累赘 2025-01-12 03:54:25

Blocking, yes; a barrier, no. It can be important to pair MPI_Reduce() with MPI_Barrier() when executing in a tight loop: without the barrier, the receive buffers of the reducing process can eventually fill up and the application will abort, because the other participating processes only need to send and continue, while the reducing process has to receive and reduce.
The above code does not need the barrier if my_rank == root == 0 (which is probably true). In any case, MPI_Reduce() does not perform a barrier or any other form of synchronization. AFAIK even MPI_Allreduce() isn't guaranteed to synchronize (at least not by the MPI standard).
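The tight-loop scenario this answer warns about might be sketched like this (an illustrative, untested sketch; the function name, iteration count, and throttling interval are assumptions, not part of the original question):

```cpp
#include <mpi.h>

// Repeated reductions: non-root ranks can return from MPI_Reduce and race
// ahead of the root, queuing up unexpected messages on the root's side.
// An occasional barrier throttles the senders so those buffers drain.
void reduce_loop(int my_rank, long local_value)
{
    const int iterations = 100000;   // assumed workload
    const int sync_every = 1000;     // assumed throttling interval

    for (int i = 0; i < iterations; i++) {
        long global_value = 0;
        // Blocking for the caller, but NOT a synchronization point:
        // a non-root rank may reach iteration i+k while the root is
        // still reducing iteration i.
        MPI_Reduce(&local_value, &global_value, 1, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (i % sync_every == 0) {
            MPI_Barrier(MPI_COMM_WORLD);  // keep senders from running ahead
        }
    }
}
```

The barrier here is flow control, not correctness: each rank's local result is already valid after the reduce, as the other answers point out.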
