MPI Irecv not receiving the first element of the buffer correctly?

Posted 2024-12-06 18:30:13

I've just been experimenting with MPI, and copied and ran this code, taken from the second code example at [the LLNL MPI tutorial][1].

#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char ** argv) {
    int num_tasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status status[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_tasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = num_tasks - 1;
    if (rank == (num_tasks - 1)) next = 0;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD,
                    &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD,
                    &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    MPI_Waitall(4, reqs, status);
    printf("Task %d received %d from %d and %d from %d\n",
                    rank, buf[0], prev, buf[1], next);

    MPI_Finalize();
    return EXIT_SUCCESS;
}

I would have expected an output like this (for, say, 4 tasks):

$ mpiexec -n 4 ./m3
Task 0 received 3 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 2 received 1 from 1 and 3 from 3
Task 3 received 2 from 2 and 0 from 0

However, instead, I get this:

$ mpiexec -n 4 ./m3
Task 0 received 0 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 3 received 0 from 2 and 0 from 0
Task 2 received 0 from 1 and 3 from 3

That is, the message (with tag == 1) going into buffer buf[0] always gets the value 0. Moreover, if I alter the code so that I declare the buffer as buf[3] rather than buf[2], and replace each instance of buf[0] with buf[2], then I get precisely the output I would have expected (i.e., the first output set given above). This looks as if, for some reason, something is overwriting the value in buf[0] with 0. But I can't see what that might be. BTW, as far as I can tell, my code (without the modification) exactly matches the code in the tutorial, except for my printf.

Thanks!

Comments (1)

妞丶爷亲个 2024-12-13 18:30:13

The array of statuses must be of size 4, not 2. In your case, MPI_Waitall corrupts memory when it writes the third and fourth statuses past the end of the 2-element array.
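
For reference, a minimal sketch of the fix, assuming the rest of the code stays exactly as in the question: only the status array size changes, and MPI_STATUSES_IGNORE can be passed instead if the statuses are never inspected.

    MPI_Request reqs[4];
    MPI_Status status[4];   /* one MPI_Status per request passed to MPI_Waitall */

    /* ... the same MPI_Irecv / MPI_Isend calls as in the question ... */

    MPI_Waitall(4, reqs, status);

    /* Alternatively, if the individual statuses are not needed: */
    /* MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE); */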
