MPI and memory leaks and MPI_Wait() with asynchronous send and recv
I am new to MPI programming and I am trying to create a program that would perform 2-way communication between processes in a ring.
I was getting memory-leak errors at the MPI_Finalize() statement. Later I found out that I could use the -fsanitize=address -fno-omit-frame-pointer flags to help me debug where the leaks could be.
Now I get a very bizarre (at least to me) error.
Here's my code:
MPI_Request request_s1, request_s2, request_r1, request_r2;
// receiving 2 elems from the left neighbor, which i shall be needing
if (0 > MPI_Irecv(lefties, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &request_r1)) {
return 2;
}
// receiving 2 elems from my right neighbor which i will be appending at the end of my input
if (0 > MPI_Irecv(righties, EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &request_r2)) {
return 2;
}
// sending the first 2 elems which will be required by the left neighbor
if (0 > MPI_Isend(my_output_buffer, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &request_s1)) {
return 2;
}
// sending the last 2 elems to my right neighbor
if (0 > MPI_Isend(&my_output_buffer[displacement - EXTENT], EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &request_s2)) {
return 2;
}
MPI_Wait(&request_r2, MPI_STATUS_IGNORE);
MPI_Wait(&request_r1, MPI_STATUS_IGNORE);
The error I get is
[my_machine:18353] *** An error occurred in MPI_Wait
[my_machine:18359] *** reported by process [204079105,1]
[my_machine:18359] *** on communicator MPI_COMM_WORLD
[my_machine:18359] *** MPI_ERR_TRUNCATE: message truncated
[my_machine:18359] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[my_machine:18359] *** and potentially your MPI job)
[my_machine:18353] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics
[my_machine:18353] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
and I have no clue how to progress from here.
1 Answer
Initialize your request variables to MPI_REQUEST_NULL, in case you're waiting for a request that was not created. The 0 > MPI_whatever idiom is strange. Instead: MPI_SUCCESS != MPI_whatever.
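A rough sketch of what that might look like, assembled into a self-contained program. The ring-neighbour computation, the buffer sizes, and the EXTENT value are placeholders I made up, not taken from your code:

/* Sketch of the suggested pattern, not the original program:
 * requests start as MPI_REQUEST_NULL, return codes are compared
 * against MPI_SUCCESS, and every request is completed before
 * MPI_Finalize(). EXTENT and LOCAL_N are assumed values. */
#include <mpi.h>
#include <stdio.h>

#define EXTENT 2   /* elements exchanged with each neighbour (assumed) */
#define LOCAL_N 8  /* local buffer length, placeholder value */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int my_left  = (rank - 1 + size) % size;  /* ring neighbours */
    int my_right = (rank + 1) % size;

    double my_output_buffer[LOCAL_N];
    double lefties[EXTENT], righties[EXTENT];
    for (int i = 0; i < LOCAL_N; ++i)
        my_output_buffer[i] = rank * 100.0 + i;

    /* Initialize all requests to MPI_REQUEST_NULL so waiting on a
       request that was never created is a harmless no-op. */
    MPI_Request reqs[4] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL,
                            MPI_REQUEST_NULL, MPI_REQUEST_NULL };

    /* Check return codes against MPI_SUCCESS instead of "0 > ...". */
    if (MPI_SUCCESS != MPI_Irecv(lefties, EXTENT, MPI_DOUBLE, my_left, 1,
                                 MPI_COMM_WORLD, &reqs[0]))
        MPI_Abort(MPI_COMM_WORLD, 2);
    if (MPI_SUCCESS != MPI_Irecv(righties, EXTENT, MPI_DOUBLE, my_right, 1,
                                 MPI_COMM_WORLD, &reqs[1]))
        MPI_Abort(MPI_COMM_WORLD, 2);
    if (MPI_SUCCESS != MPI_Isend(my_output_buffer, EXTENT, MPI_DOUBLE, my_left, 1,
                                 MPI_COMM_WORLD, &reqs[2]))
        MPI_Abort(MPI_COMM_WORLD, 2);
    if (MPI_SUCCESS != MPI_Isend(&my_output_buffer[LOCAL_N - EXTENT], EXTENT,
                                 MPI_DOUBLE, my_right, 1,
                                 MPI_COMM_WORLD, &reqs[3]))
        MPI_Abort(MPI_COMM_WORLD, 2);

    /* Complete the sends as well as the receives before finalizing. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d got %.1f %.1f from the left and %.1f %.1f from the right\n",
           rank, lefties[0], lefties[1], righties[0], righties[1]);

    MPI_Finalize();
    return 0;
}

Note that the send requests are also completed (here via MPI_Waitall) before MPI_Finalize(); requests left unfinished at finalize time can also show up as leaked allocations in the sanitizer output.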