MPI warning: Program exiting with outstanding receive requests
I wish to send a contiguous array of memory between two nodes using MPI. For this purpose, I use the following non-blocking send/receive calls (MPI_Isend, MPI_Irecv). While executing the run command, I see two warning statements as follows:
Warning: Program exiting with outstanding receive requests
Basically, I want to see that the array data from "NorthEdge1" is passed to "NorthofNorthEdge3". How could I fix this? What else could I try to check this communication?
Here is an excerpt from the source code:
#define Rows 48
...
double *northedge1 = new double[Rows];
double *northofnorthedge3 = new double[Rows];
...
...
int main (int argc, char *argv[])
{
....
....
MPI_Request send_request, recv_request;
...
...
{
MPI_Isend(northedge1, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &send_request);
MPI_Irecv(northofnorthedge3, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD,
&recv_request);
}
1 Answer
It looks like you haven't called MPI_Waitall(). The "immediate" send and receive routines only begin the communication; you have to block your process to ensure the communication has finished. Blocking in MPI is done with a variant of MPI_Wait(); in your case, you need MPI_Waitall().
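For illustration, here is a minimal sketch of how the two calls from the excerpt could be paired with MPI_Waitall(). It assumes the same names as in the question (northedge1, northofnorthedge3, Rows, my_rank) and that my_rank+1 is a valid neighbouring rank:

MPI_Request requests[2];
MPI_Status statuses[2];

// Start the non-blocking transfer, as in the question, but keep both
// request handles in one array so they can be waited on together.
MPI_Isend(northedge1, Rows, MPI_DOUBLE, my_rank+1, 0, MPI_COMM_WORLD, &requests[0]);
MPI_Irecv(northofnorthedge3, Rows, MPI_DOUBLE, my_rank+1, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[1]);

// Block until both the send and the receive have completed; without this,
// the program can exit while the receive request is still outstanding.
MPI_Waitall(2, requests, statuses);

After MPI_Waitall() returns, the receive buffer is safe to read, so printing a few elements of northofnorthedge3 on the receiving rank is a simple way to check that the data from northedge1 actually arrived. If you don't need the statuses, you can pass MPI_STATUSES_IGNORE instead of the statuses array.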