MPI matrix multiplication with dynamic allocation: Seg Fault
I'm making a matrix multiplication program in OpenMPI, and I got this error message:
[Mecha Liberta:12337] *** Process received signal ***
[Mecha Liberta:12337] Signal: Segmentation fault (11)
[Mecha Liberta:12337] Signal code: Address not mapped (1)
[Mecha Liberta:12337] Failing at address: 0xbfe4f000
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 12337 on node Mecha Liberta exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
That's how I define the matrices:
int **a, **b, **r;
a = (int **)calloc(l,sizeof(int));
b = (int **)calloc(l,sizeof(int));
r = (int **)calloc(l,sizeof(int));
for (i = 0; i < l; i++)
a[i] = (int *)calloc(c,sizeof(int));
for (i = 0; i < l; i++)
b[i] = (int *)calloc(c,sizeof(int));
for (i = 0; i < l; i++)
r[i] = (int *)calloc(c,sizeof(int));
And here's my Send/Recv (I'm pretty sure my problem should be here):
MPI_Send(&sent, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
MPI_Send(&lines, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
MPI_Send(&(a[sent][0]), lines*NCA, MPI_INT, dest, tag, MPI_COMM_WORLD);
MPI_Send(&b, NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);
and:
MPI_Recv(&sent, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&lines, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&a, lines*NCA, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&b, NCA*NCB, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
Can anyone see where the problem is?
1 Answer
This is a common problem with C, multidimensional arrays, and MPI.
In this line, say:
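MPI_Send(&b, NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);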
you're telling MPI to send NCAxNCB integers starting at b to dest with tag tag in MPI_COMM_WORLD. But b isn't a pointer to NCAxNCB integers; it's a pointer to NCA pointers to NCB integers. So what you want to do is to ensure your arrays are contiguous (probably better for performance anyway), using something like this:
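One possible sketch (keeping the l rows and c columns from your allocation; the helper name alloc_2d_int is just illustrative):

int **alloc_2d_int(int rows, int cols) {
    /* one contiguous block for the data, plus an array of row pointers into it */
    int *data   = (int *)calloc(rows*cols, sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int *));
    for (int i = 0; i < rows; i++)
        array[i] = &(data[cols*i]);
    return array;
}

/* ... */
int **a, **b, **r;
a = alloc_2d_int(l, c);
b = alloc_2d_int(l, c);
r = alloc_2d_int(l, c);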
and then
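your sends and receives, adapted to pass the address of the first element rather than the pointer array itself (a sketch based on your original calls),

MPI_Send(&(a[sent][0]), lines*NCA, MPI_INT, dest, tag, MPI_COMM_WORLD);
MPI_Send(&(b[0][0]), NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);

and, on the workers,

MPI_Recv(&(a[0][0]), lines*NCA, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&(b[0][0]), NCA*NCB, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);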
should work more as expected.