MPI: only getting the server process (master node)
I'm trying to use MPI with my 4-core processor.
I have followed this tutorial: http://debianclusters.org/index.php/MPICH:_Starting_a_Global_MPD_Ring
But at the end, when I try the hello.out script, I only get the server process (master node):
mpiexec -np 4 ./hello.out
Hello MPI from the server process!
Hello MPI from the server process!
Hello MPI from the server process!
Hello MPI from the server process!
I have searched all around the web but couldn't find any clues about this problem.
Here is my mpdtrace result:
[nls@debian] ~ $ mpd --ncpus=4 --daemon
[nls@debian] ~ $ mpdtrace -l
debian_52063 (127.0.0.1)
Shouldn't I get one trace line per core?
Thanks for your help,
Malchance
95% of the time, when you see this problem -- MPI tasks not getting the "right" rank IDs, usually all ending up as rank zero -- it means there's a mismatch in MPI libraries. Either the mpiexec doing the launching isn't the same as the mpicc (or whatever) used to compile the program, or the MPI libraries the child processes pick up at launch (if dynamically linked) are different from the ones intended. So I'd start by double-checking those things.
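One way to sketch that check (the paths and the hello.out binary name are just illustrative; adjust for your install):

```shell
# Confirm the launcher and the compiler wrapper come from the same
# MPI installation prefix.
which mpiexec
which mpicc

# For a dynamically linked binary, see which MPI library it actually
# resolves at run time.
ldd ./hello.out | grep -i mpi
```

If `which mpiexec` and `which mpicc` point into different prefixes, or `ldd` shows a different MPI library than the one you compiled against, rebuild with the matching `mpicc` or adjust your `PATH`/`LD_LIBRARY_PATH` so both resolve to the same installation.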