Unable to run OpenMPI across more than two machines

Posted on 2024-08-26 03:27:06

When attempting to run the first example in the boost::mpi tutorial, I was unable to run across more than two machines. Specifically, this seemed to run fine:

mpirun -hostfile hostnames -np 4 boost1

with each hostname in hostnames as <node_name> slots=2 max_slots=2. But, when I increase the number of processes to 5, it just hangs. I have decreased the number of slots/max_slots to 1 with the same result when I exceed 2 machines. On the nodes, this shows up in the job list:

<user> Ss orted --daemonize -mca ess env -mca orte_ess_jobid 388497408 \
-mca orte_ess_vpid 2 -mca orte_ess_num_procs 3 -hnp-uri \
388497408.0;tcp://<node_ip>:48823

Additionally, when I kill it, I get this message:

node2- daemon did not report back when launched
node3- daemon did not report back when launched

The cluster is set up with the mpi and boost libs accessible on an NFS mounted drive. Am I running into a deadlock with NFS? Or, is something else going on?
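
For reference, each line of the hostfile hostnames mentioned above has the form shown below; node1 through node3 are placeholders for the actual machine names:

# hostnames -- illustrative contents; replace node1..node3 with the real hosts
node1 slots=2 max_slots=2
node2 slots=2 max_slots=2
node3 slots=2 max_slots=2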

Update: To be clear, the boost program I am running is

#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>
namespace mpi = boost::mpi;

int main(int argc, char* argv[]) 
{
  mpi::environment env(argc, argv);
  mpi::communicator world;
  std::cout << "I am process " << world.rank() << " of " << world.size()
        << "." << std::endl;
  return 0;
}
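
As a rough sketch, the program gets built and launched along these lines (the source file name boost1.cpp and the Boost library names are assumptions that depend on how Boost.MPI was installed):

# Illustrative build and launch; adjust library names and paths to your install
mpic++ boost1.cpp -o boost1 -lboost_mpi -lboost_serialization
mpirun -hostfile hostnames -np 4 boost1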

Following @Dirk Eddelbuettel's recommendations, I compiled and ran the mpi example hello_c.c, as follows:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello, world, I am %d of %d\n", rank, size);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}

It runs fine on a single machine with multiple processes; this includes sshing into any of the nodes and running it there. Each compute node is identical, with the working and mpi/boost directories mounted from a remote machine via NFS. When running the boost program from the fileserver (identical to a node except that boost/mpi are local), I am able to run on two remote nodes. For "hello world", however, running the command mpirun -H node1,node2 -np 12 ./hello, I get

[<node name>][[2771,1],<process #>] \
[btl_tcp_endpoint.c:638:mca_btl_tcp_endpoint_complete_connect] \
connect() to <node-ip> failed: No route to host (113)

while all of the "Hello World"s are printed and it hangs at the end. However, the behavior differs when launching from a compute node onto another remote node.

Both "Hello world" and the boost code just hang with mpirun -H node1 -np 12 ./hello when run from node2 and vice versa. (Hang in the same sense as above: orted is running on remote machine, but not communicating back.)

The fact that the behavior differs between running on the fileserver (where the mpi libs are local) and running on a compute node suggests that I may be running into an NFS deadlock. Is this a reasonable conclusion? Assuming that this is the case, how do I configure mpi so that I can link it statically? Additionally, I don't know what to make of the error I get when running from the fileserver; any thoughts?
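
For the static-linking part, what I have in mind is something like the following, though I have not tried it yet (the install prefix is only an example):

# Untested sketch: rebuild Open MPI with static libraries only
./configure --prefix=/opt/openmpi-static --enable-static --disable-shared
make && make install
# then compile against that install through its wrapper compiler
/opt/openmpi-static/bin/mpicc hello_c.c -o hello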

Comments (3)

蓬勃野心 2024-09-02 03:27:06

The answer turned out to be simple: Open MPI authenticates via ssh and then opens up TCP/IP sockets between the nodes. The firewalls on the compute nodes were set up to only accept ssh connections from each other, not arbitrary connections. So, after updating iptables, hello world runs like a champ across all of the nodes.
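
For example, the kind of iptables change involved looks roughly like this on each compute node (the subnet is a placeholder; the exact rules depend on your setup):

# Illustrative: accept TCP connections from the cluster subnet (placeholder range)
iptables -A INPUT -p tcp -s 10.1.2.0/24 -j ACCEPT
# persist the rule with iptables-save, or your distro's equivalent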

Edit: It should be pointed out that the fileserver's firewall allowed arbitrary connections, so that was why an mpi program run on it would behave differently than just running on the compute nodes.

秋心╮凉 2024-09-02 03:27:06

My first recommendation would be to simplify:

  • can you build the standard MPI 'hello, world' example?
  • can you run it several times on localhost?
  • can you execute it on the other host via ssh?
  • is the path identical?

and if so, then

mpirun -H host1,host2,host3 -n 12 ./helloworld

should travel across. Once you have these basics sorted out, try the Boost tutorial ... and make sure you have Boost and MPI libraries on all hosts you plan to run on.
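
In concrete terms, that checklist amounts to something like the following (host names and paths are placeholders):

# build and run the standard example locally
mpicc hello_c.c -o helloworld
mpirun -np 4 ./helloworld
# check the other hosts: same mpirun, same path to the binary?
ssh host2 'which mpirun && ls -l /path/to/helloworld'
# then try spanning the hosts
mpirun -H host1,host2,host3 -n 12 ./helloworld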

吾性傲以野 2024-09-02 03:27:06

Consider using the parameter --mca btl_tcp_if_include eth0 to make the nodes use only the eth0 interface, preventing OpenMPI from trying to figure out the best network on its own. There is also --mca btl_tcp_if_exclude eth0. Remember to substitute eth0 with your particular interface.

My /etc/hosts contained lines like these:

10.1.2.13 node13

...

10.1.3.13 node13-ib

When I launched mpirun, the TCP network was selected and the nodes used it; however, after some time (20 seconds), OpenMPI discovered the 10.1.3.XXX IPs and tried to use them as well, which caused the error message.
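
For example (eth0 and the node names here are placeholders):

# Pin the TCP BTL to one interface so the 10.1.3.x addresses are never tried
mpirun --mca btl_tcp_if_include eth0 -H node1,node2 -np 12 ./hello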

I hope it helps
