Backgrounding a command with Fabric does not work on some hosts

Published 2024-12-19 20:24:49

For testing purposes, I am running the following command, with plain ssh command line tool:

ssh user@host "nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null &"

This is working as expected, in all my hosts: a sleep process is created in the background, and the ssh finishes immediately.
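The same detach recipe can be reproduced locally, without ssh, to confirm that the combination of nohup, the redirections, and `&` really does leave a process behind after its launching shell exits (a sketch; /tmp/xxx.pid is a scratch file I introduce here, not part of the original command):

```shell
# Run the recipe in a throwaway shell, as sshd would, and verify that
# the sleep outlives that shell. /tmp/xxx.pid is a scratch file.
sh -c 'nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null & echo $! > /tmp/xxx.pid'
pid=$(cat /tmp/xxx.pid)
kill -0 "$pid" && echo "still running"   # the launching shell is long gone
kill "$pid"                              # clean up
```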

I am trying to implement this functionality in python using Fabric. I end up doing a run call. This is what the Fabric logging is reporting:

[user@host] run: nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null &

Which is exactly what I expect. But if I check the processes which are running in my host, sleep 100 is not one of them. Worse yet: the problem happens only on some of my hosts.

I also added some more information to show which process had been created, by appending "\necho $!" to the command run by Fabric. This is what was reported:

[user@host] run: nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null &
echo $!
[user@host] out: 30935
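Note that `$!` only proves a job was forked; it says nothing about whether that process survives once the session is torn down. A local illustration (no ssh involved):

```shell
# $! reports the pid of the most recent background job even when that
# job has already exited, so the 30935 above does not prove the sleep
# survived the session.
false & pid=$!
echo "reported pid: $pid"        # printed although `false` exits at once
wait "$pid" 2>/dev/null || true  # reap it
kill -0 "$pid" 2>/dev/null || echo "gone"
```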

I am running out of ideas on how to debug this, since Fabric reports that the process has been created, but I see no such process running on the other end. The syslog shows an ssh session being opened and closed:

Dec  6 09:12:09 host sshd[2835]: Accepted publickey for user from 67.133.172.14 port 37732 ssh2
Dec  6 09:12:09 host sshd[2838]: pam_unix(sshd:session): session opened for user user by (uid=0)
Dec  6 09:12:10 host sshd[2838]: pam_unix(sshd:session): session closed for user user

Could I somehow increase the amount of logging that the ssh daemon is producing, so that I can see at least what command is being requested via ssh?
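With stock OpenSSH this can be done by raising sshd's log level (a config sketch, assuming the usual /etc/ssh/sshd_config path; on many OpenSSH versions the requested command then appears in a "Starting session: command ..." line):

```shell
# /etc/ssh/sshd_config -- raise from the default INFO
LogLevel VERBOSE        # or DEBUG for per-channel detail

# then reload sshd (Ubuntu 8.04 era init script):
#   sudo /etc/init.d/ssh reload
```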

I know that Fabric has some issues with running commands in the background, but that does not seem to be my problem. Could there be other issues with Fabric / ssh / background processes?

EDIT

I have installed dtach on all my systems. The version packaged in Ubuntu 8.04 is too old and does not allow calling dtach -n over ssh (it fails with a terminal error), so I had to download and compile the dtach sources. After doing that, I was able to run my command like this, with Fabric:

[user@host] run: dtach -n /tmp/Y sleep 100 >> /tmp/xxx 2>&1

This is working fine in all hosts. But this does not fit my scenario, because:

  • dtach creates two processes: one for dtach itself, another for the process being run.
  • I cannot get the pid of the process being started
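The second objection can be worked around with a pid file written by the command itself: wrap the real command in `sh -c`, record `$$`, then `exec` the target, so the recorded pid is the target's pid. Sketched here with a plain background job instead of dtach (with dtach you would detach the same `sh -c '...'` wrapper; /tmp/xxx.pid is a placeholder path):

```shell
# The wrapper records its own pid, then exec replaces it with the
# target, so /tmp/xxx.pid ends up holding the sleep's pid.
sh -c 'echo $$ > /tmp/xxx.pid; exec sleep 100' &
job=$!
sleep 1                          # crude: let the wrapper write the file
pid=$(cat /tmp/xxx.pid)
[ "$pid" = "$job" ] && echo "captured pid $pid"
kill "$pid"
```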

Comments (1)

℉絮湮 2024-12-26 20:24:49

You are probably bumping into the infamous Fabric issue #395. The easiest workaround for these problems is to run your task with pty=False.
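In ssh terms, pty=False corresponds to running the command without a pseudo-terminal (ssh -T), while pty=True forces one (ssh -tt). A sketch of the difference, reusing the placeholders from the question (needs a reachable host, so not runnable as-is):

```shell
# No tty: the detached sleep is not attached to a terminal, so nothing
# hangs it up when the session closes -- matches the working plain-ssh test.
ssh -T user@host "nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null &"

# Forced tty: when the session ends, sshd tears the pty down and the
# processes attached to it are typically hung up, killing the sleep.
ssh -tt user@host "nohup sleep 100 >> /tmp/xxx 2>&1 < /dev/null &"
```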
