Multiprocessing module with paramiko
I'm trying to use the paramiko Python module (1.7.7.1) to execute commands and/or transfer files to a group of remote servers in parallel. One task looks like this:
jobs = []
for obj in appObjs:
    if obj.stop_app:
        p = multiprocessing.Process(target=exec_cmd, args=(obj, obj.stop_cmd))
        jobs.append(p)
        print "Starting job %s" % (p)
        p.start()
"obj" contains, among other things, a paramiko SSHClient, transport, and SFTPClient. The appObjs list contains approximately 25 of these objects, and thus 25 connections to 25 different servers.
I get the following error, with paramiko's transport.py in the backtrace:
raise AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
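For reference, the Random.atfork() mentioned in the hint is PyCrypto's RNG re-seeding call; applied at the top of the forked worker it would look roughly like this (the body of exec_cmd below is only illustrative, not my actual code):

from Crypto import Random  # PyCrypto, which paramiko 1.7.x uses for its RNG

def exec_cmd(obj, cmd):
    # Re-seed PyCrypto's RNG in the child process, as the error hint suggests.
    Random.atfork()
    # obj.ssh is a placeholder for the SSHClient stored on the object.
    stdin, stdout, stderr = obj.ssh.exec_command(cmd)
    return stdout.read()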
I patched /usr/lib/python2.6/site-packages/paramiko/transport.py based on the post at https://github.com/newsapps/beeswithmachineguns/issues/17 but it doesn't seem to have helped. I've verified that the transport.py in the path mentioned above is the one being used. The paramiko mailing list appears to have disappeared.
Does this look like a problem in paramiko or am I misunderstanding/misapplying the multiprocessing module? Would anyone be willing to suggest a practical workaround? Many thanks,
2 Answers
UPDATE: As @ento notes, the forked ssh package has been merged back into Paramiko, so the answer below is now irrelevant and you should be using Paramiko again.
This is a known problem in Paramiko that was fixed in a fork of Paramiko (which had stalled at version 1.7.7.1), now known simply as the ssh package on PyPI (which brings things up to version 1.7.11 as of this writing).
Apparently there were problems getting some important patches into mainline Paramiko and the maintainer was unresponsive, so @bitprophet, the maintainer of Fabric, forked Paramiko under the new package name ssh on PyPI. The specific problem you mention is discussed here and is one of the reasons he decided to fork; you can read the gory details if you really want to.
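If you do switch to the fork, a minimal sketch of using it, assuming the package installs as ssh and keeps paramiko's class names (the host and username below are placeholders):

import ssh  # the forked package; classes keep their paramiko names

client = ssh.SSHClient()
client.set_missing_host_key_policy(ssh.AutoAddPolicy())
client.connect("server1.example.com", username="admin")
stdin, stdout, stderr = client.exec_command("uptime")
print stdout.read()
client.close()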
As one comment on the Paramiko issue mentions, the RNG error can be avoided by opening a separate SSH handle in each process; paramiko will then stop complaining.
The sample script below demonstrates this (using a Pool instead of individual Processes):
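Host names, the username, and the remote command here are placeholders; the point is that each worker builds its own SSHClient only after the fork, so no RNG state is shared across processes:

import multiprocessing

import paramiko


def run_command(host):
    # A fresh SSHClient is created inside the worker process, after the fork.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="admin")
    try:
        stdin, stdout, stderr = client.exec_command("uptime")
        return host, stdout.read()
    finally:
        client.close()


if __name__ == "__main__":
    hosts = ["server1.example.com", "server2.example.com"]
    pool = multiprocessing.Pool(processes=len(hosts))
    for host, output in pool.map(run_command, hosts):
        print "%s: %s" % (host, output.strip())
    pool.close()
    pool.join()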