Communicating with a subprocess multiple times (multiple reads of its stdout)
There's a similar question to mine on [this thread][1].
I want to send a command to my subprocess, interpret the response, then send another command. It would seem a shame to have to start a new subprocess to accomplish this, particularly if subprocess2 must perform many of the same tasks as subprocess1 (e.g. ssh, open mysql).
I tried the following:
subprocess1.stdin.write([my commands])
subprocess1.stdin.flush()
subprocess1.stdout.read()
But without a definite parameter for bytes to read(), the program gets stuck executing that instruction, and I can't supply an argument for read() because I can't guess how many bytes are available in the stream.
I'm running WinXP, Py2.7.1
EDIT
Credit goes to @regularfry for giving me the best solution for my real intention (read the comments in his response, as they pertain to accomplishing my goal through an SSH tunnel). (His/her answer has been voted up.) For the benefit of any viewer who hereafter comes for an answer to the title question, however, I've accepted @Mike Pennington's answer.
Your choices are:
@JellicleCat, I'm following up on the comments. I believe wexpect is a part of sage... AFAIK, it is not packaged separately, but you can download wexpect here.
Honestly, if you're going to drive programmatic ssh sessions, use paramiko. It is supported as an independent installation, has good packaging, and should install natively on Windows.
EDIT
Sample paramiko script to cd to a directory, execute an ls and exit... capturing all results...
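A sketch of what such a script might look like, for later readers: the host, credentials, and the `remote_ls` name are illustrative placeholders, not part of the original answer.

```python
def remote_ls(host, username, password, directory="/tmp"):
    """Open one SSH session, cd to a directory, run ls, and return all output."""
    # Imported inside the function so this sketch can be loaded for
    # reading even on a machine without paramiko installed.
    import paramiko

    client = paramiko.SSHClient()
    # Accept unknown host keys; convenient for a demo, risky in production.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        # exec_command returns (stdin, stdout, stderr) file-like objects.
        # Chaining with && keeps both steps in one shell invocation,
        # since each exec_command call runs in a fresh shell.
        stdin, stdout, stderr = client.exec_command("cd %s && ls" % directory)
        return stdout.read()
    finally:
        client.close()
```

Note that because each exec_command call gets a fresh shell, state such as the working directory does not persist between calls; for a truly interactive session, paramiko's invoke_shell() gives a single channel you can write to and read from repeatedly.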
This approach will work (I've done this) but will take some time and it uses Unix-specific calls. You'll have to abandon the subprocess module and roll your own equivalent based on fork/exec and os.pipe().
Use the fcntl.fcntl function to place the stdin/stdout file descriptors (read and write) for your child process into non-blocking mode (O_NONBLOCK option constant) after creating them with os.pipe().
Use the select.select function to poll or wait for availability on your file descriptors. To avoid deadlocks you will need to use select() to ensure that writes will not block, just like reads. Even still, you must account for OSError exceptions when you read and write, and retry when you get EAGAIN errors. (Even when using select before read/write, EAGAIN can occur in non-blocking mode; this is a common kernel bug that has proven difficult to fix.)
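A minimal sketch of that recipe, assuming a Unix host (the spawn and read_available names are illustrative):

```python
import errno
import fcntl
import os
import select


def set_nonblocking(fd):
    # Put a file descriptor into non-blocking mode via O_NONBLOCK.
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)


def spawn(argv):
    # Roll-your-own subprocess: one pipe for the child's stdin,
    # one for its stdout, then fork/exec.
    child_stdin_r, child_stdin_w = os.pipe()
    child_stdout_r, child_stdout_w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        os.dup2(child_stdin_r, 0)   # read commands from the stdin pipe
        os.dup2(child_stdout_w, 1)  # send output to the stdout pipe
        for fd in (child_stdin_r, child_stdin_w,
                   child_stdout_r, child_stdout_w):
            os.close(fd)
        try:
            os.execvp(argv[0], argv)  # never returns on success
        except OSError:
            os._exit(127)
    # Parent keeps the write end of stdin and the read end of stdout.
    os.close(child_stdin_r)
    os.close(child_stdout_w)
    set_nonblocking(child_stdin_w)
    set_nonblocking(child_stdout_r)
    return pid, child_stdin_w, child_stdout_r


def read_available(fd, timeout=1.0):
    # select() before reading, and retry on EAGAIN, which can still
    # occur in non-blocking mode even after select() says "ready".
    chunks = []
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break  # nothing more arrived within the timeout
        try:
            data = os.read(fd, 4096)
        except OSError as e:
            if e.errno == errno.EAGAIN:
                continue
            raise
        if not data:
            break  # EOF: the child closed its end
        chunks.append(data)
        timeout = 0.1  # drain whatever else is already buffered
    return b"".join(chunks)
```

For example, driving cat this way lets you write a command, read the response, and write again over the same child process, which is exactly what subprocess.communicate() cannot do.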
If you are willing to implement on the Twisted framework, they have supposedly solved this problem for you; all you have to do is write a Process subclass. But I haven't tried that myself yet.