Can I split/merge the output streams of subprocess.Popen?
I'm writing a wrapper class for use with a workflow manager. I would like to log output from an application (a child process executed via subprocess.Popen) in a certain way:

- stdout of the child should go to a log file and to stdout of the parent,
- stderr of the child should go to a different log file, but also to stdout of the parent.
I.e. all output from the child should end up merged on stdout (like with subprocess.Popen(..., stderr=subprocess.STDOUT)), so I can reserve stderr for log messages from the wrapper itself. On the other hand, the child's streams should go to different files to allow separate validation.
I've tried using a "Tee" helper class to tie two streams (stdout and the log file) together, so that Tee.write writes to both streams. However, this cannot be passed to Popen because subprocess uses OS-level functions for writing (see http://bugs.python.org/issue1631).
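For illustration, a minimal Tee along those lines might look like this (a sketch with hypothetical names, not the exact class I used). It works for writes made from Python code, but it has no OS-level file descriptor behind it, which is exactly why Popen cannot use it:

```python
import io


class Tee:
    """File-like object that duplicates every write to several streams."""

    def __init__(self, *streams):
        self.streams = streams

    def write(self, data):
        # Python-level write: fan the data out to all underlying streams.
        for stream in self.streams:
            stream.write(data)

    def flush(self):
        for stream in self.streams:
            stream.flush()


# Works fine for writes made from Python code ...
console, logfile = io.StringIO(), io.StringIO()
tee = Tee(console, logfile)
tee.write("hello\n")

# ... but there is no real OS file descriptor behind it, so
# subprocess.Popen(..., stdout=tee) cannot work: the child process
# writes through the OS, bypassing Tee.write entirely.
has_real_fd = hasattr(tee, "fileno")
```

Since the child inherits a raw file descriptor, anything passed as stdout/stderr to Popen must be backed by one; a pure-Python object like this is invisible to the child.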
The problem with my current solution (code snippet below, adapted mostly from here) is that output on stdout may not appear in the right order. How can I overcome this? Or should I use an altogether different approach? (If I stick with the code below, how do I choose a value for the number of bytes in os.read?)
```python
import subprocess, select, sys, os

call = ...  # set this
process = subprocess.Popen(call, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logs = {process.stdout: open("out.log", "w"), process.stderr: open("err.log", "w")}
done = {process.stdout: False, process.stderr: False}
while (process.poll() is None) or (not all(done.values())):
    ready = select.select([process.stdout, process.stderr], [], [])[0]
    for stream in ready:
        data = os.read(stream.fileno(), 1)
        if data:
            sys.stdout.write(data)
            logs[stream].write(data)
        else:
            done[stream] = True
logs[process.stdout].close()
logs[process.stderr].close()
```
By the way, this solution using "fcntl" has not worked for me. And I couldn't quite figure out how to adapt this solution to my case yet, so I haven't tried it.
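For reference, a non-blocking variant along the lines of the fcntl idea mentioned above might look like this (a sketch, not the linked solution itself; the echo command is a stand-in for the real child). It switches both pipes to non-blocking mode so each select() wakeup can drain whatever is available, instead of reading one byte at a time:

```python
import fcntl, os, select, subprocess

# Stand-in child that writes one line to each stream.
process = subprocess.Popen(
    ["sh", "-c", "echo out-line; echo err-line >&2"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

# Put both pipes into non-blocking mode.
for pipe in (process.stdout, process.stderr):
    flags = fcntl.fcntl(pipe.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(pipe.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)

captured = {process.stdout.fileno(): b"", process.stderr.fileno(): b""}
open_fds = set(captured)
while open_fds:
    ready, _, _ = select.select(sorted(open_fds), [], [])
    for fd in ready:
        data = os.read(fd, 4096)  # read up to 4 KiB; never blocks now
        if data:
            captured[fd] += data
        else:
            open_fds.discard(fd)  # empty read means EOF on this pipe
process.wait()
```

Note that this still cannot guarantee ordering between the two streams: the kernel buffers stdout and stderr in independent pipes, so the relative order of interleaved writes is lost before the parent ever sees them.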
1 Answer
If you set shell=True, you can pass a command string to subprocess that includes pipes, redirections, and the tee command.
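A minimal sketch of this approach (assuming bash for process substitution; the echo command and the log file names out.log/err.log are placeholders for the real child and paths):

```python
import subprocess

# Stand-in for the real child command.
child = "echo out-line; echo err-line >&2"

# Each stream is piped through its own tee via process substitution.
# Both tee processes inherit the parent's stdout, so the child's stdout
# and stderr end up merged there, while each is also copied to its log.
wrapped = "{ %s ; } > >(tee out.log) 2> >(tee err.log)" % child

rc = subprocess.call(wrapped, shell=True, executable="/bin/bash")
```

Two caveats: the tee processes run asynchronously, so the shell can return before the log files are completely flushed, and the ordering of interleaved stdout/stderr on the merged output is still not guaranteed.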