Jumbled out-of-order output from a child script run using subprocess
I'm using the following code to run another Python script. The problem I'm facing is that the output of that script comes out in a jumbled order.
When running it from the command line, I get the correct output, i.e.:
some output here
Editing xml file and saving changes
Uploading xml file back..
While running the script using subprocess, I'm getting some of the output in reverse order:
correct output till here
Uploading xml file back..
Editing xml file and saving changes
The script is executing without errors and making the right changes. So I think the culprit might be the code that is calling the child script, but I can't find the problem:
cmd = "child_script.py"
proc = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
(fout, ferr) = (proc.stdout, proc.stderr)
print "Going inside while - loop"
while True:
    line = proc.stdout.readline()
    print line
    fo.write(line)
    try:
        err = ferr.readline()
        fe.write(err)
    except Exception, e:
        pass
    if not line:
        break
[EDIT]: fo and fe are file handles to the output and error logs. Also, the script is being run on Windows. Sorry for missing these details.
2 Answers
There are a few problems with the part of the script you've quoted, I'm afraid:

What are fo and fe? Presumably those are objects to which you're writing the output of the child process? (Update: you indicate that these are both for writing output logs.)

You specify stderr=subprocess.STDOUT, so: (a) ferr will always be None in your loop and (b) due to buffering, standard output and error may be mixed in an unpredictable way. However, it looks from your code as if you actually want to deal with standard output and standard error separately, so perhaps try stderr=subprocess.PIPE instead.

It would be a good idea to rewrite your loop as jsbueno suggests, or to reduce it even further, since it seems that the aim is essentially just to write the standard output and standard error from the child process to fo and fe.

If you still see the output lines swapped in the file that fo is writing to, then we can only assume that there is some way in which this can happen in the child script, e.g. is the child script multi-threaded? Is one of the lines printed via a callback from another function?
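The "reduce it even further" idea above might be sketched like this. This is not the answerer's original snippet (that code block was lost from the page); the inline child program is a hypothetical stand-in for child_script.py, and out.log / err.log stand in for the files behind fo and fe:

```python
import subprocess
import sys

# Hypothetical stand-in for child_script.py: one line to stdout, one to stderr.
child = 'import sys; print("out line"); sys.stderr.write("err line\\n")'

# fo and fe play the role of the asker's output and error log handles.
with open("out.log", "w") as fo, open("err.log", "w") as fe:
    proc = subprocess.Popen(
        [sys.executable, "-c", child],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,      # a separate pipe, not STDOUT
        universal_newlines=True,     # text mode, portable across Python 2/3
    )
    out, err = proc.communicate()    # drains both streams without deadlock
    fo.write(out)
    fe.write(err)
```

communicate() replaces the readline() loop entirely, and because stderr now has its own pipe, ferr would no longer be None.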
Most of the times I've seen the order of output differ between runs, some output was being sent to the C standard IO stream stdout and some to stderr. The buffering characteristics of stdout and stderr vary depending on whether they are connected to a terminal, a pipe, a file, etc.

So perhaps you should configure both stdout and stderr to go to the same destination, so the same buffering will be applied to both streams.
Also, some programs open the terminal directly with open("/dev/tty", ...) (mostly so they can read passwords), so comparing terminal output with pipe output isn't always going to work.

Further, if your program mixes direct write(2) calls with standard IO calls, the order of output can be different based on the different buffering choices.

I hope one of these is right :) let me know which, if any.
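The "same destination" suggestion can be sketched as follows. This is an illustration, not code from the original answer: the inline child program is hypothetical, and it flushes stdout before switching to stderr so the two writes reach the shared pipe in the intended order:

```python
import subprocess
import sys

# Hypothetical child: flushing stdout before writing to stderr keeps the
# intended order even when stdout is a block-buffered pipe rather than a tty.
child = (
    'import sys;'
    'print("Editing xml file and saving changes");'
    'sys.stdout.flush();'
    'sys.stderr.write("Uploading xml file back..\\n")'
)

proc = subprocess.Popen(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,    # merge: both streams share one pipe
    universal_newlines=True,
)
merged, _ = proc.communicate()
print(merged)
```

Without the explicit flush, the block-buffered stdout line could reach the pipe after the stderr line, reproducing the swapped output the question describes.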