Garbled output from a child script run using subprocess

Posted 2024-10-19 07:09:13


I'm using the following code to run another Python script. The problem I'm facing is that the output of that script comes out in the wrong order.
While running it from the command line, I get the correct output, i.e.:

some output here
Editing xml file and saving changes
Uploading xml file back..

While running the script using subprocess, I'm getting some of the output in reverse order:

correct output till here
Uploading xml file back..
Editing xml file and saving changes

The script is executing without errors and making the right changes. So I think the culprit might be the code that is calling the child script, but I can't find the problem:

    cmd = "child_script.py"
    proc = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
    (fout ,ferr) = ( proc.stdout, proc.stderr )
    print "Going inside while - loop"
    while True:
        line = proc.stdout.readline()
        print line
        fo.write(line)
        try : 
            err = ferr.readline()
            fe.write(err)
        except Exception, e:
            pass
        if not line:
            pass
            break

[EDIT]: fo and fe are file handles to the output and error logs. Also, the script is being run on Windows. Sorry for missing these details.


Comments (2)

じ违心 2024-10-26 07:09:13


There are a few problems with the part of the script you've quoted, I'm afraid:

  • As mentioned in detly's comment, what are fo and fe? Presumably those are objects to which you're writing the output of the child process? (Update: you indicate that these are both for writing output logs.)
  • There's an indentation error on line 3. (Update: I've fixed that in the original post.)
  • You're specifying stderr=subprocess.STDOUT, so: (a) ferr will always be None in your loop and (b) due to buffering, standard output and error may be mixed in an unpredictable way. However, it looks from your code as if you actually want to deal with standard output and standard error separately, so perhaps try stderr=subprocess.PIPE instead (see the short sketch just after this list).
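
A minimal, runnable sketch of points (a) and (b), using an inline stand-in child instead of the real child_script.py (purely an assumption for illustration):

import subprocess
import sys

# A tiny stand-in child that writes one line to stdout and one to stderr
# (an assumption -- it merely takes the place of child_script.py here).
child = [sys.executable, "-c",
         "import sys; print('to stdout'); print('to stderr', file=sys.stderr)"]

# With stderr=subprocess.STDOUT the child's stderr is merged into the stdout
# pipe and Popen creates no separate stderr pipe at all:
merged = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(merged.stderr)          # None -> ferr.readline() raises AttributeError
print(merged.stdout.read())   # both lines arrive on the single stdout pipe
merged.wait()

# With stderr=subprocess.PIPE each stream keeps its own pipe:
split = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = split.communicate()  # read both pipes without risking a deadlock
print(out, err)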

It would be a good idea to rewrite your loop as jsbueno suggests:

from subprocess import Popen, PIPE
proc = Popen(["child_script.py"], stdout=PIPE, stderr=PIPE)
fout, ferr = proc.stdout, proc.stderr
for line in fout:
    print(line.rstrip())
    fo.write(line)
for line in ferr:
    fe.write(line)

... or to reduce it even further, since it seems that the aim is essentially just to write the child process's standard output and standard error to fo and fe, simply do:

proc = subprocess.Popen(["child_script.py"], stdout=fo, stderr=fe)
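
For what it's worth, a sketch of how that one-liner might sit in context; the log file names here are placeholders for whatever fo and fe actually point at:

import subprocess

# Placeholder names for the question's fo / fe log handles (an assumption).
with open("output.log", "wb") as fo, open("error.log", "wb") as fe:
    # The child's stdout and stderr are redirected straight into the files,
    # so the parent never has to read the pipes itself.
    proc = subprocess.Popen(["child_script.py"], stdout=fo, stderr=fe)
    returncode = proc.wait()   # wait for the child to finish

print("child exited with", returncode)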

If you still see the output lines swapped in the file that fo is writing to, then we can only assume that there is some way in which this can happen in the child script. e.g. is the child script multi-threaded? Is one of the lines printed via a callback from another function?

云淡月浅 2024-10-26 07:09:13


Most of the time that I've seen the order of output differ from one run to another, some of the output was being sent to the C standard I/O stream stdout and some was being sent to stderr. The buffering characteristics of stdout and stderr vary depending on whether they are connected to a terminal, a pipe, a file, etc.:

NOTES
   The stream stderr is unbuffered.  The stream stdout is
   line-buffered when it points to a terminal.  Partial lines
   will not appear until fflush(3) or exit(3) is called, or a
   newline is printed.  This can produce unexpected results,
   especially with debugging output.  The buffering mode of
   the standard streams (or any other stream) can be changed
   using the setbuf(3) or setvbuf(3) call.  Note that in case
   stdin is associated with a terminal, there may also be
   input buffering in the terminal driver, entirely unrelated
   to stdio buffering.  (Indeed, normally terminal input is
   line buffered in the kernel.)  This kernel input handling
   can be modified using calls like tcsetattr(3); see also
   stty(1), and termios(3).

So perhaps you should configure both stdout and stderr to go to the same source, so the same buffering will be applied to both streams.
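
A concrete sketch of that idea from the calling side: merge the two streams into one pipe so they share the same buffering. The -u flag, which makes a CPython child write its stdout/stderr unbuffered, is an extra measure not mentioned above and may or may not be wanted:

import subprocess
import sys

# Merge stderr into stdout so both streams go through one pipe with one
# buffering policy; "-u" additionally runs the child unbuffered.
proc = subprocess.Popen(
    [sys.executable, "-u", "child_script.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
for line in proc.stdout:
    sys.stdout.write(line.decode(errors="replace"))
proc.wait()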

Also, some programs open the terminal directly with open("/dev/tty", ...) (mostly so they can read passwords), so comparing terminal output with pipe output isn't always going to work.

Further, if your program is mixing direct write(2) calls with standard IO calls, the order of output can be different based on the different buffering choices.
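
A small Python analogue of that effect (os.write is Python's thin wrapper over the low-level write call); the reordering shows up once stdout is a pipe or a file rather than a terminal:

# reorder_demo.py -- compare "python reorder_demo.py" with
# "python reorder_demo.py | cat" (or redirecting to a file).
import os

print("first, via buffered stdio")            # block-buffered once stdout is a pipe
os.write(1, b"second, via raw write(2)\n")    # bypasses the buffer entirely

# On a terminal the lines appear in source order; through a pipe the raw
# write usually shows up first, because the buffered line is flushed only
# at exit.  print("first ...", flush=True) would restore the order.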

I hope one of these is right :) let me know which, if any.
