Piping subprocess.Popen output to files

Posted 2024-08-23 14:43:48

I need to launch a number of long-running processes with subprocess.Popen, and would like to have the stdout and stderr from each automatically piped to separate log files. Each process will run simultaneously for several minutes, and I want two log files (stdout and stderr) per process to be written to as the processes run.

Do I need to continually call p.communicate() on each process in a loop in order to update each log file, or is there some way to invoke the original Popen command so that stdout and stderr are automatically streamed to open file handles?

Comments (4)

ゃ懵逼小萝莉 2024-08-30 14:43:48

You can pass stdout and stderr as parameters to Popen()

subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None,
                 stderr=None, preexec_fn=None, close_fds=False, shell=False,
                 cwd=None, env=None, universal_newlines=False, startupinfo=None,
                 creationflags=0)

For example

>>> import subprocess
>>> with open("stdout.txt","wb") as out, open("stderr.txt","wb") as err:
...    subprocess.Popen("ls",stdout=out,stderr=err)
... 
<subprocess.Popen object at 0xa3519ec>
>>> 
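
For the multi-process setup described in the question, a minimal sketch might look like this (the job names and ping commands are placeholders, not part of the original answer): each process gets its own stdout/stderr file pair, and the parent just waits at the end, with no communicate() loop needed.

import subprocess

# Placeholder commands; substitute the real long-running programs.
commands = {
    "job1": ["ping", "-c", "60", "localhost"],
    "job2": ["ping", "-c", "60", "localhost"],
}

procs = []
for name, cmd in commands.items():
    # One stdout log and one stderr log per process.
    out = open(f"{name}.stdout.log", "wb")
    err = open(f"{name}.stderr.log", "wb")
    p = subprocess.Popen(cmd, stdout=out, stderr=err)
    # The child holds duplicated descriptors, so the parent's handles
    # can be closed right away without cutting off the child's output.
    out.close()
    err.close()
    procs.append(p)

# The kernel writes output directly to the files as each process runs;
# the parent only needs to wait for the processes to finish.
for p in procs:
    p.wait()
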
鹿童谣 2024-08-30 14:43:48

Per the docs,

stdin, stdout and stderr specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None.

So just pass the open-for-writing file objects as named arguments stdout= and stderr= and you should be fine!
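
A minimal sketch of that pattern (the file names are arbitrary), keeping the files open until the process has exited:

import subprocess

with open("out.log", "wb") as out, open("err.log", "wb") as err:
    p = subprocess.Popen(["ls", "-la"], stdout=out, stderr=err)
    p.wait()  # wait inside the with-block so the handles outlive the child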

情独悲 2024-08-30 14:43:48

I am simultaneously running two subprocesses, and saving the output from both into a single log file. I have also built in a timeout to handle hung subprocesses. When the output gets too big, the timeout always triggers, and none of the stdout from either subprocess gets saved to the log file. The answer posed by Alex above does not solve it.

import os
import signal
import subprocess
import sys
import time

# Currently open log file.
log = None

# If we send stdout to subprocess.PIPE, the tests with lots of output fill up
# the pipe and make the script hang. So, write the subprocess's stdout
# directly to the log file.
def run(cmd, logfile):
    global log
    p = subprocess.Popen(cmd, shell=True, universal_newlines=True,
                         stderr=subprocess.STDOUT, stdout=logfile)
    log = logfile
    return p


# To make a subprocess capable of timing out
class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    log.flush()
    raise Alarm


####
## This function runs a given command with the given flags, and records the
## results in a log file.
####
def runTest(cmd_path, flags, name):
    log = open(name, 'w')

    print("header", file=log)
    log.flush()

    cmd1_ret = run(cmd_path + "command1 " + flags, log)
    log.flush()
    cmd2_ret = run(cmd_path + "command2", log)
    sys.stdout.flush()

    start_timer = time.time()  # time how long this took to finish

    signal.signal(signal.SIGALRM, alarm_handler)
    signal.alarm(5)  # seconds

    try:
        cmd1_ret.communicate()
    except Alarm:
        print("myScript.py: Oops, taking too long!")
        os.kill(cmd1_ret.pid, signal.SIGKILL)  # was: os.system("kill -9 <pid>")
        os.kill(cmd2_ret.pid, signal.SIGKILL)

    signal.alarm(0)  # cancel the alarm if the processes finished in time
    end_timer = time.time()
    print("closing message", file=log)

    log.close()
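
As an aside, on Python 3.3+ the same hang protection can be had without SIGALRM, since wait() accepts a timeout directly. A rough sketch (the command string is a placeholder):

import subprocess

with open("test.log", "w") as log:
    p = subprocess.Popen("command1", shell=True,
                         stdout=log, stderr=subprocess.STDOUT)
    try:
        p.wait(timeout=5)  # replaces signal.alarm(5)
    except subprocess.TimeoutExpired:
        p.kill()  # SIGKILL, like the kill -9 above
        p.wait()  # reap the killed process so it doesn't linger as a zombie
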
勿忘心安 2024-08-30 14:43:48

Following up on Alex Martelli's answer, I've created a small example that works for me.

runit.sh

The program that is executed:

#!/usr/bin/env bash

sleep 5
ls -la
sleep 5
ls -la /Users
sleep 5
echo "Hello World!!"

Detaching a process in Python

import os
import shlex
from subprocess import Popen


if __name__ == "__main__":

    fhout = open(f'{os.getcwd()}/stdout.log', 'w')
    fherr = open(f'{os.getcwd()}/stderr.log', 'w')
    command = shlex.split(f"{os.getcwd()}/runit.sh")
    # command is already an argument list from shlex.split(), so it must be
    # run without shell=True; combining the two would pass only the first
    # list element to the shell and silently drop the rest.
    p = Popen(command, stdout=fhout, stderr=fherr, text=True, close_fds=True)

    pid = p.pid

    print(f'PID of the process is: {pid}')
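
Note that close_fds=True is already the default on modern Pythons, and the child above still belongs to the parent's session. If fully detaching the process is the goal, one common POSIX-only variation (an assumption about the intent, not part of the original answer) is start_new_session=True:

import shlex
from subprocess import Popen

# POSIX only: run the child in its own session so it is not tied to
# the parent's terminal or process group.
with open('stdout.log', 'w') as fhout, open('stderr.log', 'w') as fherr:
    p = Popen(shlex.split('./runit.sh'), stdout=fhout, stderr=fherr,
              start_new_session=True)
print(f'PID of the detached process is: {p.pid}')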
