Python logging: redirecting stdout from multiple processes



I am trying to capture the stderr and stdout of a number of processes and write their outputs to a log file using the Python logging module. The code below seems to achieve this. Presently I poll each process's stdout and write to the logger if there is any data. Is there a better way of doing this?

Also, I would like to have a master log of all the individual processes' activity; in other words, I want to automatically (without polling) write all the stdout/stderr of each process to a master logger. Is this possible?

Thanks

import fcntl
import logging
import logging.handlers
import os
from subprocess import Popen, PIPE, STDOUT
from time import sleep

logs_dir = "./logs/"            # assumed settings, defined elsewhere in the original
max_log_file_size = 1024 * 1024

class MyProcess:
    def __init__(self, process_name, param):
        self.param = param
        self.logfile = logs_dir + "Display_" + str(param) + ".log"
        self.args = [process_name, str(param)]
        self.logger_name = process_name + str(param)
        self.start()
        self.logger = self.initLogger()

    def start(self):
        self.process = Popen(self.args, bufsize=1, stdout=PIPE, stderr=STDOUT)  # line buffered
        # make each process's stdout non-blocking
        fd = self.process.stdout
        fl = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)

    def initLogger(self):
        f = logging.Formatter("%(levelname)s - %(name)s - %(asctime)s - %(message)s")
        fh = logging.handlers.RotatingFileHandler(self.logfile,
                                                  maxBytes=max_log_file_size,
                                                  backupCount=10)
        fh.setFormatter(f)

        logger = logging.getLogger(self.logger_name)
        logger.setLevel(logging.DEBUG)
        logger.addHandler(fh)  # file handler
        return logger

    def getOutput(self):  # non-blocking read of stdout
        try:
            return self.process.stdout.readline()
        except IOError:  # no data available on the non-blocking pipe
            return None

    def writeLog(self):
        line = self.getOutput()
        if line:
            self.logger.debug(line.strip())


process_name = 'my_prog'
num_processes = 10
processes = []

for param in range(num_processes):
    processes.append(MyProcess(process_name, param))

while True:
    for p in processes:
        p.writeLog()
    sleep(0.001)

Comments (1)

静赏你的温柔 2024-10-12 10:34:01


Your options here are:

  • Non-blocking I/O: This is what you have done :)

  • The select module: You can use either poll() or select() to dispatch reads for the different inputs.

  • Threads: Create a thread for each file descriptor you want to monitor and use blocking I/O. Not advisable for large numbers of file descriptors, but at least it works on Windows (a minimal sketch follows this list).

  • Third-party libraries: Apparently, you can also use Twisted or pyevent for asynchronous file access, but I never did that...
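
For the thread-based option, a minimal sketch could look like this (Python 3; 'my_prog' and the logger names are placeholders carried over from the question):

import logging
import threading
from subprocess import Popen, PIPE, STDOUT

def pump_output(process, logger):
    # Blocking reads are fine here: each pipe gets its own dedicated thread,
    # so one quiet child cannot stall the others.
    for line in iter(process.stdout.readline, b''):
        logger.debug(line.decode(errors='replace').rstrip())
    process.stdout.close()

logging.basicConfig(level=logging.DEBUG)
procs = [Popen(['my_prog', str(i)], stdout=PIPE, stderr=STDOUT) for i in range(10)]
for i, p in enumerate(procs):
    threading.Thread(target=pump_output,
                     args=(p, logging.getLogger('my_prog%d' % i)),
                     daemon=True).start()
for p in procs:
    p.wait()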

For more information, watch this video on non-blocking I/O with Python (mirocommunity.org/video/1501/pycon-2010-demystifying-non-bl…).

Since your approach seems to work, I would just stick with it if the imposed processor load does not bother you. If it does, I would go for select.select() on Unix.
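
Here is a sketch of what that could look like, replacing the busy-wait loop from the question with select.select(); it assumes the processes list and the MyProcess attributes defined there:

import select

# Map each pipe's file descriptor to its owning MyProcess instance.
fd_map = {p.process.stdout.fileno(): p for p in processes}

while fd_map:
    # Sleep inside the kernel until at least one child has output,
    # instead of busy-polling every millisecond (1-second timeout).
    readable, _, _ = select.select(list(fd_map), [], [], 1.0)
    for fd in readable:
        proc = fd_map[fd]
        line = proc.process.stdout.readline()  # pipe is non-blocking, so this returns what is buffered
        if line:
            proc.logger.debug(line.strip())
        else:
            # EOF: the child exited and closed its end of the pipe.
            del fd_map[fd]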

As for your question about the master logger: Because you want to tee off the individual outputs, you can't redirect everything to a master logger. You have to do this manually.
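
"Manually" is less work than it sounds, though: since everything already flows through the logging module, you can tee each line by attaching one shared handler to every per-process logger. A sketch ('master.log' is an assumed filename):

import logging
import logging.handlers

# One shared handler that every per-process logger writes to as well.
master = logging.handlers.RotatingFileHandler('master.log',
                                              maxBytes=10 * 1024 * 1024,
                                              backupCount=10)
master.setFormatter(logging.Formatter("%(levelname)s - %(name)s - %(asctime)s - %(message)s"))

# In MyProcess.initLogger(), after addHandler(fh):
#     logger.addHandler(master)
# Each logger.debug(...) call then lands in both the per-process
# file and master.log; the %(name)s field tells the entries apart.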
