Python logging: redirecting stdout from multiple processes
I am trying to capture the stderr and stdout of a number of processes and write their output to a log file using the Python logging module. The code below seems to achieve this. Presently I poll each process's stdout and write to the logger if there is any data. Is there a better way of doing this?

I would also like to have a master log of all the individual processes' activity; in other words, I want to automatically (without polling) write all the stdout/stderr of each process to a master logger. Is this possible?

Thanks
import fcntl
import logging
import logging.handlers
import os
from subprocess import Popen, PIPE, STDOUT
from time import sleep

logs_dir = "./logs/"             # directory for the per-process log files
max_log_file_size = 1024 * 1024  # rotate each log after 1 MB

class MyProcess:
    def __init__(self, process_name, param):
        self.param = param
        self.logfile = logs_dir + "Display_" + str(param) + ".log"
        self.args = [process_name, str(param)]
        self.logger_name = process_name + str(param)
        self.start()
        self.logger = self.initLogger()

    def start(self):
        self.process = Popen(self.args, bufsize=1, stdout=PIPE, stderr=STDOUT)  # line buffered
        # make each process's stdout non-blocking
        fd = self.process.stdout
        fl = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)

    def initLogger(self):
        f = logging.Formatter("%(levelname)s - %(name)s - %(asctime)s - %(message)s")
        fh = logging.handlers.RotatingFileHandler(self.logfile,
                                                  maxBytes=max_log_file_size,
                                                  backupCount=10)
        fh.setFormatter(f)
        logger = logging.getLogger(self.logger_name)
        logger.setLevel(logging.DEBUG)
        logger.addHandler(fh)  # file handler
        return logger

    def getOutput(self):  # non-blocking read of stdout
        try:
            return self.process.stdout.readline()
        except IOError:   # no data available on the non-blocking pipe
            return None

    def writeLog(self):
        line = self.getOutput()
        if line:
            self.logger.debug(line.strip())

process_name = 'my_prog'
num_processes = 10
processes = []
for param in range(num_processes):
    processes.append(MyProcess(process_name, param))

while True:
    for p in processes:
        p.writeLog()
    sleep(0.001)
Your options here are:

1. Non-blocking I/O: this is what you have done :)
2. The select module: you can use either poll() or select() to dispatch reads for the different inputs.
3. Threads: create a thread for each file descriptor you want to monitor and use blocking I/O. Not advisable for large numbers of file descriptors, but at least it works on Windows.
4. Third-party libraries: apparently, you can also use Twisted or pyevent for asynchronous file access, but I never did that...
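The select-module option could look something like the sketch below. It spawns a few dummy children (a `python -c` one-liner stands in for your real `my_prog`), keeps the pipes blocking, and lets `select()` tell us which ones have data, so no busy-wait loop is needed:

```python
import select
import subprocess
import sys

# Spawn a few child processes whose stdout/stderr we want to multiplex.
# The python one-liner is a stand-in for the real worker program.
cmd = [sys.executable, "-c", "print('hello'); print('world')"]
procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT) for _ in range(3)]

# Map each stdout file descriptor back to its process.
fd_map = {p.stdout.fileno(): p for p in procs}
lines = []

while fd_map:
    # Block until at least one child pipe is readable (data or EOF).
    readable, _, _ = select.select(list(fd_map), [], [])
    for fd in readable:
        p = fd_map[fd]
        line = p.stdout.readline()
        if line:                       # got a line -> hand it to the logger
            lines.append(line.decode().strip())
        else:                          # EOF: the child closed its stdout
            p.stdout.close()
            del fd_map[fd]

for p in procs:
    p.wait()

print(len(lines))  # 3 processes x 2 lines each -> 6
```

In real code you would call `self.logger.debug(...)` instead of appending to a list; the point is that the loop sleeps inside `select()` until there is actually something to read.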
For more information, watch this video on non-blocking I/O with Python.
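The thread-per-pipe option (3) can be sketched as follows; each reader thread uses plain blocking I/O, so unlike the fcntl approach it also works on Windows. Again, the `python -c` command is just a placeholder for the real worker:

```python
import subprocess
import sys
import threading

def pump(pipe, sink):
    """Blocking reader: forward each line from 'pipe' to 'sink' until EOF."""
    for line in iter(pipe.readline, b""):
        sink.append(line.decode().strip())   # replace with logger.debug(...)
    pipe.close()

cmd = [sys.executable, "-c", "print('tick'); print('tock')"]
procs, threads, captured = [], [], []
for _ in range(3):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    t = threading.Thread(target=pump, args=(p.stdout, captured), daemon=True)
    t.start()
    procs.append(p)
    threads.append(t)

for p in procs:
    p.wait()
for t in threads:
    t.join()

print(len(captured))  # -> 6
```

Appending to a shared list (or calling a logger, which locks internally) is safe here; with hundreds of children, though, one thread per pipe becomes expensive.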
Since your approach seems to work, I would just stick to it, if the imposed processor load does not bother you. If it does, I would go for select.select() on Unix.

As for your question about the master logger: because you want to tee off the individual outputs, you can't redirect everything to a master logger. You have to do this manually.
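Doing the tee manually just means writing each captured line to two loggers: the process's own and a shared master. A minimal sketch (the logger names and StringIO sinks are placeholders; real code would attach the RotatingFileHandlers from the question):

```python
import io
import logging

def make_logger(name, stream):
    # Placeholder setup: the question's code would use RotatingFileHandler here.
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.propagate = False          # keep each log self-contained
    h = logging.StreamHandler(stream)
    h.setFormatter(logging.Formatter("%(name)s - %(message)s"))
    logger.addHandler(h)
    return logger

proc_log_buf, master_log_buf = io.StringIO(), io.StringIO()
proc_logger = make_logger("my_prog0", proc_log_buf)
master_logger = make_logger("master", master_log_buf)

def write_log(line):
    # The manual tee: every captured line goes to both loggers.
    proc_logger.debug(line)
    master_logger.debug(line)

write_log("some output from process 0")
print(master_log_buf.getvalue().strip())  # -> master - some output from process 0
```

Inside `MyProcess.writeLog` this amounts to one extra `master_logger.debug(...)` call per line; since the master handler serializes writes, interleaved output from many processes stays intact line by line.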