Logging properly in Python with a FileHandler

Posted 2024-10-22 02:36:34


I am using Python logging in my Django application. A class that connects to a backend API initialises this logger with a FileHandler if needed. The class gets instantiated every time an API call is made. I have tried to make sure that additional handlers are not added each time, but

lsof | grep my.log 

shows an increasing number of open handles on my log file, and after a while my server fails because it hits the open-file limit.

self.logger = logging.getLogger("FPA")

try:
    # If the logger already has a handler and it is a FileHandler,
    # assume everything is set up and do nothing.
    if self.logger.handlers[0].__class__.__name__ == "FileHandler":
        pass
except Exception, e:
    # No handler attached yet (IndexError on an empty list) -- add one.
    print 'new filehandler added ' + str(e)
    ch = logging.FileHandler(FPA_LOG_TARGET)
    formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s - %(pathname)s @ line %(lineno)d")
    ch.setFormatter(formatter)
    self.logger.setLevel(logging.DEBUG)
    self.logger.addHandler(ch)

I realise this may not be the best way to do this, but I have not found the error in my implementation so far.


Comments (1)

流殇 2024-10-29 02:36:34


I did not analyse it for very long, but it looks like a concurrency problem.

Each process/thread keeps its own list of file handles to the opened log files.

How to fix it? For multithreaded code, make sure there is a single global dictionary where all handles are kept. For multiprocess code I'm afraid I do not have an answer... each process keeps its own file handles; maybe mapping them to memory (memory-mapped files could be an option), but I'm not sure that is a good solution - see this remark. A minimal sketch of the shared-dictionary idea follows.
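The following is only an illustrative sketch of that per-process registry, not the poster's code; the get_file_logger helper, the lock, and the format string are made up for the example.

import logging
import threading

# Hypothetical sketch: keep one FileHandler per log path for the whole
# process, so repeated instantiations of the API class reuse the same
# handle instead of opening the file again.
_file_handlers = {}
_handlers_lock = threading.Lock()

def get_file_logger(name, path):
    logger = logging.getLogger(name)
    with _handlers_lock:
        if path not in _file_handlers:
            handler = logging.FileHandler(path)
            handler.setFormatter(logging.Formatter(
                "%(asctime)s - %(levelname)s - %(message)s"))
            _file_handlers[path] = handler
            logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

Note that this only deduplicates handles within one process; with several worker processes each still ends up with its own open handle on the file.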

But the main question is why you need to do such a thing at all.

First of all, you can use a logging.conf file to initialize all your loggers/handlers/formatters, and when needed (e.g. a specific logger is verbose and you want to send it to a separate file) add another logger with a different filename. This is quite sensible if you add one logger per Django app, by putting the following in the app's main __init__.py:

import logging
log = logging.getLogger(__name__)

and then import log in the rest of the app code (views, models, etc.)

To use logging.conf, add the following lines to your settings.py:

import os
import logging
import logging.config  # needed: fileConfig lives in the logging.config submodule

DIRNAME = os.path.abspath(os.path.dirname(__file__))
logging.config.fileConfig(os.path.join(DIRNAME, 'logging.conf'))

Yes, it is manual, but you do not need to change code - only a config file.
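For reference, here is a minimal logging.conf sketch that the fileConfig() call above could load. The FPA logger name and the format string are carried over from the question; the handler names, the console handler and the fpa.log path are assumptions made for the example.

[loggers]
keys=root,FPA

[handlers]
keys=console,fpaFile

[formatters]
keys=default

[logger_root]
level=WARNING
handlers=console

[logger_FPA]
level=DEBUG
handlers=fpaFile
qualname=FPA
propagate=0

[handler_console]
class=StreamHandler
level=WARNING
formatter=default
args=(sys.stderr,)

[handler_fpaFile]
class=FileHandler
level=DEBUG
formatter=default
args=('fpa.log',)

[formatter_default]
format=%(asctime)s - %(levelname)s - %(message)s - %(pathname)s @ line %(lineno)d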

Another approach (if you really want to have one file per logger type) is to run a separate process which keeps the files open and accepts connections from the application. The logging module documentation has a nice example of this method. A client-side sketch is shown below.
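Sketching only the client side of that approach (the host, port and logger name here are assumptions; the receiving socket server is the part shown in the logging cookbook):

import logging
import logging.handlers

# Send log records over TCP to a separate listener process that owns the
# log file; the web workers never open the file themselves.
socket_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

logger = logging.getLogger("FPA")
logger.setLevel(logging.DEBUG)
logger.addHandler(socket_handler)
logger.debug("routed through the log server")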

Last, but not least, there are already some nice solutions which may be helpful. One quite good option is to use django-sentry. This module can log all your exceptions and 404s (with the extra middleware it includes) and capture all logging output (via the included logging handler).

The provided UI gives you the possibility to search all the logged messages and to filter them by severity and logging source. And it is not limited to that - you can simply add your own modules.
