Django and fcgi - logging issue
I have a site running in Django. The frontend is lighttpd, which uses fcgi to host Django.
I start my fcgi processes as follows:
python2.6 /<snip>/manage.py runfcgi maxrequests=10 host=127.0.0.1 port=8000 pidfile=django.pid
For logging, I have a RotatingFileHandler defined as follows:
file_handler = RotatingFileHandler(filename, maxBytes=10*1024*1024, backupCount=5, encoding='utf-8')
The logging is working. However, it looks like the files are rotating before they even reach 10 KB, let alone 10 MB. My guess is that each fcgi instance is only handling 10 requests and then re-spawning, and each respawn of fcgi creates a new file. I have confirmed that fcgi is starting up under a new process id every so often (hard to tell the exact interval, but it is under a minute).
Is there any way to get around this issue? I would like all fcgi instances to log to one file until it reaches the size limit, at which point a log file rotation would take place.
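For reference, a minimal sketch of how such a handler is typically wired up to a logger (the log path and logger name below are illustrative, not from the original post):

import logging
from logging.handlers import RotatingFileHandler

# Illustrative wiring; the original post only shows the handler line.
filename = '/var/log/mysite/django.log'  # assumed path for illustration
file_handler = RotatingFileHandler(filename, maxBytes=10*1024*1024, backupCount=5, encoding='utf-8')
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))

logger = logging.getLogger('mysite')  # illustrative logger name
logger.setLevel(logging.INFO)
logger.addHandler(file_handler)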
Comments (3)
As Alex stated, logging is thread-safe, but the standard handlers cannot be safely used to log from multiple processes into a single file.
ConcurrentLogHandler uses file locking to allow for logging from within multiple processes.
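A minimal sketch of what using the third-party ConcurrentLogHandler package might look like (the import path matches the package's cloghandler module; the log path and logger name are illustrative):

import logging
from cloghandler import ConcurrentRotatingFileHandler  # third-party package, not the stdlib

# Drop-in replacement for RotatingFileHandler that takes a file lock around
# writes and rollover, so multiple fcgi processes can share one log file.
handler = ConcurrentRotatingFileHandler('/var/log/mysite/django.log',  # illustrative path
                                        maxBytes=10*1024*1024, backupCount=5, encoding='utf-8')

logger = logging.getLogger('mysite')  # illustrative logger name
logger.setLevel(logging.INFO)
logger.addHandler(handler)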
In your shoes I'd switch to a TimedRotatingFileHandler -- I'm surprised that the size-based rotating file handler is giving this problem (as it should be impervious to which processes are producing the log entries), but the timed version (though not controlled by exactly the parameter you prefer) should solve it. Or, write your own, more solid, rotating file handler (you can take a lot from the standard library sources) that ensures varying processes are not a problem (as they should never be).
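A minimal sketch of the timed variant suggested above, using the stdlib TimedRotatingFileHandler (the rotation schedule and path are illustrative choices):

import logging
from logging.handlers import TimedRotatingFileHandler

# Rotates by time rather than size: here once per day at midnight, keeping 5 old files.
handler = TimedRotatingFileHandler('/var/log/mysite/django.log',  # illustrative path
                                   when='midnight', backupCount=5, encoding='utf-8')

logger = logging.getLogger('mysite')
logger.setLevel(logging.INFO)
logger.addHandler(handler)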
As you appear to be using the default file opening mode of append ("a") rather than write ("w"), if a process re-spawns it should append to the existing file and then roll over when the size limit is reached. So I am not sure that what you are seeing is caused by re-spawning fcgi processes. (This of course assumes that the filename remains the same when the process re-spawns.)
Although the logging package is thread-safe, it does not handle concurrent access to the same file from multiple processes, because there is no standard way to do that in the stdlib. My normal advice is to set up a separate daemon process which implements a socket server and logs events received over it to a file; the other processes then just use a SocketHandler to communicate with the logging daemon. All events will then get serialised to disk properly. The Python documentation contains a working socket server example which could serve as a basis for this need.
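A minimal sketch of the client side of that setup: each fcgi process attaches a stdlib SocketHandler and sends its records to a single logging daemon, which is the only process that writes (and rotates) the file. The host, port and logger name here are illustrative; the daemon itself could be based on the socket server example in the Python logging documentation:

import logging
import logging.handlers

# Each fcgi worker forwards its log records over TCP to the logging daemon.
socket_handler = logging.handlers.SocketHandler('localhost',
                                                logging.handlers.DEFAULT_TCP_LOGGING_PORT)

logger = logging.getLogger('mysite')  # illustrative logger name
logger.setLevel(logging.INFO)
logger.addHandler(socket_handler)

logger.info('this record is pickled and sent to the logging daemon')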