How to flush a buffered log4j FileAppender?

Posted on 2024-09-06 04:02:57

In log4j, when using a FileAppender with BufferedIO=true and BufferSize=xxx properties (i.e. buffering is enabled), I want to be able to flush the log during normal shutdown procedure. Any ideas on how to do this?
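For reference, a minimal sketch of the kind of configuration in question (BufferedIO and BufferSize are the log4j 1.2 FileAppender property names; the file name and pattern here are illustrative):

log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=app.log
log4j.appender.FILE.BufferedIO=true
log4j.appender.FILE.BufferSize=8192
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n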

Comments (8)

坐在坟头思考人生 2024-09-13 04:02:57

When shutting down the LogManager:

LogManager.shutdown();

all buffered logs get flushed.
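
A minimal sketch of wiring this into a normal shutdown, assuming log4j 1.x (the hook class and thread name are illustrative):

import org.apache.log4j.LogManager;

public class Log4jShutdownHook {
    public static void install() {
        // On JVM shutdown, close all appenders; closing a buffered
        // FileAppender flushes whatever is still sitting in its buffer.
        Runtime.getRuntime().addShutdownHook(
                new Thread(LogManager::shutdown, "log4j-shutdown"));
    }
}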

盗琴音 2024-09-13 04:02:57
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;

import org.apache.log4j.FileAppender;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

// Hypothetical logger used only for error reporting below.
private static final Logger log = Logger.getLogger("LogFlusher");

public static void flushAllLogs()
{
    try
    {
        Set<FileAppender> flushedFileAppenders = new HashSet<FileAppender>();
        Enumeration<?> currentLoggers = LogManager.getLoggerRepository().getCurrentLoggers();
        while(currentLoggers.hasMoreElements())
        {
            Object nextLogger = currentLoggers.nextElement();
            if(nextLogger instanceof Logger)
            {
                Logger currentLogger = (Logger) nextLogger;
                Enumeration<?> allAppenders = currentLogger.getAllAppenders();
                while(allAppenders.hasMoreElements())
                {
                    Object nextElement = allAppenders.nextElement();
                    if(nextElement instanceof FileAppender)
                    {
                        FileAppender fileAppender = (FileAppender) nextElement;
                        if(!flushedFileAppenders.contains(fileAppender) && !fileAppender.getImmediateFlush())
                        {
                            flushedFileAppenders.add(fileAppender);
                            // Temporarily enable immediateFlush so the next event
                            // written through this appender flushes its buffer,
                            // then restore buffered operation.
                            fileAppender.setImmediateFlush(true);
                            currentLogger.info("FLUSH");
                            fileAppender.setImmediateFlush(false);
                        }
                    }
                }
            }
        }
    }
    catch(RuntimeException e)
    {
        log.error("Failed flushing logs", e);
    }
}
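
The trick here is the immediateFlush toggle: WriterAppender flushes after every event while immediateFlush is true, so logging the sentinel "FLUSH" message through the same logger pushes the buffered content out before buffering is switched back on.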
鹿港巷口少年归 2024-09-13 04:02:57
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender;

public static void flushAll() {
    // Log4j 2: the cast assumes log4j-core is the logging implementation.
    final LoggerContext logCtx = (LoggerContext) LogManager.getContext();
    for (final org.apache.logging.log4j.core.Logger logger : logCtx.getLoggers()) {
        for (final Appender appender : logger.getAppenders().values()) {
            // Flush every appender that writes through an OutputStreamManager,
            // which includes FileAppender and RollingFileAppender.
            if (appender instanceof AbstractOutputStreamAppender) {
                ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
            }
        }
    }
}
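
For what it's worth, on log4j 2.6 and later, org.apache.logging.log4j.LogManager.shutdown() stops the logger context and flushes and closes its appenders, so the loop above is mainly useful when you want to flush without shutting logging down.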
灼痛 2024-09-13 04:02:57

Maybe you could override WriterAppender#shouldFlush( LoggingEvent ), so it would return true for a special logging category, like log4j.flush.now, and then you call:

LoggerFactory.getLogger("log4j.flush.now").info("Flush")

http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/WriterAppender.html#shouldFlush%28org.apache.log4j.spi.LoggingEvent%29
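
A minimal sketch of that idea, assuming log4j 1.2 (the subclass name and category are illustrative):

import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class FlushOnDemandAppender extends FileAppender {
    @Override
    protected boolean shouldFlush(LoggingEvent event) {
        // Flush when immediateFlush says so, and additionally whenever
        // an event arrives on the dedicated "log4j.flush.now" category.
        return super.shouldFlush(event)
                || "log4j.flush.now".equals(event.getLoggerName());
    }
}

Configure this class in place of org.apache.log4j.FileAppender; logging anything to the "log4j.flush.now" category then forces a flush.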

Smile简单爱 2024-09-13 04:02:57

Sharing my experience with using "Andrey Kurilov"'s code example, or at least something similar.

What I actually wanted to achieve was asynchronous log entries with manual flush (immediateFlush = false), ensuring that an idle buffer's content is flushed before the bufferSize is reached.

The initial performance results were actually comparable with the ones achieved with the AsyncAppender - so I think it is a good alternative to it.

The AsyncAppender uses a separate thread (and an additional dependency on the disruptor jar), which makes it more performant, but at the cost of more CPU and even more disk flushing (even under high load, flushes are made in batches).

So if you want to save disk IO operations and CPU load, but still want to ensure your buffers will be flushed asynchronously at some point, this is the way to go.
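
A minimal sketch of the periodic manual flush, assuming the flushAllLogs() helper from the earlier answer lives in a (hypothetical) LogFlusher class:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleBufferFlusher {
    public static ScheduledExecutorService start() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "log4j-idle-flusher");
                    t.setDaemon(true); // don't keep the JVM alive just for flushing
                    return t;
                });
        // Flush idle buffer contents every few seconds, well before
        // bufferSize would force a flush on its own.
        scheduler.scheduleWithFixedDelay(LogFlusher::flushAllLogs, 5, 5, TimeUnit.SECONDS);
        return scheduler;
    }
}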

瑶笙 2024-09-13 04:02:57

The only solution that worked for me is waiting for a while:

private void flushAppender(Appender appender) {
    // this flush seems to be useless
    ((AbstractOutputStreamAppender<?>) appender).getManager().flush(); 
    try {
        Thread.sleep(500); // wait for log4j to flush logs
    } catch (InterruptedException ignore) {
    }
}
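
The sleep here most likely compensates for an asynchronous appender sitting in front of the file appender: flushing the output stream does not drain a pending async queue, so the wait gives the background writer thread time to catch up.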
尽揽少女心 2024-09-13 04:02:57

Try:

LogFactory.releaseAll();
黯然#的苍凉 2024-09-13 04:02:57

I have written an appender that fixes this, see GitHub or use name.wramner.log4j:FlushAppender in Maven. It can be configured to flush on events with high severity and it can make the appenders unbuffered when it receives a specific message, for example "Shutting down". Check the unit tests for configuration examples. It is free, of course.
