Log4j: one log file per request

We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we log to a single log file for all requests. It becomes tedious to debug an issue for a given request because the logs for every request end up in the same file.

So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. In reality, there could be multiple consumers sending request IDs to our application. So the question is how to segregate the log files based on the request.

We cannot start & stop the production server every time, which rules out using an overridden file appender with a date-time stamp or request ID. That is the approach explained in the article below:
http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/

I also tried playing around with these alternatives:

http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html

http://www.mail-archive.com/[email protected]/msg05099.html

This approach gives the desired results, but it does not work properly if multiple requests are sent at the same time: due to concurrency issues, log entries end up scattered across the wrong files.

I'm hoping for some help from you folks. Thanks in advance.

淑女气质 2024-08-29 11:05:26

Look at SiftingAppender, which ships with logback (log4j's successor); it is designed to handle the creation of appenders based on runtime criteria.

If your application needs to create just one log file per session, simply create a discriminator based on the session ID. Writing a discriminator involves 3 or 4 lines of code and thus should be fairly easy. Shout on the logback-user mailing list if you need help.
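
For illustration, here is a minimal sketch of such a discriminator, assuming the session ID is stored in the MDC under a "sessionId" key (the class name and key below are illustrative, not prescribed by logback):

SessionIdDiscriminator.java

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    // Must match the key referenced in the SiftingAppender configuration.
    private static final String KEY = "sessionId";

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        // Fall back to a fixed value so events logged outside any session still get a file.
        String sessionId = event.getMDCPropertyMap().get(KEY);
        return sessionId != null ? sessionId : "unknown";
    }

    @Override
    public String getKey() {
        return KEY;
    }
}

You would then point the SiftingAppender at it via the class attribute of its discriminator element.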

尹雨沫 2024-08-29 11:05:26

This problem is handled very well by Logback. I suggest opting for it if you have the freedom.

Assuming you can, what you will need is SiftingAppender. It allows you to separate log files according to some runtime value, which means you have a wide array of options for how to split your log files.

To split your files on requestId, you could do something like this:

logback.xml

<configuration>

  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>requestId</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
        <file>${requestId}.log</file>
        <append>false</append>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
        </layout>
      </appender>
    </sift>
  </appender>

  <root level="DEBUG">
    <appender-ref ref="SIFT" />
  </root>

</configuration>

As you can see (inside the discriminator element), the files used for writing logs are discriminated on requestId. That means each request will go to a file with a matching requestId. Hence, if you had two requests with requestId=1 and one request with requestId=2, you would have 2 log files: 1.log (2 entries) and 2.log (1 entry).

At this point you might wonder how to set the key. This is done by putting key-value pairs in the MDC (note that the key matches the one defined in the logback.xml file):

RequestProcessor.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestProcessor {

    private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);

    public void process(Request request) {
        // The key must match the discriminator key declared in logback.xml.
        MDC.put("requestId", request.getId());
        log.debug("Request received: {}", request);
    }
}

And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
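
One caveat worth adding (my own note, not part of the original answer): if requests are handled by pooled worker threads, the MDC entry should be cleared when processing finishes, otherwise a reused thread keeps logging under the previous request's file. A hedged variant of the process method:

    public void process(Request request) {
        MDC.put("requestId", request.getId());
        try {
            log.debug("Request received: {}", request);
            // ... actual request processing ...
        } finally {
            // Remove the entry so a pooled thread does not inherit a stale requestId.
            MDC.remove("requestId");
        }
    }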

ぺ禁宫浮华殁 2024-08-29 11:05:26

using filePattern

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Properties>
<property name="filePattern">${date:yyyy-MM-dd-HH_mm_ss}</property>
</Properties>
<Appenders>
<File name="File" fileName="export/logs/app_${filePattern}.log" append="false">
<PatternLayout
pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
</File>
</Appenders>
<Loggers>
<Root level="debug">
<AppenderRef ref="Console" />
<AppenderRef ref="File" />
</Root>
</Loggers>
</Configuration>
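
Note that this configuration creates a new file per application run (the ${date:...} lookup is resolved once at startup), not per request. For a true per-request split in Log4j 2, the rough equivalent of logback's SiftingAppender is the RoutingAppender, keyed on a ThreadContext (Log4j 2's MDC) entry. A sketch under that assumption, reusing the requestId key from the earlier answers:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Routing name="Routing">
      <!-- $$ defers the ThreadContext lookup from configuration time to log time -->
      <Routes pattern="$${ctx:requestId}">
        <Route>
          <File name="request-${ctx:requestId}"
                fileName="export/logs/request-${ctx:requestId}.log">
            <PatternLayout pattern="%d [%t] %-5level %logger{36} - %msg%n" />
          </File>
        </Route>
      </Routes>
    </Routing>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="Routing" />
    </Root>
  </Loggers>
</Configuration>

Each worker would call ThreadContext.put("requestId", id) before processing and ThreadContext.remove("requestId") afterwards, analogous to the MDC usage shown earlier.
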
苏佲洛 2024-08-29 11:05:25

Here's my question on the same topic:
dynamically creating & destroying logging appenders

I followed this up in a thread on the Log4J mailing list, where I discuss doing exactly this:
http://www.qos.ch/pipermail/logback-user/2009-August/001220.html

Ceki Gülcü (inventor of log4j) didn't think it was a good idea... he suggested using Logback instead.

We went ahead and did this anyway, using a custom file appender. See my discussions above for more details.
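
For context, a rough sketch of the general technique the answer alludes to: attaching a dedicated log4j 1.x FileAppender per request and detaching it afterwards. All names here are illustrative, and the concurrency caveat from the question still applies, since a logger's appender list is shared across threads:

PerRequestLogging.java

import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public final class PerRequestLogging {

    // Attach a dedicated FileAppender for this request to the given logger.
    public static FileAppender attach(Logger logger, String requestId) throws IOException {
        FileAppender appender = new FileAppender(
                new PatternLayout("%d [%t] %-5p %c - %m%n"),
                "logs/request-" + requestId + ".log",
                true /* append */);
        appender.setName("request-" + requestId);
        logger.addAppender(appender);
        return appender;
    }

    // Detach and close the appender once the request is done.
    public static void detach(Logger logger, FileAppender appender) {
        logger.removeAppender(appender);
        appender.close();
    }
}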
