What's wrong with my log4j filter? (It makes Tomcat hang!)

Posted 2024-08-21 16:50:41

We recently added a custom filter to our log4j configuration. The goal is to stop repeating the same log line again and again, and instead replace the repeats with:

the log + "last line repeated x times"

We wrote the following filter, and it works fine. But then we began to notice strange deadlocks and hangs in Tomcat 5.5.

When we remove this filter, the errors stop occurring. (Yes, we are quite sure of that.)

Beyond this empirical observation, a thread dump of the JVM shows many log4j threads blocked, waiting for a Tomcat monitor to be released.

Here is the code of our filter. Pretty basic. What's wrong with it?

public class RepeatFilter extends Filter {
String lastMessage;
Category lastLogger;
int repeatCount = 0;

@Override
public int decide(LoggingEvent event) {
    // get the rendered (String) form of the message
    String msg = event.getRenderedMessage();

    if(msg == null || msg.startsWith("Last message repeated ")) {
        return Filter.NEUTRAL;
    }

    if(msg.equals(lastMessage)) {
        repeatCount++;
        return Filter.DENY;
    } else {
        if(repeatCount>0){
            String msgToLog = "Last message repeated " + repeatCount + " time(s).";
            repeatCount = 0;
            lastLogger.log(event.getLevel(), msgToLog);
        }
    }

    lastLogger = event.getLogger();
    lastMessage = msg;
    return Filter.NEUTRAL;
}

}

Edit: Yes, there is recursion when we use a logger inside the filter. In fact, the server hangs right after the lastLogger.log(...) line. But we really need to write a custom message (the "repeated x time(s)" one). We have tried not to use a logger inside the filter, but we haven't found a way.

Edit 2: I am using log4j version 1.2.15.

Edit 3: Something we will try:

  • nest all our appenders inside an AsyncAppender

Edit 4: Wrapping each appender in an AsyncAppender does not solve the issue.
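One workaround worth considering for the recursion itself (a sketch, not from the original post): guard decide() with a per-thread flag, so that the log call the filter makes for its "repeated" summary passes straight through instead of re-entering the bookkeeping. Everything here (the class name, the simplified decide(String) signature, the NEUTRAL constant, the bodyRuns counter) is illustrative; only the guard pattern is the point.

```java
/** Sketch (not from the original post): a per-thread re-entrancy guard.
 *  When the filter itself emits the "repeated" summary, the nested call
 *  back into decide() is detected and waved through immediately, so the
 *  filter never recurses into its own bookkeeping. */
public class ReentrancyGuardDemo {
    static final int NEUTRAL = 0;   // stand-in for Filter.NEUTRAL

    // true while the current thread is already inside decide()
    private static final ThreadLocal<Boolean> inFilter =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    // instrumentation: counts how many times the guarded body actually ran
    static int bodyRuns = 0;

    static int decide(String msg) {
        if (inFilter.get()) {
            return NEUTRAL;         // nested call from our own log(): pass through
        }
        inFilter.set(Boolean.TRUE);
        try {
            bodyRuns++;
            // a real filter would call lastLogger.log(...) here, which re-enters
            // decide(); the guard above turns that into a harmless pass-through
            decide("Last message repeated 3 time(s).");
            return NEUTRAL;
        } finally {
            inFilter.set(Boolean.FALSE);
        }
    }
}
```

Note that this only stops a thread from recursing into its own filter; it does not by itself resolve a cross-thread lock-ordering problem, so the hang may persist even with such a guard in place.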



若言繁花未落 2024-08-28 16:50:41

Possible Deadlock

Looking at the log4j source code, I see that a lock is obtained on the Category when an event is logged, and then a lock is obtained on each AppenderSkeleton to which the event is dispatched. If two Category instances use the same appender (which is common), then trying to log from an Appender (or from a Filter attached to that appender) can definitely cause a deadlock.

For example, suppose there are two Category objects, C1 and C2, and one Appender, A. Thread T1 acquires the lock on C1 and then on A, and then starts processing the Filter chain on A.

At the same time, Thread T2 acquires the lock on C2. It cannot acquire the lock on A because it is held by T1, so it waits.

Now suppose that T1 is directed (by the filter) to log a message to C2. It cannot acquire the lock on C2 because it is held by T2. Deadlock.

If you post more info from the thread dump, it should be possible to tell whether this is a true deadlock and, if so, which objects are under contention.
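The C2/A scenario above can be made concrete with a small, terminating sketch. ReentrantLock objects stand in for the Category and Appender monitors (log4j 1.2 actually uses synchronized blocks), and tryLock with a timeout replaces the second blocking acquisition so the program reports the cycle instead of hanging; all names are illustrative.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/** Sketch of the lock-ordering conflict described above. The main thread
 *  plays T1 (already inside appender A), the spawned thread plays T2. */
public class LockOrderDemo {
    static final ReentrantLock categoryC2 = new ReentrantLock();
    static final ReentrantLock appenderA  = new ReentrantLock();

    /** Returns true if T1 could take C2 while the locks were crossed. */
    static boolean simulate() throws InterruptedException {
        appenderA.lock();                  // T1 is inside A's synchronized append()
        Thread t2 = new Thread(() -> {
            categoryC2.lock();             // T2 logs via C2: takes C2 first...
            try {
                appenderA.lock();          // ...then blocks on A, held by T1
                appenderA.unlock();
            } finally {
                categoryC2.unlock();
            }
        });
        t2.start();
        Thread.sleep(200);                 // let T2 take C2 and block on A

        // T1's filter now logs to C2: the reversed order cannot make progress
        boolean acquired = categoryC2.tryLock(300, TimeUnit.MILLISECONDS);
        if (acquired) categoryC2.unlock();
        appenderA.unlock();                // release A so T2 (and the demo) can finish
        t2.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(simulate()
            ? "no deadlock"
            : "deadlock: T1 waits for C2 while T2 waits for A");
    }
}
```

With real monitors and no timeout, both threads would wait forever, which matches the blocked log4j threads in the reported thread dump.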

Possible Solution

If this is indeed what's happening, it seems the problem can be avoided by logging the "repeated" message only to the "current" logger, that is, the logger of the event being evaluated by the filter. One way to do this is to track repeats on a per-logger basis.

public class RepeatFilter
  extends Filter
{

  private final Map<Category, Repeat> repeats = 
    new HashMap<Category, Repeat>();

  @Override
  public int decide(LoggingEvent event)
  {
    String message = event.getRenderedMessage();
    if ((message == null) || message.startsWith("Last message repeated "))
      return Filter.NEUTRAL;
    Category logger = event.getLogger();
    Repeat r = repeats.get(logger);
    if (r == null)
      repeats.put(logger, r = new Repeat());
    if (message.equals(r.message)) {
      ++r.count;
      return Filter.DENY;
    }
    if (r.count > 0) {
      logger.log(r.level, "Last message repeated " + r.count + " time(s).");
      r.count = 0;
    }
    r.message = message;
    r.level = event.getLevel();
    return Filter.NEUTRAL;
  }

  private final class Repeat
  {

    Priority level;

    String message;

    int count;

  }

}

This might not be exactly what you want, because events on other loggers will not "flush" a category's pending "repeated" message. On the other hand, it is likely to compress the log even more effectively, since the same message is most likely to be repeated on the same logger.
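A separate caveat, not raised in the answer: decide() can run on many threads at once, and the plain HashMap plus unsynchronized Repeat fields above are not thread-safe. A hedged sketch of the same per-logger bookkeeping using ConcurrentHashMap and per-entry locking (loggers are keyed by name and log4j types are left out so the sketch is self-contained; the emission of the "repeated N time(s)" line is left as a comment):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Thread-safe variant of the per-logger repeat bookkeeping.
 *  Illustrative only; requires Java 8+ for computeIfAbsent. */
public class RepeatTracker {
    static final class Repeat {
        String message;   // last message seen on this logger
        int count;        // repeats suppressed since then
    }

    private final Map<String, Repeat> repeats = new ConcurrentHashMap<>();

    /** Returns true when the event should be delivered, false when it is
     *  a repeat to suppress (where the filter would return DENY). */
    public boolean shouldLog(String loggerName, String message) {
        Repeat r = repeats.computeIfAbsent(loggerName, k -> new Repeat());
        synchronized (r) {   // one lock per logger entry, never a Category lock
            if (message.equals(r.message)) {
                r.count++;
                return false;
            }
            // a real filter would log "Last message repeated " + r.count +
            // " time(s)." here, on this same logger, before resetting
            r.count = 0;
            r.message = message;
            return true;
        }
    }
}
```

Because the only lock taken is the per-entry monitor, this bookkeeping never participates in the Category/Appender lock ordering described above.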

