Overriding the default log4j.properties in Hadoop
How do I override the default log4j.properties in Hadoop? If I set hadoop.root.logger=WARN,console, it does not print the logs to the console, whereas what I actually want is for it not to print INFO messages to the log file. I added a log4j.properties file to my jar, but I am unable to override the default one. In short, I want the log file to contain only errors and warnings. Here is the default configuration:
# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
#
# Daily Rolling File Appender
#
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
#
# TaskLog Appender
#
#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#
#Security appender
#
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#new logger
# Define some default values that can be overridden by system properties
hadoop.security.logger=INFO,console
log4j.category.SecurityLogger=${hadoop.security.logger}
#
# Rolling File Appender
#
#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Logfile size and 30-day backups
#log4j.appender.RFA.MaxFileSize=1MB
#log4j.appender.RFA.MaxBackupIndex=30
#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# FSNamesystem Audit logging
# All audit events are logged at INFO level
#
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=WARN
# Custom Logging levels
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
#
# Job Summary Appender
#
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
#
# MapReduce Audit Log Appender
#
# Set the MapReduce audit log filename
#hadoop.mapreduce.audit.log.file=hadoop-mapreduce.audit.log
# Appender for AuditLogger.
# Requires the following system properties to be set
# - hadoop.log.dir (Hadoop Log directory)
# - hadoop.mapreduce.audit.log.file (MapReduce audit log filename)
#log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO,MRAUDIT
#log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
#log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.MRAUDIT.File=${hadoop.log.dir}/${hadoop.mapreduce.audit.log.file}
#log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
#log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
9 Answers
Modify the log4j file inside HADOOP_CONF_DIR. Note that a Hadoop job won't consider the log4j file of your application; it will consider the one inside HADOOP_CONF_DIR. If you want to force Hadoop to use some other log4j file, try one of these:

1. You can try what @Patrice said, i.e. -Dlog4j.configuration=file:/path/to/user_specific/log4j.xml

2. Customize HADOOP_CONF_DIR/log4j.xml and set the logger level for "your" classes as you wish. Other users won't be affected by this unless they have classes with the same package structure. This won't work for core Hadoop classes, since all users would be affected.

3. Create your own customized log4j file. Replicate the HADOOP_CONF_DIR directory and put your log4j file inside it, then export HADOOP_CONF_DIR pointing to your conf directory (see the sketch below). Other users will keep pointing to the default one.
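A minimal sketch of option 3, assuming a Hadoop 1.x installation whose stock conf directory lives at /usr/lib/hadoop/conf (all paths and the jar name are illustrative):

# copy the stock conf directory and point HADOOP_CONF_DIR at the copy
cp -r /usr/lib/hadoop/conf ~/my-hadoop-conf
# edit ~/my-hadoop-conf/log4j.properties, e.g. send only WARN and above to the file appender:
#   hadoop.root.logger=WARN,DRFA
export HADOOP_CONF_DIR=~/my-hadoop-conf
hadoop jar my-job.jar com.example.MyJob    # now picks up ~/my-hadoop-conf/log4j.properties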
If you use the default log4j.properties file, the logging settings get overridden by environment variables from the startup script. If you want to use the default log4j and simply change the logging level, edit $HADOOP_CONF_DIR/hadoop-env.sh. For example, to change your logger to the DEBUG log level and the DRFA appender, use the following.
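A sketch of the line to add, assuming the Hadoop 1.x launcher scripts, which read the HADOOP_ROOT_LOGGER environment variable to populate hadoop.root.logger:

# in $HADOOP_CONF_DIR/hadoop-env.sh
export HADOOP_ROOT_LOGGER=DEBUG,DRFA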
Either remove log4j.properties from your hadoop jar, or make sure your jar's log4j.properties is first in the classpath (log4j picks the first log4j.properties it finds in the classpath), or specify -Dlog4j.configuration=PATH_TO_FILE. See the documentation to learn how log4j finds the configuration.
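For the first option, one quick way to drop the bundled file from a jar, assuming the zip utility is available (the jar name is illustrative):

zip -d my-job.jar log4j.properties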
I was faced with the same problem (CDH3U3, Hadoop 0.20.2). I finally found a solution (note the file: prefix in the path):
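A sketch of the flag in question; the path is illustrative, and the file: prefix matters because log4j interprets log4j.configuration as a URL rather than a plain filesystem path:

-Dlog4j.configuration=file:/path/to/user_specific/log4j.properties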
Maven Packaging:
Once I realized I needed to add my custom debug-log.properties file to src/main/resources, Maven added it to the root of application.jar, and then it was just a matter of referring to it (or not) with -Dlog4j.configuration=debug-log.properties from the command line.

Oozie <java> Action:
In regard to Oozie, use <java-opts>-Dlog4j.configuration=${log4jConfig}</java-opts> in the workflow.xml actions and define the following in a job.properties file:
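Since ${log4jConfig} is an Oozie EL variable, job.properties has to define it; a hypothetical entry (the property name merely matches the reference above) could look like:

# hypothetical name, matching the ${log4jConfig} reference above
log4jConfig=debug-log.properties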
Oozie <map-reduce> Action:
As mentioned by Sulpha, for Hadoop 1.2.1 it is important to override the task-log4j.properties that is present inside hadoop-core.jar. In my pseudo-distributed mode, I was unable to print the debug messages of my Pig UDFs and had to delete task-log4j.properties from hadoop-core.jar and replace it with a copy of $HADOOP_INSTALL/conf/log4j.properties.
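A sketch of that jar surgery with the standard jar tool; the jar file name is illustrative, and backing the jar up first is advisable:

cd $HADOOP_INSTALL
cp conf/log4j.properties task-log4j.properties
# overwrite the task-log4j.properties bundled at the root of the jar
jar uf hadoop-core-1.2.1.jar task-log4j.properties
rm task-log4j.properties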
If a log4j properties file is already configured inside the jar file, you can override it by simply putting -Dlog4j.configuration= before the -classpath. Here is a sample:

java -Dlog4j.configuration=..\conf\log4j.properties -classpath %CLASSPATH%
Put the log4j.configuration option in the child java options, i.e. in mapred.child.java.opts (see the sketch below). You must put the log4j_debug.properties file on all slave servers at the same directory path, such as /home/yourname/log4j_debug.properties or /tmp/log4j_debug.properties. This setting overwrites any existing mapred.child.java.opts setting. If you want to use it together with other options, such as -Xmx32m (meaning a 32 MB heap size), combine the flags in the same value:
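A sketch of the combined invocation, assuming the job is launched with hadoop jar and a driver that goes through ToolRunner (so the generic -D options are honored):

hadoop jar my-job.jar com.example.MyJob \
  -Dmapred.child.java.opts="-Xmx32m -Dlog4j.configuration=file:/home/yourname/log4j_debug.properties"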
In Hadoop 1.2.1 there are two config files: log4j.properties and task-log4j.properties. So to make the example above work, the change has to be made in task-log4j.properties, not in log4j.properties. You can add the following line to your task-log4j.properties:
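The exact line depends on what you want to see; a hypothetical example (the logger name is an assumption, so substitute your own package) that raises one package to DEBUG in the task logs:

log4j.logger.com.example.myjob=DEBUG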