The xxx.out logs under SPARK_HOME/logs/ grow extremely large

Problem description:

  1. spark-2.1.1-bin-hadoop2.7 on CentOS 7, a Standalone high-availability cluster with 5 nodes in total.
  2. My problem: under SPARK_HOME/logs on every node there are several log files that grow extremely large and cannot be rolled by size or date: spark-admin-org.apache.spark.deploy.worker.Worker-1-HOST_NAME.out, spark-admin-org.apache.spark.deploy.master.Master-1-HOST_NAME.out, and spark-admin-org.apache.spark.deploy.history.HistoryServer-1-HOST_NAME.out. Much of their content is at DEBUG level; a few lines are excerpted below (a quick sizing check follows the excerpt):
20-01-20.10:27:56.704 [WorkerUI-27     ] DEBUG SelectChannelEndPoint   - Local interests updating 0 -> 1 for SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,R,-,0/30000,HttpConnection}{io=1,kio=0,kro=1}
20-01-20.10:27:56.704 [WorkerUI-27     ] DEBUG SelectorManager         - Queued change org.spark_project.jetty.io.SelectChannelEndPoint$1@64e3a4da
20-01-20.10:27:56.704 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Selector loop woken up from select, 0/1 selected
20-01-20.10:27:56.704 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Running change org.spark_project.jetty.io.SelectChannelEndPoint$1@64e3a4da
20-01-20.10:27:56.704 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectChannelEndPoint   - Key interests updated 0 -> 1 on SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,R,-,0/30000,HttpConnection}{io=1,kio=1,kro=1}
20-01-20.10:27:56.704 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Selector loop waiting on select
20-01-20.10:27:56.707 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Selector loop woken up from select, 1/1 selected
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectChannelEndPoint   - Key interests updated 1 -> 0 on SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,R,-,3/30000,HttpConnection}{io=1,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectChannelEndPoint   - Local interests updating 1 -> 0 for SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,R,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Queued change org.spark_project.jetty.io.SelectChannelEndPoint$1@64e3a4da
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG AbstractConnection      - FILL_INTERESTED-->FILLING HttpConnection@74a9ce80[FILLING,SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}][p=HttpParser{s=CLOSED,0 of 0},g=HttpGenerator{s=START},c=HttpChannelOverHttp@743ee72f{r=1,c=false,a=IDLE,uri=}]
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Running change org.spark_project.jetty.io.SelectChannelEndPoint$1@64e3a4da
20-01-20.10:27:56.708 [WorkerUI-25-selector-ServerConnectorManager@1a0e7661/1] DEBUG SelectorManager         - Selector loop waiting on select
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG HttpConnection          - HttpConnection@74a9ce80[FILLING,SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}][p=HttpParser{s=CLOSED,0 of 0},g=HttpGenerator{s=START},c=HttpChannelOverHttp@743ee72f{r=1,c=false,a=IDLE,uri=}] onFillable HttpChannelState@485a5da2{s=IDLE i=true a=null}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG ChannelEndPoint         - filled -1 SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG ChannelEndPoint         - ishut SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,Open,in,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG AbstractEndPoint        - onClose SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG ChannelEndPoint         - close SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG SelectorManager         - Destroyed SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG AbstractConnection      - onClose HttpConnection@74a9ce80[FILLING,SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1}][p=HttpParser{s=CLOSED,0 of 0},g=HttpGenerator{s=START},c=HttpChannelOverHttp@743ee72f{r=1,c=false,a=IDLE,uri=}]
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG AbstractEndPoint        - onClose SelectChannelEndPoint@7dedb6f1{/10.221.216.68:43512<->8081,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG HttpParser              - atEOF HttpParser{s=CLOSED,0 of 0}
20-01-20.10:27:56.708 [WorkerUI-30     ] DEBUG HttpParser              - parseNext s=CLOSED HeapByteBuffer@76b3285a[p=0,l=0,c=16384,r=0]={<<<>>>HEAD / HTTP/1.0\r\n...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
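
Before touching any configuration, it is worth confirming which .out files are the offenders and that they are still growing. A minimal sketch, assuming SPARK_HOME is exported on each node:

# List the daemon .out logs sorted by size, largest last
du -h "$SPARK_HOME"/logs/*.out | sort -h

# Confirm the files are still being written to (size changes after a short wait)
ls -l "$SPARK_HOME"/logs/*.out; sleep 5; ls -l "$SPARK_HOME"/logs/*.out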

  3. I also modified SPARK_HOME/conf/log4j.properties, but it had no effect (a verification sketch follows the config):
# Send everything at INFO and above to the file appender
log4j.rootCategory=INFO, file
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
log4j.appender.file = org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File = /export/data/spark/logs/spark.log
log4j.appender.file.Append = true
log4j.appender.file.Threshold = INFO
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout = org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss}  [ %t:%r ] - [ %p ]  %m%n
log4j.appender.file.encoding=UTF-8

log4j.logger.org.apache.spark.deploy.history.HistoryServer=INFO,historyserver
log4j.appender.historyserver=org.apache.log4j.RollingFileAppender
log4j.appender.historyserver.layout=org.apache.log4j.PatternLayout
log4j.appender.historyserver.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.appender.historyserver.MaxFileSize=1GB
log4j.appender.historyserver.MaxBackupIndex=7
log4j.appender.historyserver.File=/export/servers/spark/logs/historyserver.log

log4j.logger.org.mortbay=WARN

log4j.logger.org.apache.hadoop=WARN
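
One way to check whether the daemons are actually reading this file: log4j 1.x reports where it loaded its configuration from when the log4j.debug system property is set. A minimal sketch, assuming the daemons can be restarted; SPARK_DAEMON_JAVA_OPTS is the standard spark-env.sh hook for daemon JVM options:

# In spark-env.sh: make log4j 1.x print which configuration it picks up
export SPARK_DAEMON_JAVA_OPTS="-Dlog4j.debug=true $SPARK_DAEMON_JAVA_OPTS"

# Restart a daemon, then look for a "log4j: Using URL ..." line near the top of its .out file
$SPARK_HOME/sbin/stop-master.sh && $SPARK_HOME/sbin/start-master.sh
head -n 40 $SPARK_HOME/logs/spark-*Master*.out

If the reported URL points inside one of your application jars rather than at SPARK_HOME/conf/log4j.properties, the cause described below applies.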

Finally found the cause! Because I needed to run my business code in the Spark environment, I had modified spark-env.sh and added:

export SPARK_CLASSPATH=/export/servers/appJars/*

That directory holds the third-party jars my application depends on, several dozen jar files in total. After removing this line, SPARK_HOME/conf/log4j.properties took effect again; a sketch for locating the offending jar follows.
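
A plausible explanation, though not confirmed in the original troubleshooting, is that one of those jars bundles its own log4j.properties or log4j.xml, and SPARK_CLASSPATH placed it ahead of SPARK_HOME/conf on the daemons' classpath. A minimal sketch for finding such jars, using the /export/servers/appJars path from above:

# Scan each jar for a bundled log4j configuration file
for j in /export/servers/appJars/*.jar; do
  if unzip -l "$j" 2>/dev/null | grep -Eq 'log4j\.(properties|xml)'; then
    echo "bundled log4j config found in: $j"
  fi
done

If such a jar turns up, repackaging it without the config file, or supplying application jars through --jars at submit time instead of SPARK_CLASSPATH (which is deprecated in Spark 2.x anyway), avoids the clash.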
