Hadoop fails to start
Distributed Hadoop installation fails. The cluster is configured with one master and three slaves. On the master, the NameNode and ResourceManager fail to start. The JDK, environment variables, and passwordless SSH login have all been set up. The log is as follows:
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG: java = 1.7.0_65
************************************************************/
2015-11-06 17:27:42,226 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-11-06 17:27:42,229 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-11-06 17:27:42,572 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-11-06 17:27:42,673 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-11-06 17:27:42,673 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-11-06 17:27:42,675 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master.hadoop:9000
2015-11-06 17:27:42,676 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master.hadoop:9000 to access this namenode/service.
2015-11-06 17:27:53,008 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-11-06 17:27:53,049 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-11-06 17:27:53,053 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-11-06 17:27:53,063 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-06 17:27:53,066 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-11-06 17:27:53,066 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-11-06 17:27:53,066 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-11-06 17:27:53,120 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-11-06 17:27:53,122 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-11-06 17:27:53,139 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        ... 8 more
2015-11-06 17:27:53,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-11-06 17:27:53,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-11-06 17:27:53,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-11-06 17:27:53,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: 0.0.0.0:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        ... 8 more
2015-11-06 17:27:53,143 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-11-06 17:27:53,144 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master.hadoop/127.0.0.1
************************************************************/
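
The FATAL line says the NameNode web UI port 0.0.0.0:50070 is already bound by another process. A quick way to see what is holding it (a rough sketch, assuming a Linux shell on the master and that netstat or lsof is installed):

jps                          # list running Hadoop JVMs; a leftover NameNode may still be up
netstat -tnlp | grep 50070   # show the PID that currently owns port 50070
lsof -i :50070               # alternative to netstat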
Comments (4)
How did the OP end up solving this? The same problem has been bothering me for days.
On the master, the NameNode, SecondaryNameNode, and ResourceManager processes all start normally, and on the slaves the DataNode and NodeManager processes also start normally. But the master:50070 page shows 0 DataNodes. What could be the cause?
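
A couple of quick checks for the "0 DataNodes" case (a sketch, assuming the Hadoop 2.x command-line tools; master.hadoop and port 9000 are taken from the log above). Note that the shutdown message shows master.hadoop/127.0.0.1, and if the master hostname resolves to 127.0.0.1 the NameNode only listens on loopback, so DataNodes on other hosts cannot register:

hdfs dfsadmin -report      # run on the master; shows how many live DataNodes the NameNode sees
telnet master.hadoop 9000  # run on a slave; verifies the NameNode RPC port is reachable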
hadoop there are no corrupt file blocks
This is obviously a port-in-use problem: Caused by: java.net.BindException: Address already in use
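
If a stale or duplicate process really is holding 50070, a minimal recovery sequence might look like this (a sketch, assuming HADOOP_HOME points at the install directory; <pid> stands for the PID found with netstat/lsof above):

$HADOOP_HOME/sbin/stop-dfs.sh    # stop any HDFS daemons that are still running
kill <pid>                       # only if a non-Hadoop process still owns the port
$HADOOP_HOME/sbin/start-dfs.sh   # start the NameNode and DataNodes again

Alternatively, the web UI can be moved to a free port by setting dfs.namenode.http-address in hdfs-site.xml.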