Hadoop NullPointerException
I am trying to set up Hadoop the Michael Noll way, using two computers.
When I try to format HDFS, it shows a NullPointerException.
hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$
I don't know what is causing this. Please help me figure out the problem. I am new to this topic, so please make your answer as non-technical as possible. :)
Let me know if you need more information.
5 Answers
It seems that your secondary namenode has trouble connecting to the primary namenode, which is definitely required for the whole system to work, since there is checkpointing that needs to be done. So I guess there's something wrong with your network configuration, including:
${HADOOP_HOME}/conf/core-site.xml, which contains something like this:
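(The XML sample did not survive the page conversion. As a hedged sketch, a core-site.xml for a Hadoop 0.20.2 two-node cluster usually just names the master as the default filesystem; the host name and port below are assumptions borrowed from Michael Noll's tutorial.)

<?xml version="1.0"?>
<!-- conf/core-site.xml: should be identical on master and slave.
     If fs.default.name is left at its default, NameNode.getAddress()
     ends up with a null authority, which produces exactly the
     NullPointerException in NetUtils.createSocketAddr shown above. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- "master" must resolve via /etc/hosts on every node;
         port 54310 is the one used in Michael Noll's tutorial. -->
    <value>hdfs://master:54310</value>
  </property>
</configuration>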
and /etc/hosts. This file is really a slippery slope; you've got to be careful with these IP aliases, which should be consistent with the hostname of the machine that has that IP:
127.0.0.1       localhost
127.0.1.1       zach

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.153   master    # take care of these two!!!
192.168.99.146  slave1
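A quick sanity check that names and addresses actually line up (a sketch; the host names come from the question's cluster, and these are ordinary shell commands to run on each node):

hostname              # should print the name the other nodes use for this machine
ping -c 1 master      # should answer from 192.168.1.153, not from 127.0.1.1
ping -c 1 slave       # every name listed in conf/slaves must resolve as well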
Apparently the defaults are not correct, so you have to add them yourself as described in this post.
It worked for me.
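(The link to the post was lost in the page conversion. The missing defaults are presumably the fs.default.name-style properties discussed above; what follows is a hedged sketch of the retry steps after adding them — the paths come from the question, and the rsync push is an assumption about how the conf directory is kept in sync:)

# Push the fixed configuration to the slave, then reformat and restart.
rsync -av conf/ slave:/home/hadoop/project/hadoop-0.20.2/conf/
bin/hadoop namenode -format   # only safe while the cluster holds no data
bin/start-dfs.sh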
It seems you have not installed Hadoop on your datanode (slave) at all, or you have installed it under the wrong path. The correct path in your case should be /home/hadoop/project/hadoop-0.20.2/.
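A quick way to verify this from the master (a sketch; the path is taken from the error messages in the question):

ssh slave 'ls -l /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh'
# "No such file or directory" here reproduces the slave errors above;
# the Hadoop tree must live at the same absolute path on every node.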
Your bash scripts seem not to have execute permissions, or they don't even exist:

slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
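If the files do exist but lost their execute bit (for example after a copy), restoring it is enough (a sketch to run on the slave; the path comes from the question):

chmod +x /home/hadoop/project/hadoop-0.20.2/bin/*.sh
chmod +x /home/hadoop/project/hadoop-0.20.2/bin/hadoop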
You might have set your user directory wrong or something; it looks like it is searching the wrong directories for your files.
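(A hedged way to see which locations the start scripts actually use — hadoop-env.sh, masters, and slaves are standard Hadoop 0.20 conf files; the expected values shown are assumptions based on the question:)

echo $HADOOP_HOME                    # often /home/hadoop/project/hadoop-0.20.2
grep JAVA_HOME conf/hadoop-env.sh    # JAVA_HOME must be set and uncommented
cat conf/masters conf/slaves         # the hosts the start scripts ssh into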