Hadoop NullPointerException

Posted 2024-10-29 02:44:08

I was trying to set up a multi-node Hadoop cluster following Michael Noll's tutorial, using two computers.

When I tried to format HDFS, it showed a NullPointerException.

hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$ 

I don't know what is causing this. Please help me figure out the problem. I am not new to the topic, but please keep your answer as non-technical as possible. :)

If more information is needed, kindly let me know.

Comments (5)

So要识趣 2024-11-05 02:44:08
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)

It seems that your secondary namenode has trouble connecting to the primary namenode, which is required for the whole system to work, since the secondary namenode has checkpointing work to do. So I would guess there's something wrong with your network configuration, including:

  • ${HADOOP_HOME}/conf/core-site.xml, which should contain something like this:

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/app/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
    
        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
            <description>The name of the default file system.  A URI whose
            scheme and authority determine the FileSystem implementation.  The
            uri's scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class.  The uri's authority is used to
            determine the host, port, etc. for a filesystem.</description>
        </property>
    </configuration>
    
  • and /etc/hosts. This file is really a slippery slope; be careful with the IP aliases here, which should be consistent with the hostname of the machine that has that IP (a quick consistency check is sketched after this list).

        127.0.0.1   localhost
        127.0.1.1   zac
    
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
    
        192.168.1.153 master     #pay attention to these two!!!
        192.168.99.146 slave1
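
A quick way to sanity-check both points above from the master is sketched below. This is only an illustrative shell session, assuming the install path from the question and the host names master and slave1 from the example /etc/hosts; adjust them to your own setup.

    # Is fs.default.name actually set in the conf directory being used?
    grep -A 1 "fs.default.name" /home/hadoop/project/hadoop-0.20.2/conf/core-site.xml

    # Do the host names resolve to the LAN addresses you expect (not 127.0.1.1)?
    getent hosts master slave1

    # Does each machine report the hostname you used in /etc/hosts?
    hostname
    ssh slave1 hostname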
    
不念旧人 2024-11-05 02:44:08

Apparently the defaults are not correct, so you have to add them yourself as described in this post.

It worked for me.

近箐 2024-11-05 02:44:08

It seems you have not installed Hadoop on your datanode (the slave) at all, or you have installed it in the wrong path. The correct path in your case should be /home/hadoop/project/hadoop-0.20.2/.
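
One way to check this from the master, and to copy the installation over if it really is missing, is sketched below. The host name slave and the install path come from the error messages in the question; the rest is just an illustrative assumption.

    # Does the expected path exist on the slave at all?
    ssh slave 'ls -ld /home/hadoop/project/hadoop-0.20.2/bin'

    # If not, copy the whole installation over so both machines use the same path
    ssh slave 'mkdir -p /home/hadoop/project'
    scp -r /home/hadoop/project/hadoop-0.20.2 slave:/home/hadoop/project/

    # Re-check the daemon script the error complained about
    ssh slave 'ls -l /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh'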

残花月 2024-11-05 02:44:08

Your bash scripts seem not to have execute permissions, or don't even exist:

slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
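
If the files do exist on the slave, checking and fixing the permissions might look roughly like this (an illustrative sketch using the path from the error above):

    # On the slave: are the scripts present and executable?
    ls -l /home/hadoop/project/hadoop-0.20.2/bin/

    # If they exist but are not executable, make them so
    chmod +x /home/hadoop/project/hadoop-0.20.2/bin/*.sh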

情泪▽动烟 2024-11-05 02:44:08

You might have set your user directory wrong or something; it looks like the start scripts are looking in the wrong directories to find your files.
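
Two things worth checking in that case are which hosts the start scripts are actually told to use, and whether the hadoop user's home directory is the same on both machines. A rough sketch (conf/masters and conf/slaves are the standard host-list files in Hadoop 0.20.x; the host name slave is taken from the question):

    # Which hosts do the start scripts try to reach?
    cat /home/hadoop/project/hadoop-0.20.2/conf/masters
    cat /home/hadoop/project/hadoop-0.20.2/conf/slaves

    # Is the hadoop user's home directory the same on both machines?
    echo "$HOME"
    ssh slave 'echo $HOME'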
