Hadoop DataNodes start successfully, but Live Nodes on Master:8088 shows 0

Posted 2025-02-03 17:35:20

Recently, while configuring Hadoop, I found that the DataNode processes start normally (verified with jps), but the number of live nodes shown at Master:8088 is 0.

Following are the configuration files on the master node and data node:

/etc/hosts

192.168.127.130   Master
192.168.127.129   Slave
192.168.127.131   Slave1
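Since a live-node count of 0 is often a name-resolution problem, here is a minimal check of how each node resolves the cluster host names (a sketch; the Master/Slave/Slave1 names come from the /etc/hosts above, and getent/hostname are standard Linux tools):

getent hosts Master Slave Slave1    # each name should resolve to the address listed above
hostname                            # should print this node's cluster name (e.g. Slave)
hostname -i                         # should print the matching 192.168.127.x address, not 127.0.x.x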

core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://Master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>
</configuration>
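With fs.defaultFS set to hdfs://Master:9000, every DataNode has to reach the NameNode RPC port in order to register. A quick way to confirm what a slave actually picked up and whether the port is open (a sketch; assumes nc/netcat is installed on the node):

hdfs getconf -confKey fs.defaultFS   # print the NameNode address this node's configuration resolves to
nc -zv Master 9000                   # check that the NameNode RPC port is reachable from this node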

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>Master:50090</value>
        </property>
        <property>
                <name>dfs.namenode.http.address</name>
               <value>Master:50070</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/local/hadoop/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/local/hadoop/tmp/dfs/data</value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>  
        </property>
        <property>
                <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
                <value>false</value>
        </property>
</configuration>
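The HDFS live-node figure can also be read on the command line; if the DataNode processes run but never register with the NameNode, the report below shows 0 live datanodes (run on the Master):

hdfs dfsadmin -report   # lists live, dead and decommissioning DataNodes with their capacity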

mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>Master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>Master:19888</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property>
        <property>
                <name>mapreduce.map.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property>
        <property>
                <name>mapreduce.reduce.env</name>
                <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
        </property> 
</configuration>

yarn-site.xml

<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>Master</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.cpu-vcores</name>
                <value>1</value>
        </property>

        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>

</configuration>
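Port 8088 is the ResourceManager web UI, so "live nodes = 0" there means no NodeManager counts as active. Listing nodes in every state, including UNHEALTHY, shows whether they registered at all (a sketch, run on the Master):

yarn node -list -all   # NodeManagers in all states; unhealthy nodes are not counted as active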

Running jps on the NameNode gives the following:

Jps
NameNode
SecondaryNameNode
ResourceManager

and jps on the DataNode:

DataNode
Jps
NodeManager

This seems right to me, but when I look at Master:8088, no live nodes are listed.
Why am I getting this error?

By the way, I have already checked the logs of all the nodes and no errors are shown. Each node can ping the others.
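For reference, the log files checked on a worker node (a sketch; /usr/local/hadoop/logs is the default log directory for this install path, and the exact file names depend on the user and hostname that started the daemons):

tail -n 100 /usr/local/hadoop/logs/hadoop-*-datanode-*.log    # DataNode log
tail -n 100 /usr/local/hadoop/logs/yarn-*-nodemanager-*.log   # NodeManager log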

I have also tried:

1. Stopping and restarting Hadoop; it did not work.

2. Stopping Hadoop and deleting all the files in /usr/local/hadoop/tmp.

3. Formatting the NameNode with hdfs namenode -format (roughly the sequence sketched below); it still did not work.
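For steps 2 and 3, the sequence was roughly the following (a sketch; stop-dfs.sh/stop-yarn.sh/start-dfs.sh/start-yarn.sh are the standard sbin scripts, and the tmp directory has to be cleared on every node, not only on the Master):

/usr/local/hadoop/sbin/stop-yarn.sh
/usr/local/hadoop/sbin/stop-dfs.sh
rm -rf /usr/local/hadoop/tmp/*        # on the Master and on every slave
hdfs namenode -format                 # on the Master only
/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh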


Comments (1)

-小熊_ 2025-02-10 17:35:20

I found out the problem. My DataNode was unhealthy because the local-dirs usable space was below the configured utilization percentage (no more usable space). After allocating more space to the disk, the problem was solved.
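For anyone hitting the same thing: the NodeManager marks its local-dirs bad once disk usage crosses a utilization threshold, and an unhealthy node is excluded from the live-node count on 8088. A quick check, plus the yarn-site.xml property that controls the cutoff (a sketch; 95.0 is only an illustrative value, the default is around 90%):

df -h /usr/local/hadoop/tmp   # free space on the partition holding the Hadoop data directories

<property>
        <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
        <value>95.0</value>
</property>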
