Hadoop localhost:9870 browser interface not working
I need to do data analysis using Hadoop, so I installed Hadoop and configured it as shown below, but localhost:9870 is not working. I have even formatted the namenode every time I worked with it. Some articles and answers on this forum mention that 9870 replaced 50070. I am on Windows 10. I also tried the answers on this forum, but none of them worked. The JAVA_HOME and HADOOP_HOME paths are set, and the paths to Hadoop's bin and sbin are set as well. Can anyone tell me what I am doing wrong here?
I referred to this site for the installation and configuration: https://medium.com/@pedro.a.hdez.a/hadoop-3-2-2-installation-guide-for-windows-10-454f5b5c22d3
core-site.xml
I have set up the Java path in this XML as well.
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9870</value>
</property>
hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>C:\hadoop-3.2.2\data\namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>C:\hadoop-3.2.2\data\datanode</value>
</property>
mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
3 Answers
If you look at the namenode logs, it very likely has an error saying something about a port already being in use.
The default fs.defaultFS port should be 9000 (see https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html); you shouldn't change this without good reason. The namenode web UI is not the value in fs.defaultFS. Its default port is 9870, and it is defined by dfs.namenode.http-address in hdfs-site.xml.
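A minimal sketch of that fix, assuming the same C:\hadoop-3.2.2 layout as in the question: point fs.defaultFS at port 9000 in core-site.xml, and optionally pin the web UI address in hdfs-site.xml (0.0.0.0:9870 is already its default).

core-site.xml:

<configuration>
  <property>
    <!-- HDFS RPC endpoint used by clients and daemons, not by the browser UI -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <!-- Namenode web UI address; shown only to be explicit, since 0.0.0.0:9870 is the default -->
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:9870</value>
  </property>
</configuration>

After editing, restart the daemons so the new addresses take effect.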
You can do analysis on Windows without Hadoop, using Spark, Hive, MapReduce, etc. directly, and it'll have direct access to your machine without being limited by YARN container sizes.
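For instance, a local Spark shell needs no running Hadoop daemons at all (assuming Spark is installed and on PATH; the file path is only an example):

C:\> spark-shell --master local[*]
scala> spark.read.textFile("C:/data/sample.txt").count()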
If you are positive about the configurations, just follow the steps below. I am using Windows.

1. stop-all.cmd: this will stop all nodes.
2. hdfs datanode -format, then hit Enter.
3. hdfs namenode -format, then hit Enter.
4. start-all.cmd: this will start the services fresh.
5. jps: make sure all nodes (4, as I remember) are running; see the sketch after this answer.
6. localhost:9870 should now work.

Let me know in the comments. Happy coding!
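A rough sketch of the jps check in step 5, assuming a default single-node setup (the process IDs here are made up and will differ on your machine; the four Hadoop daemons are what matters):

C:\hadoop-3.2.2> jps
4532 NameNode
6120 DataNode
7344 ResourceManager
8012 NodeManager
9276 Jps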
You have to execute hdfs namenode -format before start-all.sh.