Hadoop localhost:9870 browser interface is not working

Posted 2025-01-24 02:47:08


I need to do data analysis using Hadoop, so I have installed Hadoop and configured it as below. But localhost:9870 is not working, even though I have formatted the namenode every time I worked with it. Some articles and answers on this forum mention that 9870 is the updated port for 50070. I am on Windows 10. I also tried the answers on this forum, but none of them worked. The JAVA_HOME and HADOOP_HOME paths are set, and the paths to Hadoop's bin and sbin are set up as well. Can anyone please tell me what I am doing wrong here?

I followed this guide for the installation and configuration:
https://medium.com/@pedro.a.hdez.a/hadoop-3-2-2-installation-guide-for-windows-10-454f5b5c22d3
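
For reference, the environment setup looks roughly like this; the JDK path below is a placeholder, not my exact one:

:: Windows CMD, run once (adjust paths to your install; the JDK path is a placeholder)
setx JAVA_HOME "C:\Java\jdk1.8.0_291"
setx HADOOP_HOME "C:\hadoop-3.2.2"
:: add Hadoop's bin and sbin to PATH (note: setx truncates very long PATH values)
setx PATH "%PATH%;C:\hadoop-3.2.2\bin;C:\hadoop-3.2.2\sbin"

:: JAVA_HOME is also set in %HADOOP_HOME%\etc\hadoop\hadoop-env.cmd:
:: set JAVA_HOME=C:\Java\jdk1.8.0_291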

core-site.xml

I have set up the Java path in this XML as well.

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9870</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>C:\hadoop-3.2.2\data\namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>C:\hadoop-3.2.2\data\datanode</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>


3 Answers

梦里的微风 2025-01-31 02:47:09


If you look at the namenode logs, it very likely has an error saying something about a port already being in use.
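
A quick way to confirm a port clash on Windows (a sketch; 9870 is just the port in question here, and 1234 below is a placeholder PID):

:: find which process is listening on the port
netstat -ano | findstr :9870
:: then look up the owning process by its PID (placeholder shown)
tasklist /FI "PID eq 1234"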

The default fs.defaultFS port should be 9000 (see https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html); you shouldn't change this without good reason.

The NameNode web UI address isn't the value in fs.defaultFS. Its default port is 9870, and it is defined by dfs.namenode.http-address in hdfs-site.xml.
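
As a sketch of what that separation looks like (these are the Hadoop defaults, not values verified against your setup), core-site.xml would use port 9000:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>

and the web UI port, if you wanted to set it explicitly in hdfs-site.xml, is already 9870 by default:

<property>
  <!-- default is 0.0.0.0:9870; usually there is no need to override it -->
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:9870</value>
</property>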

"need to do data analysis"

You can do analysis on Windows without Hadoop, using Spark, Hive, MapReduce, etc. directly; that way you have direct access to your machine without being limited by YARN container sizes.

时光瘦了 2025-01-31 02:47:09


If you are positive about the configurations, just follow the steps below (a consolidated sketch follows the list).
I am using Windows.

  1. Open a CMD in Admin mode.
  2. Type stop-all.cmd. This will stop all nodes.
  3. Type hdfs datanode -format, then hit Enter.
  4. Type hdfs namenode -format, then hit Enter.
  5. After successful formatting, type start-all.cmd, and this will start the services fresh.
  6. Then type jps and make sure all nodes (4, as I remember) are running.
  7. Open a private window and go to localhost:9870; it should work now. Let me know in the comments.
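
Taken together, the core of that sequence in an elevated CMD looks like this (a sketch; it assumes %HADOOP_HOME%\sbin is on PATH, and note that formatting wipes any existing HDFS data):

:: stop any running daemons
stop-all.cmd

:: reformat HDFS (destroys existing HDFS data)
hdfs namenode -format

:: start the HDFS and YARN daemons again
start-all.cmd

:: list running JVMs; expect NameNode, DataNode, ResourceManager and NodeManager
jps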

Happy coding!

蓝眸 2025-01-31 02:47:08


You have to execute hdfs namenode -format
before start-all.sh.
