Hadoop HA ERROR: Exception in doCheckpoint (IOException) during image upload

Posted 2025-01-15 08:02:01

I am using Hadoop 3.2.2 in a Windows 10 based cluster, with high availability configured for HDFS using the Quorum Journal Manager.

The system works just fine and I am able to transition nodes from active to standby state without issues, but I often get the following error message:

java.io.IOException: Exception during image upload
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:315)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1300(StandbyCheckpointer.java:64)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:480)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$600(StandbyCheckpointer.java:383)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:403)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:502)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:399)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Error writing request body to server
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:295)
    ... 6 more
Caused by: java.io.IOException: Error writing request body to server
    at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3597)
    at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3580)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:377)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:321)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:295)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:230)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

My cluster setup is the following:

A: Namenode, Zookeeper, ZKFC, Journal

B: Namenode, Zookeeper, ZKFC, Journal

C: Namenode, Zookeeper, ZKFC

D: Journal, Datanode

E, F, G, ...: Datanode

Here is my hdfs-site.xml configuration:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
    <description>Logical name for this new nameservice</description>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>A,B,C</value>
    <description>Unique identifiers for each NameNode in the 
    nameservice</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.A</name>
    <value>A:8020</value>
    <description>RPC address for NameNode 1; it is necessary to use the real host name of the machine instead of an alias</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.B</name>
    <value>B:8020</value>
    <description>RPC address for NameNode 2</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.C</name>
    <value>C:8020</value>
    <description>RPC address for NameNode 3</description>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.A</name>
    <value>A:9870</value>
    <description>HTTP address for NameNode 1</description>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.B</name>
    <value>B:9870</value>
    <description>HTTP address for NameNode 2</description>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.C</name>
    <value>C:9870</value>
    <description>HTTP address for NameNode 3</description>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://A:8485;B:8485;D:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(C:/mylocation/stop-namenode.bat $target_host)</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>C:/hadoop-3.2.2/data/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>A:2181,B:2181,C:2181</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop-3.2.2/data/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///C:/hadoop-3.2.2/data/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.safemode.threshold-pct</name>
    <value>0.5f</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>

Has anyone run into the same issue? Am I missing something here?

Comments (1)

风吹雪碎 2025-01-22 08:02:01

Not sure if this issue is resolved. It may be caused by this change: https://issues.apache.org/jira/browse/HADOOP-16886. The solution would be to set the desired value for hadoop.http.idle_timeout.ms in core-site.xml.
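
For instance, a minimal core-site.xml sketch; the 180000 ms value here is an assumption for illustration, not a value from the JIRA, so size it to how long your fsimage uploads actually take:

<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <!-- Idle timeout for connections to the NameNode's embedded HTTP server.
       If this is too short, large fsimage PUTs from the standby can be cut
       off mid-transfer. 180000 (3 minutes) is an assumed example value. -->
  <value>180000</value>
</property>

After adding the property, restart the NameNodes so their HTTP servers pick up the new timeout.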
