Hadoop cross-host Docker cluster: uploading to HDFS fails, but a single physical host works fine
First, some background on my setup. I have one physical machine acting as the master node (hereafter master) and two additional servers, node1 and node2. On node1 I run Docker containers slave1 through slave10, and on node2 containers slave11 through slave20.
I have verified that master can SSH into every slave node. The iptables firewalls on master and on both nodes are disabled, the NAT table has been flushed, and the FORWARD chain policy is set to ACCEPT (without this, traffic is not forwarded). The Docker containers themselves do not ship with a firewall, so there is nothing to disable inside them.
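Concretely, the host-side firewall state described above corresponds to roughly the following commands on master, node1 and node2 (a sketch; the exact invocations I used may have differed slightly):

sudo ufw disable                 # turn off ufw where it was enabled
sudo iptables -F                 # flush all filter-table rules
sudo iptables -t nat -F          # flush the NAT table
sudo iptables -P FORWARD ACCEPT  # required, or cross-host container traffic is not forwarded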
When only the 10 nodes of either single server are connected, the distributed system runs fine. But as soon as I connect both servers at the same time, uploading a file to HDFS fails with the following errors:
18/07/31 15:42:20 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.123.1:8032
18/07/31 15:42:21 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.123.1:8032
18/07/31 15:42:22 INFO mapred.FileInputFormat: Total input paths to process : 1
18/07/31 15:42:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.123.24:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:142)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1482)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
18/07/31 15:42:25 INFO hdfs.DFSClient: Abandoning BP-557839422-192.168.123.1-1533022646989:blk_1073741831_1007
18/07/31 15:42:25 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.123.24:50010,DS-7062ca95-5971-4c80-87f7-5ea1a2f9f448,DISK]
18/07/31 15:42:30 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.123.19:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:142)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1482)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
18/07/31 15:42:30 INFO hdfs.DFSClient: Abandoning BP-557839422-192.168.123.1-1533022646989:blk_1073741832_1008
18/07/31 15:42:30 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.123.19:50010,DS-9f25c91c-4b25-4dc3-9581-581ba2d4d79c,DISK]
18/07/31 15:42:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.123.22:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:142)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1482)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
18/07/31 15:42:41 INFO hdfs.DFSClient: Abandoning BP-557839422-192.168.123.1-1533022646989:blk_1073741833_1009
18/07/31 15:42:41 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.123.22:50010,DS-45f819cc-a3b5-44a9-8a98-75f9442d5dd4,DISK]
18/07/31 15:42:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.123.17:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:142)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1482)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
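Every failure names a concrete datanode in firstBadLink, so a quick sanity check is whether the data-transfer port (50010) of those datanodes is reachable at all, both from master and from a container on the other physical host. A sketch, assuming netcat (nc) is available:

# from master
nc -zv 192.168.123.24 50010
nc -zv 192.168.123.19 50010
# from inside a container (e.g. slave1) to one of the failing datanodes
docker exec slave1 nc -zv 192.168.123.24 50010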
When I searched online, most explanations blamed an unclosed firewall. But on my master and on both nodes, iptables and ufw are both already disabled, and the Docker containers carry no firewall at all; even when I forcibly installed a firewall inside the containers and then disabled it, the same error occurred.
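For reference, the addresses the NameNode actually advertises for each datanode (which are what the client tries to connect to, independent of any firewall) can be inspected like this (a sketch):

hdfs dfsadmin -report   # each datanode entry lists the Name (ip:port) and Hostname it registered with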
Thanks, everyone~
Comments (2)

Hello, have you solved this problem? I've run into the same issue. Could you tell me how you fixed it? I'd be very grateful. QQ: 1738127840

Did you manage to solve this? It's urgent!