Flume fails to write to HDFS and I don't know why
2016-04-03 14:50:21,897 (hdfs-k1-call-runner-17) [ERROR - org.apache.flume.sink.hdfs.AbstractHDFSWriter.hflushOrSync(AbstractHDFSWriter.java:267)] Error while trying to hflushOrSync!
2016-04-03 14:50:22,240 (ResponseProcessor for block BP-379782447-10.215.1.51-1450951413112:blk_1074017796_277848) [WARN - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:871)] Slow ReadProcessor read fields took 59999ms (threshold=30000ms); ack: seqno: -2 status: SUCCESS status: SUCCESS status: ERROR downstreamAckTimeNanos: 0, targets: [10.215.1.53:50010, 10.215.1.54:50010, 10.215.1.52:50010]
2016-04-03 14:50:22,240 (ResponseProcessor for block BP-379782447-10.215.1.51-1450951413112:blk_1074017796_277848) [WARN - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:954)] DFSOutputStream ResponseProcessor exception for block BP-379782447-10.215.1.51-1450951413112:blk_1074017796_277848
java.io.IOException: Bad response ERROR for block BP-379782447-10.215.1.51-1450951413112:blk_1074017796_277848 from datanode 10.215.1.52:50010
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
2016-04-03 14:50:31,898 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:370)] failed to close() HDFSWriter for file (hdfs://10.215.1.51:8020/flume/amq/care/2016/04/03/event160403.1459666160825.log.tmp). Exception follows.
java.io.IOException: Callable timed out after 10000 ms on file: hdfs://10.215.1.51:8020/flume/amq/care/2016/04/03/event160403.1459666160825.log.tmp
	at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:693)
	at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:367)
	at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:559)
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
	at java.util.concurrent.FutureTask.get(FutureTask.java:205)
	at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:686)
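The "Callable timed out after 10000 ms" in the trace matches the Flume HDFS sink's default `hdfs.callTimeout` of 10000 ms, so even a transient network slowdown to the datanodes will make close()/flush calls fail. If the cluster is reachable but slow, raising the timeout on the sink is a common mitigation. A minimal config sketch, assuming an agent named `a1` with a sink named `k1` (both names are illustrative, not from the original post):

```properties
# hypothetical agent/sink names -- adjust to your own topology
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://10.215.1.51:8020/flume/amq/care/%Y/%m/%d
# default is 10000 ms; raise it so slow hflush/close calls are not killed early
a1.sinks.k1.hdfs.callTimeout = 60000
```

This only papers over slowness; if a datanode is actually down (as the "Bad response ERROR ... from datanode" line suggests), the timeout change alone will not fix the pipeline.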
Comments (7)
I hit this error in pseudo-distributed mode too, but in my case it was because I hadn't specified the port in the write path. My HDFS config uses port 9000, while in the Flume agent config I had written hdfs://192.168.0140/flume/%Y%m%d. After I added the port it worked.
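For reference, the port in the sink path has to match `fs.defaultFS` in `core-site.xml`. A sketch of the corrected setting, using the illustrative agent/sink names `a1`/`k1` and a placeholder for the NameNode address (the IP in the comment above appears to be mistyped, so it is not repeated here):

```properties
# a1/k1 are hypothetical names; replace <namenode-host> with your NameNode address.
# The port (9000 here, per the comment) must match fs.defaultFS in core-site.xml.
a1.sinks.k1.hdfs.path = hdfs://<namenode-host>:9000/flume/%Y%m%d
```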
Go over HTTP.
Would this work? HTTP runs over TCP anyway. I wrote it like this: while(buffer.hasRemaining() && socketChannel.write(buffer) != -1){}
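A note on that loop: unlike `read()`, `SocketChannel.write()` never returns -1 (only reads signal end-of-stream that way), so the `!= -1` check is dead code; the idiomatic full-write loop just retries while the buffer has remaining bytes. A small self-contained sketch, using a `java.nio.channels.Pipe` as a stand-in for a real socket pair (the class and method names here are illustrative, not from the thread):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.WritableByteChannel;

public class FullWrite {
    // Writes the whole buffer. write() may do partial writes (and may return 0
    // on a non-blocking channel), but it never returns -1, so we loop on
    // hasRemaining() alone.
    static void writeFully(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
    }

    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open(); // stands in for a connected SocketChannel
        byte[] payload = "hello hdfs".getBytes();
        writeFully(pipe.sink(), ByteBuffer.wrap(payload));
        pipe.sink().close();

        // Read everything back to show the full payload arrived.
        ByteBuffer in = ByteBuffer.allocate(payload.length);
        while (in.hasRemaining() && pipe.source().read(in) != -1) { }
        System.out.println(new String(in.array()));
    }
}
```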
I'm not sure what your setup looks like. My Flume and HDFS run on different machines: dev-001 (Flume sink pointing at HDFS) ===> dev-002 (hdfs:8020).
When writing, do you write directly over a socket?
Thanks, the problem is confirmed: it was caused by a network issue between Flume and HDFS.
Writing files to HDFS is responding slowly. Check whether one of your datanodes is down, or whether there is a network problem.