Does 'hdfs dfs -cp' use /tmp as part of its implementation?
Trying to investigate an issue where /tmp is filling up and we don't know what's causing it. We do have a recent change that uses the HDFS command to perform a copy to another host (hdfs dfs -cp /source/file hdfs://other.host:port/target/file), and while the copy operation doesn't directly touch or reference /tmp, it could potentially be using it as part of its implementation.
But I can't find anything in the documentation to confirm or refute that theory - does anyone else know the answer?
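One way to test the theory directly, rather than relying on documentation, is to snapshot /tmp before the copy and list whatever appears afterwards. A minimal sketch, assuming a Linux shell and that you can re-run the suspect command (the copy itself is commented out here so the sketch runs standalone):

```shell
# Create a timestamp marker file, run the suspect command, then list anything
# in /tmp that is newer than the marker.
marker=$(mktemp /tmp/tmp-growth-marker.XXXXXX)

# Re-run the suspect copy here (command taken from the question above):
# hdfs dfs -cp /source/file hdfs://other.host:port/target/file

# Anything created or modified in /tmp since the marker was made:
find /tmp -newer "$marker" -type f 2>/dev/null

rm -f "$marker"
```

If the copy is responsible, the new files will show up in the listing; if /tmp keeps growing while the copy is idle, something else is the culprit.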
You could look at the code:
Here's the code for copying using HDFS.
It uses its own internal CommandWithDestination class,
and writes everything using another internal class which is really just the java.io classes (to complete the actual write). So it's buffering bytes in memory and sending them around. Likely not the issue. You could check this by altering the tmp directory used by Java (java.io.tmpdir).
Method used by HDFS copy:
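A sketch of the java.io.tmpdir check suggested above: the hdfs launcher script passes HADOOP_CLIENT_OPTS to the client JVM, so you can point the JVM's temp directory at a scratch path you can watch. The scratch path below is hypothetical, and the hdfs call is guarded so the sketch is safe to run on a machine without Hadoop:

```shell
# Point the client JVM's temp directory at an observable scratch location.
scratch=/var/tmp/hdfs-cp-scratch
mkdir -p "$scratch"
export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=$scratch"

# Re-run the suspect copy (guarded so the sketch runs without Hadoop installed):
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -cp /source/file hdfs://other.host:port/target/file
fi

# If the copy was using the JVM temp dir, files now land here instead of /tmp:
ls -la "$scratch"
```

If /tmp still fills up with the temp directory redirected, the copy's JVM temp usage is ruled out and the cause is elsewhere.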