The Hadoop put command does nothing!
I am running Cloudera's distribution of Hadoop and everything is working perfectly. HDFS contains a large number of .seq files. I need to merge the contents of all the .seq files into one large .seq file. However, the getmerge command did nothing for me. I then used cat and piped the data of some .seq files into a local file. When I try to "put" this file into HDFS, nothing happens: no error message shows up, and no file is created.
I am able to "touchz" files in HDFS, and user permissions are not a problem here. The put command simply does not work. What am I doing wrong?
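For what it's worth, a programmatic equivalent of the failing "put", written against Hadoop's Java FileSystem API, can help narrow this down: if the sketch below succeeds while the CLI stays silent, the problem lies in the hadoop launcher script rather than in HDFS itself. This is a minimal sketch; the class name and both paths are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutCheck {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml etc. from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Print which filesystem the configuration resolves to; it should
        // be an hdfs:// URI, not file:///.
        System.out.println("Filesystem: " + fs.getUri());

        Path src = new Path("local.seq");            // hypothetical local file
        Path dst = new Path("/user/me/merged.seq");  // hypothetical HDFS path
        fs.copyFromLocalFile(src, dst);
        System.out.println("Exists after put: " + fs.exists(dst));
    }
}
```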
Write a job that merges all the sequence files into a single one. It's just the standard mapper and reducer with only one reduce task.
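A minimal sketch of such a job, assuming the newer org.apache.hadoop.mapreduce API and Text keys/values; substitute the key/value classes actually stored in your .seq files.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class MergeSeqFiles {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "merge seq files");
        job.setJarByClass(MergeSeqFiles.class);

        // The base Mapper and Reducer classes are identity functions:
        // every record passes through unchanged.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);

        // A single reduce task funnels all records into one output file.
        job.setNumReduceTasks(1);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // dir of .seq files
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With one reducer, everything ends up in a single part-r-00000 sequence file under the output directory.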
If the "hadoop" command fails silently, you should have a look at it.
Just type 'which hadoop'; this will give you the location of the "hadoop" executable. It is a shell script, so just edit it and add logging (for example, 'set -x' near the top echoes every command as it runs) to see what's going on.
If the hadoop bash script fails at the beginning, it is no surprise that the hadoop dfs -put command does not work.