Hadoop dfs -ls returns list of files in my hadoop/ directory

I've set up a single-node Hadoop configuration running via Cygwin under Win7. After starting Hadoop by
bin/start-all.sh
I run
bin/hadoop dfs -ls
which returns me a list of files in my hadoop directory. Then I run
bin/hadoop datanode -format
bin/hadoop namenode -format
but -ls still returns me the contents of my hadoop directory. As far as I understand, it should return nothing (an empty folder). What am I doing wrong?
Comments (4)
Did you edit core-site.xml and mapred-site.xml under the conf folder?
It seems like your Hadoop cluster is in local mode.
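In local mode the `fs.default.name` property defaults to `file:///`, so `hadoop dfs -ls` lists the local working directory rather than HDFS. A minimal `conf/core-site.xml` for pseudo-distributed mode might look like the following (the `localhost:9000` address is an assumption here, chosen to match the URLs used in the other answers):

```xml
<?xml version="1.0"?>
<!-- conf/core-site.xml: point the default filesystem at HDFS instead of
     the local filesystem. The host/port below is an example value that
     matches the hdfs://localhost:9000 URLs in the answers below. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

After changing this you would need to reformat the NameNode and restart the daemons for `dfs -ls` to query HDFS.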
I know this question is quite old, but the directory structure in Hadoop has changed a bit (version 2.5). Jeroen's current version would be:
hdfs dfs -ls hdfs://localhost:9000/users/smalldata
Also, just for information: use of start-all.sh and stop-all.sh has been deprecated; instead one should use start-dfs.sh and start-yarn.sh.
I had the same problem and solved it by explicitly specifying the URL to the NameNode.
To list all directories in the root of your hdfs space do the following:
The documentation says something about a default HDFS entry point in the configuration, but I cannot find it. If someone knows what they mean, please enlighten us.
This is where I got the info: http://hadoop.apache.org/common/docs/r0.20.0/hdfs_shell.html#Overview
Or you could just do: