DataNode not starting in Hadoop
I am trying to set up a Hadoop installation on Ubuntu 11.04 with Sun Java 6. I am working with the hadoop 0.20.203 rc1 build. I repeatedly run into an issue on Ubuntu 11.04 with java-6-sun: when I try to start Hadoop, the DataNode does not start due to "Cannot access storage".
2011-12-22 22:09:20,874 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /home/hadoop/work/dfs_blk/hadoop. The directory is already locked.
2011-12-22 22:09:20,896 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Cannot lock storage /home/hadoop/work/dfs_blk/hadoop. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:602)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:455)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:111)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:354)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:268)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1480)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1419)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1437)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1563)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1573)
I have tried upgrading and downgrading to a couple of versions in the 0.20 branch from Apache, and even Cloudera, as well as deleting and installing Hadoop again, but I still run into this issue. Typical workarounds, such as deleting the *.pid files in the /tmp directory, do not work either. Could anybody point me to a solution for this?
Comments (1)
Yes, I formatted the namenode. The problem was in one of the rogue templates for hdfs-site.xml that I copy-pasted: dfs.data.dir and dfs.name.dir pointed to the same directory location, resulting in the locked-storage error. They should be different directories. Unfortunately, the Hadoop documentation is not clear enough on this subtle detail.
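For reference, a minimal hdfs-site.xml along these lines keeps the two settings apart (the paths below are only illustrative, not the exact ones from my setup):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml: dfs.name.dir and dfs.data.dir must NOT share a location -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- NameNode metadata directory (example path) -->
    <value>/home/hadoop/work/dfs_name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- DataNode block storage, kept separate from dfs.name.dir (example path) -->
    <value>/home/hadoop/work/dfs_blk</value>
  </property>
</configuration>

As mentioned above, I re-formatted the namenode after correcting the paths, and the DataNode then started without the locked-storage error.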