Hadoop NameNode Formatting and Nutch Command Errors in Cygwin on Windows
Below is the error that occurs when formatting the Hadoop NameNode in Cygwin on Windows:
$ bin/hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/08/01 14:30:14 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ZHENZHEN/192.168.14.163
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4-SNAPSHOT
STARTUP_MSG: build = -r ; compiled by 'Administrator' on Thu Aug 1 12:23:19 2013
************************************************************/
Re-format filesystem in \tmp\hadoop-Administrator\dfs\name ? (Y or N) Y
13/08/01 14:30:17 INFO util.GSet: VM type = 32-bit
13/08/01 14:30:17 INFO util.GSet: 2% max memory = 19.33375 MB
13/08/01 14:30:17 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/08/01 14:30:17 INFO util.GSet: recommended=4194304, actual=4194304
13/08/01 14:30:17 INFO namenode.FSNamesystem: fsOwner=Administrator
13/08/01 14:30:17 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/01 14:30:17 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/01 14:30:17 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/08/01 14:30:17 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/08/01 14:30:17 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/01 14:30:17 ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: \tmp\hadoop-Administrator\dfs\name\current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:295)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1320)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1339)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1164)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
13/08/01 14:30:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ZHENZHEN/192.168.14.163
************************************************************/
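The "Cannot remove current directory" error means the NameNode could not clear out the storage directory left over from a previous format, which on Windows/Cygwin is usually a file-lock or permission problem on the directory. A common workaround is to delete the stale directory by hand and then re-run the format. The sketch below does that recursive delete; the path is an assumption based on the default hadoop.tmp.dir layout (/tmp/hadoop-<user>/dfs/name) visible in the log above, so adjust it to your own configuration.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class CleanNameDir {
    // Recursively delete a directory tree, depth-first:
    // files first, then each directory once it is empty.
    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) return;
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path f, BasicFileAttributes a) throws IOException {
                Files.delete(f);
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult postVisitDirectory(Path d, IOException e) throws IOException {
                Files.delete(d);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // Assumed default location; pass your dfs.name.dir as an argument instead.
        Path nameDir = Paths.get(args.length > 0 ? args[0]
                : "/tmp/hadoop-Administrator/dfs/name");
        deleteRecursively(nameDir);
        System.out.println("removed " + nameDir
                + "; now re-run: bin/hadoop namenode -format");
    }
}
```

After the stale directory is gone, `bin/hadoop namenode -format` can create a fresh one; under Cygwin it may also help to make sure the Cygwin user owns /tmp/hadoop-Administrator.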
Below is the error that occurs when running a Nutch crawl command in Cygwin on Windows:
$ bin/nutch crawl urls -dir crawl -depth 3 -topN 5
cygpath: can't convert empty path
13/07/31 11:55:55 WARN crawl.Crawl: solrUrl is not set, indexing will be skipped...
13/07/31 11:55:55 INFO crawl.Crawl: crawl started in: crawl
13/07/31 11:55:55 INFO crawl.Crawl: rootUrlDir = urls
13/07/31 11:55:55 INFO crawl.Crawl: threads = 10
13/07/31 11:55:55 INFO crawl.Crawl: depth = 3
13/07/31 11:55:55 INFO crawl.Crawl: solrUrl=null
13/07/31 11:55:55 INFO crawl.Crawl: topN = 5
13/07/31 11:55:55 INFO crawl.Injector: Injector: starting at 2013-07-31 11:55:55
13/07/31 11:55:55 INFO crawl.Injector: Injector: crawlDb: crawl/crawldb
13/07/31 11:55:55 INFO crawl.Injector: Injector: urlDir: urls
13/07/31 11:55:55 INFO crawl.Injector: Injector: Converting injected urls to crawl db entries.
13/07/31 11:55:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/07/31 11:55:55 ERROR security.UserGroupInformation: PriviledgedActionException as:Administrator cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator341668738\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator341668738\.staging to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at org.apache.nutch.crawl.Injector.inject(Injector.java:281)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:132)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
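The "Failed to set permissions of path ... to 0700" failure is a known incompatibility of Hadoop 1.0.x with Windows: `FileUtil.checkReturnValue` throws whenever the underlying chmod-style call reports failure, and on Windows file systems that call effectively always fails for 0700. The workaround widely circulated for this Hadoop version is to edit `org.apache.hadoop.fs.FileUtil` so the method logs a warning instead of throwing, then rebuild hadoop-core and replace the jar used by Nutch. The stand-alone sketch below shows the patched logic only; the method name and message are based on the 1.0.4 source, but the real change must be made inside FileUtil itself and recompiled.

```java
import java.io.File;
import java.io.IOException;

public class FileUtilPatchSketch {
    // Stand-in for org.apache.hadoop.fs.FileUtil#checkReturnValue after the
    // Windows workaround: when a permission call fails, warn and continue
    // instead of aborting job submission with an IOException.
    static void checkReturnValue(boolean rv, File p, String permission)
            throws IOException {
        if (!rv) {
            // Original behaviour (removed by the patch):
            // throw new IOException("Failed to set permissions of path: "
            //         + p + " to " + permission);
            System.err.println("WARN fs.FileUtil: Failed to set permissions of path: "
                    + p + " to " + permission + " (ignored on Windows)");
        }
    }

    public static void main(String[] args) throws IOException {
        // With the patch, a failed setPermission on the staging dir no longer
        // kills JobClient.submitJobInternal.
        checkReturnValue(false,
                new File("/tmp/hadoop-Administrator/mapred/staging"), "0700");
        System.out.println("staging dir accepted; job submission continues");
    }
}
```

Note this only silences the local-mode permission check; it does not make Hadoop 1.x fully supported on Windows, so running on Linux (or a later Hadoop with native Windows support) remains the safer option.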
Comments (2)
The official installation targets Linux. From what I found, the Hadoop package is simply incompatible on Windows; you have to modify the source and rebuild it.
Why does this post only describe the problem without explaining how to solve it?