Hadoop cannot access S3
I have a question about Hadoop on AWS accessing S3. My configuration is:
<property>
  <name>fs.default.name</name>
  <value>s3n://testhadoophiveserver</value>
</property>
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>(I have filled it in)</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>(I have filled it in)</value>
</property>
So when I run start-all.sh, I get an error like this:
hadoopmaster: Exception in thread "main" java.net.UnknownHostException: unknown host: testhadoophiveserver
hadoopmaster: at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:195)
hadoopmaster: at org.apache.hadoop.ipc.Client.getConnection(Client.java:850)
hadoopmaster: at org.apache.hadoop.ipc.Client.call(Client.java:720)
hadoopmaster: at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
hadoopmaster: at $Proxy4.getProtocolVersion(Unknown Source)
hadoopmaster: at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
hadoopmaster: at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
hadoopmaster: at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
hadoopmaster: at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
But if I use HDFS, everything works fine. Right now I cannot use the S3 filesystem. Can anyone help me?
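For reference, the s3n connector can be exercised directly from the shell, independently of the HDFS/MapReduce daemons. This is a minimal check, assuming a Hadoop 0.20/1.x layout and the bucket name from the configuration above:

# List the bucket through the s3n filesystem; with the access key and secret key
# from the configuration above, this should succeed even with no daemons running.
bin/hadoop fs -ls s3n://testhadoophiveserver/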
Comments (1)
I don't think you should run start-all.sh.
The start-all.sh script starts both HDFS and MapReduce.
If you have configured S3 as the underlying storage layer, there is no need to start HDFS.
start-dfs.sh is called by start-all.sh, so it executes the HDFS startup code for daemons you never configured.
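A minimal sketch of what that could look like in practice, assuming a Hadoop 0.20/1.x layout where only MapReduce runs on top of S3 (start-mapred.sh is the stock script that starts just the JobTracker and TaskTrackers):

# Skip start-all.sh (which also calls start-dfs.sh) and start only the
# MapReduce daemons, since S3 replaces HDFS as the default filesystem here.
bin/start-mapred.sh

With fs.default.name pointing at s3n://testhadoophiveserver, job input and output paths can then refer to the bucket directly.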