Hadoop DFS permission problem when running a job

Posted 2024-12-04 01:36:06

I'm getting the following permission error, and am not sure why Hadoop is trying to write to this particular folder:

hadoop jar /usr/lib/hadoop/hadoop-*-examples.jar pi 2 100000
Number of Maps  = 2
Samples per Map = 100000
Wrote input for Map #0
Wrote input for Map #1
Starting Job
org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=myuser, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x

Any idea why it is trying to write to the root of my HDFS?

Update: After temporarily setting the HDFS root (/) to 777 permissions, I saw that a "/tmp" folder was being written. I suppose one option is to just create a "/tmp" folder with open permissions for everyone to write to, but from a security standpoint it would be nicer if this were instead written to the user folder (i.e. /user/myuser/tmp).
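One way to avoid opening up the HDFS root is to give the submitting user a proper home directory in HDFS. The commands below are a sketch, assuming the HDFS superuser is `hdfs` and the submitting user is `myuser` (both illustrative names):

```shell
# Create a home directory for the submitting user so job files can be
# staged under /user/myuser instead of the HDFS root:
sudo -u hdfs hadoop fs -mkdir /user/myuser
sudo -u hdfs hadoop fs -chown myuser:myuser /user/myuser

# Verify the ownership:
hadoop fs -ls /user
```

Restricting permissions this way keeps the HDFS root at its default `drwxr-xr-x` while still letting each user's jobs write their own files.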


Comments (4)

删除→记忆 2024-12-11 01:36:06

I was able to get this working with the following setting:

<configuration>
    <property>
        <name>mapreduce.jobtracker.staging.root.dir</name>
        <value>/user</value>
    </property>

    <!-- ... -->

</configuration>

A restart of the jobtracker service was required as well (special thanks to Jeff on the Hadoop mailing list for helping me track down the problem!)
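As a sketch of applying this change (the init script name varies by distribution and Hadoop version, so treat it as an assumption; this one is typical of CDH-era MRv1 packages):

```shell
# After adding mapreduce.jobtracker.staging.root.dir to mapred-site.xml,
# restart the jobtracker so the new staging root takes effect:
sudo service hadoop-0.20-mapreduce-jobtracker restart

# With the staging root set to /user, jobs stage their files under
# /user/<username>/.staging, so that home directory must exist and be
# owned by the submitting user:
sudo -u hdfs hadoop fs -mkdir /user/myuser
sudo -u hdfs hadoop fs -chown myuser /user/myuser
```

This is why the setting fixes the original error: staging moves from the root-owned "/" into a directory the user can actually write to.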

累赘 2024-12-11 01:36:06

1) Create the {mapred.system.dir}/mapred directory in HDFS using the following command:

sudo -u hdfs hadoop fs -mkdir /hadoop/mapred/

2) Give ownership to the mapred user:

sudo -u hdfs hadoop fs -chown mapred:hadoop /hadoop/mapred/
晨与橙与城 2024-12-11 01:36:06

You can also make a new user named "hdfs". Quite a simple solution, but probably not as clean.

Of course, this applies when you are using Hue with Cloudera Hadoop Manager (CDH3).

一杯敬自由 2024-12-11 01:36:06

You need to set the permission for the Hadoop root directory (/), not the system's root directory. I was confused too at first, but then realized that the directory mentioned in the error belongs to Hadoop's file system, not the local system's.
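A quick way to see the distinction is to list the two roots side by side. This is a sketch, assuming shell access to a node with the `hadoop` client installed:

```shell
# HDFS root -- the inode "/" from the error, owned by hdfs:supergroup:
hadoop fs -ls /

# Local filesystem root -- unaffected by any "hadoop fs -chmod":
ls -ld /

# Loosening the HDFS root (temporary diagnostic only; revert afterwards):
sudo -u hdfs hadoop fs -chmod 777 /
```

The `AccessControlException` quotes the HDFS inode and its POSIX-style mode string (`drwxr-xr-x`), which is why the local `/` permissions are irrelevant here.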
