Why is no Amazon S3 authentication handler ready?
I have my $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set properly, and I run this code:
import boto
conn = boto.connect_s3()
and get this error:
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler']
What's happening? I don't know where to start debugging.
It seems boto isn't taking the values from my environment variables. If I pass in the key id and secret key as arguments to the connection constructor, this works fine.
12 Answers
Boto will take your credentials from the environment variables.
I've tested this with V2.0b3 and it works fine. It will give precedence to credentials specified explicitly in the constructor, but it will pick up credentials from the environment variables too.
The simplest way to do this is to put your credentials into a text file, and specify the location of that file in the environment.
For example (on Windows; I expect it will work just the same on Linux, but I have not personally tried that):
Create a file called "mycred.txt" and put it into C:\temp
This file contains two lines:
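The two lines use boto's credential-file key names (the same AWSAccessKeyId / AWSSecretKey names mentioned in a comment further down); the values are placeholders:
AWSAccessKeyId=<your access key id>
AWSSecretKey=<your secret access key>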
Define the environment variable AWS_CREDENTIAL_FILE to point at C:\temp\mycred.txt
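One way to do this is from a Command Prompt; set affects only the current session, while setx stores the value persistently for new sessions:
set AWS_CREDENTIAL_FILE=C:\temp\mycred.txt
setx AWS_CREDENTIAL_FILE C:\temp\mycred.txt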
Now the code fragment above will work fine.
I'm a newbie to both Python and boto, but I was able to reproduce your error (or at least the last line of it).
You are most likely failing to export your variables in bash. If you just define them, they are only valid in the current shell; export them and Python inherits the value. So simply assigning the variable will not work unless you also export it, or you can do both on the same line (see the sketch below). Likewise for the other variable. You can also put this in your .bashrc (assuming bash is your shell, and assuming you remember to export).
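A minimal sketch of the three cases, with a placeholder value:
AWS_ACCESS_KEY_ID=AKIAEXAMPLE              # defines a shell variable only; a child process such as Python will not see it
export AWS_ACCESS_KEY_ID                   # exports the already-defined variable so child processes inherit it
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE       # defines and exports it in one line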
I just ran into this problem while using Linux and SES, and I hope it may help others with a similar issue. I had installed awscli and configured my keys doing:
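Presumably this means the standard awscli setup command, which prompts for the access key ID, secret key, default region, and output format:
aws configure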
This sets up your credentials in ~/.aws/config, just like @huythang said. But boto looks for your credentials in ~/.aws/credentials, so copy them over:
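For example, assuming the default file locations:
cp ~/.aws/config ~/.aws/credentials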
Assuming an appropriate policy is set up for your user with those credentials, you shouldn't need to set any environment variables.
Following up on nealmcb's answer about IAM roles: whilst deploying EMR clusters using an IAM role, I had a similar issue where at times (not every time) this error would come up while connecting boto to S3.
The metadata service can time out whilst retrieving credentials. Thus, as the docs suggest, I added a Boto section to the config and increased the number of retries used to retrieve the credentials. Note that the default is 1 attempt.
http://boto.readthedocs.org/en/latest/boto_config_tut.html?highlight=retries#boto
Scroll down to:
You can control the timeouts and number of retries used when retrieving information from the Metadata Service (this is used for retrieving credentials for IAM roles on EC2 instances)
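A sketch of the resulting section in a boto config file such as ~/.boto (the option names are the ones documented at the link above; the values here are only examples):
[Boto]
metadata_service_timeout = 1.0
metadata_service_num_attempts = 5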
I found my answer here.
On Unix: first set up the aws config:
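A sketch of what that config file might contain (placeholder values in an awscli-style [default] profile):
# ~/.aws/config
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>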
And set the environment variables:
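For example, in a bash shell (placeholder values):
export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>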
See latest boto s3 introduction:
In my case the problem was that in IAM, "users by default have no permissions". It took me all day to track that down, since I was used to the original AWS authentication model (pre-IAM) in which what are now called "root" credentials were the only way.
There are lots of AWS documents on creating users, but only a few places where they note that you have to give them permissions before they can do anything. One is Working with Amazon S3 Buckets - Amazon Simple Storage Service, but even it doesn't actually tell you to go to the Policies tab, suggest a good starting policy, and explain how to apply it.
The wizard-of-sorts simply encourages you to "Get started with IAM users" and doesn't clarify that there is much more to do. Even if you poke around a bit, you just see e.g. "Managed Policies: There are no managed policies attached to this user.", which doesn't suggest that you need a policy to do anything.
To establish a root-like user, see:
Creating an Administrators Group Using the Console - AWS Identity and Access Management
I don't see a specific policy which simply allows read-only access to all of S3 (my own buckets as well as public ones owned by others).
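For illustration only (my own sketch, not something taken from this answer), a policy along the lines of the AWS managed AmazonS3ReadOnlyAccess policy would grant read-only access to all of S3:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}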
You can now set these as arguments in the connect function call.
Just thought I'd add that in case anyone else searched like I did.
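A minimal sketch of that call (placeholder values):
import boto
conn = boto.connect_s3(
    aws_access_key_id='<your access key id>',
    aws_secret_access_key='<your secret access key>')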
I had previously used s3-parallel-put successfully, but it inexplicably stopped working, giving the error above. This was despite having exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The solution was to specify the credentials in the boto config file (typically ~/.boto).
Enter the credentials like so:
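A sketch of the [Credentials] section boto reads from its config file (placeholder values):
[Credentials]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>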
On Mac, exported keys need to be of the form key=value. So exporting, say, the AWS_ACCESS_KEY_ID environment variable should look like this: AWS_ACCESS_KEY_ID=yourkey. If you have any quotation marks around your values, as mentioned in the answers above, boto will throw the above-mentioned error.
I was having this issue with a Flask application on EC2. I didn't want to put credentials in the application, but instead managed permissions via IAM roles; that way you can avoid hard-coding keys into the code. Then I set up a policy in the AWS console (I didn't even write it by hand, I just used the policy generator).
My code is exactly like the OP's. The other solutions here are good, but there is a way to grant permission without hard-coding access keys.
boto.connect_s3()  # no keys needed
I see you call them AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY, when it seems they should be set as AWSAccessKeyId & AWSSecretKey.