Connection to sts.amazonaws.com timed out when calling the Python boto3 API from an EC2 instance
I am trying to set up some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile, where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine, and can deploy CloudFormation templates, upload files to S3 etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code to parse a config file and use the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploy CloudFormation and upload files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to set up a VPC endpoint for STS? However, I can't work out why and how the AWS CLI works fine but Boto3 doesn't, especially since the AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
To reproduce the error, this command fails for me:
python -c "import boto3; print(boto3.Session(profile_name='x').client('s3').list_objects(Bucket='bucket'))"
However, this AWS CLI command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
2 Answers
Yeah, so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'. After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it to 'legacy' by default, whereas in v2 of the AWS CLI they set it to 'regional'.
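To spell that out: because the profile uses role_arn with credential_source = Ec2InstanceMetadata, botocore has to call sts:AssumeRole to obtain credentials, which is why an STS endpoint gets hit at all. A minimal sketch of the fix in Python, reusing the question's placeholder names 'x' and 'bucket':
import os
import boto3

# Make botocore resolve STS to the regional endpoint
# (sts.<region>.amazonaws.com) instead of the legacy global
# endpoint (sts.amazonaws.com). Set this before the first API
# call, since that is when credentials are resolved.
os.environ["AWS_STS_REGIONAL_ENDPOINTS"] = "regional"

session = boto3.Session(profile_name="x")   # placeholder profile from the question
s3 = session.client("s3")
print(s3.list_objects(Bucket="bucket"))     # placeholder bucket name
The same setting can also live in the profile itself via the sts_regional_endpoints config option, so no environment variable is needed:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
sts_regional_endpoints = regional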
The AWS CLI and boto3 both use botocore, which is only a minor detail. Nevertheless, both the CLI and boto3, when run in the same environment with the same access to the credentials, should indeed be able to reach the same endpoint.
This:
aws --profile x s3 ls bucket
and this:
python -c "import boto3; print(boto3.Session(profile_name='x').client('s3').list_objects(Bucket='bucket'))"
are equivalent and should make the same API calls to the same endpoint.
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that. So just export AWS_PROFILE and run the code. This saves other users of the script from having to have the same profile and name it the same.
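As a sketch of that pattern, again with the question's placeholder names: export the profile once, and drop profile_name from the code entirely:
export AWS_PROFILE=x
python -c "import boto3; print(boto3.client('s3').list_objects(Bucket='bucket'))"
Here boto3.client('s3') uses the default session, which picks up AWS_PROFILE from the environment, so the script itself carries no profile-specific configuration.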