What is the best way to pass AWS credentials to an EC2 instance as user data?
I have a job processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:
$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.zip ec2.sh
$ openssl enc -base64 -in ec2.zip
Many instances are launched...
$ ec2run ami-a83fabc0 -n 20 -f ec2.zip
Each instance decodes and decrypts ec2.zip using the 'secret-password' which is hard-coded into an init script. Although it does work, I have two issues with my approach.
- 'zip -P' is not very secure
- The password is hard-coded in the instance (it's always 'secret-password')
The method is very similar to the one described here
Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt it is an approach I'm considering now but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super obvious part of the API?
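For reference, the gpg variant being considered might look roughly like this (the recipient and file names are hypothetical, and the private key would still have to be baked into the AMI):
$ gpg --encrypt --recipient deploy@example.com --output ec2.sh.gpg ec2.sh
$ openssl enc -base64 -in ec2.sh.gpg
Then, on the instance:
$ gpg --decrypt --output ec2.sh ec2.sh.gpg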
5 Answers
You can store the credentials on the machine (or transfer, use, then remove them.)

You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication, e.g. a key pair) so that you would not need to perform any custom encryption (only make sure that permissions are properly set to 0400 on the key file at all times, e.g. set the permissions on the master files and use scp -p).

If the above does not answer your question, please provide more specific details re. what your setup is and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.
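For example, a minimal sketch of that transfer (the host name, key file, and destination path are hypothetical):
$ chmod 0400 ec2.sh                # credentials file readable only by its owner
$ scp -p -i ~/.ssh/deploy-key ec2.sh ec2-user@node1.example.com:/home/ec2-user/ec2.sh
# -p preserves the 0400 mode on the copy; the key pair keeps the transfer non-interactive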
EDIT
Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
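A minimal sketch of the instance-side retrieval (the temporary file path is illustrative):
# inside the AMI's init script
curl -s http://169.254.169.254/1.0/user-data -o /tmp/user-data.sh
. /tmp/user-data.sh     # brings the exported AWS_* variables into the environment
rm -f /tmp/user-data.sh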
UPDATE
OK, here goes a security-minded comparison of the various approaches discussed so far:

- Data passed in user-data unencrypted: any user on the machine with access to telnet, curl, wget, etc. can read the clear-text credentials (they can access the clear-text http://169.254.169.254/1.0/user-data).
- Data passed in user-data and encrypted (or decryptable) with an easily obtainable key: effectively the same; any user with access to telnet, curl, wget, etc. can fetch http://169.254.169.254/1.0/user-data and then decrypt it with the easily-obtainable key.
- Data passed in user-data and encrypted with a key that is not easily obtainable: any user with access to telnet, curl, wget, etc. can still read the encrypted http://169.254.169.254/1.0/user-data, but obtaining the key (e.g. requiring root for interactive impersonation) is harder, which improves security.

So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (weakest point) compromises the data.

This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only), if AWS allowed you to overwrite or remove the contents of user-data when done with it (but this does not appear to be the case.) An alternative would be the creation of temporary S3 credentials for the duration of the deployment process, if possible (compromising these credentials, from user-data, after the deployment process is completed and the credentials have been invalidated with AWS, no longer poses a security threat.)

If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only) then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.
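To illustrate the "any user on the machine" point in the comparison above, no special privileges are needed to read the metadata service:
$ curl http://169.254.169.254/1.0/user-data     # works for any local account, root not required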
I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.
http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/
The best way is to use instance profiles. The basic idea is:
Create an IAM role and an instance profile.
Assign a policy to the previously created role, for example:
{
  "Statement": [
    {
      "Sid": "Stmt1369049349504",
      "Action": "sqs:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Associate the role and instance profile together.
If all works well, and the library you use to connect to AWS services from within your EC2 instance supports retrieving the credentials from the instance meta-data, your code will be able to use the AWS services.
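You can verify this from the instance itself; the role name below is a placeholder:
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/           # lists the role attached to the instance
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyS3Role   # temporary AccessKeyId, SecretAccessKey and Token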
A complete example taken from the boto-user mailing list:
First, you have to create a JSON policy document that represents what services and resources the IAM role should have access to. For example, this policy grants all S3 actions for the bucket "my_bucket". You can use whatever policy is appropriate for your application.
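Such a policy document might look like the following sketch (only the bucket name "my_bucket" comes from the text above):
$ cat > s3-policy.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::my_bucket", "arn:aws:s3:::my_bucket/*"]
    }
  ]
}
EOF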
Next, you need to create an Instance Profile in IAM.
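With the current AWS CLI (the original boto calls are not reproduced here), that step might look like this; the profile name is a placeholder:
$ aws iam create-instance-profile --instance-profile-name MyInstanceProfile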
Once you have the instance profile, you need to create the role, add the role to the instance profile and associate the policy with the role.
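A rough AWS CLI equivalent of those three steps (the role, profile, and file names are illustrative):
$ cat > ec2-trust.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
$ aws iam create-role --role-name MyS3Role --assume-role-policy-document file://ec2-trust.json
$ aws iam add-role-to-instance-profile --instance-profile-name MyInstanceProfile --role-name MyS3Role
$ aws iam put-role-policy --role-name MyS3Role --policy-name s3-my-bucket --policy-document file://s3-policy.json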
Now, you can use that instance profile when you launch an instance:
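For example (re-using the AMI ID and instance count from the question; the profile name is the placeholder used above):
$ aws ec2 run-instances --image-id ami-a83fabc0 --count 20 --iam-instance-profile Name=MyInstanceProfile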
I'd like to point out that you no longer need to supply any credentials to your EC2 instance. Using IAM, you can create a role for your EC2 instances. In these roles, you can set fine-grained policies that allow your EC2 instance to, for example, get a specific object from a specific S3 bucket and no more. You can read more about IAM Roles in the AWS docs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html
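As a rough illustration of such a fine-grained policy (the bucket and object key are made up):
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my_bucket/config/app-settings.json"
    }
  ]
}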
Like others have already pointed out here, you don't really need to store AWS credentials for an EC2 instance when you use IAM Roles -
https://aws.amazon.com/blogs/security/a-safer-way-to-distribute-aws-credentials-to-ec2/.
I will add that you can employ the same method to securely store non-AWS credentials for your EC2 instance as well, for example if you have some DB credentials you want to keep secure. You save the non-AWS credentials in an S3 bucket and use an IAM role to access that bucket.
You can find more detailed information on that here - https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/
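A minimal sketch of that pattern, assuming the instance's role already grants read access to the (hypothetical) bucket and key:
$ aws s3 cp s3://my-secrets-bucket/db-credentials.json /etc/myapp/db-credentials.json
$ chmod 0400 /etc/myapp/db-credentials.json    # keep the downloaded secret readable only by the service account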