What is the best way to pass AWS credentials as user data to an EC2 instance?

Posted 2024-07-14 12:56:32


I have a job processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:

$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.sh
$ openssl enc -base64 -in ec2.zip

Many instances are launched...

$ ec2run ami-a83fabc0 -n 20 -f ec2.zip

Each instance decodes and decrypts ec2.zip using the 'secret-password' which is hard-coded into an init script. Although it does work, I have two issues with my approach.

  1. 'zip -P' is not very secure
  2. The password is hard-coded in the instance (it's always 'secret-password')
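
For context, here is a minimal sketch of the decode step each instance performs at boot; it assumes the user data is the base64 text produced above and still relies on the same hard-coded password:

#!/bin/sh
# init-script sketch: fetch, decode and decrypt the credentials, then export them
curl -s http://169.254.169.254/1.0/user-data | openssl enc -base64 -d > ec2.zip
unzip -P 'secret-password' ec2.zip   # recovers ec2.sh
. ./ec2.sh                           # exports AWS_ACCOUNT_NUMBER, AWS_ACCESS_KEY_ID, ...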

The method is very similar to the one described here

Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt it is an approach I'm considering now but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super obvious part of the API?
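
For what it's worth, the gpg variant I'm considering would look roughly like this ('ec2-creds' is a hypothetical key whose private half would live on the instance):

$ gpg --encrypt --recipient ec2-creds --output ec2.sh.gpg ec2.sh
$ openssl enc -base64 -in ec2.sh.gpg

...and on the instance, whose keyring holds the matching private key:

$ gpg --decrypt ec2.sh.gpg > ec2.sh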


Comments (5)

情归归情 2024-07-21 12:56:32


You can store the credentials on the machine (or transfer, use, then remove them.)

You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication such as a key pair), so you do not need any custom encryption. Just make sure permissions on the credentials file are set to 0400 at all times, e.g. set them on the master copy and preserve them with scp -p.
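
For example, a rough sketch of that transfer (host name, user and file names are made up):

$ chmod 0400 aws-credentials                  # permissions on the master copy
$ scp -p -i deploy-key.pem aws-credentials ec2-user@node1:.aws-credentials
# -p preserves the 0400 mode on the remote copy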

If the above does not answer your question, please provide more specific details re. what your setup is and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.


EDIT

Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
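
A sketch of the instance side of that, assuming the user data is a plain shell snippet exporting the caller's keys:

$ curl -s http://169.254.169.254/1.0/user-data > /tmp/params.sh
$ . /tmp/params.sh    # exports the per-instance AWS_* variables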


UPDATE

OK, here goes a security-minded comparison of the various approaches discussed so far:

  1. Security of data when stored in the AMI user-data unencrypted
    • low
    • clear-text data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
    • you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the clear-text http://169.254.169.254/1.0/user-data)
  2. Security of data when stored in the AMI user-data and encrypted (or decryptable) with easily obtainable key
    • low
    • easily-obtainable key (password) may include:
      • key hard-coded in a script inside the AMI (where the AMI image can be obtained by an attacker)
      • key hard-coded in a script on the AMI itself, where the script is readable by any user who manages to log onto the AMI
      • any other easily obtainable information such as public keys, etc.
      • any private key (its public key may be readily obtainable)
    • given an easily-obtainable key (password), the same problems identified in point 1 apply, namely:
      • the decrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
      • you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the encrypted http://169.254.169.254/1.0/user-data, which is then decrypted with the easily-obtainable key)
  3. Security of data when stored in the AMI user-data and encrypted with not easily obtainable key
    • average
    • the encrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access encrypted http://169.254.169.254/1.0/user-data)
      • an attempt to decrypt the encrypted data can then be made using brute-force attacks
  4. Security of data when stored on the AMI, in a secured location (no added value for it to be encrypted)
    • higher
    • the data is only accessible to one user, the user who requires the data in order to operate
      • e.g. file owned by user:user with mask 0600 or 0400
    • attacker must be able to impersonate the particular user in order to gain access to the data
      • additional security layers, such as denying the user direct log-on (requiring interactive impersonation to go through root), further improve security (a sketch follows this list)
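
A sketch of option 4, using a hypothetical locked-down service account named 'worker' (paths are placeholders; provisioning runs as root):

# 'worker' cannot log in directly; only processes running as 'worker' (or root) can read the file
useradd -m -s /usr/sbin/nologin worker
install -o worker -g worker -m 0400 /root/staging/aws-credentials /home/worker/.awssecret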

So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (weakest point) compromises the data.

This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only) and AWS allowed you to overwrite or remove the contents of user-data once you were done with it (but this does not appear to be the case.) An alternative would be to create temporary S3 credentials for the duration of the deployment process, if possible: compromising these credentials from user-data after the deployment process has completed and the credentials have been invalidated with AWS no longer poses a security threat.
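
If your tooling supports it, AWS STS can mint such short-lived credentials; purely as an illustration, assuming the modern AWS CLI is available:

$ aws sts get-session-token --duration-seconds 3600
# returns a temporary AccessKeyId/SecretAccessKey/SessionToken that expires on its own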

If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only) then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.

花开雨落又逢春i 2024-07-21 12:56:32


I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.

http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/

眼波传意 2024-07-21 12:56:32


The best way is to use instance profiles. The basic idea is:

  • Create an instance profile
  • Create a new IAM role
  • Assign a policy to the previously created role, for example:

    {
      "Statement": [
        {
          "Sid": "Stmt1369049349504",
          "Action": "sqs:*",
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }

  • Associate the role and instance profile together.

  • When you start a new EC2 instance, make sure you provide the instance profile name.

If all works well, and the library you use to connect to AWS services from within your EC2 instance supports retrieving the credentials from the instance meta-data, your code will be able to use the AWS services.
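
Under the hood, the SDK reads temporary credentials from the instance metadata service; for illustration, with the role created in the example below ('myrole'):

$ curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/myrole
# returns JSON with a temporary AccessKeyId, SecretAccessKey and Token that AWS rotates automatically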

A complete example taken from the boto-user mailing list:

First, you have to create a JSON policy document that represents what services and resources the IAM role should have access to. For example, this policy grants all S3 actions for the bucket "my_bucket". You can use whatever policy is appropriate for your application.

BUCKET_POLICY = """{
  "Statement":[{
    "Effect":"Allow",
    "Action":["s3:*"],
    "Resource":["arn:aws:s3:::my_bucket"]}]}"""

Next, you need to create an Instance Profile in IAM.

import boto
c = boto.connect_iam()
instance_profile = c.create_instance_profile('myinstanceprofile')

Once you have the instance profile, you need to create the role, add the role to the instance profile and associate the policy with the role.

role = c.create_role('myrole')
c.add_role_to_instance_profile('myinstanceprofile', 'myrole')
c.put_role_policy('myrole', 'mypolicy', BUCKET_POLICY)

Now, you can use that instance profile when you launch an instance:

ec2 = boto.connect_ec2()
ec2.run_instances('ami-xxxxxxx', ..., instance_profile_name='myinstanceprofile')

鲜血染红嫁衣 2024-07-21 12:56:32


I'd like to point out that it is no longer necessary to supply any credentials to your EC2 instance. Using IAM, you can create a role for your EC2 instances. In these roles, you can set fine-grained policies that allow your EC2 instance to, for example, get a specific object from a specific S3 bucket and no more. You can read more about IAM roles in the AWS docs:

http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html
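
As an illustration, such a fine-grained policy document might look like the following (bucket and object names are placeholders):

{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::my-app-bucket/config/settings.json"]
  }]
}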

笑脸一如从前 2024-07-21 12:56:32


Like others have already pointed out here, you don't really need to store AWS credentials on an EC2 instance when you use IAM Roles -
https://aws.amazon.com/blogs/security/a-safer-way-to-distribute-aws-credentials-to-ec2/.
I will add that you can employ the same method to securely store non-AWS credentials for your EC2 instance as well, for example database credentials you want to keep secure: you save the non-AWS credentials in an S3 bucket and use an IAM role to access that bucket.
You can find more detailed information on that here - https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/
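
A sketch of that pattern from inside the instance (bucket name and paths are placeholders, and the AWS CLI is assumed to be installed):

$ aws s3 cp s3://my-secrets-bucket/db-credentials.env /etc/myapp/db-credentials.env
$ chmod 0400 /etc/myapp/db-credentials.env
# the call is authorized by the instance's IAM role; no AWS keys are stored on the box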
