Using an ssh key with a passphrase in a vagrant + chef setup

Posted 2024-12-01 13:05:14


I've got a VM running using Vagrant, and I'm provisioning it with Chef. One of the steps involves cloning a git repo, but my ssh key (on my host machine) has a passphrase on it.

When I run vagrant up, the process fails at the git clone step with the following error:

Permission denied (publickey). fatal: The remote end hung up unexpectedly

(The key has been added on the host machine, with the passphrase)

I tried to solve this with ssh agent forwarding by doing the following:

Added config.ssh.forward_agent = true to the Vagrantfile

Added Defaults env_keep = "SSH_AUTH_SOCK" to /etc/sudoers on the VM (see the sketch below)
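
Roughly, those two changes look like this (a minimal sketch; the box name and Chef run list are placeholders for whatever the real setup uses):

# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"        # placeholder box
  config.ssh.forward_agent = true          # forward the host's ssh-agent into the VM

  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "my_app::git_clone"    # hypothetical recipe that does the git clone
  end
end

with this line added to /etc/sudoers inside the VM:

Defaults env_keep = "SSH_AUTH_SOCK"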

Now, vagrant up still fails when it gets to the git clone part, but if I run vagrant provision after that, it passes. I'm guessing this is because the ssh configuration is set up when the VM is brought up and isn't reloaded.

I have tried to reload ssh after adjusting those two settings, but that hasn't helped.

Any idea how to solve this?

Thanks.


染墨丶若流云 2024-12-08 13:05:14


As you noted, updating sudoers during the initial run is too late to be beneficial to that run, as Chef is already running under sudo by that point.

Instead I wrote a hacky recipe that finds the appropriate ssh socket to use and updates the SSH_AUTH_SOCK environment to suit. It also disables strict host key checking so the initial outbound connection is automatically approved.

Save this as a recipe that's executed anytime prior to the first ssh connection (tested with Ubuntu but should work with other distributions):

Directory "/root/.ssh" do
  action :create
  mode 0700
end

File "/root/.ssh/config" do
  action :create
  content "Host *\nStrictHostKeyChecking no"
  mode 0600
end

ruby_block "Give root access to the forwarded ssh agent" do
  block do
    # find a parent process' ssh agent socket
    agents = {}
    ppid = Process.ppid
    Dir.glob('/tmp/ssh*/agent*').each do |fn|
      agents[fn.match(/agent\.(\d+)$/)[1]] = fn
    end
    while ppid != '1'
      if (agent = agents[ppid])
        ENV['SSH_AUTH_SOCK'] = agent
        break
      end
      File.open("/proc/#{ppid}/status", "r") do |file|
        ppid = file.read().match(/PPid:\s+(\d+)/)[1]
      end
    end
    # Uncomment to require that an ssh-agent be available
    # fail "Could not find running ssh agent - Is config.ssh.forward_agent enabled in Vagrantfile?" unless ENV['SSH_AUTH_SOCK']
  end
  action :create
end

Alternatively, create a box with the sudoers update already in it and base your future VMs off of that.
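
A rough sketch of that workflow (box and file names are placeholders, and appending to /etc/sudoers directly skips visudo validation, so treat this as illustrative only):

# Bring the VM up once, bake the sudoers change in, then repackage it as a base box
vagrant up
vagrant ssh -c 'echo "Defaults env_keep += \"SSH_AUTH_SOCK\"" | sudo tee -a /etc/sudoers'
vagrant package --output base-with-agent.box
vagrant box add --name base-with-agent base-with-agent.box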

弥枳 2024-12-08 13:05:14


This may not be the answer that you're looking for, but an easy fix to this would be to generate a dedicated deployment ssh key without a passphrase. I prefer separate and dedicated deploy keys rather than a single key for multiple applications.
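
For example, something along these lines (key type, file name and comment are purely illustrative):

# Generate a dedicated, passphrase-less deploy key
ssh-keygen -t ed25519 -N "" -C "deploy key for myapp" -f ~/.ssh/myapp_deploy_key
# Then register ~/.ssh/myapp_deploy_key.pub as a read-only deploy key on the git host
# and point the clone at it (e.g. via a Host alias in ~/.ssh/config on the VM).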

心碎无痕… 2024-12-08 13:05:14


You can run multiple provisioners with Vagrant (even of the same kind), and each provisioner gets executed on its own SSH connection. I typically solve this problem by using a Shell provisioner that adds Defaults env_keep = "SSH_AUTH_SOCK" to /etc/sudoers on the VM.

Here's the Bash script I use to do just that:

#!/usr/bin/env bash

# Ensure that SSH_AUTH_SOCK is kept
if [ -n "$SSH_AUTH_SOCK" ]; then
  echo "SSH_AUTH_SOCK is present"
else
  echo "SSH_AUTH_SOCK is not present, adding as env_keep to /etc/sudoers"
  echo "Defaults env_keep+=\"SSH_AUTH_SOCK\"" >> "/etc/sudoers"
fi

I haven't tested this with the Chef provisioner, only with additional Shell provisioners... but from what I understand this should work the same for your use case.
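
In Vagrantfile terms, the ordering would look roughly like this (script path, cookbook path and recipe name are placeholders):

# Run the sudoers fix as a shell provisioner before Chef, so the Chef
# provisioner's own SSH connection already keeps SSH_AUTH_SOCK under sudo.
Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true

  config.vm.provision "shell", path: "scripts/keep_ssh_auth_sock.sh"

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "my_app"
  end
end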
