Cap deploy creates duplicate unicorns
I have the following tasks in my deploy.rb:
namespace :unicorn do
  desc "stop unicorn"
  task :stop, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} kill `cat #{unicorn_pid}`"
  end

  desc "start unicorn"
  task :start, :roles => :app, :except => { :no_release => true } do
    run "cd #{current_path} && #{try_sudo} unicorn -c #{current_path}/config/unicorn.rb -E #{rails_env} -D"
  end

  task :reload, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} kill -s USR2 `cat #{unicorn_pid}`"
  end

  after "deploy:restart", "unicorn:reload"
end
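(For context on how USR2 reloads are normally made clean: the conventional approach is to have the *new* master quit the *old* one from a before_fork hook in config/unicorn.rb, since Unicorn renames the superseded master's pid file to <pid>.oldbin on USR2. A sketch, assuming the pid path below matches whatever your unicorn_pid variable points at:

```ruby
# config/unicorn.rb -- sketch of the conventional zero-downtime hook.
# The pid path here is an assumption; adjust it to your deployment.
pid "/home/myuser/www/myapp/shared/pids/unicorn.pid"

before_fork do |server, worker|
  # On USR2, the old master renames its pid file to <pid>.oldbin.
  # Once the new master starts forking workers, tell the old one to quit.
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid)
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # Old master already gone -- nothing to do.
    end
  end
end
```

Without a hook like this, both masters keep running after a USR2, which is exactly the duplicate-process symptom described below.)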
When I run unicorn:start or unicorn:reload tasks from my development machine everything looks fine on the server:
$ ps aux | grep unicorn
myuser 8196 77.9 12.2 81020 62748 ? Sl 19:18 0:14 unicorn master -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
myuser 8216 0.0 11.5 81020 59232 ? Sl 19:18 0:00 unicorn worker[0] -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
However, when I run a full-on cap deploy I get multiple instances of the unicorn server, which confuses the hell out of nginx.
$ ps aux | grep unicorn
myuser 8196 4.4 12.2 81020 62764 ? Sl 19:18 0:14 unicorn master (old) -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
myuser 8216 1.1 13.2 87868 67764 ? Sl 19:18 0:03 unicorn worker[0] -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
myuser 8362 5.8 12.8 83448 65408 ? Sl 19:19 0:16 unicorn master -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
myuser 8385 0.0 12.1 83712 61980 ? Sl 19:19 0:00 unicorn worker[0] -c /home/myuser/www/myapp/current/config/unicorn.rb -E production -D
I have no idea why unicorn:reload is spinning up these duplicate instances on deploy. Apparently it's not stopping the previous master/worker. I have to run the unicorn:stop task twice and then unicorn:start again to rectify the problem.
Anyone else run into this? I've been poking at it for hours without any luck.
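(Worth noting: USR2 by itself only re-execs a fresh master alongside the old one; something still has to send the old master QUIT. If you'd rather drive that from Capistrano than from a before_fork hook, a hedged sketch of an extra task for the namespace above — the task name is made up, but the .oldbin suffix is what Unicorn appends to the superseded master's pid file:

```ruby
desc "quit the old unicorn master left behind by a USR2 reload"
task :quit_old_master, :roles => :app, :except => { :no_release => true } do
  # Unicorn renames the old master's pid file to <pid>.oldbin on USR2,
  # so only signal it if that file actually exists.
  run "[ -f #{unicorn_pid}.oldbin ] && #{try_sudo} kill -QUIT `cat #{unicorn_pid}.oldbin` || true"
end
```

Hooking a task like this after unicorn:reload would clean up the stale master instead of requiring two manual unicorn:stop runs.)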
So it looks like the issue was a faulty unicorn install. I nuked my gems and rebundled, and now everything is sweet. The Unicorn version is the same, so it's still a bit of a mystery, but at least it's working now.