How to gracefully shut down the Mongrel web server

My RubyOnRails app is set up with the usual pack of Mongrels behind an Apache configuration. We've noticed that our Mongrel web servers' memory usage can grow quite large on certain operations, and we'd really like to be able to do a graceful restart of selected Mongrel processes dynamically, at any time.

However, for reasons I won't go into here, it can sometimes be very important that we don't interrupt a Mongrel while it is servicing a request, so I assume a simple process kill isn't the answer.

Ideally, I want to send the Mongrel a signal that says "finish whatever you're doing and then quit before accepting any more connections".

Is there a standard technique or best practice for this?

第七度阳光i 2024-07-11 13:25:42

I've done a little more investigation into the Mongrel source, and it turns out that Mongrel installs a signal handler to catch a standard process kill (TERM) and do a graceful shutdown, so I don't need a special procedure after all.

You can see this working from the log output you get when killing a Mongrel while it's processing a request. For example:

** TERM signal received.
Thu Aug 28 00:52:35 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:41 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:43 +0000 2008 (13051) Rendering layoutfalsecontent_typetext/htmlactionindex within layouts/application
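So in practice, a graceful restart of a single instance is just a TERM followed by a fresh start. A minimal sketch, assuming the mongrel_cluster pidfile layout from the monit example below (the path and port are illustrative):

    # capture the pid before the pidfile disappears on shutdown
    PIDFILE=/var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid
    PID=$(cat "$PIDFILE")

    # send TERM; Mongrel finishes its in-flight requests, then exits
    kill -TERM "$PID"

    # block until the old process has actually exited
    while kill -0 "$PID" 2>/dev/null; do
        sleep 1
    done

    # bring the instance back up on the same port
    mongrel_rails cluster::start --only 8000
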
夏九 2024-07-11 13:25:42

Look at using monit. You can dynamically restart a Mongrel based on memory or CPU usage. Here's a stanza from a config file that I wrote for a client of mine.

check process mongrel-8000 with pidfile /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid
    start program = "/usr/local/bin/mongrel_rails cluster::start --only 8000"
    stop program = "/usr/local/bin/mongrel_rails cluster::stop --only 8000"

    if totalmem is greater than 150.0 MB for 5 cycles then restart       # eating up memory?
    if cpu is greater than 50% for 8 cycles then alert                  # send an email to admin
    if cpu is greater than 80% for 5 cycles then restart                # hung process?
    if loadavg(5min) greater than 10 for 3 cycles then restart          # bad, bad, bad
    if 3 restarts within 5 cycles then timeout                         # something is wrong, call the sys-admin

    if failed host 192.168.106.53 port 8000 protocol http request /monit_stub
        with timeout 10 seconds
        then restart
    group mongrel

You'd then repeat this configuration for all of your mongrel cluster instances. The monit_stub path refers to an empty file that monit tries to download; if it can't, it restarts the instance as well.

Note: the resource monitoring seems not to work on OS X with the Darwin kernel.
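With a check like this loaded, you can also trigger a graceful restart by hand through monit instead of waiting for a threshold to trip. A brief sketch using standard monit commands (the service and group names match the check above):

    # re-read the control file after editing it
    monit reload

    # manually restart a single instance by the name in its check block
    monit restart mongrel-8000

    # or restart every process in the mongrel group
    monit -g mongrel restart all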

蒗幽 2024-07-11 13:25:42

A better question is how to keep your app from consuming so much memory that it requires you to restart Mongrels from time to time.

www.modrails.com (Phusion Passenger) reduced our memory footprint significantly.

孤城病女 2024-07-11 13:25:42

Boggy:

If you have one process running, it will gracefully shut down (servicing all the requests in its queue, which should only be 1 if you are using proper load balancing). The problem is that you can't start the new server until the old one dies, so your users will queue up in the load balancer. What I've found successful is a 'cascade' or rolling restart of the mongrels. Instead of stopping them all and starting them all (thereby queuing requests until one mongrel is done, stopped, restarted, and accepting connections again), you stop then start each mongrel sequentially, blocking the call to restart the next mongrel until the previous one is back up (use a real HTTP check against a /status controller). As your mongrels roll, only one at a time is down, and you are serving across two code bases; if you can't do that, you should throw up a maintenance page for a minute. You should be able to automate this with Capistrano or whatever your deploy tool is (see the sketch after the task list below).

So I have 3 tasks:
cap deploy - does the traditional restart-all-at-the-same-time method, with a hook that puts up a maintenance page and then takes it down after an HTTP check.
cap deploy:rolling - does this cascade across the machine (I pull from iClassify to know how many mongrels are on the given machine) without a maintenance page.
cap deploy:migrations - does maintenance page + migrations, since it's usually a bad idea to run migrations 'live'.
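Outside of Capistrano, the same cascade is easy to express directly against mongrel_cluster. A minimal sketch, assuming instances on ports 8000-8002 and a /status action that returns 200 once an instance is serving (the ports, host, and URL are illustrative):

    #!/bin/sh
    # Rolling restart: recycle one Mongrel at a time, and don't move on
    # until the previous one is answering real HTTP requests again.
    for PORT in 8000 8001 8002; do
        mongrel_rails cluster::stop --only "$PORT"    # graceful: traps TERM
        sleep 2                                       # let the port free up
        mongrel_rails cluster::start --only "$PORT"

        # block until this instance passes the HTTP check
        until curl -sf "http://127.0.0.1:$PORT/status" >/dev/null; do
            sleep 1
        done
    done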

做个少女永远怀春 2024-07-11 13:25:42

Try using:

mongrel_cluster_ctl stop

You can also use:

mongrel_cluster_ctl restart
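Those commands act on the whole cluster at once. To recycle just one instance, the underlying mongrel_rails cluster commands shown in the monit example above take an --only flag, e.g. (the port is illustrative):

    mongrel_rails cluster::restart --only 8000
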
衣神在巴黎 2024-07-11 13:25:42

Got a question:

What happens when /usr/local/bin/mongrel_rails cluster::start --only 8000 is triggered?

Are all of the requests being served by this particular process run to their end, or are they aborted?

I'm curious whether this whole start/restart thing can be done without affecting the end users...
