Reliable way to deploy new code to a production celery cluster without taking the service down

Posted 2024-12-10 06:53:00

I have a few celery nodes running in production with rabbitmq, and I have been handling deploys with a service interruption: I have to take down the whole site in order to deploy new code to celery. I have max tasks per child set to 1, so in theory, if I make changes to an existing task, they should take effect the next time it runs, but what about registering new tasks? I know that restarting the daemon won't kill running workers, but will instead let them die on their own; still, this seems dangerous. Is there an elegant solution to this problem?
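
For concreteness, here is a minimal sketch of the setup described above, assuming Celery 4+ setting names; the project name, broker URL and example task are placeholders rather than anything from the question:

    # app.py -- minimal sketch of the setup described in the question
    # (project name, broker credentials and the example task are placeholders)
    from celery import Celery

    app = Celery('proj', broker='amqp://guest:guest@localhost:5672//')

    # Replace each pool process after it has run a single task; the question
    # relies on this to pick up changes to already-registered tasks.
    app.conf.worker_max_tasks_per_child = 1

    @app.task
    def send_report(user_id):
        # placeholder task body
        return user_id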

Comments (2)

酒解孤独 2024-12-17 06:53:00

The challenging part here seems to be identifying which celery tasks are new versus old. I would suggest creating another vhost in rabbitmq and performing the following steps:

  1. Update the django web servers with the new code and reconfigure them to point to the new vhost.
  2. While tasks queue up in the new vhost, wait for the celery workers to finish the tasks from the old vhost.
  3. When the workers have completed, update their code and configuration to point to the new vhost.

I haven't actually tried this, but I don't see why it wouldn't work. One annoying aspect is having to alternate between the vhosts with each deploy.
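
A rough sketch of what this alternating-vhost configuration could look like, assuming the active vhost is read from an environment variable; the vhost names, the user and the rabbitmqctl commands are illustrative choices, not something the answer prescribes:

    # celeryconfig sketch -- alternate broker vhosts between deploys.
    # The new vhost would be created on the broker beforehand, e.g.:
    #   rabbitmqctl add_vhost deploy_b
    #   rabbitmqctl set_permissions -p deploy_b myuser ".*" ".*" ".*"
    import os
    from celery import Celery

    # Web servers (and, once the old queue is drained, the workers) read the
    # active vhost from the environment, so a deploy only has to flip
    # CELERY_VHOST between deploy_a and deploy_b.
    ACTIVE_VHOST = os.environ.get('CELERY_VHOST', 'deploy_a')

    app = Celery(
        'proj',
        broker='amqp://myuser:mypassword@localhost:5672/' + ACTIVE_VHOST,
    )

Whether the old vhost has drained (step 2) can be checked by watching its queues empty, e.g. with rabbitmqctl list_queues -p deploy_a, before restarting the workers against the new vhost.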

ペ泪落弦音 2024-12-17 06:53:00

A kind of workaround for you could be to set the config variable MAX_TASK_PER_CHILD.
This variable specifies the number of tasks that a Pool Worker executes before killing itself.
Of course, when a new Pool Worker is started it will load the new code.
On my system I normally restart celery and leave the other tasks running in the background; usually everything goes fine, but sometimes one of these tasks is never killed, and you can still kill it with a script.
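
For what it's worth, a sketch of that restart step with current Celery naming in mind (what the answer calls MAX_TASK_PER_CHILD corresponds to the worker's --max-tasks-per-child option); the pidfile path and node name below are made-up examples:

    # warm_restart.py -- sketch of "restart celery and let running tasks finish".
    # The pidfile path is an assumption for illustration, not from the answer.
    import os
    import signal

    PIDFILE = '/var/run/celery/w1.pid'   # hypothetical pidfile location

    # TERM asks the main worker process for a warm shutdown: it stops taking
    # new tasks, waits for the ones currently executing, then exits, and the
    # process manager starts it again on the newly deployed code. A straggler
    # that never finishes can still be force-killed by a cleanup script, as
    # noted above.
    with open(PIDFILE) as f:
        os.kill(int(f.read().strip()), signal.SIGTERM)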
