Is there a fix for the Celery issue of picking up old code? https://github.com/celery/celery/issues/3519

Posted 2025-02-01 13:14:47


Celery seems to be picking up both my old code and my new code. I have tried clearing the cache, purging the broker queue (Redis), restarting Celery, and so on, but none of these fixes the issue.

For context, we periodically push new releases to various servers. The web application uses Django REST Framework on the backend and Celery for scheduling asynchronous tasks. Recently, when we deployed a new version of the code, the application behaved very strangely: it showed artifacts of the old code still running alongside parts of the new code. This was baffling until we found a GitHub thread (https://github.com/celery/celery/issues/3519) that outlines exactly the issue we faced. That thread has no good answer, so I am posting here in case anyone with Celery knowledge knows a workaround to stop Celery from picking up the old artifacts.

The deployment is done through a Jenkins build script, shown below. For obvious reasons, I have replaced our application name with "proj".

sudo /bin/systemctl stop httpd
sudo /bin/systemctl stop celery
/bin/redis-cli flushall
/srv/proj/bin/pip install --no-cache-dir --upgrade -r /srv/proj/requirements/staging.txt
/usr/bin/git fetch
/usr/bin/git fetch --tags
/usr/bin/git checkout $TAG
/srv/proj/bin/python manage.py migrate
sudo /bin/systemctl restart httpd
sudo /bin/systemctl restart proj
sudo /bin/systemctl start celery
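One sanity check worth adding after the `systemctl stop celery` step: `systemctl stop` only signals processes tracked by the unit, so workers started outside systemd, or ones that detached, can survive the stop and keep running the old code. A minimal sketch (the process pattern is an assumption about how your systemd unit launches the workers; adjust it to match the unit's ExecStart line):

```shell
# Verify that no worker processes survived the stop step.
check_stray() {
  # pgrep -f matches against the full command line; exit status 1 means no match
  if pgrep -f "$1" > /dev/null; then
    echo "stray processes matching '$1':"
    pgrep -af "$1"
    # pkill -f "$1"   # last resort: kill the survivors before restarting
  else
    echo "none found for '$1'"
  fi
}
check_stray 'celery'   # adjust the pattern to your unit's ExecStart command
```

If this reports survivors between the stop and start steps, the restarted service ends up running alongside old workers that still have the previous code imported.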

Comments (1)

何以笙箫默 2025-02-08 13:14:47


OP, the problem is almost invariably that your servers still contain old code. There are two likely issues:

  1. The checkout command fails. `git checkout` can refuse to switch branches when there are local changes in the working directory. In addition, the deployment script never pulls the latest changes onto the server. The more common approach, and the one we use, is to keep the venv in a separate directory and, on each deploy, clone the repository into a new (versioned) directory, then switch the "main" application directory link to point at the latest version. The latter is, for example, the approach used by AWS Elastic Beanstalk.
  2. Stray Celery processes are still running. Depending on how you start/stop Celery, stray worker processes may still be hanging around and running your old code.
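The versioned-directory approach from point (1) can be sketched roughly as follows. This is a minimal illustration with hypothetical paths under /tmp; a real deploy would clone under something like /srv/proj/releases and restart the services afterwards. The key property is that the symlink switch is a single atomic rename, so nothing ever imports from a half-updated tree:

```shell
# Minimal sketch of a versioned-release deploy (paths are placeholders).
RELEASES=/tmp/proj/releases      # a real deploy would use e.g. /srv/proj/releases
TAG=v2.0.0                       # stands in for the Jenkins $TAG variable
mkdir -p "$RELEASES/$TAG"
# /usr/bin/git clone --branch "$TAG" <repo-url> "$RELEASES/$TAG"   # real clone step
# Build the link under a temporary name, then rename it into place:
# rename(2) is atomic, so readers always see either the old or the new tree.
ln -sfn "$RELEASES/$TAG" /tmp/proj/current.tmp
mv -Tf /tmp/proj/current.tmp /tmp/proj/current
readlink /tmp/proj/current       # now points at the new release directory
```

Rolling back then amounts to repointing the link at the previous release directory; no checkout into a live working tree is ever needed.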

Ultimately, you won't be able to diagnose this problem without confirming that all of your servers are 100% identical. So, if you have devs or admins who ssh into these boxes, chances are that one or more of them made a change that affected (1) or (2).
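One cheap way to confirm the boxes are identical is to compare the deployed commit on each server. A hedged sketch, assuming you gather each server's HEAD hash yourself (e.g. via `ssh host 'git -C /srv/proj rev-parse HEAD'`; the hashes below are placeholders):

```shell
# Given the HEAD hashes gathered from each server, verify that
# every box is on the same commit.
all_same() {
  first=$1
  for h in "$@"; do
    if [ "$h" != "$first" ]; then
      echo "MISMATCH: $h != $first"
      return 1
    fi
  done
  echo "all servers on the same commit"
}
# Example with placeholder hashes, one per server:
all_same abc123 abc123 abc123    # prints: all servers on the same commit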
