How to correctly configure and run a remote celery worker?
I'm new to celery and may be doing something wrong, but I have already
spent a lot of time trying to figure out how to configure celery
correctly.
So, in my environment I have 2 remote servers: one is the main server
(it has a public IP address and hosts most of the stuff, such as the
database server, the RabbitMQ server, and the web server running my web
application), and the other is used for specific tasks which I want to
invoke asynchronously from the main server using celery.
I was planning to use RabbitMQ as the broker and as the result backend.
The celery config is very basic:
CELERY_IMPORTS = ("main.tasks", )
BROKER_HOST = "Public IP of my main server"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERY_RESULT_BACKEND = "amqp"
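For reference, in newer Celery releases the separate BROKER_* settings above collapse into a single URL. A rough modern equivalent of this config, assuming the same guest/guest credentials and placeholder host, would be:

```python
# celeryconfig.py -- rough modern equivalent of the celery 2.x settings
# above. The host is a placeholder; note that recent RabbitMQ versions
# restrict the guest/guest account to loopback connections, so a remote
# worker usually needs a dedicated RabbitMQ user instead.
BROKER_URL = "amqp://guest:guest@<public-ip-of-main-server>:5672//"
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("main.tasks",)
```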
When I'm running a worker on the main server, tasks are executed just
fine, but when I'm running it on the remote server only a few tasks
are executed and then the worker gets stuck, unable to execute any more
tasks. When I restart the worker it executes a few more tasks and gets
stuck again. There is nothing special inside the tasks, and I even tried
a test task that just adds 2 numbers. I tried running the worker
differently (daemonized and not, with different concurrency settings, and
using celeryd_multi), but nothing really helped.
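For what it's worth, the trivial test task described above would look roughly like this (the module path `main/tasks.py` matches the CELERY_IMPORTS setting; the `task` decorator is the celery 2.x API, and the import fallback here is only so the sketch stands alone):

```python
# main/tasks.py -- a minimal test task of the kind described above.
try:
    # celery 2.x style task declaration
    from celery.task import task
except ImportError:
    # fallback so this sketch can be imported without celery installed
    task = lambda f: f

@task
def add(x, y):
    # trivial task: just adds 2 numbers
    return x + y
```

Calling the task directly (`add(2, 3)`) runs it synchronously, which is a quick way to rule out bugs in the task body itself before blaming the worker.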
What could be the reason? Did I miss something? Do I have to run
something on the main server other than the broker (RabbitMQ)? Or is
it a bug in celery (I tried a few versions: 2.2.4, 2.3.3 and dev,
but none of them worked)?
Hm... I've just reproduced the same problem with a local worker, so I
don't really know what it is... Is it required to restart the celery
worker after every N tasks executed?
Any help will be very much appreciated :)
Comments (2)
Don't know if you ended up solving the problem, but I had similar symptoms. It turned out that (for whatever reason) print statements from within tasks were causing tasks not to complete (maybe some sort of deadlock situation?). Only some of the tasks had print statements, so as those tasks eventually executed, the workers (limited by the concurrency option) were all exhausted, which caused task execution to stop.
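If print output is indeed what wedges the workers, the usual workaround is to log instead of print, so output goes through the worker's configured handlers rather than raw stdout writes. A minimal sketch using the standard logging module (the function name here is illustrative):

```python
# Replace print statements inside tasks with logging calls.
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

def report_progress(step):
    # instead of: print("finished step", step)
    logger.info("finished step %s", step)
```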
Try to set your celery config as described in the docs.