task_reject_on_worker_lost with Redis as message broker
I'm currently using version 5.2.6 of Celery and version 6.2.6 of Redis. When I turn on the task_reject_on_worker_lost flag, I expect Celery to redeliver a task whose worker died abruptly. However, with Redis as the message broker, my task doesn't actually get redelivered after a worker goes down. When I try the exact same configuration with RabbitMQ, it works as expected.
Any pointers on how to achieve the same behavior with Redis as message broker?
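For anyone reproducing this, here is a minimal sketch of the setup described in the question; the app name, broker URL, and task body are placeholders, not the poster's actual code:

```python
# Minimal Celery app illustrating the flags in question (placeholders, not the poster's code).
from celery import Celery

app = Celery("repro", broker="redis://localhost:6379/0")

# Acknowledge only after the task finishes, and re-queue tasks whose worker died.
app.conf.task_acks_late = True
app.conf.task_reject_on_worker_lost = True

@app.task
def slow_task():
    import time
    time.sleep(60)  # kill -9 the worker process during this sleep to test redelivery
```

With RabbitMQ this configuration is enough to get redelivery; with Redis it is not, which is the behavior the question describes.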
I am new to Celery and ran into the same issue as you did. With the ack config enabled (task_acks_late plus task_reject_on_worker_lost):

If the broker is Redis: the task is not re-queued when the worker is killed mid-task and restarted.
If the broker is RabbitMQ: the task is re-queued and runs again.
My environment
Finally, I found this comment in a Celery GitHub issue: the additional config value visibility_timeout under broker_transport_options is required for the Redis broker. I added that to my config and it's working now. FYI, here are my config files:
celery_config.py
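The file contents weren't captured in this copy of the post; below is a sketch of what a working celery_config.py might look like, with an illustrative broker URL and timeout value:

```python
# celery_config.py -- sketch, not the poster's actual file; values are illustrative.
broker_url = "redis://localhost:6379/0"

# Acknowledge after completion and re-queue tasks from dead workers.
task_acks_late = True
task_reject_on_worker_lost = True

# The Redis transport emulates acknowledgements: an unacked message is only
# redelivered after visibility_timeout seconds have elapsed (the default is
# 3600, which is why redelivery can appear to never happen). A short value
# makes the redelivery visible quickly when testing.
broker_transport_options = {"visibility_timeout": 10}
```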
app.py
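Likewise, a sketch of an app.py that loads that config; the module name, app name, and task are assumptions:

```python
# app.py -- sketch, not the poster's actual file.
from celery import Celery

app = Celery("myapp")
app.config_from_object("celery_config")  # loads the settings shown above

@app.task
def slow_task():
    import time
    time.sleep(60)  # long enough to kill the worker mid-task
```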