Set celery concurrency to one worker per queue
I am essentially using rabbitmq queues in celery as a poor man's synchronisation. E.g. when certain objects are updated (and the update is expensive), I round-robin them to a set of 10 queues based on their object IDs. Firstly, is this a common pattern, or is there a better way?
Secondly, with celeryd, the concurrency level option (CELERY_CONCURRENCY) seems to set the number of workers across all the queues. This defeats the purpose of using the queues for synchronisation: a single queue can be serviced by multiple workers, which means potential race conditions when performing different actions on the same object.
Is there a way to set the concurrency level (or worker pool options) so that we have one worker per N queues?
Thanks
Sri
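
For reference, here is a minimal sketch of the setup described in the question, written against the current Celery API (the original thread predates it); the task name update_object, the obj_update_N queue names, and the broker URL are all illustrative, not from the original post:

    # tasks.py
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    NUM_QUEUES = 10

    @app.task
    def update_object(obj_id):
        """The expensive per-object update (body omitted)."""
        ...

    def enqueue_update(obj_id):
        # Hash the object ID onto one of the 10 queues, so every task
        # for a given object always lands on the same queue.
        queue_name = 'obj_update_%d' % (obj_id % NUM_QUEUES)
        update_object.apply_async(args=[obj_id], queue=queue_name)

    # Then start one single-process worker per queue, so each queue is
    # consumed strictly serially, e.g.:
    #   celery -A tasks worker -Q obj_update_0 --concurrency=1 -n w0@%h
    #   celery -A tasks worker -Q obj_update_1 --concurrency=1 -n w1@%h
    #   ... and so on for the remaining queues.

This addresses the concurrency question directly: each worker subscribes to exactly one queue with --concurrency=1, so no two workers ever process the same object's tasks at the same time.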
Comments (2)
Why don't you simply implement a global task lock system using memcache or a NoSQL db?
That way you avoid any race conditions.
Here's an example:
http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time
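
In the spirit of that recipe, here is a sketch of an object-level lock using Django's cache (e.g. backed by memcached) as the lock store; the task itself and the do_expensive_update helper are hypothetical:

    from celery import shared_task
    from django.core.cache import cache

    LOCK_EXPIRE = 60 * 5  # let locks expire in case a worker dies mid-task

    @shared_task(bind=True)
    def update_object(self, obj_id):
        lock_id = 'update-object-lock-%s' % obj_id
        # cache.add is atomic: it only sets the key if it does not
        # already exist, so it doubles as a cheap distributed lock.
        if cache.add(lock_id, 'locked', LOCK_EXPIRE):
            try:
                do_expensive_update(obj_id)  # hypothetical expensive work
            finally:
                cache.delete(lock_id)
        else:
            # Another worker holds the lock for this object; retry
            # later instead of racing it.
            raise self.retry(countdown=10)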
Related to the first part of your question, I've asked and answered a similar question here: Route to worker depending on result in Celery?
Essentially you can route directly to a worker depending on a key, which in your case is an ID. It avoids any need for a single locking point. Hopefully it's useful, even though this question is 2 years old :)
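
That key-based routing can be expressed with Celery's classic router interface; a sketch, reusing the illustrative obj_update_N queue layout from above (the router class is not taken from the linked answer):

    NUM_QUEUES = 10

    class ObjectIdRouter(object):
        def route_for_task(self, task, args=None, kwargs=None):
            if task == 'tasks.update_object' and args:
                # Pin each object ID to a fixed queue (and hence to
                # the single worker consuming that queue).
                return {'queue': 'obj_update_%d' % (args[0] % NUM_QUEUES)}
            return None  # everything else falls through to default routing

    # In the Celery config:
    # CELERY_ROUTES = (ObjectIdRouter(),)

With a router in place, callers can use plain update_object.delay(obj_id) and the queue is chosen centrally, instead of computing the queue name at every call site.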