Celery: one-at-a-time queue and parallel queues for other tasks
I am trying to use Celery to manage tasks.
The problem I am facing now is that I have many minor tasks (emails, cross-server posts, etc.)
and time-consuming tasks, like file uploads.
Is there any way to specify that uploads will always run one by one, with only one task executing at a time, while other workers work on other queues?
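For reference, one way to express this setup is Celery's queue routing: send uploads to a dedicated queue and run a single worker with concurrency 1 on it. The sketch below is only illustrative; the broker URL, task names, and queue name are assumptions, not part of the original question.

    # Minimal routing sketch, assuming a Redis broker; names are illustrative.
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    # Uploads go to their own queue; everything else uses the default queue.
    app.conf.task_routes = {
        "tasks.upload_file": {"queue": "uploads"},
    }

    @app.task
    def upload_file(path):
        ...  # time-consuming upload

    @app.task
    def send_email(address, body):
        ...  # quick minor task

A worker started with "celery -A tasks worker -Q uploads --concurrency=1" would then process uploads strictly one at a time, while other workers consume the default queue.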
2 Answers
An effective way to serialize the execution of tasks is to use a mutex (mutual exclusion).
Python's threading module has a Lock object which can be used to this effect.
Mutexes and semaphores are powerful tools, but used unwisely they will yield deadlocks and occasionally eat your lunch.
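A minimal sketch of that idea, assuming the tasks run as threads inside a single process (across separate worker processes a broker- or store-level lock would be needed instead); the function names are illustrative:

    # Serializing uploads with threading.Lock within one process.
    import threading

    upload_lock = threading.Lock()

    def do_upload(path):
        ...  # placeholder for the actual upload logic

    def upload(path):
        # Only one thread can hold the lock at a time, so uploads run one by one.
        with upload_lock:
            do_upload(path)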
Meanwhile I've implemented such a solution, and it works pretty well.
However, I am not entirely sure that max_retries=None means there will be an unlimited number of retries.
This solution works on Redis, but it can work on any other engine that supports an atomic increment operation.
The key here is that incr is atomic, so it can never happen that two clients receive counter == 1.
Expire is also very important: anything can happen, and we could end up with our counter > 1 forever, so expire makes sure that, no matter what, the counter is dropped after a specific time. This value can be adjusted to your needs. Mine is big file uploads, so 3600 sounds OK.
I think this is a good starting point for a custom Task object that will do all of this automatically by receiving redis_key and expire_time values. I will update this post if I write such a Task.
As a bonus, this solution can also easily be adjusted to a parallel limit of 2, 3, or any other number, by changing the > 1 check to > any number.
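A rough sketch of what such a task could look like, using the redis-py client and Celery's bind/retry; the key name, expiry, retry countdown, and the do_upload helper are assumptions for illustration, not the exact code from this answer:

    # Sketch of the Redis-counter approach described above.
    import redis
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")
    r = redis.Redis(host="localhost", port=6379, db=0)

    LOCK_KEY = "upload_counter"   # illustrative key name
    EXPIRE_SECONDS = 3600         # long enough to cover a big upload

    def do_upload(path):
        ...  # placeholder for the actual upload logic

    @app.task(bind=True, max_retries=None)
    def upload_file(self, path):
        # incr is atomic, so no two clients can both see counter == 1.
        counter = r.incr(LOCK_KEY)
        # expire guards against a counter stuck above 1 if a worker dies mid-task.
        r.expire(LOCK_KEY, EXPIRE_SECONDS)

        if counter > 1:
            # Another upload is running: undo our increment and retry later.
            r.decr(LOCK_KEY)
            raise self.retry(countdown=60)

        try:
            do_upload(path)
        finally:
            # Release the slot for the next queued upload.
            r.decr(LOCK_KEY)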