Both delayed_job and Resque work fairly well. Resque should scale better as the volume of background requests increases.
Resque's use of Redis should be limited to the task request itself. Large data objects needed by the background tasks should be stored somewhere other than the background worker queue. For example, files sent to a background worker for encoding should be stored in AWS S3 or some other persistent store, not in the Redis queue used by Resque. A sketch of that pattern follows below.
When using delayed_job or Resque, you will need to run background workers, which costs money. You might want to look at an autoscaling solution that dynamically starts and stops background workers as needed.
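As a minimal sketch of that idea (the bucket name, key layout, and `EncodeJob` class are invented for illustration, not from the answer), a Resque job can carry only the S3 key through Redis and fetch the file itself from S3 inside `perform`:

```ruby
require 'resque'
require 'aws-sdk-s3'

class EncodeJob
  @queue = :encode

  # Only the small S3 key string travels through Redis; the file stays in S3.
  def self.perform(s3_key)
    s3 = Aws::S3::Client.new
    file = s3.get_object(bucket: 'my-uploads-bucket', key: s3_key) # hypothetical bucket
    encode(file.body.read)
  end

  def self.encode(data)
    # ... actual encoding work ...
  end
end

# Enqueueing: push only the key, never the file bytes, into the Redis queue.
Resque.enqueue(EncodeJob, 'uploads/video-123.mov')
```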
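One hedged way to picture such an autoscaler (the threshold and the `scale_workers` helper are hypothetical and not taken from the linked example): poll the pending queue depth and adjust the worker count to match.

```ruby
# Rough sketch of queue-based autoscaling. scale_workers is a hypothetical
# helper that would call your hosting provider's API (e.g. Heroku) to change
# the number of running worker processes.
require 'resque'

DESIRED_JOBS_PER_WORKER = 50 # assumed threshold; tune for your workload

loop do
  pending = Resque.info[:pending]                                  # jobs waiting across all queues
  workers = [(pending.to_f / DESIRED_JOBS_PER_WORKER).ceil, 1].max # keep at least one worker
  scale_workers(workers)                                           # hypothetical provider API call
  sleep 60
end
```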
See http://s831.us/h3pKE6 as an example.
We've used delayed_job very intensively, sending hundreds of concurrent emails, and it's worked very well, flawlessly. Yes, it'll cost $36/mo for the worker. But a single worker gets a lot of jobs done... several fairly complex emails (lots of database lookups) sent per second.
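For reference, a minimal sketch of how that kind of email work is typically handed to delayed_job (the mailer and method names are invented for illustration):

```ruby
# With the delayed_job gem, calling .delay on an object queues the method call
# to run in a background worker instead of during the web request.
# UserMailer and welcome_email are illustrative names, not from the answer.
class NotificationsController < ApplicationController
  def create
    user = User.find(params[:user_id])
    # Queued: the worker does the database lookups and delivers the email later.
    UserMailer.delay.welcome_email(user.id)
    head :ok
  end
end
```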