Practical use of delayed background jobs when dealing with many users
When a background job starts, it's sent to the back of a queue where a worker handles it; one task clears and the next starts. I think I've got this part right, except I don't understand the practical side of it in some cases. Sure, if you're a company sending out 15,000 newsletters once a week, using a delayed job makes perfect sense. But when you have an application with even 100 users, in which some task is long enough to need background work (like sending/fetching emails, which might take a minute), then each user has to wait in line while another user's job gets cleared (in case there's a single worker).
This is the part I'm not sure I'm getting right. I'm talking about the same job, but run individually for each user. Does that count as one job per user? If I have 100 users, do I need to keep 100 workers so that nobody's job gets tied up behind everyone else's?
I've tried using delayed_job to simulate that, and indeed when I sign in with a different account I have to wait until another user's email gets sent before mine is. While the plugin is swift and simple to work with, I don't think it's the right approach here.
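For concreteness, what I mean is just the standard delayed_job enqueue, roughly like this (the model/method names are only placeholders); with a single worker the second job simply sits behind the first:

    # Queue the long-running send/fetch instead of doing it in the request.
    # With one worker these run strictly one after another:
    user_a.delay.fetch_and_send_emails   # picked up first
    user_b.delay.fetch_and_send_emails   # waits in the queue until user_a's job finishes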
I've also tried using Ajax, but since it's an HTTP request it ties up the browser in loading mode until it gets a response from the server (even with async: true). Not sure if I ruled this one out too quickly, but I was sort of looking for a more elegant server-side solution.
Is there a way to achieve a background job like this? (I've heard of different, mostly commercial, solutions promising short waiting times, but I'm interested in completely eliminating the queue between users.) If not, is there a way to make an Ajax request without waiting for the response? I realize my two questions are drastically different, but either seems like an appropriate solution to this problem.
2 Answers
Resque is a background processing engine that can support multiple queues.
Ways you could use this:
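For instance (a rough sketch with illustrative class/queue names, not taken from the original answer), you could give mail its own queue and run one or more dedicated workers against it, so one user's slow send doesn't hold up everything else:

    # A plain Resque job; jobs of this class land on the "emails" queue.
    class SendEmailJob
      @queue = :emails

      def self.perform(user_id)
        user = User.find(user_id)
        UserMailer.welcome_email(user).deliver   # placeholder mailer/method
      end
    end

    # Enqueue from a controller or model:
    Resque.enqueue(SendEmailJob, user.id)

    # Run a worker (or several, in separate processes) just for that queue:
    #   QUEUE=emails rake resque:work
    # A single worker can also watch several queues in priority order:
    #   QUEUES=emails,reports rake resque:work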
This SO question also gives a way to use delayed_job with multiple queues/tables.
The purpose of delayed_job and other message queues is to asynchronously process jobs outside of your core application. I always use a queue for sending email since I'm relying on an outside application (sometimes a third-party API like gmail) to send them, and I can't guarantee availability or operating efficiency.
So for your use case, even with very few users, I highly recommend offloading emails to delayed_job. This will speed up your front end (ajax) and will also give you retries upon failure. You could spin up multiple workers to process the queue, but it shouldn't be necessary with your numbers unless your calls to send mail are taking a really long time (more than a couple seconds?).
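As a rough sketch (placeholder mailer/method names, assuming a standard delayed_job setup), the offload is a one-line change, and extra workers are just additional processes:

    # Before: delivered inline during the request, so the user waits on gmail/etc.
    #   Notifier.order_confirmation(@user).deliver
    # After: queued; a worker delivers it later, and delayed_job retries on failure.
    Notifier.delay.order_confirmation(@user.id)

    # Start a worker:                     rake jobs:work
    # Or several, via the daemon script:  script/delayed_job -n 2 start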
And yes, in most situations I'd create separate jobs for each user even though the message might be identical. The only time I'd process them all together would be if the email application/API has bulk sending and you can reduce the number of calls significantly by sending a large payload in a few calls.
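A minimal sketch of the per-user approach (names are illustrative): one queued job per recipient, so each send can fail and retry on its own and multiple workers can chew through them in parallel:

    User.find_each do |user|
      Newsletter.delay.weekly_digest(user.id)   # one queued job per user
    end
    # Only batch these up if your mail provider offers a true bulk-send API
    # that lets a single call cover many recipients.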