RabbitMQ: how to receive only one message at a time and not requeue on failure
Our system has a bunch of consumers that use Rabbit to consume messages for long-running tasks. Currently we ack at the end of processing, so that if the consumer crashes, the message gets requeued. What we want is for a consumer to work on only one message at a time and not prefetch, so that another consumer can work on the next message; and if a crash occurs we do not requeue, but we'll have our own monitor that will decide whether we need to re-run on a larger EC2 instance or whatever. It looks like we can get CLOSE to this by acking at the start of processing with a prefetch of 1, but that is still 1 message in the queue that could have been handled by another consumer. Apparently setting prefetch to 0 makes no sense according to the Rabbit devs (I don't understand why), so another option would be to still ack only on completion, so that a prefetch doesn't occur, but somehow DON'T requeue on crash.

If we are swimming upstream, so to speak, then I know we'll have to come up with another plan, but I don't understand why it is so odd to want a consumer to work on only one thing at a time (and not prefetch the next item of work) and to not requeue on crash.
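For what it's worth, the "ack at the start of processing with a prefetch of 1" approach described above looks roughly like this with the plain RabbitMQ Java client; the host, queue name, and process() method are placeholders, so treat it as a sketch rather than the exact setup in question:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class OneAtATimeWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");            // placeholder broker address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        String queue = "long.running.tasks";     // hypothetical queue name
        channel.basicQos(1);                     // prefetch = 1: at most one unacked message per consumer

        channel.basicConsume(queue, false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties,
                                       byte[] body) throws java.io.IOException {
                // Ack immediately, BEFORE doing the work: if this worker crashes,
                // the broker will NOT requeue the message; an external monitor
                // has to decide whether to re-run the task elsewhere.
                channel.basicAck(envelope.getDeliveryTag(), false);

                // Caveat (the limitation described in the question): once the ack
                // is sent, the broker may deliver the next message to this consumer
                // even though it is still busy with the current task.
                process(body);                   // long-running task
            }
        });
    }

    private static void process(byte[] body) {
        // placeholder for the actual long-running work
    }
}
```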
1 Answer
Consider using one of the RabbitTemplate receive() or receiveAndConvert() methods instead; that's a better model for this type of workload - fetching records as needed instead of them being pushed into your app.
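A minimal sketch of that pull-based model with Spring AMQP might look like the following. The queue name, timeout, and connection settings are assumptions for illustration; with the default non-transacted template, the message is removed from the queue as it is received, so a crash mid-task does not requeue it:

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PollingWorker {
    public static void main(String[] args) throws Exception {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost"); // placeholder host
        RabbitTemplate template = new RabbitTemplate(cf);

        String queue = "long.running.tasks";    // hypothetical queue name

        while (true) {
            // Pull ONE message only when this worker is actually free; nothing is
            // prefetched or held while a task is running, so another consumer can
            // take the next message. A crash during process() does not requeue
            // the message; an external monitor decides whether to re-run the task.
            Message message = template.receive(queue, 5000); // wait up to 5s for a message
            if (message == null) {
                continue;                        // queue was empty; poll again
            }
            process(message.getBody());          // long-running task
        }
    }

    private static void process(byte[] body) {
        // placeholder for the actual long-running work
    }
}
```

receiveAndConvert() works the same way but returns the payload already converted by the template's message converter instead of a raw Message.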