Which JMS brokers allow resending messages held in a dead message queue?
I wonder if there is a JMS broker that allows administrators to resend (via a GUI or any other tool) messages saved in a dead message queue or dead letter queue, after the underlying problem has been solved (e.g. the database is down, not enough space...).
WebSphere provides a feature to resend messages saved in the dead letter queue.
Glassfish 2.1.1 using Sun Java System Message Queue 4.4 has no feature to do this, as far as I can tell.
What are the options on other JMS brokers? Or, if you depend on a message, is it best not to use the DMQ/DLQ feature at all?
Thanks a lot
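Edit: for brokers without a built-in resend tool, I assume a manual fallback could drain the DLQ programmatically. A rough, untested sketch in plain javax.jms; the ConnectionFactory and queue names are placeholders, and it assumes the broker exposes the DLQ as an ordinary queue (some brokers, e.g. WMQ, prepend a dead-letter header that would have to be stripped first):

    import javax.jms.*;

    // Sketch: drain a DLQ and resend every message to its original destination.
    public class DlqResender {
        public static void resendAll(ConnectionFactory factory,
                String dlqName, String targetName) throws JMSException {
            Connection conn = factory.createConnection();
            try {
                // Transacted session: the consume and the resend commit together,
                // so no message is lost if the resend fails.
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer =
                        session.createConsumer(session.createQueue(dlqName));
                MessageProducer producer =
                        session.createProducer(session.createQueue(targetName));
                conn.start();
                Message msg;
                // Receive with a short timeout; null means the DLQ is empty.
                while ((msg = consumer.receive(1000)) != null) {
                    producer.send(msg);
                }
                session.commit();
            } finally {
                conn.close();
            }
        }
    }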
2 Answers
I can answer for WebSphere MQ but not for any other JMS providers. In the case of WMQ there are several tools, including the Dead Letter Handler (DLH), which can automatically retry DLQ messages for transient errors such as QFULL. For example, a queue fills up and the inbound messages overflow to the DLQ. The DLH will begin to retry these messages and, as the queue drains, it will automatically place them back on the original target queue. Other tools are available as WMQ SupportPacs.
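For example, the DLH is typically run as the runmqdlq utility driven by a rules table. A sketch of what such a table might look like (the queue manager name QM1 and APP.EXCEPTION.QUEUE are placeholders; verify the exact keywords against your WMQ documentation):

    * Control line: which DLQ to process, and how often to retry
    INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) INPUTQM(QM1) RETRYINT(60) WAIT(YES)
    * Retry QFULL indefinitely: messages go back to the original queue as it drains
    REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(999999999)
    * Everything else: forward to an app-specific exception queue
    ACTION(FWD) FWDQ(APP.EXCEPTION.QUEUE) HEADER(NO)

You would then feed this to the handler with something like: runmqdlq SYSTEM.DEAD.LETTER.QUEUE QM1 < dlq.rules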
The general rule of thumb is that you must have some process to deal with poison messages. Ideally this will be an application-specific exception queue because the system DLQ is shared. I have seen a number of cases where multiple apps spilled messages to the DLQ and the support team for one of the apps cleared the entire queue instead of just their messages. Not good.
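To sketch that idea in plain javax.jms (my own illustration: the delivery threshold and the APP.EXCEPTION.QUEUE name are assumptions, and JMSXDeliveryCount is an optional JMS-defined property that not every provider sets):

    import javax.jms.*;

    // Sketch: divert poison messages to an app-specific exception queue
    // instead of letting them land on the shared system DLQ.
    public class PoisonAwareListener implements MessageListener {
        private static final int MAX_DELIVERIES = 5;  // assumed threshold
        private final Session session;                // transacted session
        private final MessageProducer exceptionProducer;

        public PoisonAwareListener(Session session) throws JMSException {
            this.session = session;
            this.exceptionProducer =
                    session.createProducer(session.createQueue("APP.EXCEPTION.QUEUE"));
        }

        @Override
        public void onMessage(Message msg) {
            try {
                // Default to 1 if the provider does not set the property.
                int deliveries = msg.propertyExists("JMSXDeliveryCount")
                        ? msg.getIntProperty("JMSXDeliveryCount") : 1;
                if (deliveries > MAX_DELIVERIES) {
                    exceptionProducer.send(msg); // park it for this app's team
                } else {
                    process(msg);                // application-specific work
                }
                session.commit();
            } catch (JMSException e) {
                try { session.rollback(); } catch (JMSException ignored) { }
            }
        }

        private void process(Message msg) throws JMSException { /* ... */ }
    }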
One other note of caution is that messages landing on the DLQ usually result in disruption of the message sequence. For example, a queue fills and messages go to the DLQ. As the queue drains, messages are replayed from the DLQ, at which point they are interspersed with new messages as those arrive. Ideally the app is not sensitive to message sequencing issues and each message is atomic. This is the key to answering your final question. Whether you use the DLQ depends a lot more (at least in WMQ) on whether the app is sensitive to message sequencing. If sequencing is an issue, then you don't have the option of letting messages spill over to a secondary queue and replaying them while new messages are still arriving. Better in that case to let the queue fill and throttle back or shut down the sending app.
You can read more on the DLH here: http://bit.ly/aYJ13q
WMQ SupportPacs are here: http://bit.ly/bdSUfd (Check out MA01 and MO01)
Note: I work for CodeStreet
One thing you could do is use the CodeStreet "ReplayService for MQ" to record all messages in your DLQ and then view/search them through the Web GUI.
Once you find the message(s) you want to resend, you can drag and drop them onto an arbitrary MQ topic or queue, or specify a replay request to replay them to your target application.
Check out http://www.codestreet.com/marketdata/jms/jms_mq.php for further details.