Broker network flooded with unconsumed ActiveMQ.Advisory.TempQueue messages

Posted on 2024-12-10 12:27:44

I'm currently investigating a memory problem in my broker network.
According to JConsole the ActiveMQ.Advisory.TempQueue is taking up 99% of the configured memory when the broker starts to block messages.

A few details about the config

Default config for the most part. One open stomp+nio connector, one open openwire connector. All brokers form a hypercube (one one-way connection to every other broker, which is easier to auto-generate). No flow control.
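For reference, here is a minimal programmatic sketch of roughly that setup, assuming ActiveMQ's embedded BrokerService API; the broker name, ports, and peer URI are made up, and a real deployment would more likely express this in activemq.xml:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.network.NetworkConnector;

public class BrokerSetupSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-01"); // hypothetical name

        // One open openwire connector and one open stomp+nio connector, as described above.
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.addConnector("stomp+nio://0.0.0.0:61612");

        // "No flow control": disable producer flow control for all destinations.
        PolicyEntry defaults = new PolicyEntry();
        defaults.setProducerFlowControl(false);
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(defaults);
        broker.setDestinationPolicy(policyMap);

        // One one-way (non-duplex) network connector per peer; repeated for every
        // other broker to form the hypercube described above.
        NetworkConnector toPeer = broker.addNetworkConnector("static:(tcp://broker-02:61616)");
        toPeer.setDuplex(false);

        broker.start();
    }
}
```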

Problem details

The web console shows something like 1,974,234 enqueued and 45,345 dequeued messages at 30 consumers (6 brokers, one consumer, and the rest are clients that use the Java connector). As far as I know the dequeue count should not be much smaller than enqueued * consumers, which here would be roughly 1,974,234 × 30 ≈ 59 million. So in my case a big bunch of advisories is not consumed and starts to fill my temp message space. (Currently I have several GB configured as temp space.)

Since no client actively uses temp queues I find this very strange. After taking a look at the temp queue I'm even more confused. Most of the messages look like this (msg.toString):

ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:srv007210-36808-1318839718378-1:1:0:0:203650, originalDestination = null, originalTransactionId = null, producerId = ID:srv007210-36808-1318839718378-1:1:0:0, destination = topic://ActiveMQ.Advisory.TempQueue, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1318840153501, brokerOutTime = 1318840153501, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence@45290155, dataStructure = DestinationInfo {commandId = 0, responseRequired = false, connectionId = ID:srv007210-36808-1318839718378-2:2, destination = temp-queue://ID:srv007211-47019-1318835590753-11:9:1, operationType = 1, timeout = 0, brokerPath = null}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=broker.coremq-behaviortracking-675-mq-01-master, originBrokerId=ID:srv007210-36808-1318839718378-0:1, originBrokerURL=stomp://srv007210:61612}, readOnlyProperties = true, readOnlyBody = true, droppable = false}

After seeing these messages I have several questions:

  1. Do I understand correctly that the origin of the message is a stomp connection?
  2. If yes, how can a stomp connection create temp queues?
  3. Is there a simple reason why the advisories are not consumed?

Currently I have sort of postponed the problem by deactivating the bridgeTempDestinations property on the network connectors (see the sketch after the two questions below); this way the messages are not spread and they fill the temp space much more slowly. If I cannot fix the source of these messages, I would at least like to stop them from filling the store:

  1. Can I drop these unconsumed messages after a certain time?
  2. What consequences can this have?
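For what it's worth, a minimal sketch of the workaround mentioned above, again assuming the embedded BrokerService API with a made-up peer URI; in activemq.xml this would correspond to bridgeTempDestinations="false" on the networkConnector element:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class DisableTempDestinationBridging {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        NetworkConnector toPeer = broker.addNetworkConnector("static:(tcp://broker-02:61616)");
        // Stop bridging temp destinations (and their advisories) to other brokers.
        // This is the stop-gap described above: the advisories still exist locally,
        // but they no longer spread across the broker network.
        toPeer.setBridgeTempDestinations(false);

        broker.start();
    }
}
```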

UPDATE: I monitored my cluster some more and found out that the messages are consumed. They are enqueued and dispatched, but the consumers (the other cluster nodes as well as the Java consumers that use the ActiveMQ lib) fail to acknowledge the messages, so they stay in the dispatched-messages queue and this queue grows and grows.

Comments (2)

╄→承喏 2024-12-17 12:27:44

This is an old thread, but in case somebody runs into the same problem, you might want to check out this post: http://forum.spring.io/forum/spring-projects/integration/111989-jms-outbound-gateway-temporary-queues-never-deleted

The problem in that link sounds similar, i.e. temp queues producing a large amount of advisory messages. In my case, we were using temp queues to implement synchronous request/response messaging, but the volume of advisory messages caused ActiveMQ to spend most of its time in GC and eventually throw a GC overhead limit exceeded exception. This was on v5.11.1. Even though we closed the connection, session, producer, and consumer, the temp queue would not be GC'd and would continue receiving advisory messages.

The solution was to explicitly delete the temp queues when cleaning up the other resources (see https://docs.oracle.com/javaee/7/api/javax/jms/TemporaryQueue.html)
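A minimal sketch of that cleanup with the plain javax.jms API, where the broker URL and request queue name are made up for illustration:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TempQueueCleanupSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // hypothetical URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Temp queue used as the reply-to destination of a synchronous request.
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        MessageProducer producer = session.createProducer(session.createQueue("requests")); // hypothetical queue
        TextMessage request = session.createTextMessage("ping");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);
        // ... consume the reply from replyQueue here ...

        // Explicitly delete the temp queue before closing the other resources, so the
        // broker no longer tracks it and stops producing advisories for it.
        replyQueue.delete();
        producer.close();
        session.close();
        connection.close();
    }
}
```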

情绪少女 2024-12-17 12:27:44

If you are not using this advisory topic, you may want to turn it off, as suggested at http://activemq.2283324.n4.nabble.com/How-to-disable-advisory-for-gt-topic-ActiveMQ-Advisory-TempQueue-td2356134.html

Dropping the advisory messages will not have any consequences, since those are just messages meant for system health analysis and statistics.
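For completeness, a minimal sketch of what turning advisories off could look like on an embedded broker, assuming the BrokerService API (the linked thread discusses the corresponding broker configuration). Note that this switches off all advisory topics, not just ActiveMQ.Advisory.TempQueue, and some network-of-brokers features use advisories to discover demand, so it is a trade-off worth verifying:

```java
import org.apache.activemq.broker.BrokerService;

public class DisableAdvisoriesSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Advisories are informational (health/statistics), per the comment above,
        // so they can be disabled if nothing consumes them. Verify first that no
        // dynamic network bridges in the cluster depend on them.
        broker.setAdvisorySupport(false);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
    }
}
```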
