What steps can be taken to optimize TIBCO JMS for performance?

Posted 2024-10-17 10:04:41

We are running a high-throughput system that uses TIBCO EMS (JMS) to pass large numbers of messages between our main server and our client connections. We've gathered some statistics and determined that JMS is causing a lot of latency. How can we make TIBCO JMS more performant? Are there any resources that give a good discussion of this topic?

2 Answers

暗恋未遂 2024-10-24 10:04:41

Using non-persistent messages is one option if you don't need persistence.
Note that even if you do need persistence, it is sometimes better to use non-persistent messages and, in case of a crash, perform a different recovery action (such as resending all messages).

This is relevant if:

  • crashes are rare (since recovery takes time)
  • you can easily detect a crash
  • you can handle duplicate messages (you may not know exactly which messages were delivered before the crash)
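The duplicate-handling point above can be sketched as a small idempotent-consumer cache keyed by `JMSMessageID`. This is a generic sketch, not an EMS API; the class name and capacity are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Tracks recently seen JMSMessageIDs so redelivered messages can be skipped.
// Capacity-bounded (insertion-order eviction) so memory stays flat under load.
class SeenMessageCache {
    private final Map<String, Boolean> seen;

    SeenMessageCache(final int capacity) {
        this.seen = new LinkedHashMap<String, Boolean>(capacity, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > capacity;
            }
        };
    }

    /** Returns true if this message ID was already processed (a duplicate). */
    synchronized boolean isDuplicate(String jmsMessageId) {
        return seen.put(jmsMessageId, Boolean.TRUE) != null;
    }
}
```

On recovery the producer resends everything; consumers consult the cache before processing, so duplicates are dropped instead of reprocessed.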

EMS also provides some mechanisms that are persistent but less bulletproof than classic guaranteed delivery.
These include:

  • Instead of "exactly once" message delivery, you can use "at least once" or "at most once" delivery.
  • You can use the prefetch mechanism, which causes the client to fetch messages into memory before your application requests them.
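For reference, prefetch in EMS is a destination property set server-side; in `queues.conf` an entry might look like this (property name per the EMS documentation; the queue name is illustrative):

```
# queues.conf: let consumers fetch up to 50 messages ahead of the application
orders.queue prefetch=50
```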
遗失的美好 2024-10-24 10:04:41

EMS should not be the bottleneck. I've done testing and we have gotten a huge amount of throughput on our server.

You need to try to determine where the bottleneck is. Is the problem in the producer of the message or the consumer? Are messages piling up on the queue?

What type of scenario are you running: pub/sub or request-reply?

Are temporary queues piling up? Too many temporary queues can cause performance issues, mostly when they linger because you didn't close something properly.

Are you publishing to a topic with durable subscribers? If so, try bridging the topic to a queue and reading from that. Durable subscribers can also cause a hiccup in performance, since the server needs to track who has copies of all messages.
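Topic-to-queue bridging is configured server-side; in `bridges.conf` the entry might look like this (syntax per the EMS documentation; destination names are illustrative):

```
# bridges.conf: fan the topic out to a queue that consumers read from
[topic:orders.topic]
  queue=orders.queue
```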

Ensure that your sending process has one session and makes multiple calls through that session. Don't open a complete session for each operation; re-use where possible. Do the same for the consumer.

Make sure you CLOSE when you are done. EMS doesn't clean things up for you, so if you make a connection and just kill your app, the connection is still there, consuming resources.
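The two points above (re-use one session for many sends, and close what you open) can be sketched with stub classes standing in for the `javax.jms` types, since a live EMS server isn't available here; the lifecycle pattern, not the API, is the point:

```java
// Stubs mimicking the shape of javax.jms Connection/Session/MessageProducer.
// Static counters let us verify one session serves many sends and gets closed.
class StubConnection implements AutoCloseable {
    static int opened = 0, closed = 0;
    StubConnection() { opened++; }
    StubSession createSession() { return new StubSession(); }
    @Override public void close() { closed++; }
}

class StubSession implements AutoCloseable {
    static int opened = 0, closed = 0;
    StubSession() { opened++; }
    StubProducer createProducer() { return new StubProducer(); }
    @Override public void close() { closed++; }
}

class StubProducer {
    void send(String body) { /* would hand the message to the broker */ }
}

class Sender {
    /** One connection/session/producer reused for every message, then closed. */
    static int sendAll(String[] messages) {
        int sent = 0;
        try (StubConnection conn = new StubConnection();
             StubSession session = conn.createSession()) { // NOT one per send
            StubProducer producer = session.createProducer();
            for (String m : messages) {
                producer.send(m);
                sent++;
            }
        } // try-with-resources closes session and connection even on error
        return sent;
    }
}
```

With the real API the structure is the same: create the connection and session once at startup, loop sends through the one producer, and close in a finally block (or try-with-resources) on shutdown.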

Review your tolerance for lost messages in the event of a crash. If you are doing client acknowledge and it doesn't matter if you crash while processing a message, switch to auto acknowledge. Also, I believe that if you are using TEMS (Tibco EMS for WCF) there is a problem with session acknowledge, where a message is only acknowledged once the whole message has been processed; we switched from CLIENT_ACKNOWLEDGE to DUPS_OK_ACKNOWLEDGE and it worked better.
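The acknowledge-mode trade-off above can be written as a small decision helper. The constant values below match `javax.jms.Session`, so the result can be passed to `createSession(false, mode)`; the helper itself and its predicate names are illustrative:

```java
// Acknowledge-mode selection following the advice above.
// Values mirror the javax.jms.Session constants.
final class AckModePicker {
    static final int AUTO_ACKNOWLEDGE = 1;    // acked around delivery; no app work
    static final int CLIENT_ACKNOWLEDGE = 2;  // app acks explicitly; safest, slowest
    static final int DUPS_OK_ACKNOWLEDGE = 3; // lazy, batched acks; duplicates possible

    /** Pick a mode based on what the application can tolerate after a crash. */
    static int pick(boolean lossOnCrashOk, boolean duplicatesOk) {
        if (lossOnCrashOk) return AUTO_ACKNOWLEDGE;   // don't pay for per-message acks
        if (duplicatesOk)  return DUPS_OK_ACKNOWLEDGE; // cheaper than client ack
        return CLIENT_ACKNOWLEDGE;                     // full control, full cost
    }
}
```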
