Java JMS - how to handle a large number of destinations, with no per-destination parallelism

Posted 2024-11-25 02:12:19


I'm looking for advice on a JMS-based architecture...

My application needs to receive JMS messages on behalf of thousands of different destinations, and then deliver to the destinations via non-JMS protocols (i.e. this is a gateway). Allowable solutions are for all messages to originally be sent to one JMS queue, or to go to one queue per destination.

Solutions need to perform well with this large number of destinations (and many messages per second).

The requirements are:

  1. During the time a message is being delivered to one destination, no other message may be processed for that destination.
  2. Messages must be delivered FIFO per destination based on when they were sent into JMS
  3. None may be lost (JMS transaction semantics are adequate)
  4. Deliveries must take place in parallel to multiple destinations (except no parallelism per destination)
  5. There are several identical instances of the application, on different machines, that implement this, all running at once. They can communicate via a shared cache or JMS, but communication should be simple and minimal.
  6. The gateway will reside in a J2EE container, but is not required to use MDB's

Thanks in advance
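Requirements 1, 2 and 4 amount to a per-destination serialization contract. A minimal plain-Java sketch of that contract (the class and names here are illustrative, not part of the question): one single-threaded executor per destination gives FIFO, non-overlapping delivery for each destination while different destinations proceed in parallel.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// One single-threaded executor per destination: tasks for the same
// destination run strictly FIFO with no overlap, while tasks for
// different destinations run in parallel.
public class PerDestinationDispatcher {
    private final Map<String, ExecutorService> lanes = new ConcurrentHashMap<>();

    public void dispatch(String destination, Runnable delivery) {
        lanes.computeIfAbsent(destination,
                d -> Executors.newSingleThreadExecutor())
             .submit(delivery);
    }

    public void shutdown() throws InterruptedException {
        for (ExecutorService lane : lanes.values()) {
            lane.shutdown();
            lane.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PerDestinationDispatcher dispatcher = new PerDestinationDispatcher();
        List<String> log = new CopyOnWriteArrayList<>();
        for (int i = 1; i <= 3; i++) {
            final int n = i;
            dispatcher.dispatch("dest1", () -> log.add("dest1-" + n));
            dispatcher.dispatch("dest2", () -> log.add("dest2-" + n));
        }
        dispatcher.shutdown();
        // Per-destination order is preserved even though the two
        // destinations were processed concurrently.
        System.out.println(log.indexOf("dest1-1") < log.indexOf("dest1-3"));
    }
}
```

This sketch ignores requirements 3 and 5 (transactions and multi-instance coordination), which is where the JMS-specific design choices in the answers below come in.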


Comments (2)

青衫负雪 2024-12-02 02:12:19


It sounds like you would be able to use one queue per destination to deliver messages from the different publishers to the gateway. The gateway would then need to be multi-threaded, with one thread per queue consumer. So, for x number of producers publishing to n destinations, the gateway will need n threads, one per destination. This architecture will provide you with throughput that is governed by how much processing the gateway has to do with a message before it forwards it on to its final destination, and how long it takes for a message to be processed by the final destination before the gateway can send the following message.

This design has 2 downsides:

  1. Your application(s) will have a single point of failure: the gateway. You will not be able to load-balance it, because the order of consumption is important to you, so you don't want 2 gateways draining the same queue.
  2. Each queue can potentially become a bottleneck, clogging messages that are not being processed quickly enough.

If you have control over the publishers, wouldn't you prefer to transport the messages directly from the publishers to the final destinations using the destination's protocol of choice, without going through the gateway (which seems to serve no purpose other than being a performance bottleneck and a single point of failure)? If you are able to achieve this, your next task is to teach the final destinations to multi-process requests, relaxing the order constraint if possible (requirement #2).

Another choice you have is to do batch processing. At any given point in time, a consumer drains all available messages on the queue and processes them in a batch at once. This means that you'd have to do synchronous message consumption (Consumer#receive()), as opposed to asynchronous consumption with an onMessage.
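The drain-and-batch loop described above can be sketched in plain Java. A BlockingQueue stands in for the JMS queue here (an assumption made for illustration only): poll(timeout) plays the role of MessageConsumer#receive(timeout), and poll() of receiveNoWait().

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Drain-and-batch: block until at least one message arrives, then
// grab everything else already queued without blocking, and process
// the whole batch at once.
public class BatchDrainer {
    public static List<String> drainBatch(BlockingQueue<String> queue,
                                          long timeoutMs) throws InterruptedException {
        List<String> batch = new ArrayList<>();
        // Analogous to MessageConsumer#receive(timeout): wait for the
        // first message, up to the timeout.
        String first = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (first == null) {
            return batch; // nothing arrived within the timeout
        }
        batch.add(first);
        // Analogous to receiveNoWait(): take whatever is already
        // available without blocking.
        String next;
        while ((next = queue.poll()) != null) {
            batch.add(next);
        }
        return batch;
    }
}
```

In real JMS code the loop would run inside a session-scoped transaction so the whole batch commits or rolls back together, preserving requirement #3.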

心欲静而疯不止 2024-12-02 02:12:19


@Mesocyclone: Based on inputs from your question and the solution provided by Moe above, here is a possible solution along the lines of what you are looking for.

You can introduce one queue per destination internally in your gateway application, e.g. dest1queue, dest2queue, and so on, and expose only one input queue to receive messages. You can have one MDB thread listening to each of these internal queues, deployed on different servers.
For example, dest1queue is listened to by an MDB (single thread) on server1, dest2queue by an MDB (single thread) on server2, dest3queue by an MDB (single thread) on server3...

So basically the flow would be:

Single input queue exposed outside of the gateway application -> the message is received by 1 or more instances of an MDB whose only purpose is to route the incoming message to an internal queue -> each internal queue (one per destination) is listened to by only 1 MDB thread (since you don't require parallelism for one destination), which processes the message and talks to the destination.
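The routing step in that flow can be sketched with plain collections standing in for JMS queues (the class and method names are illustrative, not from any library). In the real gateway this logic would live inside the router MDB's onMessage(), with the destination read from a message property and the forward done via a JMS producer in the same transaction.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Router step of the gateway flow: forward each incoming message to
// the internal queue for its destination, creating the queue lazily
// on first use. Plain BlockingQueues stand in for JMS queues.
public class DestinationRouter {
    private final Map<String, BlockingQueue<String>> internalQueues =
            new ConcurrentHashMap<>();

    // Would be called from the router MDB's onMessage().
    public void route(String destination, String payload) {
        queueFor(destination).add(payload);
    }

    // The single-threaded per-destination consumer reads from here.
    public BlockingQueue<String> queueFor(String destination) {
        return internalQueues.computeIfAbsent(destination,
                d -> new LinkedBlockingQueue<>());
    }
}
```

Because each internal queue has exactly one consumer thread, FIFO order per destination falls out of the queue itself; the router instances can be scaled freely since they never reorder messages within a destination (assuming the JMS provider preserves order from the single input queue to each router transaction).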

Benefits of the above design:

  1. You can have each internal queue listened to by an MDB thread deployed on a different server, so that each MDB thread gets maximum processing time.
  2. At any point in time, you can change the number of threads listening for one destination without affecting the others.
  3. However, the above design requires you to have a backup MDB server for each internal queue to avoid a SPOF. The server on which you deploy the application may provide some sort of failover capability.