Java JMS - How to handle a large number of destinations with no parallelism per destination
I'm looking for advice on a JMS-based architecture...
My application needs to receive JMS messages on behalf of thousands of different destinations, and then deliver to the destinations via non-JMS protocols (i.e. this is a gateway). Allowable solutions are for all messages to originally be sent to one JMS queue, or to go to one queue per destination.
Solutions need to perform well with this large number of destinations (and many messages per second).
The requirements are:
- During the time a message is being delivered to one destination, no other message may be processed for that destination.
- Messages must be delivered FIFO per destination based on when they were sent into JMS
- None may be lost (JMS transaction semantics are adequate)
- Deliveries must take place in parallel to multiple destinations (except no parallelism per destination)
- There are several identical instances of the application, on different machines, that implement this, all running at once. They can communicate via a shared cache or JMS, but communication should be simple and minimal.
- The gateway will reside in a J2EE container, but is not required to use MDB's
Thanks in advance
2 Answers
It sounds like you would be able to use one queue per destination to deliver messages from the different publishers to the gateway. The gateway would then need to be multi-threaded, with one thread per queue consumer. So, for x number of producers publishing to n destinations, the gateway will need n threads, one per destination. This architecture will provide you with throughput that is governed by how much processing the gateway has to do with a message before it forwards it on to its final destination, and how long it takes for a message to be processed by the final destination before the gateway can send the following message.
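The one-thread-per-destination pattern described above can be sketched with plain `java.util.concurrent` (a real gateway would feed these workers from JMS queue consumers; the class and method names here are illustrative, not from the question). Each destination gets its own single-threaded executor, so deliveries to one destination are strictly FIFO while different destinations proceed in parallel:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// One single-threaded executor per destination: tasks for the same
// destination run one at a time in submission order (FIFO), while
// different destinations are processed concurrently.
class PerDestinationDispatcher {
    private final Map<String, ExecutorService> workers = new ConcurrentHashMap<>();

    void dispatch(String destination, Runnable delivery) {
        workers.computeIfAbsent(destination,
                d -> Executors.newSingleThreadExecutor())
               .submit(delivery);
    }

    void shutdown() throws InterruptedException {
        for (ExecutorService e : workers.values()) {
            e.shutdown();
            e.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        PerDestinationDispatcher dispatcher = new PerDestinationDispatcher();
        List<String> delivered = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int seq = i;
            dispatcher.dispatch("dest1", () -> delivered.add("dest1-" + seq));
            dispatcher.dispatch("dest2", () -> delivered.add("dest2-" + seq));
        }
        dispatcher.shutdown();

        // Per-destination order is preserved even though the two
        // destinations were running concurrently.
        boolean fifo1 = true, fifo2 = true;
        for (int i = 1; i < 5; i++) {
            fifo1 = fifo1 && delivered.indexOf("dest1-" + (i - 1)) < delivered.indexOf("dest1-" + i);
            fifo2 = fifo2 && delivered.indexOf("dest2-" + (i - 1)) < delivered.indexOf("dest2-" + i);
        }
        System.out.println("dest1 FIFO ok: " + fifo1);
        System.out.println("dest2 FIFO ok: " + fifo2);
    }
}
```

Note this does mean one live thread per destination, which is worth measuring before committing to thousands of destinations.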
This design has 2 downsides:
If you have control over the publishers, wouldn't you prefer to transport the messages directly from the publishers to the final destinations using the destination's protocol of choice, without going through the gateway (which otherwise seems to serve no purpose other than being a performance bottleneck and a single point of failure)? If you are able to achieve this, your next task is to teach the final destinations to multi-process requests, relaxing the order constraint (requirement #2) if possible.
Another choice you have is to do batch processing. At any given point in time, a consumer drains all available messages on the queue and processes them in a batch at once. This means that you'd have to do synchronous message consumption (Consumer#receive()), as opposed to asynchronous consumption with an onMessage.
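The drain-and-batch idea can be sketched as follows. This is a minimal illustration using a `BlockingQueue` in place of a real consumer: `poll(timeout)` plays the role of `MessageConsumer#receive(timeout)` and `drainTo` the role of repeated `receiveNoWait()` calls; a real implementation would run this loop against a transacted JMS session.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Drain-and-batch consumer: block for the first message, then grab
// whatever else is already on the queue and process the batch at once.
class BatchDrainConsumer {
    static List<String> nextBatch(BlockingQueue<String> queue, long timeoutMs)
            throws InterruptedException {
        List<String> batch = new ArrayList<>();
        // Analogous to MessageConsumer#receive(timeout): wait for one message.
        String first = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (first == null) return batch;   // nothing arrived within the timeout
        batch.add(first);
        // Analogous to looping on receiveNoWait(): take what is already there.
        queue.drainTo(batch);
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("m1");
        queue.add("m2");
        queue.add("m3");
        List<String> batch = nextBatch(queue, 100);
        System.out.println("batch size: " + batch.size());
        System.out.println("order kept: " + batch.equals(List.of("m1", "m2", "m3")));
    }
}
```

Because the drain preserves queue order, FIFO per destination still holds as long as each destination's batch is processed by a single thread.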
@Mesocyclone: Based on the input from your question and the solution provided by Moe above, here is a possible solution along the lines you are looking for.
You can introduce one queue per destination internally in your gateway application (e.g. dest1queue, dest2queue, and so on) and expose only one input queue to receive messages. You can then have a single MDB thread listening to each of these internal queues, deployed across different servers.
For example, dest1queue is listened to by an MDB (single thread) on server1, dest2queue by an MDB (single thread) on server2, dest3queue by an MDB (single thread) on server3...
So basically the flow would be:
The benefits of the above design would be:
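The routing step of this design can be sketched as below. This is a hedged illustration, not the answer's exact implementation: the queue-naming scheme and the hash-based assignment of destinations to servers are assumptions. Hashing makes the destination-to-server mapping deterministic, so the identical gateway instances need no coordination, and each destination always lands on the same single-threaded MDB, preserving per-destination FIFO.

```java
import java.util.List;

// Routing sketch: the MDB on the single input queue forwards each message
// to a deterministic internal per-destination queue, and destinations are
// spread across servers by hashing. Queue names and the server count are
// illustrative, not taken from the question.
class DestinationRouter {
    private final int serverCount;

    DestinationRouter(int serverCount) {
        this.serverCount = serverCount;
    }

    // Internal queue name for a destination, e.g. "dest1" -> "dest1queue".
    String internalQueue(String destination) {
        return destination + "queue";
    }

    // Server hosting the single-threaded MDB for this destination.
    int serverFor(String destination) {
        return Math.floorMod(destination.hashCode(), serverCount);
    }

    public static void main(String[] args) {
        DestinationRouter router = new DestinationRouter(3);
        for (String d : List.of("dest1", "dest2", "dest3")) {
            System.out.println(d + " -> " + router.internalQueue(d)
                    + " on server" + (router.serverFor(d) + 1));
        }
        // The mapping is stable: the same destination always goes to the
        // same queue and server, so per-destination FIFO is preserved.
        System.out.println("stable: "
                + (router.serverFor("dest1") == router.serverFor("dest1")));
    }
}
```

One caveat: a static hash does not rebalance if a server is added or removed, so a production gateway might prefer consistent hashing or an explicit assignment table in the shared cache.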