Is an MQ publish/subscribe domain-specific interface generally faster than point-to-point?
I'm working on an existing application that uses a transport layer with point-to-point MQ communication.
For each account in a given list we need to retrieve some information.
Currently we have something like this to communicate with MQ:
responseObject getInfo(requestObject request) {
    // code to send message to MQ
    // code to retrieve message from MQ
}
As you can see, we wait until it finishes completely before proceeding to the next account.
Due to performance issues we need to rework it.
There are two possible scenarios that I can think of at the moment.
1) Within the application, create a bunch of threads that execute the transport adapter for each account, then get the data from each task. I prefer this method, but some of the team members argue that the transport layer is a better place for such a change and that we should place the extra load on MQ instead of on our application.
2) Rework the transport layer to use a publish/subscribe model.
Ideally I want something like this:
void send(requestObject request) {
    // code to send message to MQ
}

responseObject receive() {
    // code to retrieve message from MQ
}
Then I would just send the requests in a loop, and later retrieve the data in a loop. The idea is that while the first request is being processed by the back-end system, we don't have to wait for the response but can instead send the next request, roughly as sketched below.
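To make option 2 concrete, this is roughly what I picture using plain JMS against point-to-point queues. The queue names, the 30-second timeout and the requestToText/parseResponse helpers are just placeholders for illustration, not our real code:

import java.util.ArrayList;
import java.util.List;
import javax.jms.*;

// Illustration only: send every request first, then drain the replies.
List<responseObject> getInfoForAll(List<requestObject> requests,
                                   ConnectionFactory factory) throws JMSException {
    List<responseObject> results = new ArrayList<responseObject>();
    Connection connection = factory.createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue requestQueue = session.createQueue("ACCOUNT.REQUEST");   // placeholder name
        Queue replyQueue = session.createQueue("ACCOUNT.REPLY");       // placeholder name
        MessageProducer producer = session.createProducer(requestQueue);

        // First loop: send every request without waiting for any reply.
        for (requestObject request : requests) {
            TextMessage message = session.createTextMessage(requestToText(request));
            message.setJMSReplyTo(replyQueue);
            producer.send(message);
        }

        // Second loop: collect one reply per request (replies may arrive out of order).
        MessageConsumer consumer = session.createConsumer(replyQueue);
        for (int i = 0; i < requests.size(); i++) {
            Message reply = consumer.receive(30000);   // milliseconds; arbitrary
            if (reply == null) {
                break;   // timed out; real code would have to handle this properly
            }
            results.add(parseResponse(reply));
        }
    } finally {
        connection.close();
    }
    return results;
}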
My question is: will this be a lot faster than the current sequential retrieval?
3 Answers
The question title frames this as a choice between P2P and pub/sub but the question body frames it as a choice between threaded and pipelined processing. These are two completely different things.
Either code snippet provided could just as easily use P2P or pub/sub to put and get messages. The decision should not be based on speed but rather whether the interface in question requires a single message to be delivered to multiple receivers. If the answer is no then you probably want to stick with point-to-point, regardless of your application's threading model.
And, incidentally, the answer to the question posed in the title is "no." When you use the point-to-point model your messages resolve immediately to a destination or transmit queue and WebSphere MQ routes them from there. With pub/sub your message is handed off to an internal broker process that resolves it to zero or more possible destinations. Only after this step does the published message get put on a queue where, for the remainder of its journey, it is handled like any other point-to-point message. Although pub/sub is not normally noticeably slower than point-to-point, the code path is longer and therefore, all other things being equal, it will add a bit more latency.
The other part of the question is about parallelism. You proposed either spinning up many threads or breaking the app up so that requests and replies are handled separately. A third option is to have multiple application instances running. You can combine any or all of these in your design. For example, you can spin up multiple request threads and multiple reply threads and then have application instances processing against multiple queue managers.
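To illustrate the threading variant: assuming each account lookup is independent, something along these lines would do it. Here getInfo() is the blocking method from the question, requestObject/responseObject are the question's placeholder types, and the pool size of 10 is an arbitrary example:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Sketch: run the existing synchronous call once per account on a fixed-size pool.
List<responseObject> getInfoInParallel(List<requestObject> requests) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(10);   // arbitrary size
    try {
        List<Future<responseObject>> futures = new ArrayList<Future<responseObject>>();
        for (final requestObject request : requests) {
            futures.add(pool.submit(new Callable<responseObject>() {
                public responseObject call() throws Exception {
                    return getInfo(request);   // the existing blocking request/reply
                }
            }));
        }
        List<responseObject> results = new ArrayList<responseObject>();
        for (Future<responseObject> future : futures) {
            results.add(future.get());   // blocks until that particular task finishes
        }
        return results;
    } finally {
        pool.shutdown();
    }
}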
The key to this question is whether the messages have affinities to each other, ordering dependencies, or an affinity to the application instance or thread which created them. For example, if I am responding to an HTTP request with a request/reply, then the thread attached to the HTTP session probably needs to be the one to receive the reply. But if the reply is truly asynchronous and all I need to do is update a database with the response data, then having separate request and reply threads is helpful.
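As a rough JMS illustration of the affinity case, the requesting thread can stamp a correlation ID on the request and then select only its own reply. The session, producer, replyQueue and payload variables are assumed to be set up already, and the 30-second timeout is arbitrary:

// The requesting thread tags the request and waits only for the matching reply.
TextMessage request = session.createTextMessage(payload);
String correlationId = java.util.UUID.randomUUID().toString();
request.setJMSCorrelationID(correlationId);
request.setJMSReplyTo(replyQueue);
producer.send(request);

// The selector restricts this consumer to the reply carrying our correlation ID.
MessageConsumer replyConsumer =
        session.createConsumer(replyQueue, "JMSCorrelationID = '" + correlationId + "'");
Message reply = replyConsumer.receive(30000);   // milliseconds
replyConsumer.close();

If the replies really are fire-and-forget database updates, the selector (and the per-request consumer it implies) can be dropped and a single reply thread can drain the reply queue on everyone's behalf.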
In either case, the ability to dynamically spin up or down the number of instances is helpful in managing peak workloads. If this is accomplished with threading alone then your performance scalability is bound to the upper limit of a single server. If this is accomplished by spinning up new application instances on the same or different server/QMgr then you get both scalability and workload balancing.
Please see the following article for more thoughts on these subjects: Mission:Messaging: Migration, failover, and scaling in a WebSphere MQ cluster
Also, go to the WebSphere MQ SupportPacs page and look for the Performance SupportPac for your platform and WMQ version. These are the ones with names beginning with MP**. These will show you the performance characteristics as the number of connected application instances varies.
It doesn't sound like you're thinking about this the right way. Regardless of the model you use (point-to-point or publish/subscribe), if your performance is bounded by a slow back-end system, neither will help speed up the process. If, however, you could theoretically issue more than one request at a time against the back-end system and expect to see a speed up, then you still don't really care if you do point-to-point or publish/subscribe. What you really care about is synchronous vs. asynchronous.
Your current approach for retrieving the data is clearly synchronous: you send the request message, and wait for the corresponding response message. You could do your communication asynchronously if you simply sent all the request messages in a row (perhaps in a loop) in one method, and then had a separate method (preferably on a different thread) monitoring the incoming topic for responses. This would ensure that your code would no longer block on individual requests. (This roughly corresponds to option 2, though without pub/sub.)
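In JMS terms that could look roughly like the sketch below. The connection, queue and request objects are assumed to exist already, and updateDatabase()/requestToText() are stand-ins for whatever you actually do with the data, not real APIs. Two sessions are used because a JMS Session should not be shared between the sending thread and the callback thread:

// Replies are handled on a separate callback thread via a MessageListener.
Session replySession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer replyConsumer = replySession.createConsumer(replyQueue);
replyConsumer.setMessageListener(new MessageListener() {
    public void onMessage(Message reply) {
        updateDatabase(reply);   // no request thread is blocked waiting for this
    }
});
connection.start();

// Meanwhile the sending thread just fires off all the requests in a row.
Session sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = sendSession.createProducer(requestQueue);
for (requestObject request : requests) {
    TextMessage message = sendSession.createTextMessage(requestToText(request));
    message.setJMSReplyTo(replyQueue);
    producer.send(message);   // returns as soon as MQ has accepted the message
}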
I think option 1 could get pretty unwieldy, depending on how many requests you actually have to make, though it, too, could be implemented without switching to a pub/sub channel.
The reworked approach will use fewer threads. Whether that makes the application faster depends on whether the overhead of managing a lot of threads is currently slowing you down. If you have fewer than 1000 threads (this is a very, very rough order-of-magnitude estimate!), I would guess it probably isn't. If you have more than that, it might well be.