WebSphere MQ Connection Architecture

Published 2024-10-15 21:57:16


What is the recommended architecture for accessing WebSphere MQ message queues across internet (i.e. 100+ ms latency) and over organizational boundaries?

The two approaches that we are considering are to access the other organization's Queue Manager directly from our clients; the alternative is to have a Queue Manager locally that would pump messages from the remote queue, with our local clients then accessing it. I think both have merit, but I am not sure of the trade-offs between the two architectures.

The volume that we would have to handle is 600 messages per second, with a message size of about 50 bytes. The other org's queue manager is not changeable (and it is WebSphere MQ). The messages have to be processed in order. Perhaps they could be split between different queues, with each queue processed by a separate client, but within each queue the order is still very important. In general there would be one transaction-processing client. There could be one additional business-intelligence client that would process a copy of the messages.

Does anyone have any perf metrics of MQSeries to MQSeries queue manager throughput and a comparison of WebSphere MQ queue manager to client throughput?
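The splitting idea in the question can be sketched with key-based routing: if each message carries a partition key (an account ID, say), hashing that key to a queue index keeps per-key ordering intact while letting separate clients drain the queues in parallel. The key format, queue count, and routing function below are illustrative assumptions, not anything WebSphere MQ provides out of the box:

```python
import zlib

NUM_QUEUES = 4  # hypothetical number of local queues / consumer clients


def route_to_queue(message_key: str, num_queues: int = NUM_QUEUES) -> int:
    """Map a message key (e.g. an account ID) to a queue index.

    A stable hash (CRC32 here) guarantees that all messages sharing a
    key land on the same queue, so per-key ordering survives the
    fan-out even though the queues are drained by independent clients.
    """
    return zlib.crc32(message_key.encode("utf-8")) % num_queues


# Messages for the same account always route to the same queue:
assert route_to_queue("ACCT-1001") == route_to_queue("ACCT-1001")
```

Note that this preserves order only *within* a key; global ordering across all 600 msg/s would still require a single queue and a single consumer.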


Comments (2)

仙女山的月亮 2024-10-22 21:57:16


The recommended answer from a security standpoint is that the other org should require you to use a full QMgr and not a client. A channel connection from an external QMgr will only ever issue CONNECT, INQUIRE and PUT commands. A client connection has access to the entire WMQ API and can execute any API call on any object. For example, if the other party uses structured data in their queue names (an account number, for example), a client app can cycle through all possible names to enumerate all the account numbers. If the call returns 2035, the object exists but authorization failed. If the call returns 2085, the object does not exist. In addition to allowing various types of enumeration, a client that gets stuck in a connect loop can throw hundreds of reconnect attempts per second at a QMgr, which will completely tie up a listener. So clients are inherently more dangerous to accept connections from on a QMgr, and clients from third parties even more so. However, clients are free, and the cost savings often outweigh the risk, especially when the application is not moving high-value transactions or sensitive data. If I were charged with connecting to a vendor's QMgr, they allowed a choice of client or QMgr connections, and the application was high-visibility or mission-critical, I'd choose a full QMgr.
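The 2035/2085 leak described above can be made concrete. The two codes are the documented MQ reason codes MQRC_NOT_AUTHORIZED (2035) and MQRC_UNKNOWN_OBJECT_NAME (2085); the probe-interpretation function is a hypothetical sketch of how an attacker would read a bare OPEN failure, not part of any MQ client library:

```python
# MQ reason codes cited in the answer (values per the WMQ documentation)
MQRC_NOT_AUTHORIZED = 2035
MQRC_UNKNOWN_OBJECT_NAME = 2085


def interpret_probe(reason_code: int) -> str:
    """Interpret the failure from probing a guessed queue name.

    Even a denied OPEN leaks information: 2035 proves the object
    exists (only authorization failed), while 2085 proves it does
    not exist. Iterating over candidate names therefore enumerates
    which ones are real.
    """
    if reason_code == MQRC_NOT_AUTHORIZED:
        return "object exists (access denied)"
    if reason_code == MQRC_UNKNOWN_OBJECT_NAME:
        return "object does not exist"
    return "inconclusive"
```

This is exactly why a QMgr-to-QMgr channel, restricted to CONNECT/INQUIRE/PUT, gives the other org a much smaller attack surface than an open client channel.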

Another aspect to consider is that QMgr-to-QMgr channels are more resilient to network connectivity problems. This is because the two QMgrs keep track of message sequence numbers, hold batches under syncpoint until acknowledged, and are capable of negotiating channel recovery automatically without losing or duplicating any persistent messages. Because a client channel gives you, the developer, full access to the API, including the freedom to write programs that sacrifice reliability for performance, it is possible to write a client app that loses or duplicates messages. In fact, an inherent problem with async messaging over a network is that session recovery issues can create ambiguous outcomes that lead to duplicate messages. This is not specific to WebSphere MQ; the JMS specification discusses this issue and notes that it is the application's responsibility to properly account for "functionally duplicate" messages produced as a result of session recovery. You can eliminate the possibility of message loss by always using transacted sessions, but eliminating the possibility of sending a duplicate message takes a bit of work. Two QMgrs talking to each other use a protocol that eliminates any such ambiguity.
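A minimal sketch of the duplicate handling the JMS specification asks of applications: remember the message IDs already processed and drop repeats. A real consumer would persist the seen-set in the same transaction as the business update; the in-memory version here is purely illustrative:

```python
class DedupConsumer:
    """Illustrative idempotent consumer.

    After a session recovery, the provider may redeliver a message that
    was in fact already processed (a "functionally duplicate" message).
    Tracking processed message IDs lets the consumer discard repeats.
    """

    def __init__(self) -> None:
        self._seen: set = set()       # IDs already processed
        self.processed: list = []     # business-side record of work done

    def on_message(self, msg_id: str, body: str) -> bool:
        """Process a delivery; return False if it was a duplicate."""
        if msg_id in self._seen:
            return False              # redelivery after recovery; skip
        self._seen.add(msg_id)
        self.processed.append(body)   # stand-in for the real business update
        return True
```

In production the seen-set would need bounding (e.g. by time window) and durable storage, which is the "bit of work" the answer alludes to.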

As for the performance metrics, take a look at the Performance Reports for your platform. These are all available from the SupportPacs landing page. Look for the SupportPacs with names like MP** such as SupportPac MP71 for Windows or SupportPac MPL5 for Linux.

遥远的绿洲 2024-10-22 21:57:16


There are important details that I am unclear on.

Do you want your local clients to each get the entire message queue? If so, then http://code.google.com/p/pubsubhubbub/ is likely to be useful.

Or do you want messages in the queue to be divided between your clients? If so, I'd have a local queue manager so that your clients' round-trip time to get the next message is entirely internal to your network, instead of having to go through a possibly slower internet connection.
