Ensuring serial processing of JMS messages in an OC4J cluster

Posted 2024-08-04 01:39:23


We have an application that processes JMS messages using a message-driven bean. This application is deployed on an OC4J application server (10.1.3).

We are planning to deploy this application on multiple OC4J application servers that will be configured to run in a cluster.

The problem is with JMS message processing in this cluster. We must ensure that only a single message is being processed in the entire OC4J cluster at any one time. This is required because the messages have to be processed in chronological order.

Do you know of a configuration parameter that would control message processing across an OC4J cluster?

Or do you think we have to implement our own synchronisation code that will synchronise the message-driven beans across the cluster?





Comments (3)

放飞的风筝 2024-08-11 01:39:23


I've done sequential processing of messages in a cluster on a pretty large scale - 1.5 million+ messages/day, using a combination of the Competing Consumers pattern and a Lease pattern.

Here's the kicker, though - your requirement that you can only process one transaction at a time is going to keep you from achieving your goals. We had the same basic requirement - messages had to be processed in order. At least, we thought we did. Then we had an epiphany - as we gave the problem more thought, we realized that we didn't require total ordering. We actually required ordering only within each account. Therefore, we could distribute the load across the servers in a cluster by assigning ranges of accounts to different servers in the cluster. Then, each server was responsible for processing messages for a given account in order.
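The range-assignment idea above might be sketched like this; the class and method names (`AccountRouter`, `serverFor`) are illustrative, not from the original system:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class AccountRouter {
    // Maps the lower bound of each account range to the server that owns it.
    private final NavigableMap<Long, String> ranges = new TreeMap<>();

    public void assignRange(long lowestAccountId, String server) {
        ranges.put(lowestAccountId, server);
    }

    // Every message for a given account lands on the same server, so
    // per-account ordering is preserved while the cluster shares the load.
    public String serverFor(long accountId) {
        return ranges.floorEntry(accountId).getValue();
    }
}
```

Because ordering only matters within an account, the cluster can process messages for different accounts concurrently without violating the requirement.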

Here's the second clever part - we used a Lease pattern to dynamically assign account ranges to various servers in the cluster. If one server in the cluster went down, another would grab the lease and take over the first server's responsibility.

This worked for us, and the process lived in production for about 4 years before being replaced due to a company merger.

Edit:

I explain this solution in more detail here: http://coders-log.blogspot.com/2008/12/favorite-projects-series-installment-2.html

Edit:

Okay, gotcha. You're already doing the processing at the level you need, but since you're being deployed to a cluster, you need to make sure that only one instance of your MDB is actively pulling messages from the queue. Plus, you need the simplest workable solution.

You don't need to abandon your MDB mechanism that you have now, I don't think. Essentially what we're talking about here is a requirement for a distributed lock mechanism, not to put too fancy a phrase to it.

So, let me suggest this. At the point where your MDB registers to receive messages from the queue, it should check the distributed lock, and see if it can grab it. The first MDB to grab the lock wins, and only it will register to receive messages. So, now you have your serialization. What form should this lock take? There are many possibilities. Well, how about this. If you have access to a database, its transactional locking already provides some of what you need. Create a table with a single row. In the row is the identifier of the server that currently holds the lock, and an expiration time. This is the server's lease. Each server needs to have a way to generate its unique identifier, perhaps the server name plus a thread ID, for example.

If a server can get update access to the row, and the lease is expired, it should grab it. Otherwise, it gives up. If it grabs the lease, it needs to update the row with a time in the near future, like five minutes or so, and commit the update. The active server should update the lease before it expires. I recommend updating it when there's half the time remaining, so, every 2-1/2 minutes if the lease expires in five. With this, you now have failover. If the active MDB dies, another MDB (and only one) will take over.
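A minimal in-memory sketch of that single-row lease, assuming the names `LeaseRow`, `tryAcquire`, and `shouldRenew` (in production the row would live in a database and be grabbed under transactional locking, not a `synchronized` block):

```java
public class LeaseRow {
    private String holder;     // identifier of the server holding the lease
    private long expiresAt;    // lease expiry time, epoch millis
    private long leaseMillis;  // lease duration

    // Grab the lease if it is free, expired, or already ours; this mirrors
    // an UPDATE ... WHERE expires_at < now on the real database row.
    public synchronized boolean tryAcquire(String serverId, long now, long leaseMillis) {
        if (holder == null || now >= expiresAt || serverId.equals(holder)) {
            this.holder = serverId;
            this.leaseMillis = leaseMillis;
            this.expiresAt = now + leaseMillis;
            return true;
        }
        return false;
    }

    // The active server renews once half the lease time has elapsed,
    // e.g. every 2-1/2 minutes for a five-minute lease.
    public synchronized boolean shouldRenew(String serverId, long now) {
        return serverId.equals(holder) && now >= expiresAt - leaseMillis / 2;
    }
}
```

If the active server dies and stops renewing, any dormant server that calls `tryAcquire` after the expiry time takes over - that is the failover.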

That should be pretty straightforward, I think. Now, you want to have the dormant MDBs check the lock occasionally to see if it's freed up.

So, the active MDB and the dormant MDBs all have to do something periodically. You might have them spawn a separate thread to do this. Many application engine vendors won't be happy if you do this, but adding one thread is no big deal, especially since it spends most of its time sleeping. Another option would be to tie into the timer mechanism that many engines provide, and have it wake up your MDB periodically to check the lease.
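One way to do that periodic work is a single daemon thread driven by a `ScheduledExecutorService`; the half-lease interval follows the suggestion above, and the names here (`LeaseWatcher`, `startWatcher`) are illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LeaseWatcher {
    // Renew (or check) at half the lease duration, per the advice above.
    static long renewalIntervalMillis(long leaseMillis) {
        return leaseMillis / 2;
    }

    // One daemon thread shared by the MDB: the active server renews its
    // lease in the task; dormant servers check whether the lease is free.
    // It sleeps between runs, so the cost to the app server is minimal.
    public static ScheduledExecutorService startWatcher(Runnable checkOrRenewLease,
                                                        long leaseMillis) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "lease-watcher");
            t.setDaemon(true);
            return t;
        });
        ses.scheduleAtFixedRate(checkOrRenewLease, 0,
                renewalIntervalMillis(leaseMillis), TimeUnit.MILLISECONDS);
        return ses;
    }
}
```

The engine-provided timer mechanism mentioned above would replace the executor here, which may sit better with the application server vendor.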

Oh, and by the way - make sure the server admins employ NTP to keep the clocks reasonably synced.

生生漫 2024-08-11 01:39:23


First point: this is a pretty crappy design, and you'll seriously limit performance by only being able to process a single message at a time. I assume you are clustering only for fault tolerance, since you won't get performance improvements?

Are you using the default JMS implementation with OC4J or another one?

I've used IBM's MQ in the past and that had a feature that a queue could be marked as exclusive, which meant only one client could connect to it. This would appear to offer what you want.

An alternative would be to introduce a sequence ID (as simple as an incrementing counter); the client processing a message would check that its sequence ID is the next expected value, and if not, put the message back. This approach requires the different clients to persist the last valid sequence ID they've seen in some centrally shared data store, such as a database.
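The sequence-ID check might look like the sketch below, with an `AtomicLong` standing in for the centrally shared data store (in practice this would be a database row; `SequenceGate` and `tryProcess` are illustrative names):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequenceGate {
    // Last sequence ID successfully processed; in a real cluster this
    // value would live in a shared database, not in process memory.
    private final AtomicLong lastProcessed;

    public SequenceGate(long lastSeen) {
        this.lastProcessed = new AtomicLong(lastSeen);
    }

    // Returns true if this message is the next in sequence and claims it;
    // false means the caller should put the message back on the queue.
    public boolean tryProcess(long sequenceId) {
        return lastProcessed.compareAndSet(sequenceId - 1, sequenceId);
    }
}
```

Note that out-of-order messages are repeatedly requeued until the gap fills, so this trades throughput for ordering, much like the single-consumer approach.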

嘿咻 2024-08-11 01:39:23


I agree with stevendick: maybe you're off track with the design. Regarding sequence IDs or similar approaches, I suggest you get insight into messaging architectures from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions (by Gregor Hohpe and Bobby Woolf). It's a great book, full of useful patterns... I'm sure the forces and the problem you are facing are well described there.
