Run a @Scheduled job on only one node of a WebLogic cluster?
We are running a Spring 3.0.x web application (.war) with a nightly @Scheduled job in a clustered WebLogic 10.3.4 environment. However, because the application is deployed to each node (using the deployment wizard in the AdminServer's web console), the job starts on every node each night and therefore runs multiple times concurrently.
How can we prevent this from happening?
I know that libraries like Quartz allow coordinating jobs in a clustered environment by means of a database lock table, and I could even implement something like that myself. But since this seems to be a fairly common scenario, I wonder whether Spring already comes with an option to easily circumvent this problem without having to add new libraries to my project or put in manual workarounds.
- We are not able to upgrade to Spring 3.1 with configuration profiles, as mentioned here
Please let me know if there are any open questions. I also asked this question on the Spring Community forums. Thanks a lot for your help.
7 Answers
We only have one task, which sends a daily summary email. To avoid extra dependencies, we simply check whether the hostname of each node matches a configured system property.
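A minimal sketch of that check, not the answerer's actual code: the system property name summary.job.host and the cron expression are hypothetical examples.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DailySummaryJob {

    // Fires on every node; the cron expression is just an example.
    @Scheduled(cron = "0 0 2 * * *")
    public void sendDailySummary() {
        try {
            String configuredHost = System.getProperty("summary.job.host", "");
            String localHost = InetAddress.getLocalHost().getHostName();

            // Only the node whose hostname matches the configured property does the work.
            if (localHost.equalsIgnoreCase(configuredHost)) {
                // ... build and send the summary email ...
            }
        } catch (UnknownHostException e) {
            // log and skip this run
        }
    }
}
```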
We are implementing our own synchronization logic using a shared lock table in the application database. This allows each cluster node to check whether the job is already running before actually starting it.
Be careful: with a solution that implements its own synchronization logic via a shared lock table, you always face the concurrency issue of two cluster nodes reading from and writing to the table at the same time.
It is best to perform the following steps in one database transaction (see the sketch after this list):
- read the value in the shared lock table
- if no other node holds the lock, take the lock
- update the table to indicate that you have taken the lock
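A minimal sketch of those three steps in one transaction, assuming a hypothetical table JOB_LOCK(JOB_NAME, LOCKED_BY, LOCKED_AT); SELECT ... FOR UPDATE is used so that two nodes cannot read the row at the same time.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class JobLockDao {

    private final DataSource dataSource;

    public JobLockDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true if this node acquired the lock and should run the job. */
    public boolean tryAcquire(String jobName, String nodeName) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            con.setAutoCommit(false);

            // Step 1: read the lock row and block other nodes until we commit.
            PreparedStatement select = con.prepareStatement(
                    "SELECT LOCKED_BY FROM JOB_LOCK WHERE JOB_NAME = ? FOR UPDATE");
            select.setString(1, jobName);
            ResultSet rs = select.executeQuery();

            boolean free = rs.next() && rs.getString("LOCKED_BY") == null;
            if (free) {
                // Steps 2 and 3: no other node holds the lock, so take it and record that.
                PreparedStatement update = con.prepareStatement(
                        "UPDATE JOB_LOCK SET LOCKED_BY = ?, LOCKED_AT = CURRENT_TIMESTAMP WHERE JOB_NAME = ?");
                update.setString(1, nodeName);
                update.setString(2, jobName);
                update.executeUpdate();
            }
            con.commit();
            return free;
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.close();
        }
    }
}
```

Remember to release the lock (set LOCKED_BY back to null) in a second transaction when the job finishes, otherwise the next run will never start.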
I solved this problem by making one of the boxes the master.
Basically, set an environment variable on one of the boxes, e.g. master=true, and read it in your Java code through System.getenv("master"). If it is present and true, run your code.
Basic snippet:
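A minimal sketch of that check, assuming the environment variable master=true is set only on the master box; the class name and cron expression are illustrative.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyJob {

    // Fires on every node, but only the box started with master=true does the work.
    @Scheduled(cron = "0 0 1 * * *")
    public void runNightly() {
        String master = System.getenv("master");

        // Boolean.parseBoolean(null) is false, so non-master boxes simply skip the run.
        if (Boolean.parseBoolean(master)) {
            // ... job logic ...
        }
    }
}
```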
You can try using WebLogic's TimerManager (a job scheduler for clustered environments) as the TaskScheduler implementation (TimerManagerTaskScheduler). It should work in a clustered environment (a configuration sketch follows below).
Andrea
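A minimal configuration sketch, assuming Spring's CommonJ support (org.springframework.scheduling.commonj) is on the classpath and a TimerManager has been bound in JNDI through a resource-ref; the JNDI name "java:comp/env/tm/NightlyTimerManager" and the bean name are placeholders.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.commonj.TimerManagerTaskScheduler;

@Configuration
public class WebLogicSchedulerConfig {

    @Bean
    public TimerManagerTaskScheduler taskScheduler() {
        // Delegates scheduling to the container-managed CommonJ TimerManager.
        TimerManagerTaskScheduler scheduler = new TimerManagerTaskScheduler();
        scheduler.setTimerManagerName("java:comp/env/tm/NightlyTimerManager");
        return scheduler;
    }
}
```

The bean can then be referenced from the task namespace, e.g. <task:annotation-driven scheduler="taskScheduler"/>, so that @Scheduled methods are executed by the WebLogic-managed scheduler instead of a local thread pool.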
You don't need to synchronize your job start using a DB.
In a WebLogic application you can get the name of the server instance the application is running on.
Simply put a condition on that instance name to decide whether to execute the job (see the sketch after this answer).
If you want to bounce the job from one machine to the other, you can take the current day of the year: if it is odd, execute the job on one machine; if it is even, execute it on the other one. This way you load a different machine every day.
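A minimal sketch of that idea. It assumes each WebLogic managed server is started with -Dweblogic.Name=<server name>; the server names "Node1" and "Node2" and the cron expression are placeholders for your actual cluster members.

```java
import java.util.Calendar;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class BouncingNightlyJob {

    // Fires on every node; the instance-name check decides which node does the work.
    @Scheduled(cron = "0 0 1 * * *")
    public void run() {
        String instanceName = System.getProperty("weblogic.Name");
        int dayOfYear = Calendar.getInstance().get(Calendar.DAY_OF_YEAR);

        // Odd days run on Node1, even days on Node2, so the load alternates daily.
        String designatedNode = (dayOfYear % 2 != 0) ? "Node1" : "Node2";

        if (designatedNode.equals(instanceName)) {
            // ... job logic ...
        }
    }
}
```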
We can make the other machines in the cluster not run the batch job by using the following cron string. It will not run until 2099.
0 0 0 1 1 ? 2099
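One way to wire this up, sketched below: the cron expression is supplied per node as a property (for example via -Dsummary.job.cron=... on startup), so the designated node gets the real schedule while every other node gets "0 0 0 1 1 ? 2099". The property name is a hypothetical example, and it assumes your Spring version resolves placeholders in the @Scheduled cron attribute; otherwise the same expressions can be set on <task:scheduled> in XML.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class SummaryBatchJob {

    // Real cron on the designated node, "0 0 0 1 1 ? 2099" everywhere else.
    @Scheduled(cron = "${summary.job.cron}")
    public void run() {
        // ... job logic; only the node with the real cron expression ever gets here ...
    }
}
```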