Failed jobs are not executed by Quartz in a clustered environment

I have built a project using Spring Quartz in a clustered environment. I am trying to test whether jobs can be picked up by another node in case the server that initiated them shuts down. While this works perfectly as expected for cron triggers, the same cannot be said about SimpleTrigger jobs.

For cron triggers that have not been executed yet, Quartz runs the job without any hassle.

Steps to Reproduce:

  1. Start the servers on port 8080 and port 8081.
  2. Schedule a job using the server on port 8081.
  3. Shut down that server while the job is running.

This is what I get when the ClusterManager picks up the jobs:

2022-06-22 13:37:52.659  INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore     : ClusterManager: detected 2 failed or restarted instances.
2022-06-22 13:37:52.661  INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore     : ClusterManager: Scanning for instance "MacBook-Pro.local1655885157031"'s failed in-progress jobs.
2022-06-22 13:37:52.677  INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore     : ClusterManager: Scanning for instance "MacBook-Pro.local1655885169333"'s failed in-progress jobs.
2022-06-22 13:37:52.720  INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore     : ClusterManager: ......Deleted 1 complete triggers(s).
2022-06-22 13:37:52.722  INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore     : ClusterManager: ......Cleaned-up 1 other failed job(s).

This is what my Job class looks like:

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ApiJob implements Job {

    private static final Logger log = LoggerFactory.getLogger(ApiJob.class);

    // Field added so that the assignment in execute() compiles.
    private JobExecutionContext context;

    @Override
    public void execute(JobExecutionContext context) {
        this.context = context;
        log.info("Job Execution Started");
        // The merged map combines entries from the JobDetail and the trigger.
        JobDataMap map = context.getMergedJobDataMap();
        ApiRequest request = new ApiRequest(map.getString("message"));
        try {
            // Simulate a long-running task (2 minutes) so the node can be shut down mid-run.
            Thread.sleep(2 * 60 * 1000);
            log.info("Job scheduled...{}", context.getJobDetail().getKey().getName());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
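
The jobDetail referenced by the trigger below is not shown in the question; here is a minimal sketch of how such a JobDetail might be built (the job key and the "message" value are assumptions, not code from the question). requestRecovery(true) is the Quartz flag that asks the cluster to re-execute a job that was in progress when its node failed, which is the behaviour being exercised here:

import org.quartz.JobBuilder;
import org.quartz.JobDetail;

// Hypothetical construction of the JobDetail used by the trigger below.
JobDetail jobDetail = JobBuilder.newJob(ApiJob.class)
        .withIdentity("api-job", "quartz-jobs")             // assumed job key
        .usingJobData("message", "hello from the cluster")  // read via getMergedJobDataMap()
        .requestRecovery(true)   // re-run the job if the owning node dies mid-execution
        .storeDurably()
        .build();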

Trigger:

Trigger trigger = TriggerBuilder.newTrigger()
        .forJob(jobDetail)
        .withIdentity(jobDetail.getKey().getName(), "quartz-jobs-triggers")
        .withDescription("Random trigger")
        .startAt(Date.from(startTime.toInstant()))
        // Add custom logic here?
        .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
        .build();
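
For completeness, the job and trigger are presumably handed to the Scheduler roughly as sketched below (the surrounding method is an assumption, not code from the question):

import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

// With spring-boot-starter-quartz a Scheduler bean is auto-configured and can be injected.
public void schedule(Scheduler scheduler, JobDetail jobDetail, Trigger trigger) {
    try {
        // Persists the job and trigger in the JDBC job store shared by the cluster.
        scheduler.scheduleJob(jobDetail, trigger);
    } catch (SchedulerException e) {
        throw new IllegalStateException("Could not schedule " + jobDetail.getKey(), e);
    }
}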

And finally, this is how I set up clustering in application.properties:

#Quartz Properties
spring.quartz.job-store-type=jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount=5
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval=20000
