How do I update the value in a Quartz JobDataMap?
I'm using quartz-scheduler 1.8.5. I've created a Job implementing StatefulJob. I schedule the job using a SimpleTrigger and StdSchedulerFactory.
It seems that I have to update the Trigger's JobDataMap in addition to the JobDetail's JobDataMap in order to change the map from inside the Job. I'm trying to understand why it's necessary to update both. I noticed that the JobDataMap gets flagged as dirty; maybe I have to explicitly save it or something?
I'm thinking I'll have to dig into the source code of Quartz to really understand what is going on here, but I figured I'd be lazy and ask first. Thanks for any insight into the inner workings of JobDataMap!
Here's my job:
public class HelloJob implements StatefulJob {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        int count = context.getMergedJobDataMap().getInt("count");
        int count2 = context.getJobDetail().getJobDataMap().getInt("count");
        //int count3 = context.getTrigger().getJobDataMap().getInt("count");
        System.err.println("HelloJob is executing. Count: '" + count + "', '" + count2 + "'");

        // The count only gets updated if I update both the Trigger and
        // JobDetail DataMaps. If I only update the JobDetail, it doesn't persist.
        context.getTrigger().getJobDataMap().put("count", count++);
        context.getJobDetail().getJobDataMap().put("count", count++);

        // This has no effect inside the job, but it works outside the job
        try {
            context.getScheduler().addJob(context.getJobDetail(), true);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }

        // These don't seem to persist between jobs
        //context.put("count", count++);
        //context.getMergedJobDataMap().put("count", count++);
    }
}
Here's how I'm scheduling the job:
try {
    // Define the job and tie it to our HelloJob class
    JobDetail job = new JobDetail(JOB_NAME, JOB_GROUP_NAME, HelloJob.class);
    job.getJobDataMap().put("count", 1);

    // Trigger the job to run now, and repeat every so often
    Trigger trigger = new SimpleTrigger("myTrigger", "group1",
            SimpleTrigger.REPEAT_INDEFINITELY, howOften);

    // Tell quartz to schedule the job using our trigger
    sched.scheduleJob(job, trigger);
    return job;
} catch (SchedulerException e) {
    throw e;
}
Update:
It seems that I have to put the value into the JobDetail's JobDataMap twice to get it to persist; this works:
public class HelloJob implements StatefulJob {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        int count = (Integer) context.getMergedJobDataMap().get("count");
        System.err.println("HelloJob is executing. Count: '" + count
                + "', and is the job stateful? " + context.getJobDetail().isStateful());
        context.getJobDetail().getJobDataMap().put("count", count++);
        context.getJobDetail().getJobDataMap().put("count", count++);
    }
}
This seems like a bug, maybe? Or maybe there's a step I'm missing to tell the JobDetail to flush the contents of its JobDataMap to the JobStore?
Comments (4)
I think your problem is with using the postfix ++ operator. When you do:

    context.getJobDetail().getJobDataMap().put("count", count++);

you're setting the value in the map to count and THEN incrementing count. To me it looks like you wanted:

    context.getJobDetail().getJobDataMap().put("count", ++count);

which would only need to be done once.
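The difference is easy to see with a plain java.util.HashMap standing in for the JobDataMap (a sketch, not Quartz code). It also explains why the double put in the question's update appears to work: the second postfix put stores the once-incremented value by accident.

```java
import java.util.HashMap;
import java.util.Map;

public class PostfixPutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        // Postfix: the OLD value is stored, then count is incremented locally.
        int count = 1;
        map.put("count", count++);
        System.out.println(map.get("count")); // 1 -- the map never sees the increment

        // Two postfix puts (the question's "fix"): the second put stores the
        // once-incremented value, so the map lands on 2 by accident.
        count = 1;
        map.put("count", count++); // stores 1, count becomes 2
        map.put("count", count++); // stores 2, count becomes 3
        System.out.println(map.get("count")); // 2

        // Prefix: increment first, then store -- one put is enough.
        count = 1;
        map.put("count", ++count);
        System.out.println(map.get("count")); // 2
    }
}
```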
As you know, in Quartz the trigger and the job are separate, rather than combined as they are in some other schedulers. This separation might be to let you add values to the datamap which are specific to the trigger level rather than the job level, etc.
I think it allows you to execute the same end job with a different set of data, but still have some common data at the job level.
As scpritch76 answered, the job and trigger are separate so that there can be many triggers (schedules) for a given job.
The job can have some base set of properties in the JobDataMap, and then the triggers can provide additional properties (or override base properties) for particular executions of the job in their JobDataMaps.
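A rough sketch of that merge, using plain java.util maps in place of Quartz's JobDataMap (the key names here are invented for illustration). In Quartz 1.x, getMergedJobDataMap() copies the JobDetail's entries first and the Trigger's entries on top, so trigger values win on conflict:

```java
import java.util.HashMap;
import java.util.Map;

public class MergedDataMapSketch {
    public static void main(String[] args) {
        // Base properties shared by every trigger of the job.
        Map<String, Object> jobMap = new HashMap<>();
        jobMap.put("reportFormat", "pdf");
        jobMap.put("outputDir", "/tmp/reports");

        // Per-schedule overrides carried by one particular trigger.
        Map<String, Object> triggerMap = new HashMap<>();
        triggerMap.put("reportFormat", "csv");

        // Merge order used by getMergedJobDataMap(): job first, trigger on top.
        Map<String, Object> merged = new HashMap<>(jobMap);
        merged.putAll(triggerMap);

        System.out.println(merged.get("reportFormat")); // csv (trigger override)
        System.out.println(merged.get("outputDir"));    // /tmp/reports (job base)
    }
}
```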