Python crontab alternative - APScheduler & python-daemon
I'm having trouble getting python-daemon 1.6 to work with APScheduler to manage a list of tasks.
(The scheduler needs to run them periodically at specific chosen times, with seconds resolution.)
This works (until Ctrl+C is pressed):
from apscheduler.scheduler import Scheduler
import logging
import signal

def job_function():
    print "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    # Start the scheduler
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":
    sched = init_schedule()
    schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
    schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')
    # APScheduler's Scheduler only runs jobs while the main thread is alive
    signal.pause()
    # Or:
    #time.sleep(300)
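A note on the signal.pause() / time.sleep() lines: APScheduler 2.x runs jobs on background threads, so the main thread only has to stay alive. Below is a minimal sketch of mine (not from the original post) of an alternative keep-alive loop that also shuts the scheduler down cleanly on Ctrl+C; it reuses init_schedule, schedule_job and job_function from the listing above and assumes the same APScheduler 2.x Scheduler, whose shutdown() method stops the thread pool.

import time

if __name__ == "__main__":
    sched = init_schedule()
    schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
    try:
        # Keep the main thread alive; jobs run on the scheduler's worker threads.
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        # Stop the scheduler's thread pool instead of dying with a traceback.
        sched.shutdown()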
Sample Output:
INFO:apscheduler.threadpool:Started thread pool with 0 core threads and 20 maximum threads
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:No jobs; waiting until a job is added
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:39)" to job store "default"
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:33)" to job store "default"
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-10-06 18:30:33 (in 10.441128 seconds)
With python-daemon, the output is blank. Why isn't the DaemonContext spawning the process correctly?
EDIT - Working
After reading the python-daemon source, I added stdout and stderr to the DaemonContext and was finally able to see what was going on.
import daemon
import logging

def job_function():
    print "Hello World"
    print >> test_log, "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":
    test_log = open('daemon.log', 'w')
    # Keep the log file open across daemonization; set on the class so the
    # context created below picks it up
    daemon.DaemonContext.files_preserve = [test_log]
    try:
        with daemon.DaemonContext():
            from datetime import datetime
            from apscheduler.scheduler import Scheduler
            import signal
            logging.basicConfig(level=logging.DEBUG)
            sched = init_schedule()
            schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
            schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')
            signal.pause()
    except Exception, e:
        print e
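The edit text above mentions adding stdout and stderr to the DaemonContext, but the pasted code only shows files_preserve. As a rough sketch of mine (the 'daemon.err' path is made up; stdout, stderr and files_preserve are DaemonContext keyword arguments in python-daemon), redirecting the daemon's streams to files is what makes print output and tracebacks visible; it reuses the helper functions from the edited listing above.

import daemon
import signal
from apscheduler.scheduler import Scheduler

if __name__ == "__main__":
    test_log = open('daemon.log', 'w')
    error_log = open('daemon.err', 'w')
    context = daemon.DaemonContext(
        stdout=test_log,                       # print output lands here instead of /dev/null
        stderr=error_log,                      # tracebacks land here instead of disappearing
        files_preserve=[test_log, error_log],  # keep these handles open across the fork
    )
    with context:
        sched = init_schedule()
        schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
        signal.pause()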
Comments (1)
I do not know much about python-daemon, but test_log in job_function() is not defined. The same problem occurs in init_schedule() where you reference Schedule.
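One way to act on that comment would be to stop relying on a module-level test_log and hand the file handle to the job explicitly. A small sketch of mine, assuming APScheduler 2.x's add_interval_job args parameter (which forwards positional arguments to the job when it fires):

def job_function(log_file):
    # The log file is now an explicit argument instead of a global.
    print >> log_file, "Hello World"
    log_file.flush()

def schedule_job(sched, function, periodicity, start_time, log_file):
    # args is passed through to the job on every run.
    sched.add_interval_job(function, seconds=periodicity,
                           start_date=start_time, args=[log_file])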