Python daemon doesn't kill its children
When using python-daemon, I'm creating subprocesses like so:
import multiprocessing
import daemon

class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue  # we wait for things from this in Worker.run()
    ...

q = multiprocessing.Queue()
with daemon.DaemonContext():
    for i in xrange(3):
        Worker(q).start()
    while True:  # let the Workers do their thing
        q.put(_something_we_wait_for())
When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids?
My first thought is to use atexit to kill all the workers, like so:
import atexit

with daemon.DaemonContext():
    workers = list()
    for i in xrange(3):
        w = Worker(q)
        w.start()
        workers.append(w)

    @atexit.register
    def kill_the_children():
        for w in workers:
            w.terminate()

    while True:  # let the Workers do their thing
        q.put(_something_we_wait_for())
However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done.
Thank you.
3 Answers
Your options are a bit limited. If doing self.daemon = True in the constructor for the Worker class does not solve your problem, and trying to catch signals in the parent (i.e., SIGTERM, SIGINT) doesn't work, you may have to try the opposite solution: instead of having the parent kill the children, you can have the children commit suicide when the parent dies.

The first step is to give the Worker constructor the PID of the parent process (you can do this with os.getpid()). Then, instead of just doing self.queue.get() in the worker loop, do something like this:
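A minimal sketch of what that loop can look like, assuming the parent's PID is passed into the constructor and stored as self.parent_pid (the names here are illustrative, not from the original answer):

import multiprocessing
import os
import sys
from Queue import Empty  # Python 2; on Python 3: from queue import Empty

class Worker(multiprocessing.Process):
    def __init__(self, queue, parent_pid):
        multiprocessing.Process.__init__(self)
        self.queue = queue
        self.parent_pid = parent_pid  # the parent passes os.getpid() here

    def run(self):
        while True:
            # If our parent PID has changed, the original parent is dead and
            # we have been reparented to init/launchd -- time to exit.
            if os.getppid() != self.parent_pid:
                sys.exit(0)
            try:
                item = self.queue.get(timeout=1)  # wake up periodically to re-check
            except Empty:
                continue
            # ... process item ...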
The solution above checks to see if the parent PID is different than what it originally was (that is, if the child process was adopted by init or launchd because the parent died) - see reference. However, if that doesn't work for some reason, you can replace it with the following function (adapted from here):
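A sketch of such a function, using the common trick of probing a PID with signal 0 (an assumption about what the linked snippet did, not a copy of it):

import errno
import os

def pid_exists(pid):
    # Signal 0 sends nothing; it only performs the kernel's error checking.
    try:
        os.kill(pid, 0)
    except OSError as e:
        # EPERM means the process exists but belongs to another user.
        return e.errno == errno.EPERM
    return True

The worker loop would then test pid_exists(self.parent_pid) on each iteration instead of comparing os.getppid() against the stored value.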
Now, when the Parent dies (for whatever reason), the child Workers will spontaneously drop like flies - just as you wanted, you daemon! :-D
You should store the parent pid when the child is first created (let's say in self.myppid); when self.myppid is different from getppid(), it means that the parent died.

To avoid checking whether the parent has changed over and over again, you can use PR_SET_PDEATHSIG, which is described in the signals documentation. In this case you want your process to die, so you can just set it to SIGHUP, like this:
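A minimal sketch of that call, assuming Linux; this version goes through ctypes, though the python-prctl package exposes the same operation as prctl.set_pdeathsig():

import ctypes
import signal

PR_SET_PDEATHSIG = 1  # constant from <sys/prctl.h>; Linux-only

def set_pdeathsig(sig=signal.SIGHUP):
    # Ask the kernel to deliver `sig` to this process when its parent dies.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.prctl(PR_SET_PDEATHSIG, sig) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")

The call has to be made in the child, e.g. at the top of Worker.run(); SIGHUP's default action then terminates the worker as soon as the parent is gone.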
atexit won't do the trick -- it only gets run on successful non-signal termination -- see the note near the top of the atexit docs. You need to set up signal handling via one of two means.
The easier-sounding option: set the daemon flag on your worker processes, per http://docs.python.org/library/multiprocessing.html#process-and-exceptions - see the sketch after these two options.
Somewhat harder-sounding option: PEP-3143 seems to imply there is a built-in way to hook program cleanup needs in python-daemon.
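A sketch of the easier option, reusing Worker, q, and _something_we_wait_for() from the question; daemonic children are terminated during the parent's normal interpreter shutdown, so the SIGTERM handler only needs to turn the signal into a clean exit (the handler wiring is an assumption, not spelled out in the answer):

import signal
import sys

with daemon.DaemonContext():
    workers = []
    for i in xrange(3):
        w = Worker(q)
        w.daemon = True  # must be set before start(); reaped when the parent exits
        w.start()
        workers.append(w)

    def shutdown(signum, frame):
        sys.exit(0)  # a normal exit lets multiprocessing clean up daemonic children

    signal.signal(signal.SIGTERM, shutdown)

    while True:  # let the Workers do their thing
        q.put(_something_we_wait_for())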