Python threads and daemons
I'm working with daemons in Python using Sander Marechal's recipe.
It has run fine so far, but now I have to introduce some changes that are racking my brain.
The facts are these:
Sander's daemon design creates a new instance of the object every time you invoke the script. Example:
[prompt]> python my_daemon.py start
[prompt]> python my_daemon.py check_whatever (a new instance of my_daemon.py is created, but it looks up the pidfile and finds the first one)
The new instance then looks up the pid of the original one, and through that pid it can reach and manage the daemon.
The situation is: this daemon spawns two threads that keep working after the daemon has executed the start command and is ready to accept another one (remember, each command creates a new instance). I would like to access these threads from another command, but I haven't found a way to do it (if there is one).
As far as I have researched, with the pid you can only kill or check the daemon; I don't know whether it is possible to get at the objects (and therefore the threads) created by that instance.
Open questions:
- If I can recover the process from its pid, can I also access its objects?
- Should I consider converting these threads into subprocesses, in order to keep them alive after the main thread has finished (or is still waiting)?
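For the first question: a pid identifies the process to the operating system, but the Thread objects live only inside that process's memory, so a second invocation of the script cannot reach them directly. What the pid does allow is sending the daemon a signal, which the daemon can translate into an in-process event its threads watch. A minimal sketch (the handler and event names are made up, and for a self-contained demo the process signals itself instead of a second invocation reading the pidfile):

```python
import os
import signal
import threading

# Shared flag the daemon's worker threads poll; a pid alone cannot
# expose this object, but a signal handler running *inside* the
# daemon process can set it.
reload_requested = threading.Event()

def on_sigusr1(signum, frame):
    # Runs in the daemon process, so it can touch the daemon's objects.
    reload_requested.set()

signal.signal(signal.SIGUSR1, on_sigusr1)

def worker():
    # A worker thread waits on the event instead of on the pid.
    reload_requested.wait(timeout=5)

t = threading.Thread(target=worker)
t.start()

# A second invocation of my_daemon.py would read the pidfile and call
# os.kill(pid, signal.SIGUSR1); here we signal ourselves for the demo.
os.kill(os.getpid(), signal.SIGUSR1)
t.join()
print(reload_requested.is_set())  # True: the running thread saw the command
```

Note that Python delivers signal handlers in the main thread only, so the handler's job is just to flip an event that the worker threads observe.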
I'm not sure whether a new process would be necessary per se, but it would probably be a lot cleaner to implement than having to carefully manage your way around the GIL. It depends on the size of the tasks the daemon's threads execute, and on whether they are pure Python or can release the GIL by calling into a foreign library.
As for accessing by pid, I'm not sure whether that's possible; it definitely isn't for threads under Windows, as they do not have their own pid. It seems much simpler to me to keep a pipe open to your new threads.
There are really too many open architecture decisions to answer this cleanly, but if you have already made those decisions, feel free to add more info to your question and I'll try to give a more informed answer.
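The pipe suggestion might be sketched like this: the daemon creates the pipe before spawning its worker threads, so the threads hold the read end while commands are written to the other end. In a real daemon the write end would be a named pipe (FIFO) that a later invocation opens; for a self-contained demo both ends live in one process, and the command name is hypothetical:

```python
import os
import threading

# Create the pipe before spawning the workers, so they inherit the
# read end and can block on it waiting for commands.
read_fd, write_fd = os.pipe()
received = []

def worker():
    # One newline-terminated command per line keeps the protocol trivial.
    with os.fdopen(read_fd, "r") as commands:
        for line in commands:          # ends when the write end is closed
            received.append(line.strip())

t = threading.Thread(target=worker)
t.start()

# What a controlling invocation would do with its end of the pipe:
with os.fdopen(write_fd, "w") as control:
    control.write("check_whatever\n")  # hypothetical command name

t.join()
print(received)  # ['check_whatever']
```

With a FIFO created via os.mkfifo, the same worker loop works across processes: the daemon opens the FIFO for reading and any later `my_daemon.py` invocation opens it for writing.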
After doing a "purge" in the content of the code, reducing active lines to the minimum, I realized that threads stand still (as good warriors). There are some problems when I deal with files which I think can be founded in the behaviour of the daemon itself (because it sets file descriptors to null).
For now, I consider myself happy, cause I can use some dummy protocol to alter the running behaviour of those threads (for example a dummy config file).
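The dummy-config-file protocol described above might look like the following sketch: a worker thread re-reads a file between work cycles, so a later invocation of the script only has to rewrite that file to change the thread's behaviour. The file name, command values, and polling interval are all invented for the demo:

```python
import os
import pathlib
import tempfile
import threading
import time

# A throwaway "config" file standing in for the dummy protocol.
fd, path = tempfile.mkstemp()
os.close(fd)
config = pathlib.Path(path)
config.write_text("run\n")

seen = []
stop = threading.Event()

def worker():
    while not stop.is_set():
        command = config.read_text().strip()  # re-read between cycles
        seen.append(command)
        if command == "pause":
            stop.set()            # react to the rewritten "configuration"
        time.sleep(0.05)          # polling interval: a latency/IO trade-off

t = threading.Thread(target=worker)
t.start()

time.sleep(0.1)                   # let the thread see the initial value
config.write_text("pause\n")      # what a second CLI invocation would do
t.join(timeout=2)
print("pause" in seen)  # True
config.unlink()
```

Polling a file is the simplest possible channel; the trade-off versus a pipe is latency (bounded by the sleep) and the lack of any notification, in exchange for not having to keep any descriptor open between invocations.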