Perl script, fork/exec: the system claims my process has died when in fact only my child process has died
I have a Perl script that does a fork/exec to start another tool in the background and monitors some file system changes while this other tool is running. This seems to work as expected.
When I start this Perl script from a shell (e.g. Bash), the shell prompt should of course be gone for as long as my Perl script is running. And it will keep running until the expected file modification has taken place; there is no guarantee, though, that the file modification isn't done by the external tool itself. In that case the external tool will exit, but my script will keep running and has to handle that situation somehow - this handling is beyond the scope of the question and not related to my problem (so far it is not even implemented).
My problem is that as soon as my child process dies, Bash returns to its prompt, claiming my process has finished running... which is not true. It clearly is still running in the background and it still waits for the file system modification. If I keep printing some text in the main loop of the script, this text is still printed, even though bash has returned back to the prompt already.
I cannot figure out what makes bash believe my process has quit. I tried blocking the SIGCHLD signal in my script, I tried closing and/or redirecting STDOUT/STDERR/STDIN (which are duplicated on fork, but you never know) - no success. I even tried the famous "double fork", to make the final child independent of my script process, with the same outcome. No matter what I do, as soon as my child (or grandchild) dies, Bash believes my process has quit. Starting my script in the background (using "&" at the end) even makes Bash tell me that process XYZ has finished (and it names my process here, not the child process, even though my process is happily alive and printing to the terminal via STDOUT at that very moment).
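For reference, a minimal sketch of the "double fork" mentioned above, as I understand it (the `sleep 1` stands in for the real external tool, which is not shown in the question): the intermediate child forks a grandchild and exits immediately, so the grandchild is reparented away from the script and the script only ever reaps the short-lived intermediate child.

```perl
use strict;
use warnings;
use POSIX qw(setsid _exit);

# First fork: create an intermediate child.
# fork() returns the child's PID to the parent, 0 to the child,
# and undef on failure.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Intermediate child: detach from the controlling terminal/session,
    # fork the grandchild, then exit right away.
    setsid();
    my $gpid = fork();
    _exit(1) unless defined $gpid;
    if ($gpid == 0) {
        # Grandchild: replace this process with the external tool
        # ('sleep 1' is a placeholder).
        exec('sleep', '1') or _exit(1);
    }
    _exit(0);
}

# Original script: reap only the intermediate child, then carry on.
waitpid($pid, 0);
print "script continues independently of the grandchild\n";
```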
If this was only an issue of Bash, I couldn't care any less, but other third party software that is supposed to run my script acts the same way. As soon as my child dies, they claim that my script has in fact died, which is simply not true.
1 Answer
Just a sanity check: is your main program walking the right fork? It should follow the non-zero path.
From man fork (RETURN VALUE):

    On success, the PID of the child process is returned in the parent,
    and 0 is returned in the child.  On failure, -1 is returned in the
    parent, no child process is created, and errno is set appropriately.

(In Perl, fork returns undef rather than -1 on failure.)
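In other words, if the branches are swapped, the parent ends up inside the exec and the monitoring loop runs in the child, so when the tool's process tree finishes, the shell correctly sees the process it started exit. A minimal sketch of the correct shape (again using `sleep 1` as a placeholder for the external tool):

```perl
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # $pid == 0: we are the CHILD -- run the external tool here.
    exec('sleep', '1') or die "exec failed: $!";
}

# Non-zero $pid: we are the PARENT -- the monitoring loop belongs here,
# and this process keeps running after the child exits.
waitpid($pid, 0);
print "child $pid exited, parent $$ is still alive\n";
```

If the `$pid == 0` test is inverted (or missing), it is the script itself that gets replaced by the tool, which would produce exactly the symptoms described.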