Controlling the processes started by a Bash daemon
In bash, I have created a simple daemon to execute commands when my internet connection changes:
#!/bin/bash
doService(){
    while true; do
        checkTheInternetConnection
        sleep 15
    done
}

checkTheInternetConnection(){
    if unchanged since last check; then
        return
    else
        execute someCommand
    fi
}

someCommand(){
    do something
}

doService
And this has been working pretty well for what I need it to do.
The only problem is that as a part of my "someCommand" and "checkTheInternetConnection" I use other built-in utilities like arp, awk, grep, head, etc.
However, 99% of the time, I will just need arp.
First question: Is it necessary to keep the other commands open? Is there a way to kill a command once I've already processed its output?
Another question: (MOVED TO A NEW POST)
I am having a hell of a time trying to write a "kill all other daemon processes" function. I never want more than one daemon running at once. Any suggestions? This is what I have:
otherprocess=`ps ux | awk '/BashScriptName/ && !/awk/ {print $2}'| grep -Ev $$`
WriteLogLine "Checking for running daemons."
if [ "$otherprocess" != "" ]; then
WriteLogLine "There are other daemons running, killing all others."
VAR=`echo "$otherprocess" |grep -Ev $$| sed 's/^/kill /'`
`$VAR`
else
WriteLogLine "There are no daemons running."
fi
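As an aside, the PID-exclusion step in the snippet above is fragile: `grep -Ev $$` treats the current PID as a regular expression, so it also filters out any PID that merely contains those digits. A minimal sketch of a safer variant (the function name is made up, and it assumes `pgrep` from procps/BSD is available):

```shell
# Hypothetical helper, not the poster's code: find other processes whose
# command line matches the script's name and kill every one except
# ourselves. Comparing PIDs with `!=` avoids the regex pitfall of
# `grep -Ev $$`, which also drops unrelated PIDs containing those digits.
kill_other_daemons() {
    for pid in $(pgrep -f "$1"); do
        if [ "$pid" != "$$" ]; then
            kill "$pid"
        fi
    done
    return 0
}

kill_other_daemons "BashScriptName"
```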
2 Answers
Can you give more detail on the first question? I think you are asking about running several commands piped together (cat xxx | grep yyy | tail -zzz).
Each command keeps running until its input pipe reaches EOF. So in this example, grep only exits after cat has processed all of its input and closed its end of the pipe. There is one subtlety, though: cat can only finish if grep is keeping up with (at least the buffered part of) the input, because write calls on a full pipe block. You need to keep this in mind when designing your scripts.
But I don't think you should worry about the standard utilities. They generally have a low memory footprint, if that is the concern.
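The EOF behaviour described above can be seen with a tiny experiment (a sketch, assuming the usual `yes` and `head` utilities):

```shell
# `head -n 1` exits after reading one line of input; the endlessly
# writing `yes` is then killed by SIGPIPE on its next write, so the
# pipeline terminates immediately instead of running forever.
first=$(yes hello | head -n 1)
echo "$first"   # prints: hello
```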
For your first question. I don't quite understand it fully, but I can see that you may be asking one of two things.
Neither 1 nor 2 will leave utility commands "open" after they are finished running. You could prove this by putting in a check throughout the code to see just what is running by that name, or one which will list out the things that the current process has spawned.
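The exact snippets this answer showed did not survive extraction, but a guess at the kind of check meant (an assumption, not the original text; it relies on `pgrep -P`, which matches on parent PID):

```shell
# Spawn a child so there is something to observe, then list the
# processes the current shell has spawned by matching on our own PID.
sleep 30 &
children=$(pgrep -P $$)
echo "children of $$: $children"
kill "$!"    # clean up the background sleep
```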
UPDATE:
So, it seems that you are interested with how things run from a pipe. You can see this visually using the following command:
compare that with something like:
Notice how the 'ls' is no longer in the process list, but the find is. You may have to add more "grep bin" stages to the pipeline to see the effect. Once the first command finishes outputting, it closes, even if the rest of the commands have not yet finished. The other commands finish as they are done processing the output from the first (hence the pipelined nature).
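The two commands being compared here were lost from the page. A plausible reconstruction (an assumption, not the original text) puts `ps` at the end of the pipeline: `ps` ignores its stdin, so it simply snapshots the process table while the earlier stages are running.

```shell
# Fast producer: `ls /` has normally exited before `ps` runs,
# so it does not appear in the snapshot.
snap_ls=$(ls / | grep bin | ps ax)

# Slow producer: `find /` is usually still walking the filesystem when
# `ps` takes its snapshot; once ps exits, find is killed by SIGPIPE,
# so the pipeline still terminates quickly.
snap_find=$(find / 2>/dev/null | grep bin | ps ax)

case $snap_find in
    *'find /'*) echo "find was still in the process list" ;;
    *)          echo "find had already finished" ;;
esac
```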