How to wait for subprocesses in a Bash script and stop all of them if one fails

Posted on 2025-01-21 10:25:20

How can I wait for subprocesses in a Bash script and, if one of them returns exit code 1, stop all of the subprocesses?

This is what I tried to do.
But there are some issues:

  1. If the first process runs longer than all the others and another process fails in the background, the script keeps waiting for the first process to finish, even though another one has already failed.

  2. The failure of doSomething can't be detected, because its output goes through a pipe for the desired print format (see the short illustration after this list).
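
A pipeline's exit status is the status of its last command, so process_log's zero status masks doSomething's failure; bash's set -o pipefail option or the PIPESTATUS array would expose it. A minimal sketch of that behaviour (separate from the script below):

    #!/bin/bash

    ( exit 1 ) | cat
    echo "plain pipeline status: $?"             # 0: the first stage's failure is hidden

    ( exit 1 ) | cat
    echo "first stage status: ${PIPESTATUS[0]}"  # 1: PIPESTATUS records every stage

    set -o pipefail
    ( exit 1 ) | cat
    echo "pipefail status: $?"                   # 1: the pipeline fails if any stage fails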

    #!/bin/bash
    
    function doSomething()
    {
            echo [ $1 start ]
    
            sleep $1
    
            if [ $1 == 10 ]; then
                    failed
            fi
    
            echo [ sleep $1 ]: done
    }
    
    function failed(){
                    sleep 2
                    echo ------ process failed ------
                    exit 1
    }
    
    function process_log() {
            local NAME=$1
            while read Line; do
                    echo [Name ${NAME}]: ${Line}
            done
    }
    
    pids=""
    
    
    (doSomething 4 | process_log 4)&
    pids+="$! "
    
    (doSomething 17 | process_log 17)&
    pids+="$! "
    
    (doSomething 6 | process_log 6)&
    pids+="$! "
    
    (doSomething 10 | process_log 10)&
    pids+="$! "
    
    (doSomething 22 | process_log 22)&
    pids+="$! "
    
    (doSomething 5 | process_log 5)&
    pids+="$! "
    
    
    for pid in $pids; do
           wait $pid || (pkill -P $$ ; break)
    done
    
    echo done program

Anyone have an idea?

Comments (2)

错爱 2025-01-28 10:25:20

The gist of it would be:

#!/bin/bash
set -m # needed for using negative PIDs
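# on USR1 (sent by a failing child): list the PIDs of the still-running background
# jobs, prefix each with '-' so that kill signals their whole process groups,
# then reap them with wait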
trap '{ kill -- $(jobs -rp | sed s/^/-/); wait; } 2> /dev/null' USR1

doSomething() {
    echo "[ $1 start ]"
    sleep "$1"
    [[ $1 == 10 ]] && failed
    echo "[ sleep $1 ]: done"
}

failed(){
    echo "------ process failed ------" 1>&2
    kill -USR1 "$$"  # $$ is the main script's PID, even inside the backgrounded subshell
}

process_log() {
    local name="$1" line
    while IFS='' read -r line; do
        echo "[Name $name]: $line"
    done
}

{ doSomething  4 | process_log  4; } &
{ doSomething 17 | process_log 17; } &
{ doSomething  6 | process_log  6; } &
{ doSomething 10 | process_log 10; } &
{ doSomething 22 | process_log 22; } &
{ doSomething  5 | process_log  5; } &

wait

echo "done program"

Output:

[Name 4]: [ 4 start ]
[Name 6]: [ 6 start ]
[Name 17]: [ 17 start ]
[Name 5]: [ 5 start ]
[Name 10]: [ 10 start ]
[Name 22]: [ 22 start ]
[Name 4]: [ sleep 4 ]: done
[Name 5]: [ sleep 5 ]: done
[Name 6]: [ sleep 6 ]: done
------ process failed ------
[Name 10]: [ sleep 10 ]: done
done program
Explanations

The idea is to make the sub-processes notify the parent script when they fail (with a SIGUSR1 signal); the main script will then kill all the sub-processes when it receives that signal.
There's a problem though: killing the PID of a sub-process might not be enough, for example when it is currently running a pipeline (a command containing a |). In those cases you need to kill the whole process group, which can be done by enabling job control with set -m and using a negative PID in the kill command.
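
A minimal sketch of that last point (assuming a bash where set -m can be enabled in a script): under job control each background job becomes its own process group, and a negative PID passed to kill signals every process in that group, pipeline members included.

#!/bin/bash
set -m                                    # job control: each background job gets its own process group
{ sleep 100 | sleep 100; } &              # one job, several processes in the same group
pid=$!                                    # PID of the job leader == PGID of the group under set -m
{ kill -- "-$pid"; wait; } 2> /dev/null   # negative PID: signal the whole group, not just the leader
echo "group $pid terminated"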

醉态萌生 2025-01-28 10:25:20

GNU parallel --halt-on-error now,fail=1 --tag

For that specific process_log that prefixes each line with the argument, GNU parallel can do it with a one-liner.

Install:

sudo apt install parallel

Test:

myfunc() {
    echo "start: $1"
    i=0
    while [ $i -lt $1 ]; do
      echo "$((i * $1))"
      sleep 1
      i=$((i + 1))
    done
    [[ $1 == 3 ]] && exit 1
    echo "end: $1"
}
export -f myfunc
parallel --lb --halt-on-error now,fail=1 --tag myfunc ::: 1 2 3 4 5

Output:

4       start: 4
4       0
3       start: 3
3       0
1       start: 1
1       0
2       start: 2
2       0
5       start: 5
5       0
1       end: 1
4       4
3       3
2       2
5       5
2       end: 2
4       8
3       6
5       10
parallel: This job failed:
myfunc 3

So we see that 4 and 5 never finished, because 3 failed before them, and each line is prefixed by its input argument.

GNU parallel can also cover some other common prefixing use cases.

My extra remarks from Stop bash if any of the functions fail in parallel also apply here.

Tested on Ubuntu 22.04.
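
For the question's own scenario, a sketch along the same lines (assumptions: the failure message is folded directly into doSomething, and -j0 is added so that all six jobs run simultaneously):

#!/bin/bash
doSomething() {
    echo "[ $1 start ]"
    sleep "$1"
    if [[ $1 == 10 ]]; then
        echo "------ process failed ------" >&2
        exit 1
    fi
    echo "[ sleep $1 ]: done"
}
export -f doSomething

# --tag prefixes every output line with the argument (replacing process_log),
# --lb line-buffers the output, and --halt now,fail=1 kills the remaining
# jobs as soon as one of them exits with a non-zero status
parallel --lb --halt now,fail=1 --tag -j0 doSomething ::: 4 17 6 10 22 5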
