Doing work in parallel subshells and reporting per-folder status

Posted on 2025-01-26 13:02:04


I am trying to do work in all subfolders in parallel in bash, and report a status for each folder once it is done.

Suppose I have a `work` function which can return a couple of statuses:

# param #1 is the folder
# can return 1 on fail, 2 on success, 3 if nothing happened
work(){
    cd "$1" || return 1
    # ... some update thing ...
    return 1   # or 2, or 3 (pseudocode)
}

Now I call this in my wrapper function:

do_work(){

  while read -r folder; do
    tput cup "${row}" 20
    echo -n "${folder}"
    (
      ret=$(work "${folder}")
      tput cup "${row}" 0
      [[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed      \uf00d\e[0m"
      [[ $ret -eq 2 ]] && echo " \e[0;32mupdated            \uf00c\e[0m"
      [[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
    ) &>/dev/null
    pids+=("${!}")

    ((++row))
  done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
  echo "waiting for pids ${pids[*]}"

  wait "${pids[@]}"
}

What I want is that it prints out all the folders, one per line, updates them independently of each other in parallel, and, when each one is done, writes its status on that line.

However, I am unsure which subshell is writing, which ones I need to capture, and so on.
My attempt above currently neither writes correctly nor runs in parallel.
If I get it to work in parallel, I get those `[1] <PID>` and `[1]+ 3156389 Done ...` job-control messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the PIDs, I don't get the response code needed to print the text that shows the status.
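(For a single child, `wait` with that child's PID does report its exit status, which is one way to recover a return code from a backgrounded subshell; a minimal sketch, not the full solution:)

```shell
#!/usr/bin/env bash
# Minimal sketch: the exit status of a backgrounded subshell can be
# recovered later by waiting on its PID.
( exit 2 ) &      # stand-in for:  ( work "$folder" ) &
pid=$!

wait "$pid"       # blocks until that child exits
ret=$?            # wait's return status IS the child's exit status
echo "child exited with status $ret"    # prints: child exited with status 2
```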

I did have a look at GNU Parallel, but I don't think I can get that behaviour. (I think I could hack it so that finished jobs are printed, but I want all running jobs to be printed, with the finished ones amended in place.)

Comments (3)

柒七 2025-02-02 13:02:04


Assumptions/understandings:

  • a separate child process is spawned for each folder to be processed
  • the child process generates messages as work progresses
  • messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line

The general idea is to set up a means of interprocess communication (IPC) ... named pipe, normal file, queuing/messaging system, sockets (plenty of ideas available via a web search on bash interprocess communications); the children write to this system while the parent reads from it and issues the appropriate tput commands.

One very simple example using a normal file:

> status.msgs                           # initialize our IC file

child_func () {
    # Usage: child_func <unique_id> <other> ... <args>

    local i

    for ((i=1;i<=10;i++))
    do
        sleep $1

        # each message should include the child's <unique_id> ($1 in this case);
        # parent/monitoring process uses this <unique_id> to control tput output

        echo "$1:message - $1.$i" >> status.msgs
    done
}

clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )

while IFS=: read -r child msg
do
    tput cup $child 10
    echo "$msg"
done < <(tail -f status.msgs)

NOTES:

  • the (child_func 3 &) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways but I'm drawing a blank at the moment)
  • when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other
  • OP can get creative with the format of the messages printed to status.msgs in conjunction with parsing logic in the parent's while loop
  • assuming variable width messages OP may want to look at appending a tput el on the end of each printed message in order to 'erase' any characters leftover from a previous/longer message
  • exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done, or keeping track of the number of children still running in the background, or ...
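The exit condition from that last note might be sketched like this (a here-string stands in for `tail -f status.msgs` so the sketch is self-contained; the names and message format are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: stop reading once every child has reported "<id>:done".
expected=2        # number of children spawned
done_count=0

# in the real script the input would be:  done < <(tail -f status.msgs)
while IFS=: read -r child msg; do
    if [[ $msg == done ]]; then
        (( ++done_count ))
        (( done_count == expected )) && break
    else
        echo "row $child: $msg"   # real code: tput cup "$child" 10; echo "$msg"; tput el
    fi
done <<< $'3:step 1\n5:step 1\n3:done\n5:done'

echo "all $expected children done"
```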

Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):

                          # no output to line #1
  message - 2.10          # messages change from 2.1 to 2.2 to ... to 2.10
  message - 3.10          # messages change from 3.1 to 3.2 to ... to 3.10
                          # no output to line #4
  message - 5.10          # messages change from 5.1 to 5.2 to ... to 5.10

NOTE: comments not actually displayed in console
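For completeness, the named-pipe flavour of the same IPC idea could look roughly like this (a sketch; the FIFO name is arbitrary, and the demo writer/reader stand in for the child/parent logic above):

```shell
#!/usr/bin/env bash
# Sketch of the named-pipe (FIFO) variant of the IPC idea.
fifo=$(mktemp -u)          # pick an unused temp name for the FIFO
mkfifo "$fifo"

# a child writes "<id>:<message>" lines into the FIFO
( echo "3:working on folder3" > "$fifo" ) &

# the parent blocks on the FIFO and dispatches on the id;
# the loop ends when all writers have closed their end
while IFS=: read -r child msg; do
    echo "row $child -> $msg"    # real code: tput cup "$child" 10; echo "$msg"
done < "$fifo"

rm -f "$fifo"
```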

荭秂 2025-02-02 13:02:04


Based on @markp-fuso's answer:

printer() {
    while IFS=$'\t' read -r child msg
    do
        tput cup "$child" 10
        echo "$child $msg"
    done
}

clear
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
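One caveat worth flagging (an assumption about the OP's setup, since `work` is a bash function): GNU Parallel runs each job in a child shell, so the function must be exported with `export -f` before `parallel` can call it. The mechanism is the same one that makes a function visible to any child bash:

```shell
#!/usr/bin/env bash
# Sketch: a bash function is invisible to child shells (such as the ones
# GNU Parallel spawns) unless it is exported first.
work() {
    echo "updated $1"
}
export -f work

# a plain child bash (standing in for a parallel-spawned job) can now see work()
bash -c 'work folder1'     # prints: updated folder1
```

With that in place, `parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3` should be able to invoke `work` directly.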

奶茶白久 2025-02-02 13:02:04


You can't collect exit statuses like that. Try this instead: rework your work function to echo its status:

work(){
    cd "$1" || return
    # some update thing, with its output silenced via &>/dev/null
    echo "${1}_$status"   # status = 1, 2, or 3
}

And then set up data collection from all folders like so:

data=$(
    while read -r folder; do
          work  "$folder" &
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    wait
)

echo "$data"
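The collected lines could then be mapped back to per-folder status text with something like this (illustrative names; note that the `_` delimiter breaks if folder names themselves contain underscores):

```shell
#!/usr/bin/env bash
# Sketch: turn the collected "<folder>_<status>" lines back into messages.
data=$'folder1_2\nfolder2_1\nfolder3_3'   # stand-in for the collected $data

while IFS=_ read -r folder status; do
    case $status in
        1) echo "$folder: update failed" ;;
        2) echo "$folder: updated" ;;
        3) echo "$folder: already up to date" ;;
    esac
done <<< "$data"
```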