Force output to be flushed to a file while a bash script is still running

Posted on 2024-08-04 09:56:52

I have a small script, which is called daily by crontab using the following command:

/homedir/MyScript &> some_log.log

The problem with this method is that some_log.log is only created after MyScript finishes. I would like to flush the output of the program into the file while it's running so I could do things like

tail -f some_log.log

and keep track of the progress, etc.


Comments (14)

末骤雨初歇 2024-08-11 09:56:53

bash itself will never actually write any output to your log file. Instead, the commands it invokes as part of the script will each individually write output and flush whenever they feel like it. So your question is really how to force the commands within the bash script to flush, and that depends on what they are.
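To see this per-command behavior in action, here is a small sketch (the file name `lines.txt` is illustrative; it assumes GNU `grep` and the coreutils `stdbuf` utility are available): `grep` normally block-buffers when its output is not a terminal unless told otherwise, and `stdbuf -oL` forces line buffering on programs that use default stdio buffering.

```shell
# Each command in a pipeline chooses its own buffering; these two
# pipelines force line-by-line flushing at the command level.
printf 'a\nb\n' | grep --line-buffered . > lines.txt    # grep flushes per line
printf 'c\nd\n' | stdbuf -oL cat >> lines.txt           # cat forced to line buffering
wc -l < lines.txt                                       # 4
```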

ヅ她的身影、若隐若现 2024-08-11 09:56:53

You can use tee to write to the file without the need for flushing.

/homedir/MyScript 2>&1 | tee some_log.log > /dev/null
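A minimal end-to-end sketch of this pattern (`printf` stands in for the script): tee writes each chunk to the file as soon as it reads it, so the log can be followed with tail -f while the pipeline is still running; > /dev/null discards the duplicate copy tee also sends to stdout.

```shell
# tee copies its input to the log as it arrives; the file fills up
# while the producer is still running.
printf 'one\ntwo\n' | tee some_log.log > /dev/null
wc -l < some_log.log    # 2
```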
苦行僧 2024-08-11 09:56:53

This isn't a function of bash, as all the shell does is open the file in question and then pass the file descriptor as the standard output of the script. What you need to do is make sure output is flushed from your script more frequently than you currently are.

In Perl for example, this could be accomplished by setting:

$| = 1;

See perlvar for more information on this.

不羁少年 2024-08-11 09:56:53

Would this help?

tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq 

This will immediately display unique entries from access.log using the stdbuf utility.

戒ㄋ 2024-08-11 09:56:53

Buffering of output depends on how your program /homedir/MyScript is implemented. If you find that output is getting buffered, you have to force a flush in your implementation: for example, use sys.stdout.flush() if it's a Python program, or fflush(stdout) if it's a C program.
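For the Python case, a small sketch (assumes python3 on the PATH; the log name is made up): besides calling sys.stdout.flush() after each write, the -u flag (or PYTHONUNBUFFERED=1) disables stdout buffering for the whole run.

```shell
# -u makes python write stdout unbuffered, so the log fills while the
# program is still running; the explicit flush() works per write.
python3 -u -c 'print("step 1"); print("step 2")' > py_run.log
python3 -c 'import sys; sys.stdout.write("step 3\n"); sys.stdout.flush()' >> py_run.log
wc -l < py_run.log    # 3
```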

山有枢 2024-08-11 09:56:53

Thanks @user3258569, script is maybe the only thing that works in busybox!

The shell was freezing for me after it, though. Looking for the cause, I found this big warning about non-interactive use in the script manual page:

script is primarily designed for interactive terminal sessions. When
stdin is not a terminal (for example: echo foo | script), then the
session can hang, because the interactive shell within the script
session misses EOF and script has no clue when to close the session.
See the NOTES section for more information.

True. script -c "make_hay" -f /dev/null | grep "needle" was freezing the shell for me.

Contrary to the warning, I thought that echo "make_hay" | script would pass an EOF, so I tried

echo "make_hay; exit" | script -f /dev/null | grep 'needle'

and it worked!

Note the warnings in the man page. This may not work for you.

不弃不离 2024-08-11 09:56:53

As spotted here, the problem is that you have to wait for the programs you run from your script to finish their jobs.
If in your script you run a program in the background, you can try something more.

In general, a call to sync before you exit flushes the file system buffers and can help a little.

If in the script you start some programs in the background (&), you can wait for them to finish before you exit from the script. To get an idea of how this can work, see below:

#!/bin/bash
#... some stuff ...
program_1 &          # start program 1 in the background
PID_PROGRAM_1=${!}   # remember its PID
#... some other stuff ...
program_2 &          # start program 2 in the background
wait ${!}            # wait for it to finish (not really useful here)
#... some other stuff ...
daemon_1 &           # we will not wait for this one to finish
program_3 &          # start program 3 in the background
PID_PROGRAM_3=${!}   # remember its PID
#... last other stuff ...
sync
wait $PID_PROGRAM_1
wait $PID_PROGRAM_3  # program 2 has already ended
# ...

Since wait works with jobs as well as with PID numbers, a lazy solution is to put this at the end of the script:

for job in `jobs -p`
do
   wait $job
done

The situation is more difficult if you run something that runs something else in the background, because then you have to find and wait for (where appropriate) the end of all the child processes: for example, if you run a daemon, it probably does not make sense to wait for it to finish :-).

Note:

  • wait ${!} means "wait till the last background process is completed", where $! is the PID of the last background process. So putting wait ${!} just after program_2 & is equivalent to executing program_2 directly, without sending it to the background with &;

  • From the help of wait:

    Syntax
        wait [n ...]
    Key
        n - a process ID or a job specification
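As a footnote to the jobs -p loop above, bash's plain wait with no arguments already waits for every background job of the current shell, which can express the same thing more simply (a sketch with sleep standing in for real programs):

```shell
#!/bin/bash
# With no arguments, wait blocks until all background jobs have
# exited, equivalent to looping over `jobs -p`.
sleep 0.2 &
sleep 0.2 &
wait
echo "all background jobs done"
```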
    
熟人话多 2024-08-11 09:56:53

An alternative to stdbuf is awk '{print} END {fflush()}'.
I wish there were a bash builtin to do this.
Normally it shouldn't be necessary, but with older bash versions there might be synchronization bugs on file descriptors.
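A variant worth noting (a sketch; works with GNU and BSD awk): calling fflush() in the per-line rule, rather than only in the END block, flushes after every line, which is what a tail -f reader needs.

```shell
# fflush() after each print defeats awk's output buffering line by line.
printf 'a\nb\n' | awk '{ print; fflush() }' > awk_out.log
wc -l < awk_out.log    # 2
```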

养猫人 2024-08-11 09:56:53

I had a similar problem where a redirect was sometimes buffered.

I couldn't easily use stdbuf because my command was a bash function; you would have to export the function in that case, which was too much work.

My workaround was to touch the file before using it.

e.g.

mybashfunction >> ${mybufferedfile}
touch ${mybufferedfile}   # I needed to add this
diff ${mybufferedfile} identical.output.txt

Without the touch command, the file was sometimes still buffered, and diff found differences because of the incomplete buffered file.

帥小哥 2024-08-11 09:56:53

I had this problem with a background process on Mac OS X using StartupItems. This is how I solved it:

If I run sudo ps aux, I can see that mytool is launched.

I found that (due to buffering) when Mac OS X shuts down, mytool never transfers its output to the sed command. However, if I execute sudo killall mytool, then mytool does transfer its output to the sed command. Hence, I added a stop case to the StartupItems that is executed when Mac OS X shuts down:

start)
    if [ -x /sw/sbin/mytool ]; then
      # run the daemon
      ConsoleMessage "Starting mytool"
      (mytool | sed .... >> myfile.txt) & 
    fi
    ;;
stop)
    ConsoleMessage "Killing mytool"
    killall mytool
    ;;
谈下烟灰 2024-08-11 09:56:53

Well, like it or not, this is how redirection works.

In your case the output of your script is redirected to that file, and its buffered contents only settle once your script has finished.

What you want to do is add those redirections inside your script itself.
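One way to read this suggestion (a sketch; the log name follows the question): redirect inside the script itself with exec, so the script opens and owns the log file from its first line instead of relying on the crontab redirection.

```shell
#!/bin/bash
# exec with no command rewires the shell's own stdout/stderr; every
# later command in the script inherits the redirection to the log.
exec > some_log.log 2>&1
echo "started"
```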

始终不够 2024-08-11 09:56:53

I don't know if it would work, but what about calling sync?

智商已欠费 2024-08-11 09:56:52

I found a solution to this here. Using the OP's example you basically run

stdbuf -oL /homedir/MyScript &> some_log.log

and then the buffer gets flushed after each line of output. I often combine this with nohup to run long jobs on a remote machine.

stdbuf -oL nohup /homedir/MyScript &> some_log.log

This way your process doesn't get cancelled when you log out.
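A self-contained check of this behavior (a sketch; slow.sh is created here only as a stand-in for /homedir/MyScript, and stdbuf from GNU coreutils is assumed):

```shell
# Create a stand-in script, run it under stdbuf -oL, and confirm the
# log receives its lines; each line is flushed as it is printed.
cat > slow.sh <<'EOF'
#!/bin/sh
for i in 1 2 3; do echo "step $i"; done
EOF
chmod +x slow.sh
stdbuf -oL ./slow.sh > some_log.log 2>&1
wc -l < some_log.log    # 3
```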

扎心 2024-08-11 09:56:52
script -c <PROGRAM> -f OUTPUT.txt

Key is -f. Quote from man script:

-f, --flush
     Flush output after each write.  This is nice for telecooperation: one person
     does 'mkfifo foo; script -f foo', and another can supervise real-time what is
     being done using 'cat foo'.

Run in background:

nohup script -c <PROGRAM> -f OUTPUT.txt