How to monitor the progress of an ssh file transfer and log it to a file
I'm writing a bash script to periodically transfer data to a remote system. I have a local command that generates a stream, and a remote command that consumes it, so I'm doing something like this:
generate_data | ssh remoteserver.example.com consume_data
(Where I have ssh keys set up so I can do this non-interactively.) This works fine. However, since this will be an automated process (running as a cron job) and may sometimes be transferring large amounts of data on limited bandwidth, I'd like to be able to place periodic progress updates in my log file. I had thought to use pv
(pipe viewer) for this, and this is the best I could come up with:
generate_data | pv -fb | ssh remoteserver.example.com consume_data
Again, it works... but pv was really written with terminal output in mind, so I end up with a mess in the log that looks like
2.06MB^M2.19MB^M2.37MB^M 2.5MB^M2.62MB^M2.87MB^M3MB^M3.12MB^M3.37MB
I'd prefer log messages along the lines of
<timestamp> 2.04MB transferred...
<timestamp> 3.08MB transferred...
If anybody has any clever ideas of how to do this, either with different arguments to pv
or via some other mechanism, I'd be grateful.
EDIT: Thanks for the answers so far. Of course there are a lot of possible home-brewed solutions; I was hoping to find something that would work "out of the box." (Not that I'm ruling out home-brewed; it may be the easiest thing in the end. Since pv
already does 98% of what I need, I'd prefer not to re-invent it.)
PostScript: Here's the line I ended up using, in hopes it might help someone else at some point.
{ generate_data | pv -fbt -i 60 2>&3 | ssh remoteserver consume_data; } 3>&1 | awk -v RS='\r' '{print $1 " transferred in " $2; fflush();}' >> logfile
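For anyone picking this apart later, here is the same pipeline spread out with comments; it is only an annotated restatement (generate_data, consume_data, remoteserver and logfile are placeholders, as above).

{
    # pv prints its progress on stderr (fd 2); send it to fd 3, which the
    # redirection after the braces ties to the pipe feeding awk. -f forces
    # output even though stderr is not a terminal, -b shows the byte count,
    # -t the elapsed time, and -i 60 updates once a minute.
    generate_data | pv -fbt -i 60 2>&3 | ssh remoteserver consume_data
} 3>&1 |
# pv separates its updates with carriage returns, so use CR as the record
# separator; fflush() (common, though not standard awk) writes each line to
# the log right away instead of waiting for a full output buffer.
awk -v RS='\r' '{ print $1 " transferred in " $2; fflush(); }' >> logfile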
6 Answers
If you want to stick with pv, you could postprocess its output a little. At a minimum, turn the CRs into LFs. Use awk for fancier processing.
Do however keep in mind that the standard text processing utilities only flush their output at the end of each line if they're printing to a terminal. So if you pipe pv into some other utility whose output goes to a pipe or file, there will be a non-negligible delay due to buffering. If you have GNU awk or some other implementation that has the fflush function (it's common but not standard), make it flush its output on every line:
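A minimal sketch of both ideas, reusing the question's placeholder commands (pv reports on stderr, hence the same fd 3 routing used in the PostScript line above; strftime is a GNU awk extension):

# At a minimum: turn pv's carriage returns into newlines.
{ generate_data | pv -fb 2>&3 | ssh remoteserver consume_data; } 3>&1 | tr '\r' '\n' >> logfile

# Fancier: timestamp each update and flush so entries reach the log immediately.
{ generate_data | pv -fb -i 60 2>&3 | ssh remoteserver consume_data; } 3>&1 |
awk -v RS='\r' '{ print strftime("%F %T"), $1, "transferred..."; fflush(); }' >> logfile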
Here's a small ruby script that I believe does what you want. With the overhead of ruby, I only get about 1MB per second copying a file to the local filesystem, but you mentioned the pipes will have limited bandwidth so this may be OK. I pulled the number_to_human_size function from rails (actionview).
You might have a look at bar: http://clpbar.sourceforge.net/
Source code is at http://code.google.com/p/pipeviewer/source/checkout so you can edit some C and use PV!
EDIT:
Yes: get the source, then edit line 578 of display.c (the line that contains the "\r"). You can change "\r" to "\n" and recompile. This may get you some more useful output, by putting each update on a new line. You could also try reformatting that entire output string if you wanted.
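A rough sketch of that workflow, assuming pv's usual configure-and-make build (the line number, and whether it still holds the "\r", can drift between versions, so check before editing):

# after checking out the source from the link above and changing "\r" to "\n" in display.c
./configure
make
sudo make install    # or run the freshly built pv binary directly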
Maybe a small Perl program that copies STDIN to STDOUT and prints its progress to STDERR?
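The same idea sketched in shell rather than Perl; a rough, hypothetical filter that copies stdin to stdout in 1 MiB chunks and logs a running byte count on stderr (head -c and /dev/fd/3 are common but not strictly POSIX, and the per-chunk process spawns add some overhead):

#!/bin/sh
# progress filter sketch: copy stdin to stdout, reporting totals on stderr
exec 3>&1                  # keep a handle on the real stdout
copied=0
while :; do
    # tee passes each chunk through to fd 3 (stdout) while wc counts it
    n=$(head -c 1048576 | tee /dev/fd/3 | wc -c)
    [ "$n" -eq 0 ] && break
    copied=$((copied + n))
    printf '%s %d bytes transferred...\n' "$(date '+%F %T')" "$copied" >&2
done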
Check out SSHLog: https://github.com/sshlog/agent/
It's a daemon that monitors SSH logins as well as user activity. All user activity (everything that happens on the shell) is passively recorded and can be directed to various outputs.
It monitors SCP file transfers as well. You can configure it to listen to file transfer events and write just those events to a log file.