How to monitor the progress of an ssh file transfer and write it to a log file

Published 2024-11-07 13:07:14

I'm writing a bash script to periodically transfer data to a remote system. I have a local command that generates a stream, and a remote command that consumes it, so I'm doing something like this:

generate_data | ssh remoteserver.example.com consume_data

(Where I have ssh keys set up so I can do this non-interactively.) This works fine. However, since this will be an automated process (running as a cron job) and may sometimes be transferring large amounts of data on limited bandwidth, I'd like to be able to place periodic progress updates in my log file. I had thought to use pv (pipe viewer) for this, and this is the best I could come up with:

generate_data | pv -fb | ssh remoteserver.example.com consume_data

Again, it works... but pv was really written with terminal output in mind, so I end up with a mess in the log that looks like

2.06MB^M2.19MB^M2.37MB^M 2.5MB^M2.62MB^M2.87MB^M3MB^M3.12MB^M3.37MB

I'd prefer log messages along the lines of

<timestamp> 2.04MB transferred...
<timestamp> 3.08MB transferred...

If anybody has any clever ideas of how to do this, either with different arguments to pv or via some other mechanism, I'd be grateful.

EDIT: Thanks for the answers so far. Of course there are a lot of possible home-brewed solutions; I was hoping to find something that would work "out of the box." (Not that I'm ruling out home-brewed; it may be the easiest thing in the end. Since pv already does 98% of what I need, I'd prefer not to re-invent it.)

PostScript: Here's the line I ended up using, in hopes it might help someone else at some point.

{ generate_data | pv -fbt -i 60 2>&3 | ssh remoteserver consume_data; } 3>&1 | awk -vRS='\r' '{print $1 " transferred in " $2; fflush();}' >> logfile
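
For anyone puzzling over the fd juggling in that line: inside the braces, pv's stderr (the progress stream) is redirected to fd 3, and the outer 3>&1 makes fd 3 the pipe into awk, while the data itself still flows over ssh. The mechanics can be tried locally with stand-in commands (mock_pv and the sample numbers below are stand-ins, not real pv output):

```shell
# mock_pv stands in for `pv -fbt`: it passes the payload through on
# stdout and writes one "<bytes> <elapsed>" progress sample to stderr.
mock_pv() { cat; echo '1.00MB 0:00:01' >&2; }

# 2>&3 sends the progress samples to fd 3; the outer 3>&1 points fd 3
# at the pipe into awk, so progress is logged while data reaches the
# consumer (here a throwaway `cat >/dev/null`).
{ printf 'payload' | mock_pv 2>&3 | cat >/dev/null; } 3>&1 |
awk '{ print $1 " transferred in " $2 }'
```

which prints "1.00MB transferred in 0:00:01", exactly the shape of line that ends up in the log file above.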

6 Answers

梦里的微风 2024-11-14 13:07:14


If you want to stick with pv, you could postprocess its output a little. At a minimum, turn the CRs into LFs.

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 | tr '\015' '\012'

Use awk for fancier processing.

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 |
awk -vRS='\r' '{print $2, $1 " transferred"}'

Do however keep in mind that the standard text processing utilities only flush their output at the end of each line if they're printing to a terminal. So if you pipe pv to some other utility whose output goes to a pipe or file, there will be a non-negligible delay due to buffering. If you have GNU awk or some other implementation that has the fflush function (it's common but not standard), make it flush its output on every line:

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 |
awk -vRS='\r' '{print $2, $1 " transferred"; fflush()}'
少女的英雄梦 2024-11-14 13:07:14

Here's a small ruby script that I believe does what you want. With the overhead of ruby, I only get about 1MB per second copying a file to the local filesystem, but you mentioned the pipes will have limited bandwidth so this may be OK. I pulled the number_to_human_size function from rails (actionview).

#!/usr/bin/ruby                                                                                                                           
require 'rubygems'
require 'active_support'

# File vendor/rails/actionpack/lib/action_view/helpers/number_helper.rb, line 87                                                          
def number_to_human_size(size)
  case
    when size < 1.kilobyte then '%d Bytes' % size
    when size < 1.megabyte then '%.1f KB'  % (size / 1.0.kilobyte)
    when size < 1.gigabyte then '%.1f MB'  % (size / 1.0.megabyte)
    when size < 1.terabyte then '%.1f GB'  % (size / 1.0.gigabyte)
    else                    '%.1f TB'  % (size / 1.0.terabyte)
  end.sub('.0', '')
rescue
  nil
end

UPDATE_FREQ = 2
count = 0
time1 = Time.now

while (!STDIN.eof?)
  b = STDIN.getc
  count += 1
  print b.chr
  time2 = Time.now
  if time2 - time1 > UPDATE_FREQ
    time1 = time2
    STDERR.puts "#{time2} #{number_to_human_size(count)} transferred..."
  end
end
初与友歌 2024-11-14 13:07:14


You might have a look at bar: http://clpbar.sourceforge.net/

Bar is a simple tool to copy a stream of data and print a display for the user on stderr showing (a) the amount of data passed, (b) the throughput of the data transfer, and (c) the transfer time, or, if the total size of the data stream is known, the estimated time remaining, what percentage of the data transfer has been completed, and a progress bar.

Bar was originally written for the purpose of estimating the amount of time needed to transfer large amounts (many, many gigabytes) of data across a network. (Usually in an SSH/tar pipe.)

长途伴 2024-11-14 13:07:14


Source code is at http://code.google.com/p/pipeviewer/source/checkout so you can edit some C and use PV!

EDIT:
Yes: get the source, then edit line 578 of display.c, which has this code:

display = pv__format(&state, esec, sl, tot);
if (display == NULL)
    return;

if (opts->numeric) {
    write(STDERR_FILENO, display, strlen(display)); /* RATS: ignore */
} else if (opts->cursor) {
    pv_crs_update(opts, display);
} else {
    write(STDERR_FILENO, display, strlen(display)); /* RATS: ignore */
    write(STDERR_FILENO, "\r", 1);
}

You can change "\r" to "\n" and recompile; that puts each update on its own line, which is much more useful in a log. You could also reformat the entire output string if you wanted.
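
Recompiling may not even be necessary: the opts->numeric branch in the code above is the one pv's -n (--numeric) flag selects, and it already terminates each sample with a newline. Combined with -b it emits bytes transferred rather than percentages (check your pv version's man page; -n alone needs a known total size). A sketch with a stub in place of pv -nb so it runs even without pv installed:

```shell
# Stub for `pv -nb -i 60`: pass data through and write newline-terminated
# byte counts to stderr, the way pv --numeric does.
mock_pv_numeric() { cat; printf '2158592\n3230720\n' >&2; }

# In the real pipeline this would be:
#   generate_data | pv -nb -i 60 2>> logfile | ssh remoteserver consume_data
# Here we discard the payload and show only the progress stream.
printf 'payload' | mock_pv_numeric 2>&1 >/dev/null
```

which prints the two counts on their own lines, already log-friendly without any post-processing.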

空心空情空意 2024-11-14 13:07:14

Maybe a small Perl program that copies STDIN to STDOUT and prints its progress to STDERR?

情释 2024-11-14 13:07:14

Check out SSHLog: https://github.com/sshlog/agent/

It's a daemon that monitors SSH logins as well as user activity. All user activity (everything that happens on the shell) is passively recorded and available to point to outputs.

It monitors SCP file transfers as well. You can configure it to listen to file transfer events and write just those events to a log file.
