Why is a paramiko SFTP upload so much slower than the command-line sftp?
I'm trying to upload large files to a remote SFTP server. The regular OpenSSH sftp client averages 4-5 MB/s.
My code is:
    inp = open(fName, "rb")
    bsize = os.stat(inp.fileno()).st_blksize
    # SFTP is a result of paramiko.SFTPClient()
    out = SFTP.open(os.path.split(fName)[-1], "w", bsize * 4)
    out.set_pipelined()
    while True:
        buf = inp.read(bsize)
        if not buf:
            break
        out.write(buf)
    inp.close()
    out.close()
This averages 40-180 KB/s, even if I artificially raise the bsize. One could blame the fact that Paramiko is a "pure Python" implementation, but the difference should not be this huge...
There is no significant CPU load on my machine, which runs FreeBSD 11, Python 3.6, and Paramiko 2.7.1.
What's going on?
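One thing worth noting about the loop above: st_blksize is often only a few KiB, so each out.write() may translate into its own round trip to the server. A minimal local sketch (using io.BytesIO stand-ins instead of real files and SFTP, so no network is involved) shows how the chunk size alone determines how many write calls the loop issues:

```python
import io
import os

CHUNK = 32 * 1024  # hypothetical chunk size, chosen larger than a typical st_blksize

def copy_in_chunks(src, dst, chunk_size=CHUNK):
    """Copy src to dst in fixed-size chunks; return the number of writes issued."""
    writes = 0
    while True:
        buf = src.read(chunk_size)
        if not buf:
            break
        dst.write(buf)
        writes += 1
    return writes

# For a 1 MiB payload: a 4 KiB chunk (a common st_blksize) issues 256 writes,
# while a 32 KiB chunk issues only 32.
payload = io.BytesIO(os.urandom(1024 * 1024))
print(copy_in_chunks(payload, io.BytesIO()))  # 32
```

If each write maps to a separate request on the wire, fewer, larger writes mean fewer round trips, which is exactly where a chatty client falls behind a pipelining one.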
Update: adding out.set_pipelined() helps raise the throughput to 1-2 MB/s, but it still lags behind that of the OpenSSH sftp client by a lot...
Update: adding an explicit buffer size to the SFTP.open() call -- as suggested by Martin in a comment -- had no perceptible effect. (I suspect Paramiko already uses some buffering by default.)
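For what it's worth, another knob Paramiko exposes is the SSH channel flow-control window, via Transport's default_window_size. Below is a hedged sketch combining a larger window with the pipelined write loop; the host, credentials, remote path, and the 4 MiB window value are all assumptions for illustration, not something the question establishes as the fix:

```python
def upload_pipelined(host, username, password, local_path, remote_path,
                     window_size=2 ** 22):
    """Sketch: upload over SFTP with a larger SSH window and pipelined writes.

    All parameters here are hypothetical placeholders.
    """
    # Imported inside the function so the sketch can be read (and the module
    # loaded) even where paramiko is not installed.
    import paramiko

    transport = paramiko.Transport((host, 22))
    transport.default_window_size = window_size  # raise the flow-control window
    transport.connect(username=username, password=password)
    try:
        sftp = paramiko.SFTPClient.from_transport(transport)
        with open(local_path, "rb") as inp, sftp.open(remote_path, "wb") as out:
            out.set_pipelined()  # don't block on a server ACK after each write
            while True:
                buf = inp.read(32 * 1024)  # assumed chunk size, larger than st_blksize
                if not buf:
                    break
                out.write(buf)
    finally:
        transport.close()
```

This is only a sketch of the tuning surface (window size, chunk size, pipelining), not a confirmed solution to the slowdown described above.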