How to quickly get directory (and contents) size in cygwin perl
I have a Perl script that monitors the usage of several Windows network share drives. It currently monitors the free space of several network drives using the Cygwin df command. I'd like to add the individual drive usages as well. When I use the
du -s | grep total
command, it takes forever. I need to look at the shared drive usages because several network drives are shared from a single drive on the server. Thus, filling one network drive fills them all (yes, I know, not the best solution, and not my choice).
So, I'd like to know if there is a quicker way to get the folder usage, one that doesn't take forever.
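For context, here is a minimal sketch of how the two checks might be wired up from a Perl script under Cygwin; the mount point /cygdrive/z is a hypothetical stand-in for one of the shares:

#!/usr/bin/perl
use strict;
use warnings;

my $share = '/cygdrive/z';    # hypothetical mount point for one share

# Free space: df -P prints "Filesystem 1024-blocks Used Available Capacity Mounted on"
my @df = `df -P $share`;
my (undef, $total_kb, $used_kb, $avail_kb) = split ' ', $df[1];
printf "%s: %d KB free of %d KB\n", $share, $avail_kb, $total_kb;

# Usage: the slow part -- du -sk has to walk every file on the share
my ($du_kb) = `du -sk $share` =~ /^(\d+)/;
printf "%s: %d KB in use (per du)\n", $share, $du_kb // 0;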
2 Answers
du -s works by recursively querying the size of every directory and file. If your filesystem implementation doesn't store this total value somewhere, this is the only way to determine disk usage. Therefore, you should investigate which filesystem and drivers you are using, and see whether there is a way to query this data directly. Otherwise, you're probably SOL and will have to accept the time it takes to run du.
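To make the cost concrete, this is roughly what du -s has to do, sketched in Perl (the path is hypothetical). Over a network share, every stat is a network round trip, which is where the time goes:

use strict;
use warnings;
use File::Find;

# Roughly what du -s does: visit every entry and stat it.
# On a network share, each stat is a network round trip.
my $bytes = 0;
find(sub { $bytes += -s $_ if -f $_ }, '/cygdrive/z/share1');  # hypothetical path
printf "%.1f MB used\n", $bytes / (1024 * 1024);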
1) The problem possibly lies in the fact that they are network drives: a local du is acceptably fast in most cases. Are you running du on the server where the disk is actually housed? If not, try approaching the problem from a different angle: run an agent on every server hosting the drives that calculates the local du summaries and then reports the totals to a central process (via IPC or, heck, by writing a report into a file on that same shared filesystem); a sketch of such an agent follows this list.

2) If one of the drives takes up a significantly larger share of the space (on average) than the rest, you can optimize by running du on all but the "biggest" one, and then compute the biggest one by subtracting the sum of the others from the df result.

3) Also, to be perfectly honest, this sounds like a suboptimal solution from a design standpoint. While you indicated that it's not your choice, I'd strongly recommend posting a question on how to improve the design within the parameters you were given (on the ServerFault website, not SO).
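As a concrete version of suggestion 1), here is a minimal agent sketch. It would run locally on each file server (so du walks local disks instead of the network) and drop its totals into a report file on the shared filesystem for the central Perl monitor to pick up. All paths are hypothetical:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical local paths on the file server, plus a report file on the share.
my @shares = ('/cygdrive/d/share1', '/cygdrive/d/share2');
my $report = '/cygdrive/d/share1/usage-report.txt';

# Write to a temp file first, then rename, so readers never see a partial report.
open my $out, '>', "$report.tmp" or die "open $report.tmp: $!";
for my $dir (@shares) {
    # du -sk prints "<kbytes>\t<path>"; run locally, this is reasonably fast.
    my ($kb) = `du -sk '$dir'` =~ /^(\d+)/;
    printf {$out} "%s\t%s\n", $dir, defined $kb ? $kb : 'ERROR';
}
close $out or die "close: $!";
rename "$report.tmp", $report or die "rename: $!";

The central monitor then only has to read one small file per server instead of statting millions of remote entries itself.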