Directory sizes in Linux: performance question
I have a script that scans a set of folders to get their sizes and displays this info in a browser. The script calls 'du' and parses its output.

The question is about performance: how fast is it? For example, what if the directory size is 4 GB and it contains 100,000 files?

P.S. I understand that these metrics depend on hardware, but if you have similar experience scanning large directories for sizes, could you share it?

Thank you
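The setup described above (a script that shells out to 'du' and parses its output) might look something like this minimal Python sketch. The function names are illustrative, and GNU du with the -b (--bytes) flag is assumed:

```python
import subprocess

def parse_du_line(line):
    # du prints "<size>\t<path>"; the first whitespace-separated
    # field is the size.
    return int(line.split()[0])

def dir_size_bytes(path):
    # -s totals the whole tree; -b reports sizes in bytes (GNU du).
    out = subprocess.run(["du", "-sb", path],
                         capture_output=True, text=True, check=True).stdout
    return parse_du_line(out)
```

Note that du has to stat every file in the tree, so the run time scales with the number of files, not with the total size in bytes.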
1 answer
It depends heavily on the file system. It's usually pretty slow on ext3, and on most other file systems as well if there are lots of subdirectories.

However, I don't think there's any other way to do it in real time. You can pre-scan the directories and cache the results in a file or a database, but that adds considerable complexity.
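The pre-scan-and-cache idea suggested here could be sketched as follows. The cache file location and refresh interval are arbitrary assumptions, and the sketch again assumes GNU du:

```python
import json
import os
import subprocess
import time

# Hypothetical cache location and refresh interval; tune for your setup.
CACHE_FILE = "/tmp/dir_sizes.json"
MAX_AGE_SECONDS = 600

def scan_sizes(paths):
    """Run GNU du once per path; -s totals each tree, -b reports bytes."""
    sizes = {}
    for path in paths:
        out = subprocess.run(["du", "-sb", path],
                             capture_output=True, text=True, check=True).stdout
        sizes[path] = int(out.split()[0])
    return sizes

def cached_sizes(paths):
    """Serve the cached scan if it is fresh enough; otherwise rescan."""
    try:
        if time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE_SECONDS:
            with open(CACHE_FILE) as f:
                return json.load(f)
    except OSError:
        pass  # no cache yet, or it is unreadable
    sizes = scan_sizes(paths)
    with open(CACHE_FILE, "w") as f:
        json.dump(sizes, f)
    return sizes
```

The browser-facing script would call cached_sizes() and almost always hit the cache; the expensive du walk then runs at most once per refresh interval (or it can be moved into a cron job entirely).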