Maximum number of inodes in a directory?

Published 2024-07-05 22:24:46

Comments (10)

幻梦 2024-07-12 22:24:49

Another option is find:

find . -name '*' -exec somecommand {} \;

{} expands to the path of each file that is found.

The advantage/disadvantage is that the files are processed one after the other.

find . -name '*' > ls.txt

would print all filenames in ls.txt

find . -name '*' -exec ls -l {} \; > ls.txt

would print all the information from ls for each file in ls.txt
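
If running one process per file is too slow, the same approach can pass many paths per invocation; a small hedged sketch, assuming GNU find (for -mindepth/-maxdepth) and with somecommand still a placeholder:

# Batch many paths into each command invocation instead of one process per file.
find . -mindepth 1 -maxdepth 1 -type f -exec somecommand {} +

# Count the entries in the directory without sorting them first.
find . -mindepth 1 -maxdepth 1 | wc -l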

治碍 2024-07-12 22:24:49

For NetBackup, the binaries that analyze the directories on clients perform a type of listing that times out because of the enormous quantity of files in every folder (about one million per folder, an SAP work directory).

My solution was (as Charles Duffy wrote in this thread) to reorganize the folders into subfolders with fewer files.

我的影子我的梦 2024-07-12 22:24:49

As noted by Rob Adams, ls is sorting the files before displaying them. Note that if you are using NFS, the NFS server will be sorting the directory before sending it, and 2 million entries may well take longer than the NFS timeout. That makes the directory unlistable via NFS, even with the -f flag.

This may be true for other network file systems as well.

While there's no enforced limit on the number of entries in a directory, it's good practice to put some limit on the number of entries you anticipate.

哽咽笑 2024-07-12 22:24:49

Can you get a real count of the number of files? Does it fall very near a 2^n boundary? Could you simply be running out of RAM to hold all the file names?

I know that in Windows, at least, file system performance would drop dramatically as the number of files in the folder went up, but I thought that Linux didn't suffer from this issue, at least if you were using a command prompt. God help you if you try to get something like Nautilus to open a folder with that many files.

I'm also wondering where these files come from. Are you able to calculate file names programmatically? If that's the case, you might be able to write a small program to sort them into a number of sub-folders. Often listing the name of a specific file will grant you access where trying to look up the name will fail. For example, I have a folder in Windows with about 85,000 files where this works.

If this technique is successful, you might try finding a way to make this sorting permanent, even if it's just running this small program as a cron job. It'll work especially well if you can sort the files by date somewhere.
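
A minimal sketch of the kind of helper described above, assuming GNU date (for date -r FILE) and a layout by modification month; the by-month naming is illustrative, not something from the original answer:

#!/bin/sh
# Hypothetical cron helper: move every regular file in the current directory
# into a YYYY-MM subdirectory based on its modification time.
find . -maxdepth 1 -type f | while IFS= read -r f; do
    month=$(date -r "$f" +%Y-%m)   # modification month of the file
    mkdir -p "$month"
    mv -- "$f" "$month/"
done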

前事休说 2024-07-12 22:24:49

Unless you are getting an error message, ls is working but very slowly. You can try looking at just the first ten files like this:

ls -f | head -10

If you're going to need to look at the file details for a while, you can put them in a file first. You probably want to send the output to a different directory than the one you are listing at the moment!

ls > ~/lots-of-files.txt

If you want to do something to the files, you can use xargs. If you decide to write a script of some kind to do the work, make sure that your script will process the list of files as a stream rather than all at once. Here's an example of moving all the files.

ls | xargs -I thefilename mv thefilename ~/some/other/directory

You could combine that with head to move a smaller number of the files.

ls | head -10000 | xargs -I x mv x /first/ten/thousand/files/go/here

You can probably combine ls | head into a shell script that will split up the files into a bunch of directories, with a manageable number of files in each.
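
A rough sketch of such a script, assuming the 10,000-file batch size and the ../split target tree are placeholders to adapt; it simply repeats the ls | head | xargs step until the directory is empty:

#!/bin/sh
# Hypothetical batching script: move files out of the current directory
# in chunks of 10000, one numbered target directory per chunk.
batch=0
while [ -n "$(ls | head -1)" ]; do
    dest=../split/batch-$batch
    mkdir -p "$dest"
    ls | head -10000 | xargs -I x mv x "$dest"
    batch=$((batch + 1))
done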

滥情空心 2024-07-12 22:24:49

Maximum directory size is filesystem-dependent, and thus the exact limit varies. However, having very large directories is a bad practice.

You should consider making your directories smaller by sorting files into subdirectories. One common scheme is to use the first two characters of the file name as a first-level subdirectory, as follows:

${topdir}/aa/aardvark
${topdir}/ai/airplane

This works particularly well if you are using UUIDs, GUIDs, or content hash values for naming.
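
A minimal sketch of that scheme, assuming the files sit in the current directory, names are at least two characters long, and ./sharded stands in for ${topdir}:

#!/bin/sh
# Hypothetical sharding pass: place each file under a subdirectory named
# after the first two characters of its name, e.g. aardvark -> aa/aardvark.
topdir=./sharded
find . -maxdepth 1 -type f | while IFS= read -r path; do
    name=${path#./}                            # strip the leading ./ printed by find
    prefix=$(printf '%s' "$name" | cut -c1-2)
    mkdir -p "$topdir/$prefix"
    mv -- "$path" "$topdir/$prefix/"
done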

甜中书 2024-07-12 22:24:48

No. Inode limits are per-filesystem, and decided at filesystem creation time. You could be hitting another limit, or maybe 'ls' just doesn't perform that well.

Try this:

tune2fs -l /dev/DEVICE | grep -i inode

It should tell you all sorts of inode related info.
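
To know which DEVICE to pass, df can first map the directory back to the filesystem that holds it; a small usage sketch (the path is a placeholder, and tune2fs only applies to ext2/3/4 filesystems):

# Find the block device that holds the directory in question.
df /path/to/big-directory

# Then dump that filesystem's inode parameters.
tune2fs -l /dev/DEVICE | grep -i inode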

风和你 2024-07-12 22:24:48

What you hit is an internal limit of ls. Here is an article which explains it quite well:
http://www.olark.com/spw/2011/08/you-can-list-a-directory-with-8-million-files-but-not-with-ls/

命比纸薄 2024-07-12 22:24:47

Try ls -U or ls -f.

ls, by default, sorts the files alphabetically. If you have 2 million files, that sort can take a long time. With ls -U (or perhaps ls -f), the file names will be printed immediately.

倾`听者〃 2024-07-12 22:24:46

df -i should tell you the number of inodes used and free on the file system.
