Where have all my inodes gone?
How do I find out which directories are responsible for chewing up all my inodes?
Ultimately the root directory will be responsible for the largest number of inodes, so I'm not sure exactly what sort of answer I want.
Basically, I'm running out of available inodes and need to find an unneeded directory to cull.
Thanks, and sorry for the vague question.
16 Answers
If you don't want to make a new file (or can't because you ran out of inodes) you can run this query:
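Something along these lines would do it without creating any files (a sketch only, not necessarily the original query; it word-splits the output of find, so it assumes directory names without whitespace):

# count entries (including dotfiles) in every directory under the current one,
# busiest directories end up at the bottom
for i in $(find . -type d); do
    echo "$(ls -a "$i" | wc -l) $i"
done | sort -n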
As insider mentioned in another answer, using a solution with find will be much quicker, since recursive ls is quite slow; check below for that solution! (Credit where credit is due!)
The provided methods with recursive ls are very slow.
Just for quickly finding the parent directory consuming most of the inodes, I used:
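A sketch of that idea, run from the top of the filesystem that is out of inodes (the starting path is a placeholder):

cd /filesystem/running/out/of/inodes
# count everything below each top-level entry with find, one line per entry
for i in *; do
    printf '%d\t%s\n' "$(find "$i" | wc -l)" "$i"
done | sort -n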
So basically you're looking for which directories have a lot of files? Here's a first stab at it:
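A sketch of what that first stab could look like, assuming count_files (described next) is executable and on your PATH:

find . -type d -print0 | xargs -0 -n1 count_files | sort -n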
where "count_files" is a shell script that does (thanks Jonathan)
I used the following to work out (with a bit of help from my colleague James) that we had a massive number of PHP session files which needed to be deleted on one machine:
1. How many inodes have I got in use?
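df reports per-filesystem inode usage with the -i flag:

df -i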
2. Where are all those inodes?
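One way to break that down is to count files per containing directory (a sketch using GNU find; -xdev stays on one filesystem):

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n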
That's a lot of PHP session files on the last line.
3. How to delete all those files?
Delete all files in the directory which are older than 1440 minutes (24 hours):
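For example, assuming the sessions live in /var/lib/php/sessions (the path is an assumption; point it at wherever the files actually are):

find /var/lib/php/sessions -type f -mmin +1440 -delete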
4. Has it worked?
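Run df -i again and compare the inode counts:

df -i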
Luckily we had a sensu alert emailing us that our inodes were almost used up.
This is my take on it. It's not so different from others, but the output is pretty and I think it counts more valid inodes than others (directories and symlinks). This counts the number of files in each subdirectory of the working directory; it sorts and formats the output into two columns; and it prints a grand total (shown as ".", the working directory). This will not follow symlinks but will count files and directories that begin with a dot. This does not count device nodes and special files like named pipes. Just remove the "-type l -o -type d -o -type f" test if you want to count those, too. Because this command is split up into two find commands it cannot correctly discriminate against directories mounted on other filesystems (the -mount option will not work). For example, this should really ignore "/proc" and "/sys" directories. You can see that in the case of running this command in "/" that including "/proc" and "/sys" grossly skews the grand total count.
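A rough sketch of the approach described above (not the original command): it counts files, directories and symlinks per immediate subdirectory, then prints a grand total for ".":

find . -mindepth 1 -maxdepth 1 -type d | sort | while read -r dir; do
    printf '%s\t%s\n' "$(find "$dir" \( -type l -o -type d -o -type f \) | wc -l)" "$dir"
done
printf '%s\t.\n' "$(find . \( -type l -o -type d -o -type f \) | wc -l)"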
Here's a simple Perl script that'll do it:
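A minimal sketch of such a script (it prints each directory's own entry count and recurses into subdirectories without following symlinks; not necessarily the original):

#!/usr/bin/perl
use strict;
use warnings;

sub count_inodes {
    my ($dir) = @_;
    opendir(my $dh, $dir) or do { warn "couldn't open $dir: $!\n"; return };
    my @entries = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
    closedir($dh);
    my $count = scalar @entries;
    for my $entry (@entries) {
        my $path = "$dir/$entry";
        count_inodes($path) if -d $path && !-l $path;    # recurse, but don't follow symlinks
    }
    printf "%8d %s\n", $count, $dir;
}

count_inodes($_) for (@ARGV ? @ARGV : ('.'));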
If you want it to work like du (where each directory count also includes the recursive count of the subdirectory), then change the recursive function to return $count and then at the recursion point say:
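In terms of the sketch above, that recursion line would become something like:

$count += count_inodes($path) if -d $path && !-l $path;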
An actually functional one-liner (GNU find; for other kinds of find you'd need your own equivalent of -xdev to stay on the same FS):

find / -xdev -type d | while read -r i; do printf "%d %s\n" $(ls -a "$i" | wc -l) "$i"; done | sort -nr | head -10
The tail is, obviously, customizable.
As with many other suggestions here, this will only show you the number of entries in each directory, non-recursively.
P.S.
Fast, but imprecise one-liner (detect by directory node size):
find / -xdev -type d -size +100k
There's no need for complex for/ls constructions. You can get the 10 fattest (in terms of inode usage) directories with:
du --inodes --separate-dirs --one-file-system | sort -rh | head
which is equivalent to:
du --inodes -Sx | sort -rh | head
The --one-file-system parameter is optional.
Just wanted to mention that you could also search indirectly using the directory size, for example:
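For example (a sketch; a directory's own size grows with the number of entries it holds, so unusually large directory inodes hint at huge entry counts):

find / -xdev -type d -size +500k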
Where 500k could be increased if you have a lot of large directories.
Note that this method is not recursive. This will only help you if you have a lot of files in one single directory, but not if the files are evenly distributed across its descendants.
dir.0 -- 27913
dir.1 -- 27913
Use ncdu, then press Shift+C to sort by item count (where each item is a file).
When searching for the folder consuming the most disk space, I used to work with du from top to bottom, like this:
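For instance, starting at the root (the glob is the "pattern" referred to below):

du -sh /*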
This lists disk consumption per top-level folder. Afterwards, you can descend into either folder by extending the given pattern:
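For example, if /var stood out (the directory name is purely illustrative):

du -sh /var/*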
and so on ...
Now, when it comes to inodes, the same tool can be used with slightly different arguments:
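For example (requires a GNU du recent enough to support --inodes):

du --inodes -s /*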
There is caching that improves follow-up invocations of this tool in the same folder, which is beneficial under normal circumstances. However, when you've run out of inodes, I assume this will turn into the opposite.
The Perl script is good, but beware symlinks: recurse only when the -l filetest returns false, or you will at best over-count and at worst recurse indefinitely (which could, as a minor concern, invoke Satan's 1000-year reign).
The whole idea of counting inodes in a file system tree falls apart when there are multiple links to more than a small percentage of the files.
Just a note: when you finally find some mail spool directory and want to delete all the junk that's in there, rm * will not work if there are too many files. You can run the following command to quickly delete everything in that directory:
* WARNING * THIS WILL DELETE ALL FILES QUICKLY FOR CASES WHEN rm DOESN'T WORK
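A sketch of the kind of command meant here (not necessarily the original), run from inside the directory you want to empty:

find . -maxdepth 1 -type f -delete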
Unfortunately not a POSIX solution but...
This counts files under the current directory. It is supposed to work even if filenames contain newlines. It uses GNU Awk. Change the value of d (from 2) to the maximum path depth you want reported; 0 means unlimited depth. At the deepest level, files in sub-directories are counted recursively.
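A sketch of that approach (GNU Awk is needed for the NUL record separator; d is the maximum path depth, 0 for unlimited):

find . -type f -print0 | gawk -v d=2 '
BEGIN { RS = "\0"; FS = "/" }
{
    n = NF - 1                          # components of the parent directory path
    if (d > 0 && n > d + 1) n = d + 1   # keep "." plus at most d levels
    p = $1
    for (i = 2; i <= n; i++) p = p "/" $i
    count[p]++
}
END { for (p in count) printf "%d\t%s\n", count[p], p }'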
The same in Bash 4; give the depth as an argument to the script. This is significantly slower in my experience:
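And a Bash 4 sketch of the same idea, taking the depth as its first argument (0 for unlimited):

#!/bin/bash
shopt -s globstar dotglob nullglob
d=${1:-2}
declare -A count
for f in ./**/*; do
    [[ -f $f ]] || continue
    dir=${f%/*}
    if (( d > 0 )); then
        # truncate the directory path to "." plus at most d levels
        IFS=/ read -r -a parts <<< "$dir"
        dir=$(IFS=/; printf '%s' "${parts[*]:0:d+1}")
    fi
    count[$dir]=$(( ${count[$dir]:-0} + 1 ))
done
for dir in "${!count[@]}"; do
    printf '%d\t%s\n' "${count[$dir]}" "$dir"
done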
This command works in the highly unlikely case that your directory structure is identical to mine:
find / -type f | grep -oP '^/([^/]+/){3}' | sort | uniq -c | sort -n