Where have all my inodes gone?

Posted 2024-07-10 11:23:00

How do I find out which directories are responsible for chewing up all my inodes?

Ultimately the root directory will be responsible for the largest number of inodes, so I'm not sure exactly what sort of answer I want.

Basically, I'm running out of available inodes and need to find an unneeded directory to cull.

Thanks, and sorry for the vague question.

Comments (16)

荒路情人 2024-07-17 11:23:00

If you don't want to make a new file (or can't, because you ran out of inodes), you can run this query:

for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

As insider mentioned in another answer, using a solution with find will be much quicker, since recursive ls is quite slow; check below for that solution (credit where credit is due!).
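
Note that the backtick loop above splits on whitespace, so directory names containing spaces will break it. A minimal whitespace-safe sketch of the same idea, assuming bash and a find that supports -print0:

find . -type d -print0 | while IFS= read -r -d '' dir; do
    printf '%d %s\n' "$(ls -a "$dir" | wc -l)" "$dir"
done | sort -n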

清眉祭 2024-07-17 11:23:00

The methods provided above with recursive ls are very slow. Just for quickly finding the parent directory consuming most of the inodes, I used:

cd /partition_that_is_out_of_inodes
for i in *; do echo -e "$(find $i | wc -l)\t$i"; done | sort -n

_失温 2024-07-17 11:23:00

So basically you're looking for which directories have a lot of files? Here's a first stab at it:

find . -type d -print0 | xargs -0 -n1 count_files | sort -n

where "count_files" is a shell script that does the following (thanks, Jonathan):

echo $(ls -a "$1" | wc -l) $1
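
For completeness, a minimal sketch of what count_files could look like as a standalone script (the name is just this answer's convention; save it somewhere on your PATH and mark it executable):

#!/bin/sh
# Print the number of directory entries in "$1", followed by the path itself.
echo $(ls -a "$1" | wc -l) $1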

も让我眼熟你 2024-07-17 11:23:00

I used the following to work out (with a bit of help from my colleague James) that we had a massive number of PHP session files which needed to be deleted on one machine:

1. How many inodes have I got in use?

 root@polo:/# df -i
 Filesystem     Inodes  IUsed  IFree IUse% Mounted on
 /dev/xvda1     524288 427294  96994   81% /
 none           256054      2 256052    1% /sys/fs/cgroup
 udev           254757    404 254353    1% /dev
 tmpfs          256054    332 255722    1% /run
 none           256054      3 256051    1% /run/lock
 none           256054      1 256053    1% /run/shm
 none           256054      3 256051    1% /run/user

2. Where are all those inodes?

 root@polo:/# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
 [...]
    1088 /usr/src/linux-headers-3.13.0-39/include/linux
    1375 /usr/src/linux-headers-3.13.0-29-generic/include/config
    1377 /usr/src/linux-headers-3.13.0-39-generic/include/config
    2727 /var/lib/dpkg/info
    2834 /usr/share/man/man3
  416811 /var/lib/php5/session
 root@polo:/#

That's a lot of PHP session files on the last line.

3. How do I delete all those files?

Delete all files in the directory which are older than 1440 minutes (24 hours):

root@polo:/var/lib/php5/session# find ./ -cmin +1440 | xargs rm
root@polo:/var/lib/php5/session#

4. Has it worked?

 root@polo:~# find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
 [...]
    1088 /usr/src/linux-headers-3.13.0-39/include/linux
    1375 /usr/src/linux-headers-3.13.0-29-generic/include/config
    1377 /usr/src/linux-headers-3.13.0-39-generic/include/config
    2727 /var/lib/dpkg/info
    2834 /usr/share/man/man3
    2886 /var/lib/php5/session
 root@polo:~# df -i
 Filesystem     Inodes  IUsed  IFree IUse% Mounted on
 /dev/xvda1     524288 166420 357868   32% /
 none           256054      2 256052    1% /sys/fs/cgroup
 udev           254757    404 254353    1% /dev
 tmpfs          256054    332 255722    1% /run
 none           256054      3 256051    1% /run/lock
 none           256054      1 256053    1% /run/shm
 none           256054      3 256051    1% /run/user
 root@polo:~#

Luckily we had a Sensu alert emailing us that our inodes were almost used up.
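
A note on step 3: piping find into xargs breaks on filenames containing whitespace or quotes. If your find supports it, -delete is a safer sketch of the same cleanup (restricting to -type f also avoids accidentally passing directories to rm):

find . -type f -cmin +1440 -delete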

还不是爱你 2024-07-17 11:23:00

This is my take on it. It's not so different from the others, but the output is pretty, and I think it counts more valid inodes than the others do (directories and symlinks). It counts the number of files in each subdirectory of the working directory; it sorts and formats the output into two columns; and it prints a grand total (shown as ".", the working directory). It will not follow symlinks, but it will count files and directories that begin with a dot. It does not count device nodes and special files such as named pipes; just remove the "-type l -o -type d -o -type f" test if you want to count those too.

Because this command is split into two find commands, it cannot correctly exclude directories mounted on other filesystems (the -mount option will not work; see the sketch after the example below for one workaround). For example, it should really ignore the "/proc" and "/sys" directories. You can see that, when running this command in "/", including "/proc" and "/sys" grossly skews the grand total count.

for ii in $(find . -maxdepth 1 -type d); do 
    echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"
done | sort -n -k 2 | column -t

Example:

# cd /
# for ii in $(find -maxdepth 1 -type d); do echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"; done | sort -n -k 2 | column -t
./boot        1
./lost+found  1
./media       1
./mnt         1
./opt         1
./srv         1
./lib64       2
./tmp         5
./bin         107
./sbin        109
./home        146
./root        169
./dev         188
./run         226
./etc         1545
./var         3611
./sys         12421
./lib         17219
./proc        20824
./usr         56628
.             113207
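
As a sketch of the workaround mentioned above (assuming GNU stat and GNU find): skip top-level directories whose device number differs from the working directory's, and add -xdev to the inner find so deeper mount points are not descended into either:

rootdev=$(stat -c %d .)
for ii in $(find . -maxdepth 1 -type d); do
    # Skip top-level directories that are themselves mount points.
    [ "$(stat -c %d "$ii")" = "$rootdev" ] || continue
    echo -e "${ii}\t$(find "${ii}" -xdev \( -type l -o -type d -o -type f \) | wc -l)"
done | sort -n -k 2 | column -t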

爱给你人给你 2024-07-17 11:23:00

Here's a simple Perl script that'll do it:

#!/usr/bin/perl -w

use strict;

sub count_inodes($);
sub count_inodes($)
{
  my $dir = shift;
  if (opendir(my $dh, $dir)) {
    my $count = 0;
    while (defined(my $file = readdir($dh))) {
      next if ($file eq '.' || $file eq '..');
      $count++;
      my $path = $dir . '/' . $file;
      count_inodes($path) if (-d $path);
    }
    closedir($dh);
    printf "%7d\t%s\n", $count, $dir;
  } else {
    warn "couldn't open $dir - $!\n";
  }
}

push(@ARGV, '.') unless (@ARGV);
while (@ARGV) {
  count_inodes(shift);
}

If you want it to work like du (where each directory count also includes the recursive count of its subdirectories), then change the recursive function to return $count, and at the recursion point say:

$count += count_inodes($path) if (-d $path);
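
For reference, a sketch of the function with that change applied, so each directory's count rolls up into its parent's total, like du:

sub count_inodes($);
sub count_inodes($)
{
  my $dir = shift;
  my $count = 0;
  if (opendir(my $dh, $dir)) {
    while (defined(my $file = readdir($dh))) {
      next if ($file eq '.' || $file eq '..');
      $count++;
      my $path = $dir . '/' . $file;
      # Add the subtree's total to this directory's count.
      $count += count_inodes($path) if (-d $path);
    }
    closedir($dh);
    printf "%7d\t%s\n", $count, $dir;
  } else {
    warn "couldn't open $dir - $!\n";
  }
  return $count;
}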

黑白记忆 2024-07-17 11:23:00

An actually functional one-liner (GNU find; for other kinds of find, you'd need your own equivalent of -xdev to stay on the same FS):

find / -xdev -type d | while read -r i; do printf "%d %s\n" $(ls -a "$i" | wc -l) "$i"; done | sort -nr | head -10

The head -10 at the end is, obviously, customizable.

As with many other suggestions here, this will only show you the number of entries in each directory, non-recursively.

P.S.

A fast but imprecise one-liner (it detects candidates by directory size; a directory file grows as entries are added to it):

find / -xdev -type d -size +100k

土豪 2024-07-17 11:23:00

There's no need for complex for/ls constructions. You can get the 10 fattest (in terms of inode usage) directories with:

du --inodes --separate-dirs --one-file-system | sort -rh | head

which is equivalent to:

du --inodes -Sx | sort -rh | head

The --one-file-system parameter is optional.
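
Note that du only gained --inodes in GNU coreutils 8.22, so this needs a reasonably recent system. To drill into a specific subtree, pass the path as an argument (the path here is illustrative):

du --inodes -Sx /var | sort -rh | head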

七秒鱼° 2024-07-17 11:23:00

Just wanted to mention that you could also search indirectly using the directory size (a directory's own size grows with the number of entries it holds), for example:

find /path -type d -size +500k

Where 500k could be increased if you have a lot of large directories.

Note that this method is not recursive. This will only help you if you have a lot of files in one single directory, but not if the files are evenly distributed across its descendants.

快乐很简单 2024-07-17 11:23:00

This variant counts unique inodes under each directory (sort -u on the inode numbers means hard-linked files are counted only once):

for i in dir.[01]
do
    find $i -printf "%i\n"|sort -u|wc -l|xargs echo $i --
done

dir.0 -- 27913

dir.1 -- 27913

深爱不及久伴 2024-07-17 11:23:00

Use

ncdu -x <path>

then press Shift+C to sort by item count, where an item is a file.
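
If ncdu isn't installed, it's packaged by most distributions; for example, on a Debian-based system (illustrative):

apt-get install ncdu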

山人契 2024-07-17 11:23:00

When searching for the folders consuming the most disk space, I used to work with du from the top down, like this:

du -hs /*

This lists disk consumption per top-level folder. Afterwards, you can descend into any folder by extending the given pattern:

du -hs /var/*

and so on ...

Now, when it comes to inodes, the same tool can be used with slightly different arguments:

du -s --inodes /*

Filesystem caching improves follow-up invocations of this tool in the same folder, which is beneficial under normal circumstances. However, when you've run out of inodes, I assume this will turn into the opposite.
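
Combining the two ideas, a sketch of the same top-down drill-down by inode count rather than by size (2>/dev/null merely hides permission errors):

du -s --inodes /* 2>/dev/null | sort -n | tail
du -s --inodes /var/* 2>/dev/null | sort -n | tail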

兰花执着 2024-07-17 11:23:00

The Perl script is good, but beware symlinks: recurse only when the -l filetest returns false, or you will at best over-count and at worst recurse indefinitely (which could, as a minor concern, invoke Satan's 1000-year reign).

The whole idea of counting inodes in a filesystem tree falls apart when there are multiple hard links to more than a small percentage of the files.
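
Concretely, in the Perl script above, guarding the recursion would look roughly like this (a sketch):

# Recurse into real directories only; don't follow symlinks to directories.
count_inodes($path) if (-d $path && ! -l $path);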

南城追梦 2024-07-17 11:23:00

Just a note: when you finally find some mail spool directory and want to delete all the junk that's in there, rm * will not work if there are too many files (the glob expansion exceeds the kernel's argument-length limit). Instead, you can run the following command to quickly delete everything in that directory:

* WARNING * THIS WILL QUICKLY DELETE ALL FILES, FOR CASES WHERE rm DOESN'T WORK

find . -type f -delete

趴在窗边数星星i 2024-07-17 11:23:00

Unfortunately not a POSIX solution, but...
This counts files under the current directory, and it should work even if filenames contain newlines. It uses GNU Awk. Change the value of d (from 2) to the desired maximum path depth; 0 means unlimited depth. At the deepest level, files in subdirectories are counted recursively.

d=2; find . -mount -not -path . -print0 | gawk '
BEGIN{RS="\0";FS="/";SUBSEP="/";ORS="\0"}
{
    s="./"
    for(i=2;i!=d+1 && i<NF;i++){s=s $i "/"}
    ++n[s]
}
END{for(val in n){print n[val] "\t" val "\n"}}' d="$d" \
 | sort -gz -k 1,1

The same in Bash 4; give the depth as an argument to the script. In my experience this is significantly slower:

#!/bin/bash
d=$1
declare -A n

while IFS=/ read -d $'\0' -r -a a; do
  s="./"
  for ((i=2; i!=$((d+1)) && i<${#a[*]}; i++)); do
    s+="${a[$((i-1))]}/"
  done
  ((++n[\$s]))
done < <(find . -mount -not -path . -print0)

for j in "${!n[@]}"; do
    printf '%i\t%s\n\0' "${n[$j]}" "$j"
done | sort -gz -k 1,1 
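
Assuming the script is saved as, say, count_by_depth.sh (a hypothetical name), you would invoke it with the depth as its first argument, stripping the NUL separators for display:

bash count_by_depth.sh 2 | tr -d '\0'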

骄兵必败 2024-07-17 11:23:00

This command works in the highly unlikely case that your directory structure is identical to mine (the {3} aggregates files by their first three path components):

find / -type f | grep -oP '^/([^/]+/){3}' | sort | uniq -c | sort -n
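
To aggregate at a different depth, change the {3}; for example, at two levels (a sketch):

find / -type f | grep -oP '^/([^/]+/){2}' | sort | uniq -c | sort -n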
