How can I find duplicate files in the same directory on Linux that have the same name but different case?
How can I return a list of files that are named duplicates i.e. have same name but in different case that exist in the same directory?
I don't care about the contents of the files. I just need to know the location and name of any files that have a duplicate of the same name.
Example duplicates:
/www/images/taxi.jpg
/www/images/Taxi.jpg
Ideally I need to search all files recursively from a base directory. In above example it was /www/
The other answer is great, but instead of the "rather monstrous" perl script I suggest
Which will lowercase just the filename part of the path.
Edit 1: In fact the entire problem can be solved with:
Edit 3: I found a solution using sed, sort and uniq that also will print out the duplicates, but it only works if there are no whitespaces in filenames:
Edit 2: And here is a longer script that will print out the names; it takes a list of paths on stdin, as given by find. Not so elegant, but still:
Try:

Simple, really :-) Aren't pipelines wonderful beasts?

The ls -1 gives you the files one per line, the tr '[A-Z]' '[a-z]' converts all uppercase to lowercase, the sort sorts them (surprisingly enough), uniq -c removes subsequent occurrences of duplicate lines whilst giving you a count as well and, finally, the grep -v " 1 " strips out those lines where the count was one.

When I run this in a directory with one "duplicate" (I copied qq to qQ), I get:

For the "this directory and every subdirectory" version, just replace ls -1 with find . or find DIRNAME if you want a specific directory starting point (DIRNAME is the directory name you want to use). This returns (for me):

which are caused by:

Update: Actually, on further reflection, the tr will lowercase all components of the path, so two paths whose directory components differ only in case will be considered duplicates even though they're in different directories.

If you only want duplicates within a single directory to show as a match, you can use the (rather monstrous):

in place of:

What it does is to only lowercase the final portion of the pathname rather than the whole thing. In addition, if you only want regular files (no directories, FIFOs and so forth), use find -type f to restrict what's returned.
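The pipeline itself was lost from this copy of the answer, but the description above pins it down fairly well. A sketch, with a hypothetical demo directory (/tmp/pipedemo and its file names are invented for the example):

```shell
# Demo directory with one case-only duplicate pair and one unique file.
rm -rf /tmp/pipedemo
mkdir -p /tmp/pipedemo
cd /tmp/pipedemo
touch taxi.jpg Taxi.jpg other.txt

# One file per line -> fold to lowercase -> sort -> count duplicates
# -> drop the lines whose count is exactly 1.
ls -1 | tr '[A-Z]' '[a-z]' | sort | uniq -c | grep -v " 1 "
```

Only the taxi.jpg line survives the grep, with a count of 2; other.txt is filtered out because its count is one.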
I believe
is simpler, faster, and will give the same result
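The command this answer refers to is missing from this copy. A shorter pipeline that fits the description ("simpler, faster, same result") — offered here as an assumption, not the author's verbatim command — folds case during both sorting and duplicate detection:

```shell
# Hypothetical demo directory for the example.
rm -rf /tmp/simpledemo
mkdir -p /tmp/simpledemo
cd /tmp/simpledemo
touch taxi.jpg Taxi.jpg other.txt

# -f: fold case while sorting; -i: compare case-insensitively;
# -d: print one line per group of duplicates.
ls | sort -f | uniq -i -d
```

This prints a single line for the taxi.jpg/Taxi.jpg pair and nothing for unique names.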
Following up on the response of mpez0, to detect recursively just replace "ls" by "find .".
The only problem I see with this is that if an entire directory is duplicated, you get one entry for each file in that directory. Some human judgment is required to process the output.
But anyway, you're not automatically deleting these files, are you?
There is a nice little command line app called findsn that you get if you compile fslint (the deb package does not include it). It will find any files with the same name; it's lightning fast and it can handle different case.
If no arguments are supplied the $PATH is searched for any redundant
or conflicting files.
If only path(s) specified then they are checked for duplicate named
files. You can qualify this with -C to ignore case in this search.
Qualifying with -c is more restrictive, as only files (or directories)
in the same directory whose names differ only in case are reported.
I.e. -c will flag files and directories that will conflict if transferred
to a case-insensitive file system. Note that if -c or -C is specified and
no path(s) are specified, the current directory is assumed.
Here is an example of how to find all duplicate jar files:
Replace
*.jar
with whatever duplicate file type you are looking for.这是一个对我有用的脚本(我不是作者)。原文和讨论可以在这里找到:
http://www.daemonforums.org/showthread.php?t=4661
如果 find 命令不适合您,您可能需要更改它。例如
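The actual command is missing from this copy of the answer. One way to do it with GNU find — a sketch, with invented demo paths; note that -printf "%f\n" strips the directory part, so same-named jars anywhere under the tree will match:

```shell
# Hypothetical demo tree with two jars whose names differ only in case.
rm -rf /tmp/jardemo
mkdir -p /tmp/jardemo/a /tmp/jardemo/b
touch /tmp/jardemo/a/Log4j.jar /tmp/jardemo/b/log4j.jar

# Print bare filenames of jars, fold case, and report duplicated names.
find /tmp/jardemo -type f -name "*.jar" -printf "%f\n" | sort -f | uniq -i -d
```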
Here's a script that worked for me (I am not the author). The original and discussion can be found here:
http://www.daemonforums.org/showthread.php?t=4661
If the find command is not working for you, you may have to change it. For example
You can use:

Where:

find -type f
recursively prints every file's full path.
-exec readlink -m {} \;
gets the file's absolute path.
gawk 'BEGIN{FS="/";OFS="/"}{$NF=tolower($NF);print}'
converts the filename (the last path component) to lowercase.
uniq -c
collapses duplicate paths; -c outputs the count of duplicates.
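Assembled from the pieces above, the full pipeline might read as follows — a sketch, with two assumptions flagged: plain awk is used since tolower() is POSIX (the answer names gawk), and a sort is inserted before uniq because uniq only collapses adjacent lines. readlink -m is GNU coreutils. Demo paths are invented:

```shell
# Hypothetical demo tree with one case-only duplicate pair.
rm -rf /tmp/rldemo
mkdir -p /tmp/rldemo/images
cd /tmp/rldemo
touch images/taxi.jpg images/Taxi.jpg images/other.txt

# Absolute paths -> lowercase last component -> sort -> count duplicates.
find . -type f -exec readlink -m {} \; \
  | awk 'BEGIN{FS="/";OFS="/"}{$NF=tolower($NF);print}' \
  | sort | uniq -c
```

The duplicated pair shows up with a count of 2; piping through grep -v " 1 " as in the earlier answer would hide the unique files.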
Little bit late to this one, but here's the version I went with:

Here we are using:

find - find all files under the current dir
awk - remove the file path part of the filename
sort - sort case insensitively
uniq - find the dupes from what makes it through the pipe

(Inspired by @mpez0's answer, and @SimonDowdles' comment on @paxdiablo's answer.)
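The pipeline itself is missing from this copy; matching the four steps listed, it could be sketched like this (demo directory and file names are hypothetical):

```shell
# Hypothetical demo tree; duplicates are detected by bare filename.
rm -rf /tmp/verdemo
mkdir -p /tmp/verdemo/images
cd /tmp/verdemo
touch images/taxi.jpg images/Taxi.jpg images/other.txt

# find all files, strip the directory part with awk, sort folding case,
# and let uniq report one line per case-insensitive duplicate group.
find . -type f | awk -F/ '{print $NF}' | sort -f | uniq -i -d
```

Note that because the path is stripped, same-named files in different directories also count as duplicates with this variant.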
You can check duplicates in a given directory with GNU awk:

This uses BEGINFILE to perform some action before going on and reading a file. In this case, it keeps track of the names that have appeared in an array seen[] whose indexes are the names of the files in lowercase. If a name has already appeared, no matter its case, it prints it. Otherwise, it just jumps to the next file.

See an example:
I just used fdupes on CentOS to clean up a whole buncha duplicate files...