Select unique or distinct values from a list in a UNIX shell script

Posted 2024-07-14 19:34:40

I have a ksh script that returns a long list of values, newline separated, and I want to see only the unique/distinct values. Is it possible to do this?

For example, say my output is file suffixes in a directory:

tar
gz
java
gz
java
tar
class
class

I want to see a list like:

tar
gz
java
class


Comments (8)

暖伴 2024-07-21 19:34:40

You might want to look at the uniq and sort applications.

./yourscript.ksh | sort | uniq

(FYI, yes, the sort is necessary in this command line, uniq only strips duplicate lines that are immediately after each other)
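
To see why the sort matters, compare uniq with and without it (a quick sketch using printf to stand in for the script's output):

```shell
# uniq alone only collapses ADJACENT duplicate lines
printf '%s\n' tar gz tar | uniq
# tar
# gz
# tar        <- survives because it is not adjacent to the first "tar"

# sorting first groups duplicates together, so uniq can remove them
printf '%s\n' tar gz tar | sort | uniq
# gz
# tar
```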

EDIT:

Contrary to what has been posted by Aaron Digulla (https://stackoverflow.com/questions/618378/select-unique-or-distinct-values-from-a-list-in-unix-shell-script/618382#618382) in relation to uniq's command-line options:

Given the following input:

class
jar
jar
jar
bin
bin
java

uniq will output all lines exactly once:

class
jar
bin
java

uniq -d will output all lines that appear more than once, and it will print them once:

jar
bin

uniq -u will output all lines that appear exactly once, and it will print them once:

class
java
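
The three behaviours above can be reproduced directly in the shell, using printf to generate the sample input:

```shell
input='class jar jar jar bin bin java'
printf '%s\n' $input | uniq     # class jar bin java
printf '%s\n' $input | uniq -d  # jar bin
printf '%s\n' $input | uniq -u  # class java
```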
无风消散 2024-07-21 19:34:40

./script.sh | sort -u

This is the same as monoxide's answer, but a bit more concise.
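
For reference, sort -u gives the same result as piping through sort | uniq, with one fewer process (a quick check using the question's sample suffixes):

```shell
printf '%s\n' tar gz java gz java tar class class | sort -u
# class
# gz
# java
# tar
```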

红墙和绿瓦 2024-07-21 19:34:40

With zsh you can do this:

% cat infile 
tar
more than one word
gz
java
gz
java
tar
class
class
zsh-5.0.0[t]% print -l "${(fu)$(<infile)}"
tar
more than one word
gz
java
class

Or you can use AWK:

% awk '!_[$0]++' infile    
tar
more than one word
gz
java
class
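
The awk one-liner works because _[$0] starts at 0 for an unseen line, so !_[$0]++ is true on the first occurrence and false on every later one. The same filter with a more descriptive array name, run on the question's sample input:

```shell
# Print each line only the first time it is seen; input order is preserved
printf '%s\n' tar gz java gz java tar class class | awk '!seen[$0]++'
# tar
# gz
# java
# class
```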
二手情话 2024-07-21 19:34:40

With AWK you can do:

 ./yourscript.ksh | awk '!a[$0]++'

I find it faster than sort and uniq.

疯狂的代价 2024-07-21 19:34:40

Pipe them through sort and uniq. This removes all duplicates.

uniq -d gives only the duplicates, uniq -u gives only the unique ones (strips duplicates).

无悔心 2024-07-21 19:34:40

For larger data sets where sorting may not be desirable, you can also use the following perl script:

./yourscript.ksh | perl -ne 'if (!defined $x{$_}) { print $_; $x{$_} = 1; }'

This basically just remembers every line output so that it doesn't output it again.

It has the advantage over the "sort | uniq" solution in that there's no sorting required up front.

雨落□心尘 2024-07-21 19:34:40

Unique, as requested, (but not sorted);
uses fewer system resources for less than ~70 elements (as tested with time);
written to take input from stdin,
(or modify and include in another script):
(Bash)

bag2set () {
    # Reduce a_bag to a_set.
    local -i i j n=${#a_bag[@]}
    for ((i=0; i < n; i++)); do
        if [[ -n ${a_bag[i]} ]]; then
            a_set[i]=${a_bag[i]}
            a_bag[i]=$'\0'   # $'\0' expands to the empty string in bash
            for ((j=i+1; j < n; j++)); do
                [[ ${a_set[i]} == ${a_bag[j]} ]] && a_bag[j]=$'\0'
            done
            done
        fi
    done
}
declare -a a_bag=() a_set=()
stdin="$(</dev/stdin)"
declare -i i=0
for e in $stdin; do
    a_bag[i]=$e
    i=$i+1
done
bag2set
echo "${a_set[@]}"
浅沫记忆 2024-07-21 19:34:40

Here is a better tip to get non-duplicate entries in a file:

awk '$0 != x ":FOO" && NR>1 {print x} {x=$0} END {print}' file_name | uniq -f1 -u
