Using awk to delete lines that contain a unique first field?
Looking to print only lines that have a duplicate first field. e.g. from data that looks like this:
1 abcd
1 efgh
2 ijkl
3 mnop
4 qrst
4 uvwx
Should print out:
1 abcd
1 efgh
4 qrst
4 uvwx
(FYI - first field is not always 1 character long in my data)
Yes, you give it the same file as input twice. Since you don't know ahead of time whether the current record is unique or not, you build up an array keyed on $1 on the first pass, then on the second pass you only output records whose $1 has been seen more than once. I'm sure there are ways to do it with only a single pass through the file, but I doubt they would be as "clean".
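The one-liner itself did not survive in this copy of the answer; reconstructed from the explanation below (every piece of it is referenced there), it would look like this, with the input file passed twice:

    awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile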
Explanation
FNR==NR: This is only true while awk is reading the first file. It essentially tests the total number of records seen (NR) against the input record number within the current file (FNR).
a[$1]++: Build an associative array a whose keys are the first field ($1) and whose values are incremented by one each time that key is seen.
next: Ignore the rest of the script if this is reached, and start over with a new input record.
(a[$1] > 1): This is only evaluated on the second pass of ./infile, and it only prints records whose first field ($1) we have seen more than once. Essentially, it is shorthand for if(a[$1] > 1){print $0}.
Proof of Concept
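Using the sample data from the question, saved as ./infile (the name used in the explanation above), a run would look like this:

    $ cat ./infile
    1 abcd
    1 efgh
    2 ijkl
    3 mnop
    4 qrst
    4 uvwx

    $ awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile
    1 abcd
    1 efgh
    4 qrst
    4 uvwx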
Here is some awk code to do what you want, assuming the input is grouped by its first field already (like uniq also requires):
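The code block itself is missing from this copy; a minimal sketch consistent with the description that follows, using the same f and l variables, could be:

    # input must already be grouped by the first field
    $1 == f {            # same group as the previous line, so it is a duplicate
        if (l) {         # first duplicate of the group: flush the stored first line
            print l
            l = ""
        }
        print            # print the current duplicate line
        next
    }
    { f = $1; l = $0 }   # new group: remember the field and hold its first line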
In this code, f is the previous value of field 1 and l is the first line of the group (or empty if that has already been printed out).
Assuming ordered input as you show in your question:
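The code was lost in this copy as well; a single-pass sketch under that ordering assumption, with placeholder names last, hold, and ./infile of my own choosing, might be:

    awk '$1 == last { print hold $0; hold = "" }
         $1 != last { last = $1; hold = $0 ORS }' ./infile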
The file only needs to be read once.
If you can use Ruby (1.9+):
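The Ruby snippet was not preserved here either; a minimal sketch of the same idea (group lines by their first field and print only the groups with more than one line, using a placeholder filename infile) might look like:

    # group the lines by their first whitespace-separated field,
    # then print only the groups that contain more than one line
    groups = File.readlines("infile").group_by { |line| line.split[0] }
    groups.each_value { |lines| puts lines if lines.size > 1 }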
output:
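For the sample input, per the question's expected output:

    1 abcd
    1 efgh
    4 qrst
    4 uvwx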