How to search two files for duplicate users and print those lines?
I have two files: FILE1 and FILE2
FILE1:
user1 1.1.1.1
user2 2.2.2.2
user3 3.14.14.3
user4 4.4.4.4
user5 198.222.222.222
FILE2:
user1 99.22.54.214
user66 45.22.88.88
user99 44.55.66.66
user4 8.8.8.8
user39 54.54.54.54
user2 2.2.2.2
OUTPUT FILE:
user1 1.1.1.1
user1 99.22.54.214
user2 2.2.2.2
user4 4.4.4.4
user4 8.8.8.8
I tried with a for loop but without much success.
Can anyone write the code for this?
Thx!
That cuts out the first field from both files, then searches for duplicates, then searches both files for the related lines. But you can do this with AWK in several ways too... e.g. something like:
When it first sees a user, it saves the line; on each later sighting of the same user, it checks whether the first occurrence has already been printed. If not, it prints the first occurrence and then the current line; if it has, it prints only the current line.
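A sketch of that AWK idea (FILE1/FILE2 as in the question; usernames are assumed to be the first whitespace-separated field, and the exact-repeat check makes the duplicated user2 line appear only once, as in the example output):

```shell
awk '
# Save the first line seen for each user (field 1); on later
# sightings, print that saved line once, then the current line.
{
    user = $1
    if (user in first) {
        if (!printed[user]) {
            print first[user]
            printed[user] = 1
        }
        if ($0 != first[user])   # skip an exact repeat of the saved line
            print
    } else {
        first[user] = $0
    }
}' FILE1 FILE2
```

Piping the result through sort groups the lines per user, as in the example output above.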
HTH
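A sketch of the cut-out-and-search approach from the first paragraph, assuming single-space-separated fields and the filenames from the question:

```shell
# Usernames that occur more than once across both files...
cut -d' ' -f1 FILE1 FILE2 | sort | uniq -d |
while read -r user; do
    # ...then pull every matching line back out of both files.
    # The trailing space stops "user1" from also matching "user11".
    grep -h "^$user " FILE1 FILE2
done | sort -u
```

The final sort -u removes the doubled copy of any line that is identical in both files, which matches the example output.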
Here is my attempt, which preserves the spaces within a line. First, create a script called showdup.awk:
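A sketch of what showdup.awk might contain, assuming the username is the first whitespace-separated field (printing $0 verbatim keeps each line's internal spacing intact):

```shell
# Create showdup.awk: print every line whose first field occurs
# more than once, keeping input order and skipping exact repeats.
cat > showdup.awk <<'EOF'
{
    count[$1]++        # occurrences of this username so far
    line[NR] = $0      # remember every line verbatim
    user[NR] = $1
}
END {
    for (i = 1; i <= NR; i++)
        if (count[user[i]] > 1 && !seen[line[i]]++)
            print line[i]
}
EOF
```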
Next, invoke showdup.awk:
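The invocation could be as simple as this (FILE1/FILE2 as in the question; the optional sort groups the users as in the example output):

```shell
awk -f showdup.awk FILE1 FILE2 | sort
```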
Take a look at the unix command uniq
http://unixhelp.ed.ac.uk/CGI/man-cgi?uniq
Assuming space characters and not tabs in the file
something like this may work
cat file1 file2 | sort | uniq -D -w6 | uniq >file3
Sorry, corrected the above mistake...