Check if a particular string is in a file in bash
I want to write a script to check for duplicates
For example: I have a text file with information in the format of /etc/passwd
alice:x:1008:555:William Williams:/home/bill:/bin/bash
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash
james:x:1012:518:Tilly James:/home/bob:/bin/bash
I want to simply check if there are duplicate users and, if there are, output the lines to standard error. So in the example above, since bob appears twice, my output would simply generate something like:
Error duplicate user
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash
Right now I have a while loop that reads each line and stores each piece of information in a variable, using awk -F with ":" as the delimiter. After storing my username, I am not too sure of the best approach to check whether it already exists.
Some parts of my code:
while read line; do
    echo $line
    user=`echo $line | awk -F : '{print $1}'`
    match=`grep $user $1`    # $1 is the text file
    if [ $? -ne 0 ]; then
        echo "Unique user"
    else
        echo "Not unique user"
        # then somehow grep those lines and output them
    fi
done
The matching does not produce the right results.
Suggestions?
3 Answers
A Perl proposal:
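The Perl snippet itself did not survive when this page was archived, so what follows is only a sketch of what such a proposal might look like, not the answerer's original code. It collects the lines per user and, at end of input, prints every group with more than one line to standard error. The file name users.txt is an assumption; the sample data is taken from the question.

```shell
# Sample input in /etc/passwd format (from the question):
cat > users.txt <<'EOF'
alice:x:1008:555:William Williams:/home/bill:/bin/bash
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash
james:x:1012:518:Tilly James:/home/bob:/bin/bash
EOF

# Hypothetical reconstruction of the lost Perl one-liner.
# -F: splits each line on ':' into @F; lines are grouped per user
# ($F[0]), and groups with more than one line go to standard error.
perl -F: -lane '
    push @{ $lines{ $F[0] } }, $_;
    END {
        for my $u (sort keys %lines) {
            next unless @{ $lines{$u} } > 1;
            print STDERR "Error duplicate user";
            print STDERR $_ for @{ $lines{$u} };
        }
    }' users.txt
```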
Sounds like a job for awk to me:
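The awk code for this answer was also lost in extraction; a minimal sketch of one way it could work is below (an assumption, not the answerer's original). It makes two passes over the file: the first counts users, the second prints every line whose user occurs more than once to standard error. The file name users.txt is hypothetical; the sample data is from the question.

```shell
# Sample input in /etc/passwd format (from the question):
cat > users.txt <<'EOF'
alice:x:1008:555:William Williams:/home/bill:/bin/bash
bob:x:1018:588:Bobs Boos:/home/bob:/bin/bash
bob:x:1019:528:Robt Ross:/home/bob:/bin/bash
james:x:1012:518:Tilly James:/home/bob:/bin/bash
EOF

# Sketch (assumption): pass 1 counts each user, pass 2 sends the
# header once and then every duplicate-user line to standard error.
awk -F: '
    NR == FNR { count[$1]++; next }
    count[$1] > 1 {
        if (!warned++) print "Error duplicate user" > "/dev/stderr"
        print > "/dev/stderr"
    }
' users.txt users.txt
```

Giving the file twice makes NR == FNR true only during the first pass, which is a common awk idiom for two-pass processing.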
Instead of re-inventing the wheel, use the following tools: cut to extract the first field, and sort and uniq -d to keep the duplicated entries only.

cut -d : -f 1 | sort | uniq -d | while read i; do
    echo "Error: duplicate user $i"
done